Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
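For example, a request from R using the httr package might look like the sketch below; the host name is a placeholder, not the actual address of this service.

library(httr)
resp <- GET(
  "https://example.org/api/packages",  # placeholder host, substitute the real one
  query = list(search = "hello", page = 1, limit = 20)
)
content(resp)   # the matching packages
headers(resp)   # pagination information is returned here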
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes, but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. Decision curve analysis is a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. See the following references for details on the methods: Vickers (2006) <doi:10.1177/0272989X06295361>, Vickers (2008) <doi:10.1186/1472-6947-8-53>, and Pfeiffer (2020) <doi:10.1002/bimj.201800240>.
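The core quantity behind a decision curve is the net benefit at a chosen threshold probability, NB = TP/n - FP/n * pt/(1 - pt). A rough base-R sketch of this calculation (a generic illustration, not this package's interface) is:

net_benefit <- function(pred, outcome, threshold) {
  n  <- length(outcome)
  tp <- sum(pred >= threshold & outcome == 1)  # true positives at this threshold
  fp <- sum(pred >= threshold & outcome == 0)  # false positives at this threshold
  tp / n - fp / n * threshold / (1 - threshold)
}

# Evaluate simulated predictions over a grid of thresholds.
set.seed(1)
outcome <- rbinom(200, 1, 0.3)
pred <- plogis(qlogis(0.3) + outcome + rnorm(200, sd = 0.5))
sapply(seq(0.05, 0.5, by = 0.05), function(t) net_benefit(pred, outcome, t))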
This package provides functions for computing the density, distribution, and random generation of the Diffusion Decision Model (DDM), a widely used cognitive model for analysing choice and response time data. The package allows model specification, including the ability to fix, constrain, or vary parameters across experimental conditions. While it does not include a built-in optimiser, it supports likelihood evaluation and can be integrated with external tools for parameter estimation. Functions for simulating synthetic datasets are also provided. This package is intended for researchers modelling speeded decision-making in behavioural and cognitive experiments. For more information, see Voss, Rothermund, and Voss (2004) <doi:10.3758/BF03196893>, Voss and Voss (2007) <doi:10.3758/BF03192967>, and Ratcliff and McKoon (2008) <doi:10.1162/neco.2008.12-06-420>.
An R interface to the codediff JavaScript library (a copy of which is included in the package, see <https://github.com/danvk/codediff.js> for information). Allows for visualization of the difference between 2 files, usually text files or R scripts, in a browser.
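A typical call, assuming the package's main function is named diffr() and takes the two files to compare (an assumption based on the description above), might look like:

library(diffr)
file1 <- tempfile(fileext = ".R")
file2 <- tempfile(fileext = ".R")
writeLines(c("x <- 1", "print(x)"), file1)
writeLines(c("x <- 2", "print(x)", "message('done')"), file2)
diffr(file1, file2)  # renders an HTML widget showing the differences in a browser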
For working with the DataRobot predictive modeling platform's API <https://www.datarobot.com/>.
By systematically aggregating and processing textual reports from earthquakes, floods, storms, wildfires, and other natural disasters, the framework enables a holistic assessment of crisis narratives. Intelligent cleaning and normalization techniques transform raw commentary into structured data, ensuring precise extraction of disaster-specific insights. Collective sentiments of affected communities are quantitatively scored and qualitatively categorized, providing a multifaceted view of societal responses under duress. Interactive geographic maps and temporal charts illustrate the evolution and spatial dispersion of emotional reactions and impact indicators.
Implementation of new discrete statistical distributions. Each distribution includes the traditional functions as well as an additional function called the family function, which can be used to estimate parameters within the gamlss framework.
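As a sketch of how a family function plugs into estimation, the standard Poisson family PO() from the gamlss framework can stand in for one of the new discrete families provided here (the data and model below are illustrative only):

library(gamlss)
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rpois(200, lambda = exp(0.5 + 1.2 * d$x))
# family= takes a family function; one of the package's new discrete
# families would be supplied in the same way.
fit <- gamlss(y ~ x, family = PO(), data = d)
summary(fit)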
This package provides a high-level API for interfacing with sources that store distance, dissimilarity, or similarity matrices, with matrix-style extraction, replacement, and other utilities. Currently, an in-memory dist object backend is supported.
Mechanistically models and predicts the phenology (macro-phases) of 10 crop plants, trained on a large dataset covering more than 80 years derived from the German weather service (DWD) <https://opendata.dwd.de/>. Can be applied for remote sensing purposes and dynamically checks for the best subset of available covariates for the given dataset and crop.
Draw samples from the direct sampling spatial prior model as described in G. White, D. Sun, P. Speckman (2019) <arXiv:1906.05575>. The basic model assumes a Gaussian likelihood and derives a spatial prior based on thin-plate splines.
Flexibly convert data between long and wide format using just two functions: reshape_toLong() and reshape_toWide().
This package provides a general framework using mixture Weibull distributions to accurately predict the duration of biomarker-guided trials while accounting for a heterogeneous population. Extensive simulations are performed to evaluate the impact of population heterogeneity and the dynamics of biomarker characteristics and disease on the study duration. Several influential parameters are identified, including median survival time, enrollment rate, biomarker prevalence, and effect size. Efficiency gains of biomarker-guided trials can be quantitatively compared to the traditional all-comers design. For reference, see Zhang et al. (2024) <arXiv:2401.00540>.
Given count data from two conditions, it determines which transcripts are differentially expressed across the two conditions using Bayesian inference of the parameters of a bottom-up model for PCR amplification. This model is developed in Ndifon Wilfred, Hilah Gal, Eric Shifrut, Rina Aharoni, Nissan Yissachar, Nir Waysbort, Shlomit Reich Zeliger, Ruth Arnon, and Nir Friedman (2012), <http://www.pnas.org/content/109/39/15865.full>, and results in a distribution for the counts that is a superposition of the binomial and negative binomial distribution.
Re-arranges a dendrogram to optimize visualisation-based cost functions.
This package provides a collection of methods for automated data cleaning where all actions are logged.
The dfmirroR package allows users to input a data frame, simulate some number of observations based on specified columns of that data frame, and then output a string that contains the code to re-create the simulation. The goal is both to provide workable test data sets and to give users the information they need to set up reproducible examples with team members. This package was created out of a need to share examples in cases where data are private and where a full data frame is not needed for testing or coordination.
Feature selection from high-dimensional data using a diploid genetic algorithm with Incomplete Dominance for genotype-to-phenotype mapping and a Random Assortment of chromosomes approach to recombination.
Overload utils::'?' to build unary and binary operators from existing functions, piping operators of different precedence, and flexible syntaxes.
Discriminant Non-Negative Matrix Factorization aims to extend the Non-negative Matrix Factorization algorithm in order to extract features that enforce not only the spatial locality, but also the separability between classes in a discriminant manner. It refers to three articles: Zafeiriou, Stefanos, et al. "Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification." Neural Networks, IEEE Transactions on 17.3 (2006): 683-695; Kim, Bo-Kyeong, and Soo-Young Lee. "Spectral Feature Extraction Using dNMF for Emotion Recognition in Vowel Sounds." Neural Information Processing. Springer Berlin Heidelberg, 2013; and Lee, Soo-Young, Hyun-Ah Song, and Shun-ichi Amari. "A new discriminant NMF algorithm and its application to the extraction of subtle emotional differences in speech." Cognitive Neurodynamics 6.6 (2012): 525-535.
Implementation of some Deep Learning methods. Includes multilayer perceptron, different activation functions, regularisation strategies, stochastic gradient descent and dropout. Thanks go to the following references for helping to inspire and develop the package: Ian Goodfellow, Yoshua Bengio, Aaron Courville, Francis Bach (2016, ISBN:978-0262035613) Deep Learning. Terrence J. Sejnowski (2018, ISBN:978-0262038034) The Deep Learning Revolution. Grant Sanderson (3Blue1Brown) <https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi> Neural Networks YouTube playlist. Michael A. Nielsen <http://neuralnetworksanddeeplearning.com/> Neural Networks and Deep Learning.
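As a generic illustration of the ingredients listed above (not this package's interface), one stochastic gradient descent step for a single unit with a sigmoid activation and cross-entropy loss can be written in base R:

sigmoid <- function(z) 1 / (1 + exp(-z))
sgd_step <- function(w, b, x, y, lr = 0.1) {
  a <- sigmoid(sum(w * x) + b)   # forward pass
  grad <- a - y                  # derivative of the cross-entropy loss w.r.t. the pre-activation
  list(w = w - lr * grad * x,    # weight update
       b = b - lr * grad)        # bias update
}
set.seed(1)
sgd_step(w = rnorm(3), b = 0, x = c(0.2, -1, 0.5), y = 1)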
Create D3-based SVG ('Scalable Vector Graphics') graphics using a simple R API. The package aims to simplify the creation of many common SVG plot types and relies on the r2d3 R package and the D3 JavaScript library. See <https://rstudio.github.io/r2d3/> and <https://d3js.org/> respectively.
This package provides functions for planning clinical trials subject to a delayed treatment effect using assurance-based methods. Includes two shiny applications for interactive exploration, simulation, and visualisation of trial designs and outcomes. The methodology is described in: Salsbury JA, Oakley JE, Julious SA, Hampson LV (2024) "Assurance methods for designing a clinical trial with a delayed treatment effect" <doi:10.1002/sim.10136>, Salsbury JA, Oakley JE, Julious SA, Hampson LV (2024) "Adaptive clinical trial design with delayed treatment effects using elicited prior distributions" <doi:10.48550/arXiv.2509.07602>.
Visualize contact tracing data using a shiny app and estimate the incubation or latency time of an infectious disease, respecting the following characteristics in the analysis: (i) doubly interval censoring with (partly) overlapping or distinct windows; (ii) an infection risk corresponding to exponential growth; (iii) right truncation allowing for individual truncation times; (iv) different choices concerning the family of the distribution. For our earlier work, we refer to Arntzen et al. (2023) <doi:10.1002/sim.9726>. A paper describing our approach in detail will follow.
This package provides a Bayesian framework for parameter inference in differential equations. This approach offers a rigorous methodology for parameter inference as well as modeling the link between unobservable model states and parameters, and observable quantities. Provides templates for the DE model, the observation model and data likelihood, and the model parameters and their prior distributions. A Markov chain Monte Carlo (MCMC) procedure processes these inputs to estimate the posterior distributions of the parameters and any derived quantities, including the model trajectories. Further functionality is provided to facilitate MCMC diagnostics and the visualisation of the posterior distributions of model parameters and trajectories.
Assists in finding the most suitable thread count for the various data.table routines that support parallel processing.
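data.table itself exposes setDTthreads() and getDTthreads() for controlling the thread count; a rough sketch of the kind of benchmarking such a helper automates (using those real data.table functions, not this package's own interface) is:

library(data.table)
dt <- data.table(g = sample(1e4, 1e6, replace = TRUE), x = rnorm(1e6))
timings <- sapply(1:4, function(n) {
  setDTthreads(n)                                        # set the thread count
  system.time(dt[, .(m = mean(x)), by = g])[["elapsed"]] # time a grouped aggregation
})
setNames(timings, paste0(1:4, " thread(s)"))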