Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
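For example, the endpoint can be queried from R with the httr package; a minimal sketch, assuming httr is installed and using a placeholder base URL in place of this site's address:

    # Minimal sketch: substitute this site's address for the placeholder URL.
    # The pagination header names are not documented above, so all response
    # headers are printed rather than guessed.
    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    str(headers(resp))                    # pagination info arrives here
    pkgs <- content(resp, as = "parsed")  # the matching packages themselves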
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements Roy's bivariate geometric model (Roy (1993) <doi:10.1006/jmva.1993.1065>): joint probability mass function, distribution function, survival function, random generation, parameter estimation, and more.
Binford's hunter-gatherer data includes more than 200 variables coding aspects of hunter-gatherer subsistence, mobility, and social organization for 339 ethnographically documented groups of hunter-gatherers.
This package provides computational tools to generate efficient blocked and unblocked fractional factorial designs for two-level and three-level factors using the generalized Minimum Aberration (MA) criterion and related optimization algorithms. Methodological foundations include the general theory of minimum aberration as described by Cheng and Tang (2005) <doi:10.1214/009053604000001228>, and the catalogue of three-level regular fractional factorial designs developed by Xu (2005) <doi:10.1007/s00184-005-0408-x>. The main functions dol2() and dol3() generate blocked two-level and three-level fractional factorial designs, respectively, using beam search, optimization-based ranking, confounding assessment, and structured output suitable for complete factorial situations.
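A hypothetical call sketch for the two functions named above; the package's name is not stated in this entry, and every argument name shown is an illustrative assumption rather than the documented signature:

    # Hypothetical sketch: attach the package first (its name is not stated
    # in this entry); all argument names below are assumptions.
    d2 <- dol2(n.factors = 6, n.runs = 16, n.blocks = 2)  # blocked two-level design
    d3 <- dol3(n.factors = 4, n.runs = 27, n.blocks = 3)  # blocked three-level design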
Fully Bayesian inference for estimating the number of clusters and related parameters in heterogeneous binary data.
Laplace approximations and penalized B-splines are combined for fast Bayesian inference in latent Gaussian models. The routines can be used to fit survival models, especially proportional hazards and promotion time cure models (Gressani, O. and Lambert, P. (2018) <doi:10.1016/j.csda.2018.02.007>). The Laplace-P-spline methodology can also be implemented for inference in (generalized) additive models (Gressani, O. and Lambert, P. (2021) <doi:10.1016/j.csda.2020.107088>). See the associated website for more information and examples.
The goal of the package is to provide an easy-to-use method for estimating degrees of relatedness (up to the second degree) from extremely low-coverage data. The package also allows users to quantify and visualise the level of confidence in the estimated degrees of relatedness.
Functional gradient descent algorithm for a variety of convex and non-convex loss functions, for both classical and robust regression and classification problems. See Wang (2011) <doi:10.2202/1557-4679.1304>, Wang (2012) <doi:10.3414/ME11-02-0020>, Wang (2018) <doi:10.1080/10618600.2018.1424635>, Wang (2018) <doi:10.1214/18-EJS1404>.
Bindings to the blowfish password hashing algorithm <https://www.openbsd.org/papers/bcrypt-paper.pdf> derived from the OpenBSD implementation.
Derived from the work of Kruschke (2015, <ISBN:9780124058880>), the present package aims to provide a framework for conducting Bayesian analysis using Markov chain Monte Carlo (MCMC) sampling utilizing the Just Another Gibbs Sampler ('JAGS', Plummer, 2003, <https://mcmc-jags.sourceforge.io>). The initial version includes several modules for conducting Bayesian equivalents of chi-squared tests, analysis of variance (ANOVA), multiple (hierarchical) regression, softmax regression, and for fitting models to data (e.g., structural equation modeling).
Generates different posterior distributions of the adjusted odds ratio under different priors of sensitivity and specificity, and plots the models for comparison. It also estimates the model specifications from diagnostics of exposure status using a non-linear mixed effects model. It implements the methods first proposed in <doi:10.1016/j.annepidem.2006.04.001> and <doi:10.1177/0272989X09353452>.
This package provides a lightweight modelling syntax for defining likelihoods and priors and for computing Bayes factors for simple one parameter models. It includes functionality for computing and plotting priors, likelihoods, and model predictions. Additional functionality is included for computing and plotting posteriors.
This package provides a framework and toolkit to guide R dashboard developers in implementing the Behavioral Insight Design (BID) framework. The package offers functions for documenting each of the five stages (Interpret, Notice, Anticipate, Structure, and Validate), along with a comprehensive concept dictionary. Works with both shiny applications and Quarto dashboards.
This package provides a chemical speciation and toxicity prediction model for the toxicity of metals to aquatic organisms. The Biotic Ligand Model (BLM) engine was originally programmed in PowerBasic by Robert Santore and others. The main way the BLM can be used is to predict the toxicity of a metal to an organism with a known sensitivity (i.e., it is known how much of that metal must accumulate on that organism's biotic ligand to cause a physiological effect in a certain percentage of the population, such as a 20% loss in reproduction or a 50% mortality rate). The second way the BLM can be used is to estimate the chemical speciation of the metal and other constituents in water, including estimating the amount of metal accumulated on an organism's biotic ligand during a toxicity test. In the first application of the BLM, the amount of metal associated with a toxicity endpoint or regulatory limit will be predicted, while in the second application, the amount of metal is known and the portions of that metal that exist in various forms will be determined. This version of the engine has been re-structured to perform the calculations in a different way that makes it more efficient in R, while also making it more flexible and easier to maintain in the future. Because of this, it does not currently match the desktop model exactly, but we hope to improve this comparability in the future.
Efficient methods for Bayesian inference of state space models via Markov chain Monte Carlo (MCMC) based on parallel importance sampling type weighted estimators (Vihola, Helske, and Franks, 2020, <doi:10.1111/sjos.12492>), particle MCMC, and its delayed acceptance version. Gaussian, Poisson, binomial, negative binomial, and Gamma observation densities and basic stochastic volatility models with linear-Gaussian state dynamics, as well as general non-linear Gaussian models and discretised diffusion models are supported. See Helske and Vihola (2021, <doi:10.32614/RJ-2021-103>) for details.
Analysis workflow for finding geographic boundaries of ecological or landscape traits and comparing the placement of geographic boundaries of two traits. If data are trait values, trait data are transformed to boundary intensities based on approximate first derivatives across latitude and longitude. The package includes functions to create custom null models based on the input data. The boundary statistics are described in: Fortin, Drapeau, and Jacquez (1996) <doi:10.2307/3545584>.
Forecasters predicting the chances of a future event may disagree due to differing evidence or noise. To harness the collective evidence of the crowd, Ville Satopää (2021) "Regularized Aggregation of One-off Probability Predictions" <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3769945> proposes a Bayesian aggregator that is regularized by analyzing the forecasters' disagreement and ascribing over-dispersion to noise. This aggregator requires no user intervention and can be computed efficiently even for a large number of predictions. The author evaluates the aggregator on subjective probability predictions collected during a four-year forecasting tournament sponsored by the US intelligence community. The aggregator improves the accuracy of simple averaging by around 20% and of other state-of-the-art aggregators by 10-25%. The advantage stems almost exclusively from improved calibration. This aggregator -- known as "the revealed aggregator" -- takes as input a) the forecasters' probability predictions (p) of a future binary event and b) the forecasters' common prior (p0) of the future event. In this R package, the function sample_aggregator(p,p0,...) allows the user to calculate the revealed aggregator. Its use is illustrated with a simple example.
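A minimal sketch of such an example, assuming the package is attached (its name is not stated in this entry) and using made-up toy inputs; only the call sample_aggregator(p, p0, ...) itself is taken from the description above:

    # Toy inputs; further arguments of sample_aggregator() are left at
    # their defaults.
    p  <- c(0.60, 0.75, 0.90, 0.55)    # forecasters' probability predictions
    p0 <- 0.50                         # forecasters' common prior of the event
    draws <- sample_aggregator(p, p0)  # MCMC samples of the revealed aggregator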
Model selection by bootstrapping the stepAIC() procedure.
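The package's own bootstrap wrapper is not named in this entry; for reference, a minimal sketch of the underlying stepAIC() step that gets resampled, assuming MASS is installed:

    # The stepAIC() procedure from MASS that this package bootstraps.
    library(MASS)
    fit <- lm(mpg ~ ., data = mtcars)                       # full model
    sel <- stepAIC(fit, direction = "both", trace = FALSE)  # AIC-based selection
    summary(sel)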
Estimates VAR and VARX models with Structured Penalties.
This package provides tools to analyze binary graph objects.
This package contains a variety of methods to generate typical causal inference estimates using Bayesian Additive Regression Trees (BART) as the underlying regression model (Hill (2012) <doi:10.1198/jcgs.2010.08162>).
This package provides the recently proposed Bayesian BIN model, which disentangles the underlying processes that enable forecasters and forecasting methods to improve, decomposing forecasting accuracy into three components: bias, partial information, and noise. By describing the differences between two groups of forecasters, the model allows the user to carry out useful inference, such as calculating the posterior probabilities of the treatment reducing bias, diminishing noise, or increasing information. It also provides insight into how much tamping down bias and noise in judgment or enhancing the efficient extraction of valid information from the environment improves forecasting accuracy. This package provides easy access to the BIN model. For further information refer to the paper Ville A. Satopää, Marat Salikhov, Philip E. Tetlock, and Barbara Mellers (2021) "Bias, Information, Noise: The BIN Model of Forecasting" <doi:10.1287/mnsc.2020.3882>.
Generalization of the Bayesian classification and regression tree (CART) model that partitions subjects into terminal nodes and tailors a regression model to each terminal node.
Perform mediation analysis in the presence of high-dimensional mediators based on the potential outcome framework. Bayesian Mediation Analysis (BAMA), developed by Song et al. (2019) <doi:10.1111/biom.13189> and Song et al. (2020) <doi:10.48550/arXiv.2009.11409>, relies on two Bayesian sparse linear mixed models to simultaneously analyze a relatively large number of mediators for a continuous exposure and outcome, assuming a small number of mediators are truly active. This sparsity assumption also allows the extension of univariate mediator analysis by casting the identification of active mediators as a variable selection problem and applying Bayesian methods with continuous shrinkage priors on the effects.
Statistical classification and regression have been popular in various fields and have stayed in the limelight of scientists in those fields. Examples of such fields include clinical trials, where the statistical classification of patients is indispensable for predicting the clinical courses of diseases. Considering the negative impact of diseases on performing daily tasks, correctly classifying patients based on clinical information is vital: patients in the high-risk group for developing a severe state must be identified so that medical treatment can be arranged for them at an opportune moment. Deep learning - a part of artificial intelligence - has gained much attention, and research on it has burgeoned during the past decades: see, e.g., Kazemi and Mirroshandel (2018) <DOI:10.1016/j.artmed.2017.12.001>. It is a versatile technique originally designed for classification, and hence the Buddle package can provide solutions to various challenging classification and regression problems encountered in clinical trials. The Buddle package is based on the back-propagation algorithm - together with various powerful techniques such as batch normalization and dropout - which trains a multi-layer feed-forward neural network: see Krizhevsky et al. (2017) <DOI:10.1145/3065386>, Schmidhuber (2015) <DOI:10.1016/j.neunet.2014.09.003>, and LeCun et al. (1998) <DOI:10.1109/5.726791> for more details. This package contains two main functions: TrainBuddle() and FetchBuddle(). TrainBuddle() builds a feed-forward neural network model and trains the model. FetchBuddle() recalls the trained model output by TrainBuddle(), classifies or regresses the given data, and makes a final prediction for the data.
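A hypothetical sketch of this two-step workflow; TrainBuddle() and FetchBuddle() are the entry points named above, but the argument names shown are illustrative assumptions, not the package's documented signature:

    # Hypothetical sketch: every argument name below is an assumption.
    library(Buddle)
    fit  <- TrainBuddle(Species ~ ., data = iris)  # assumed formula interface
    pred <- FetchBuddle(fit, newdata = iris)       # assumed prediction call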