Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
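A minimal sketch of calling this endpoint from R with httr; the base URL below is a placeholder for whichever host serves this page:

    library(httr)

    base <- "https://example.org"  # placeholder: substitute the host serving this search form

    resp <- GET(paste0(base, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    content(resp, as = "parsed")  # the matching packages
    headers(resp)                 # pagination details are in the response headers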
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Forecasters predicting the chances of a future event may disagree due to differing evidence or noise. To harness the collective evidence of the crowd, Ville Satopää (2021) "Regularized Aggregation of One-off Probability Predictions" <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3769945> proposes a Bayesian aggregator that is regularized by analyzing the forecasters' disagreement and ascribing over-dispersion to noise. This aggregator requires no user intervention and can be computed efficiently even for a large number of predictions. The author evaluates the aggregator on subjective probability predictions collected during a four-year forecasting tournament sponsored by the US intelligence community. The aggregator improves the accuracy of simple averaging by around 20% and that of other state-of-the-art aggregators by 10-25%. The advantage stems almost exclusively from improved calibration. This aggregator -- known as "the revealed aggregator" -- takes as inputs a) the forecasters' probability predictions (p) of a future binary event and b) the forecasters' common prior (p0) of the future event. In this R package, the function sample_aggregator(p,p0,...) allows the user to calculate the revealed aggregator. Its use is illustrated with a simple example.
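A minimal sketch of that usage, assuming this description corresponds to the braggR package and that the arguments beyond p and p0 can be left at their defaults:

    library(braggR)  # assumption: the package implementing sample_aggregator()

    p  <- c(0.60, 0.75, 0.55, 0.80)  # four forecasters' probability predictions
    p0 <- 0.50                       # common prior probability of the event

    # Posterior draws of the revealed aggregator; further arguments (e.g. the
    # number of draws) are assumed here to have workable defaults.
    draws <- sample_aggregator(p, p0)
    mean(draws)  # point estimate of the aggregated probability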
Fits the Bayesian partial least squares regression model introduced in Urbas et al. (2024) <doi:10.1214/24-AOAS1947>. Suitable for univariate and multivariate regression with high-dimensional data.
Easily launch, track, and control functions as parallel background jobs. Includes robust utilities for job status, error handling, resource monitoring, and result collection. Designed for scalable workflows in interactive and automated settings (local or remote). Integrates with multiple backends; supports flexible automation pipelines and live job tracking. For more information, see <https://anirbanshaw24.github.io/bakerrr/>.
Bindings to the blowfish password hashing algorithm <https://www.openbsd.org/papers/bcrypt-paper.pdf> derived from the OpenBSD implementation.
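For example, assuming these bindings are the bcrypt R package, hashing and verifying a password looks like this:

    library(bcrypt)

    hash <- hashpw("correct horse battery staple")  # salted hash, salt generated internally
    checkpw("correct horse battery staple", hash)   # TRUE
    checkpw("wrong guess", hash)                    # FALSE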
Generate ground truth cases for object localization algorithms. Cycle through a list of images, select points around which to generate bounding boxes and assign classifiers. Output the coordinates, and images annotated with boxes and labels. For an example study that uses bounding boxes for image localization and classification, see Ibrahim, Badr, Abdallah, and Eissa (2012) "Bounding Box Object Localization Based on Image Superpixelization" <doi:10.1016/j.procs.2012.09.119>.
Perform seasonal adjustment of weekly data. The package provides a user-friendly interface for computing seasonally adjusted estimates of weekly data and includes functions for the creation of country-specific prior adjustment variables, as well as diagnostic tools to assess the quality of the adjustments. The method is described in more detail in Ginker (2023) <doi:10.13140/RG.2.2.12221.44000>.
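A hedged sketch, assuming the main adjustment function shares the package's name and takes the weekly series together with its dates:

    library(boiwsa)

    x     <- rnorm(260)  # illustrative weekly series (five years of data)
    dates <- seq(as.Date("2015-01-03"), by = "week", length.out = length(x))

    # Assumption: boiwsa() returns the seasonally adjusted estimates.
    res <- boiwsa(x = x, dates = dates)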
Fit and simulate semiparametric bivariate correlated frailty models with a proportional hazard structure. Frailty distributions such as the gamma and lognormal models are supported. The bivariate gamma fit is obtained using the approach given in Iachine (1995), and the lognormal fit is based on the approach by Ripatti and Palmgren (2000) <doi:10.1111/j.0006-341X.2000.01016.x>.
Bayesian analysis of multivariate receptor modeling. The package consists of implementations of the methods of Park and Oh (2015) <doi:10.1016/j.chemolab.2015.08.021>. The package uses JAGS (Just Another Gibbs Sampler) to generate Markov chain Monte Carlo samples of the parameters.
An implementation of sensitivity and robustness methods for Bayesian networks in R. It includes methods to perform parameter variations via a variety of co-variation schemes, to compute sensitivity functions, and to quantify the dissimilarity of two Bayesian networks via distances and divergences. It further includes diagnostic methods to assess the goodness of fit of a Bayesian network to data, including global, node, and parent-child monitors. Reference: M. Leonelli, R. Ramanathan, R.L. Wilkerson (2022) <doi:10.1016/j.knosys.2023.110882>.
Data sets for the book "Basic Statistics and Data Analysis" by Larry J. Kitchens.
This package implements bridge models for nowcasting and forecasting macroeconomic variables by linking high-frequency indicator variables (e.g., monthly data) to low-frequency target variables (e.g., quarterly GDP). It simplifies forecasting indicator variables and aggregating them to match the target frequency, enabling timely predictions ahead of official data releases. For more on bridge models, see Baffigi, A., Golinelli, R., & Parigi, G. (2004) <doi:10.1016/S0169-2070(03)00067-0>, Burri (2023) <https://www5.unine.ch/RePEc/ftp/irn/pdfs/WP23-02.pdf> or Schumacher (2016) <doi:10.1016/j.ijforecast.2015.07.004>.
This package provides a method to filter correlation and covariance matrices by averaging bootstrapped filtered hierarchical clustering and boosting. See Ch. Bongiorno and D. Challet, Covariance matrix filtering with bootstrapped hierarchies (2020) <arXiv:2003.05807> and Ch. Bongiorno and D. Challet, Reactive Global Minimum Variance Portfolios with k-BAHC covariance cleaning (2020) <arXiv:2005.08703>.
Deals with braid groups. Includes creation of some specific braids, group operations, free reduction, and Bronfman polynomials. Braid theory has applications in fluid mechanics and quantum physics. The code is adapted from the Haskell library 'combinat' and is based on Birman and Brendle (2005) <doi:10.48550/arXiv.math/0409205>.
Jointly models multivariate longitudinal responses with multiple covariates and time using a gradient boosting approach.
Reads and plots phylogenetic placements.
This package provides a suite of functions that allow the user to analyze A/B test data in a Bayesian framework. Intended to be a drop-in replacement for common frequentist hypothesis tests such as the t-test and the chi-squared test.
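For instance, assuming this description corresponds to the bayesAB package, a Bayesian replacement for a two-sample proportion test might look like:

    library(bayesAB)  # assumption: the package described above

    A <- rbinom(250, 1, 0.25)  # conversions under variant A
    B <- rbinom(250, 1, 0.30)  # conversions under variant B

    ab <- bayesTest(A, B,
                    priors = c(alpha = 1, beta = 1),  # Beta(1, 1) prior on the rates
                    distribution = "bernoulli")
    summary(ab)  # posterior P(A > B) instead of a p-value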
Several tools for analyzing diagnostic tests and 2x2 contingency tables are provided. In particular, positive and negative predictive values for a diagnostic test can be calculated from prevalence, sensitivity, and specificity values. For contingency tables, relative risk and odds ratio measures are estimated. Furthermore, confidence intervals are provided.
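As a worked example of the predictive-value calculation, Bayes' theorem gives PPV and NPV directly from prevalence, sensitivity, and specificity (plain arithmetic, no package functions assumed):

    prev <- 0.10; sens <- 0.90; spec <- 0.95

    ppv <- (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv <- (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

    round(c(PPV = ppv, NPV = npv), 3)  # PPV = 0.667, NPV = 0.988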
This package contains functions for estimating above-ground biomass/carbon and its uncertainty in tropical forests. These functions allow the user to (1) retrieve and correct taxonomy, (2) estimate wood density and its uncertainty, (3) build height-diameter models, (4) manage tree and plot coordinates, and (5) estimate above-ground biomass/carbon at the stand level with associated uncertainty. To cite 'BIOMASS', please use citation('BIOMASS'). For more information, see Réjou-Méchain et al. (2017) <doi:10.1111/2041-210X.12753>.
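A minimal sketch of steps (2) and (5), assuming the BIOMASS functions getWoodDensity() and computeAGB() and using illustrative tree records:

    library(BIOMASS)

    trees <- data.frame(genus   = c("Dicorynia", "Eperua"),
                        species = c("guianensis", "falcata"),
                        D       = c(25.4, 31.2),   # diameter at breast height, cm
                        H       = c(22, 27))       # tree height, m

    wd  <- getWoodDensity(genus = trees$genus, species = trees$species)
    agb <- computeAGB(D = trees$D, WD = wd$meanWD, H = trees$H)  # per-tree AGB, Mg
    sum(agb)  # stand-level above-ground biomass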
This package provides fast and efficient procedures for Bayesian analysis of Structural Vector Autoregressions. This package estimates a wide range of models, including homo-, heteroskedastic, and non-normal specifications. Structural models can be identified by adjustable exclusion restrictions, time-varying volatility, or non-normality. They all include a flexible three-level equation-specific local-global hierarchical prior distribution for the estimated level of shrinkage for autoregressive and structural parameters. Additionally, the package facilitates predictive and structural analyses such as impulse responses, forecast error variance and historical decompositions, forecasting, verification of heteroskedasticity, non-normality, and hypotheses on autoregressive parameters, as well as analyses of structural shocks, volatilities, and fitted values. Beautiful plots, informative summary functions, and extensive documentation including the vignette by Woźniak (2024) <doi:10.48550/arXiv.2410.15090> complement all this. The implemented techniques align closely with those presented in Lütkepohl, Shang, Uzeda, & Woźniak (2024) <doi:10.48550/arXiv.2404.11057>, Lütkepohl & Woźniak (2020) <doi:10.1016/j.jedc.2020.103862>, and Song & Woźniak (2021) <doi:10.1093/acrefore/9780190625979.013.174>. The bsvars package is aligned regarding objects, workflows, and code structure with the R package bsvarSIGNs by Wang & Woźniak (2024) <doi:10.32614/CRAN.package.bsvarSIGNs>, and they constitute an integrated toolset.
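A hedged sketch of the basic bsvars workflow, assuming the specify-then-estimate interface and the us_fiscal_lsuw dataset shipped with the package:

    library(bsvars)

    spec <- specify_bsvar$new(us_fiscal_lsuw, p = 4)  # model with 4 autoregressive lags
    post <- estimate(spec, S = 1000)                  # MCMC draws from the posterior
    irf  <- compute_impulse_responses(post, horizon = 8)  # one of the structural analyses above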
High performance principal component analysis routines that operate directly on bigmemory::big.matrix objects. The package avoids materialising large matrices in memory by streaming data through BLAS and LAPACK kernels and provides helpers to derive scores, loadings, correlations, and contribution diagnostics, including utilities that stream results into 'bigmemory'-backed matrices for file-based workflows. Additional interfaces expose scalable singular value decomposition, robust PCA, and robust SVD algorithms so that users can explore large matrices while tempering the influence of outliers. Scalable principal component analysis is also implemented, following Elgamal, Yabandeh, Aboulnaga, Mustafa, and Hefeeda (2015) <doi:10.1145/2723372.2751520>.
Data on multiple individuals through time are often sampled at times that differ between persons. Irregular observation times can severely complicate the statistical analysis of the data. The broken stick model approximates each subject's trajectory by one or more connected line segments. The times at which segments connect (breakpoints) are identical for all subjects and under control of the user. A well-fitting broken stick model effectively transforms individual measurements made at irregular times into regular trajectories with common observation times. Specification of the model requires three variables: time, measurement and subject. The model is a special case of the linear mixed model, with time as a linear B-spline and subject as the grouping factor. The main assumptions are: subjects are exchangeable, trajectories between consecutive breakpoints are straight, random effects follow a multivariate normal distribution, and unobserved data are missing at random. The package contains functions for fitting the broken stick model to data, for predicting curves in new data and for plotting broken stick estimates. The package supports two optimization methods, and includes options to structure the variance-covariance matrix of the random effects. The analyst may use the software to smooth growth curves by a series of connected straight lines, to align irregularly observed curves to a common time grid, to create synthetic curves at a user-specified set of breakpoints, to estimate the time-to-time correlation matrix and to predict future observations. See <doi:10.18637/jss.v106.i07> for additional documentation on background, methodology and applications.
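A minimal sketch of fitting the model, with illustrative variable names for the three required roles (time, measurement, subject):

    library(brokenstick)

    # 'growth' is an illustrative data frame with columns age, height, child_id.
    fit <- brokenstick(height ~ age | child_id, data = growth,
                       knots = c(0, 1, 2, 3))  # breakpoints shared by all subjects
    predict(fit, newdata = growth)             # model predictions for each child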
Package for Bayesian Variable Selection and Model Averaging in linear models and generalized linear models using stochastic or deterministic sampling without replacement from posterior distributions. Prior distributions on coefficients are from Zellner's g-prior or mixtures of g-priors corresponding to the Zellner-Siow Cauchy Priors or the mixture of g-priors from Liang et al (2008) <DOI:10.1198/016214507000001337> for linear models or mixtures of g-priors from Li and Clyde (2019) <DOI:10.1080/01621459.2018.1469992> in generalized linear models. Other model selection criteria include AIC, BIC and Empirical Bayes estimates of g. Sampling probabilities may be updated based on the sampled models using sampling without replacement or an efficient MCMC algorithm which samples models using a tree structure of the model space as an efficient hash table. See Clyde, Ghosh and Littman (2010) <DOI:10.1198/jcgs.2010.09049> for details on the sampling algorithms. Uniform priors over all models or beta-binomial prior distributions on model size are allowed, and for large p truncated priors on the model space may be used to enforce sampling models that are full rank. The user may force variables to always be included in addition to imposing constraints that higher order interactions are included only if their parents are included in the model. This material is based upon work supported by the National Science Foundation under Division of Mathematical Sciences grant 1106891. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
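For example, assuming this description corresponds to the BAS package, a Zellner-Siow prior combined with a uniform prior over models might be fit as:

    library(BAS)  # assumption: the package described above

    fit <- bas.lm(mpg ~ ., data = mtcars,
                  prior = "ZS-null",       # Zellner-Siow Cauchy prior on coefficients
                  modelprior = uniform(),  # uniform prior over all models
                  method = "MCMC")         # sample models via the MCMC algorithm
    summary(fit)  # marginal inclusion probabilities and top models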
Dose-response modeling for negative-binomial distributed data with a variety of dose-response models. Covariate adjustment and Bayesian model averaging are supported. Functions are provided to easily obtain inference on the dose-response relationship and plot the dose-response curve.
This package provides a suite of Bayesian MI-LASSO variable selection methods for multiply-imputed datasets. The package includes four Bayesian MI-LASSO models using shrinkage (Multi-Laplace, Horseshoe, ARD) and Spike-and-Slab (Spike-and-Laplace) priors, along with tools for model fitting via MCMC, four-step projection predictive variable selection, and hyperparameter calibration. Methods are suitable for both continuous and binary covariates under missing-at-random or missing-completely-at-random assumptions. See Zou, J., Wang, S., and Chen, Q. (2025), "Bayesian MI-LASSO for Variable Selection on Multiply-Imputed Data," arXiv:2211.00114 <doi:10.48550/arXiv.2211.00114> for more details. We also provide the frequentist MI-LASSO function.