Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
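For example, a minimal sketch of calling this endpoint from Python with the requests library (the base URL here is only a placeholder for wherever this service is hosted, and the exact pagination header names and response body format are not shown):

    import requests

    # Placeholder host: substitute the address where this package search service runs.
    BASE_URL = "https://example.org"

    # Ask for the first page of up to 20 packages matching "hello".
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination details (e.g. the total number of pages) are carried in the response headers.
    print(resp.headers)
    # The response body holds the matching packages for this page.
    print(resp.text)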
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Estimate fish length-at-age models using MCMC analysis with rstan. This package allows a multimodel approach to growth fitting to be applied to length-at-age data and is supported by further analyses for model selection and presentation of results. The core methods of this package are presented in Smart and Grammer (2021) "Modernising fish and shark growth curves with Bayesian length-at-age models", PLOS ONE 16(2): e0246734 <doi:10.1371/journal.pone.0246734>.
Download statistics reported from the Bioconductor.org stats website.
This package creates bivariate choropleth maps using Leaflet. It provides tools for visualizing the relationship between two variables through a color matrix representation on an interactive map.
Facilitates many of the analyses performed in studies of behavioral economic demand. The package supports commonly-used options for modeling operant demand including (1) data screening proposed by Stein, Koffarnus, Snider, Quisenberry, & Bickel (2015; <doi:10.1037/pha0000020>), (2) fitting models of demand such as linear (Hursh, Raslear, Bauman, & Black, 1989, <doi:10.1007/978-94-009-2470-3_22>), exponential (Hursh & Silberberg, 2008, <doi:10.1037/0033-295X.115.1.186>) and modified exponential (Koffarnus, Franck, Stein, & Bickel, 2015, <doi:10.1037/pha0000045>), and (3) calculating numerous measures relevant to applied behavioral economists (Intensity, Pmax, Omax). Also supports plotting and comparing data.
This package provides tools to calibrate, validate, and make predictions with the General Unified Threshold model of Survival adapted for Bee species. The model is presented in the publications from Baas, J., Goussen, B., Miles, M., Preuss, T.G., Roessing, I. (2022) <doi:10.1002/etc.5423> and Baas, J., Goussen, B., Taenzler, V., Roeben, V., Miles, M., Preuss, T.G., van den Berg, S., Roessink, I. (2024) <doi:10.1002/etc.5871>, and is based on the GUTS framework of Jager, T., Albert, C., Preuss, T.G. and Ashauer, R. (2011) <doi:10.1021/es103092a>. The authors are grateful to Bayer A.G. for its financial support.
Implementation of the bunching estimator for kinks and notches. Allows for flexible estimation of the counterfactual (e.g. controlling for round-number bunching, accounting for other bunching masses within the bunching window, fixing the bunching point to be the minimum, maximum or median value in its bin, etc.). It produces publication-ready plots in the style followed since Chetty et al. (2011) <doi:10.1093/qje/qjr013>, with extensive options for customizing plots.
Download typicality rating datasets, generate new stereotype-based typicality ratings using large language models via the Inference Providers API (<https://huggingface.co/docs/inference-providers>), and evaluate them against human-annotated validation data. Also includes functions to extract stereotype strength and base-rate items from typicality matrices. For more details see Beucler et al. (2025) <doi:10.31234/osf.io/eqrfu_v1>.
Several implementations of a novel Bayesian hierarchical statistical model of nucleotide recoding RNA-seq experiments (NR-seq; TimeLapse-seq, SLAM-seq, TUC-seq, etc.) for analyzing and comparing NR-seq datasets (see Vock and Simon (2023) <doi:10.1261/rna.079451.122>). NR-seq is a powerful extension of RNA-seq that provides information about the kinetics of RNA metabolism (e.g., RNA degradation rate constants), which is notably lacking in standard RNA-seq data. The statistical model makes maximal use of these high-throughput datasets by sharing information across transcripts to significantly improve uncertainty quantification and increase statistical power. bakR includes a maximally efficient implementation of this model for conservative initial investigations of datasets. bakR also provides more highly powered implementations using the probabilistic programming language Stan to sample from the full posterior distribution. bakR performs multiple-test adjusted statistical inference with the output of these model implementations to help biologists separate signal from background. Methods to automatically visualize key results and detect batch effects are also provided.
Resurrects the standard plot method for shapes established by the base and graphics packages. This is suited to workflows that rely on the established and traditional idioms for plotting spatially coincident data where it belongs. This package depends on sf and only replaces the plot method.
Stock, options, and futures trading strategies for traders and investors with a bullish outlook are represented here through their graphs. The graphic indicators, strategies, calculations, functions and all the discussions are for academic, research, and educational purposes only, should not be construed as investment advice, and come with absolutely no liability. Guy Cohen ("The Bible of Options Strategies (2nd ed.)", 2015, ISBN: 9780133964028). Zura Kakushadze, Juan A. Serur ("151 Trading Strategies", 2018, ISBN: 9783030027919). John C. Hull ("Options, Futures, and Other Derivatives (11th ed.)", 2022, ISBN: 9780136939979).
Makes it easy to download financial data from Yahoo Finance <https://finance.yahoo.com/>.
Collect data from and make posts on Bluesky Social via the Hypertext Transfer Protocol (HTTP) Application Programming Interface (API), as documented at <https://atproto.com/specs/xrpc>. This further supports broader queries to the Authenticated Transfer (AT) Protocol <https://atproto.com/> which Bluesky Social relies on. Data is returned in a tidy format and posts can be made using a simple interface.
This package provides an interface to data provided by the Bank for International Settlements <https://www.bis.org>, allowing for programmatic retrieval of a large quantity of (central) banking data.
This package implements z-test, t-test, and normal moment prior Bayes factors based on summary statistics, along with functionality to perform corresponding power and sample size calculations as described in Pawel and Held (2025) <doi:10.1080/00031305.2025.2467919>.
Defines the functions used to compute the bimodal index as defined by Wang et al. (2009) <https://pmc.ncbi.nlm.nih.gov/articles/PMC2730180/>, <doi:10.4137/CIN.S2846>.
Algorithms for computing the Bayesian cluster validity index (BCVI) and generating plots with and without error bars (O. Preedasawakul and N. Wiroonsri, A Bayesian Cluster Validity Index, Computational Statistics & Data Analysis, 202, 108053, 2025, <doi:10.1016/j.csda.2024.108053>), based on several underlying cluster validity indices (CVIs) including the Calinski-Harabasz, Chou-Su-Lai, Davies-Bouldin, Dunn, Pakhira-Bandyopadhyay-Maulik, Point biserial correlation, score function, Starczewski, and Wiroonsri indices for hard clustering, and the Correlation Cluster Validity, generalized C, HF, KWON, KWON2, Modified Pakhira-Bandyopadhyay-Maulik, Pakhira-Bandyopadhyay-Maulik, Tang, Wiroonsri-Preedasawakul, Wu-Li, and Xie-Beni indices for soft clustering. The package is compatible with K-means, fuzzy C-means, EM clustering, and hierarchical clustering (single, average, and complete linkage). Though BCVI is compatible with any existing underlying CVI, we recommend using either WI or WP as the underlying CVI.
This package provides a tuneable and interpretable method for relaxing the instrumental variables (IV) assumptions to infer treatment effects in the presence of unobserved confounding. For a treatment-associated covariate to be a valid IV, it must be (a) unconfounded with the outcome and (b) have a causal effect on the outcome that is exclusively mediated by the exposure. There is no general test of the validity of these IV assumptions for any particular pre-treatment covariate. However, if different pre-treatment covariates give differing causal effect estimates when treated as IVs, then we know at least some of the covariates violate these assumptions. budgetIVr exploits this fact by taking as input a minimum budget of pre-treatment covariates assumed to be valid IVs and identifying the set of causal effects that are consistent with the user's data and budget assumption. The following generalizations of this principle can be used in this package: (1) a vector of multiple budgets can be assigned alongside corresponding thresholds that model degrees of IV invalidity; (2) budgets and thresholds can be chosen using specialist knowledge or varied in a principled sensitivity analysis; (3) treatment effects can be nonlinear and/or depend on multiple exposures (at a computational cost). The methods in this package require only summary statistics. Confidence sets are constructed under the "no measurement error" (NOME) assumption from the Mendelian randomization literature. For further methodological details, please refer to Penn et al. (2024) <doi:10.48550/arXiv.2411.06913>.
Developed for the following tasks: (1) simulating and computing the maximum likelihood estimator for the Birnbaum-Saunders (BS) distribution; (2) computing the Bayesian estimator for the parameters of the BS distribution based on the reference prior proposed by Xu and Tang (2010) <doi:10.1016/j.csda.2009.08.004> and a conjugate prior; (3) computing the Bayesian estimator for the BS distribution based on a conjugate prior; (4) computing the Bayesian estimator for the BS distribution based on the Jeffreys prior given by Achcar (1993) <doi:10.1016/0167-9473(93)90170-X>; and (5) computing the Bayesian estimator for the BS distribution under a progressive type-II censoring scheme.
Easily launch, track, and control functions as background-parallel jobs. Includes robust utilities for job status, error handling, resource monitoring, and result collection. Designed for scalable workflows in interactive and automated settings (local or remote). Integrates with multiple backends; supports flexible automation pipelines and live job tracking. For more information, see <https://anirbanshaw24.github.io/bakerrr/>.
Estimating the average causal effect based on the Bayesian Adjustment for Confounding (BAC) algorithm.
The bullwhipgame is an educational game whose purpose is the illustration and exploration of the bullwhip effect, i.e., the increase in demand variability along the supply chain. Marchena Marlene (2010) <arXiv:1009.3977>.
In ancient Chinese mythology, Bai Ze is a divine creature that knows the needs of everything. baizer provides data processing functions frequently used by the author. Hope this package also knows what you want!
This package implements optimal matching with near-fine balance in large observational studies with the use of optimal calipers to get a sparse network. The caliper is optimal in the sense that it is as small as possible such that a matching exists. The main functions in the bigmatch package are optcal() to find the optimal caliper, optconstant() to find the optimal number of nearest neighbors, and nfmatch() to find a near-fine balance match with a caliper and a restriction on the number of nearest neighbors. Yu, R., Silber, J. H., and Rosenbaum, P. R. (2020). <DOI:10.1214/19-sts699>.
Create randomizations for block-randomized clinical trials. Can also produce a PDF file of randomization cards.