Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
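For instance, a minimal sketch of calling this endpoint from Python; the base URL is a placeholder and the exact pagination header name is an assumption for illustration:

    import requests

    # Search the package index; replace example.org with the real host.
    resp = requests.get(
        "https://example.org/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination metadata arrives in the response headers; the header
    # name "X-Total-Pages" is assumed here for illustration.
    print(resp.headers.get("X-Total-Pages"))

    # Body assumed to be JSON: the matching packages for this page.
    print(resp.json())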
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to perform comparative causal mediation analysis to compare the mediation effects of different treatments via a common mediator. Results contain the estimates and confidence intervals for the two comparative causal mediation analysis estimands, as well as the ATE and ACME for each treatment. Functions provided in the package will automatically assess the comparative causal mediation analysis scope conditions (i.e. for each comparative causal mediation estimand, a numerator and denominator that are both estimated with the desired statistical significance and of the same sign). Results will be returned for each comparative causal mediation estimand only if scope conditions are met for it. See details in Bansak (2020) <doi:10.1017/pan.2019.31>.
This package performs a series of offline and/or online change-point detection algorithms for:
1) univariate mean: <doi:10.1214/20-EJS1710>, <arXiv:2006.03283>;
2) univariate polynomials: <doi:10.1214/21-EJS1963>;
3) univariate and multivariate nonparametric settings: <doi:10.1214/21-EJS1809>, <doi:10.1109/TIT.2021.3130330>;
4) high-dimensional covariances: <doi:10.3150/20-BEJ1249>;
5) high-dimensional networks with and without missing values: <doi:10.1214/20-AOS1953>, <arXiv:2101.05477>, <arXiv:2110.06450>;
6) high-dimensional linear regression models: <arXiv:2010.10410>, <arXiv:2207.12453>;
7) high-dimensional vector autoregressive models: <arXiv:1909.06359>;
8) high-dimensional self-exciting point processes: <arXiv:2006.03572>;
9) dependent dynamic nonparametric random dot product graphs: <arXiv:1911.07494>;
10) univariate mean against adversarial attacks: <arXiv:2105.10417>.
Data sets used for copula modeling in addition to those in the R package 'copula'. These include a random subsample from the US National Education Longitudinal Study (NELS) of 1988 and nursing home data from Wisconsin.
This package provides a simple way to grab cheat sheets and save them to your local computer.
Implementation of transductive conformal prediction (see Vovk, 2013, <doi:10.1007/978-3-642-41142-7_36>) and inductive conformal prediction (see Balasubramanian et al., 2014, ISBN:9780124017153) for classification problems.
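As a minimal sketch of the inductive (split) conformal idea for classification (illustrating the general technique, not this package's API), assuming scikit-learn for the underlying classifier:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def icp_pvalues(X_train, y_train, X_cal, y_cal, x_new):
        """Inductive conformal p-values for one new point, one per candidate label."""
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        idx = {lab: i for i, lab in enumerate(model.classes_)}
        # Nonconformity score: 1 minus the predicted probability of the label.
        cal_prob = model.predict_proba(X_cal)
        cal_scores = 1.0 - cal_prob[np.arange(len(y_cal)), [idx[y] for y in y_cal]]
        new_prob = model.predict_proba(np.asarray(x_new).reshape(1, -1))[0]
        # p-value per label: rank of its score among the calibration scores.
        return {lab: (np.sum(cal_scores >= 1.0 - new_prob[idx[lab]]) + 1)
                     / (len(cal_scores) + 1)
                for lab in model.classes_}

Labels whose p-value exceeds the chosen significance level form the prediction set.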
The currentSurvival package contains functions for the estimation of the current cumulative incidence (CCI) and the current leukaemia-free survival (CLFS). The CCI is the probability that a patient is alive and in any disease remission (e.g. complete cytogenetic remission in chronic myeloid leukaemia) after initiating his or her therapy (e.g. tyrosine kinase therapy for chronic myeloid leukaemia). The CLFS is the probability that a patient is alive and in any disease remission after achieving the first disease remission.
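In the notation suggested by this description (a sketch; t is measured from therapy initiation for the CCI and from the first disease remission for the CLFS):

\[ \mathrm{CCI}(t) = P(\text{alive and in any disease remission at time } t \text{ after therapy initiation}), \]
\[ \mathrm{CLFS}(t) = P(\text{alive and in any disease remission at time } t \text{ after the first disease remission}). \]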
This package provides functions for computing and visualizing generalized canonical discriminant analyses and canonical correlation analysis for a multivariate linear model. Traditional canonical discriminant analysis is restricted to a one-way MANOVA design and is equivalent to canonical correlation analysis between a set of quantitative response variables and a set of dummy variables coded from the factor variable. The candisc package generalizes this to higher-way MANOVA designs for all factors in a multivariate linear model, computing canonical scores and vectors for each term. The graphic functions provide low-rank (1D, 2D, 3D) visualizations of terms in an mlm via the plot.candisc and heplot.candisc methods. Related plots are now provided for canonical correlation analysis when all predictors are quantitative. Methods for linear discriminant analysis are now included.
Concept maps are versatile tools used across disciplines to enhance understanding, teaching, brainstorming, and information organization. This package provides functions for processing and visualizing concept mapping data, involving the sequential use of cluster analysis (for sorting participants and statements), multidimensional scaling (for positioning statements in a conceptual space), and visualization techniques, including point cluster maps and dendrograms. The methodology and its validity are discussed in Kampen, J.K., Hageman, J.A., Breuer, M., & Tobi, H. (2025). "The validity of concept mapping: let's call a spade a spade." Qual Quant. <doi:10.1007/s11135-025-02351-z>.
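A rough sketch of the pipeline just described (multidimensional scaling of a statement dissimilarity matrix, followed by cluster analysis), using scikit-learn and SciPy as stand-ins rather than this package's own functions:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.manifold import MDS

    # dissim: a symmetric statements-by-statements dissimilarity matrix;
    # random placeholder data here, standing in for participants' card sorts.
    rng = np.random.default_rng(0)
    raw = rng.random((10, 10))
    dissim = (raw + raw.T) / 2
    np.fill_diagonal(dissim, 0.0)

    # Position statements in a 2-D conceptual space.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)

    # Group statements into clusters for a point cluster map / dendrogram.
    tree = linkage(coords, method="ward")
    clusters = fcluster(tree, t=3, criterion="maxclust")
    print(coords[:3], clusters)

In practice the dissimilarity matrix would come from how often participants sorted two statements together, not random data.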
This package provides a tidied subset of the US College Scorecard dataset, containing institutional characteristics, enrollment, student aid, costs, and student outcomes at institutions of higher education in the United States.
Flexible framework for coalescent analyses in R. It includes a main function running the MCMC algorithm, auxiliary functions for tree rearrangement, and some functions to compute population genetic parameters. Extended description can be found in Paradis (2020) <doi:10.1201/9780429466700>. For details on the MCMC algorithm, see Kuhner et al. (1995) <doi:10.1093/genetics/140.4.1421> and Drummond et al. (2002) <doi:10.1093/genetics/161.3.1307>.
Collective matrix factorization (a.k.a. multi-view or multi-way factorization, Singh and Gordon (2008) <doi:10.1145/1401890.1401969>) tries to approximate a matrix X (potentially very sparse or with many missing values) as the product of two low-dimensional matrices, optionally aided with secondary information matrices about the rows and/or columns of X, which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing value imputation. Implements extensions of the original model (Cortes (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted implicit-feedback model (Hu, Koren, and Volinsky (2008) <doi:10.1109/ICDM.2008.22>), the weighted-lambda-regularization model (Zhou, Wilkinson, Schreiber, and Pan (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with implicit features (Rendle, Zhang, and Koren (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating least squares procedures (Koren, Bell, and Volinsky (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate gradient solver (Takacs, Pilaszy, and Tikk (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate descent solver (Franc, Hlavac, and Navara (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
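In rough notation, and omitting biases and observation weights, the shared-factor objective being minimized can be sketched (an illustrative simplification, not the package's exact formulation) as

\[ \min_{A,B,C,D}\; \lVert X - AB^{\top} \rVert_F^2 + \lVert U - AC^{\top} \rVert_F^2 + \lVert V - BD^{\top} \rVert_F^2 + \lambda \bigl( \lVert A \rVert_F^2 + \lVert B \rVert_F^2 + \lVert C \rVert_F^2 + \lVert D \rVert_F^2 \bigr), \]

where U and V hold the side information about the rows and columns of X, and the row factors A and column factors B are shared across the factorizations.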
Lightweight functions for computing descriptive statistics in different circular spaces (e.g., 2pi, 180, or 360 degrees), handling angle-dependent biases, padding circular data, and more. Specifically aimed at psychologists and neuroscientists analyzing circular data. Basic methods are based on Jammalamadaka and SenGupta (2001) <doi:10.1142/4031>; removal of cardinal biases is based on the approach introduced in van Bergen, Ma, Pratte, and Jehee (2015) <doi:10.1038/nn.4150> and Chetverikov and Jehee (2023) <doi:10.1038/s41467-023-43251-w>.
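The basic descriptive statistics follow the standard circular definitions: for angles \(\theta_1,\dots,\theta_n\) in radians (a common convention is to rescale other circular spaces to 2pi first),

\[ \bar{\theta} = \operatorname{atan2}\Bigl(\sum_{i=1}^{n}\sin\theta_i,\; \sum_{i=1}^{n}\cos\theta_i\Bigr), \qquad \bar{R} = \frac{1}{n}\sqrt{\Bigl(\sum_{i}\cos\theta_i\Bigr)^{2} + \Bigl(\sum_{i}\sin\theta_i\Bigr)^{2}}, \]

where \(\bar{\theta}\) is the circular mean and \(\bar{R}\) the mean resultant length.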
This package provides a dashboard that supports the use of Cromwell, a scientific workflow engine for command-line users. The package uses Cromwell's REST APIs and provides these convenient functions: timing diagrams for running workflows, Cromwell engine status, and a tabular workflow list. For more information about Cromwell, visit <http://cromwell.readthedocs.io>.
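The kind of Cromwell REST calls a dashboard like this builds on can be sketched as follows (the host and port are assumptions; endpoint paths follow the Cromwell REST API documentation):

    import requests

    base = "http://localhost:8000"  # assumed local Cromwell server

    # Engine status.
    print(requests.get(f"{base}/engine/v1/status").json())

    # Tabular list of currently running workflows.
    query = requests.get(f"{base}/api/workflows/v1/query",
                         params={"status": "Running"}).json()
    for wf in query.get("results", []):
        print(wf["id"], wf.get("status"))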
For Bayesian and classical inference and prediction with count-valued data, Simultaneous Transformation and Rounding (STAR) Models provide a flexible, interpretable, and easy-to-use approach. STAR models the observed count data using a rounded continuous data model and incorporates a transformation for greater flexibility. Implicitly, STAR formalizes the commonly-applied yet incoherent procedure of (i) transforming count-valued data and subsequently (ii) modeling the transformed data using Gaussian models. STAR is well-defined for count-valued data, which is reflected in predictive accuracy, and is designed to account for zero-inflation, bounded or censored data, and over- or underdispersion. Importantly, STAR is easy to combine with existing MCMC or point estimation methods for continuous data, which allows seamless adaptation of continuous data models (such as linear regressions, additive models, BART, random forests, and gradient boosting machines) for count-valued data. The package also includes several methods for modeling count time series data, namely via warped Dynamic Linear Models. For more details and background on these methodologies, see the works of Kowal and Canale (2020) <doi:10.1214/20-EJS1707>, Kowal and Wu (2022) <doi:10.1111/biom.13617>, King and Kowal (2022) <arXiv:2110.14790>, and Kowal and Wu (2023) <arXiv:2110.12316>.
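Schematically, following the description above (a sketch of the STAR construction, with g the transformation and h the rounding operator):

\[ z \sim \text{Gaussian model}, \qquad y^{*} = g^{-1}(z), \qquad y = h(y^{*}), \]

so the observed count y arises by transforming and then rounding a continuous latent variable whose distribution comes from any Gaussian model (linear regression, additive model, BART, and so on).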
This package provides a simple interface for multivariate correlation analysis that unifies various classical statistical procedures including t-tests, tests in univariate and multivariate linear models, parametric and nonparametric tests for correlation, Kruskal-Wallis tests, common approximate versions of Wilcoxon rank-sum and signed rank tests, chi-squared tests of independence, score tests of particular hypotheses in generalized linear models, canonical correlation analysis and linear discriminant analysis.
This package provides a collection of functions to calculate composite indicators, focusing in particular on the normalisation and weighting-aggregation steps, as described in the OECD Handbook on Constructing Composite Indicators: Methodology and User Guide (2008), Vidoli, Fusco, and Mazziotta <doi:10.1007/s11205-014-0710-y>, Mazziotta and Pareto (2016) <doi:10.1007/s11205-015-0998-2>, Van Puyenbroeck and Rogge <doi:10.1016/j.ejor.2016.07.038>, and other authors.
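As one concrete illustration of the two steps (a textbook choice from the OECD handbook's catalogue, not the package's only option), min-max normalisation followed by linear weighted aggregation:

\[ z_{ij} = \frac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}, \qquad CI_i = \sum_j w_j z_{ij}, \qquad \sum_j w_j = 1, \]

where x_{ij} is the value of indicator j for unit i and w_j its weight.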
This package provides a multi-task learning approach to variable selection regression with highly correlated predictors and sparse effects, based on frequentist statistical inference. It provides statistical evidence to identify which subsets of predictors have non-zero effects on which subsets of response variables, motivated and designed for colocalization analysis across genome-wide association studies (GWAS) and quantitative trait loci (QTL) studies. The ColocBoost model is described in Cao et al. (2025) <doi:10.1101/2025.04.17.25326042>.
This package produces statistical indicators of the impact of migration on the socio-demographic composition of an area. Three measures can be used: ratios, percentages and the Duncan index of dissimilarity. The input data files are assumed to be in an origin-destination matrix format, with each cell representing a flow count between an origin and a destination area. Columns are expected to represent origins, and rows are expected to represent destinations. The first row and column are assumed to contain labels for each area. See Rodriguez-Vignoli and Rowe (2018) <doi:10.1080/00324728.2017.1416155> for technical details.
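Of the three measures, the Duncan index of dissimilarity has the standard closed form (a reference formula, independent of this package's notation):

\[ D = \frac{1}{2} \sum_{i=1}^{n} \left| \frac{a_i}{A} - \frac{b_i}{B} \right|, \]

where a_i and b_i are the counts of the two population groups in area i and A and B their totals; D runs from 0 (identical spatial distributions) to 1 (complete dissimilarity).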
This package provides R utilities to build unlevered and levered discounted cash flow (DCF) tables for commercial real estate (CRE) assets. Functions generate bullet and amortising debt schedules, compute credit metrics such as debt coverage ratios (DCR), debt service coverage ratios (DSCR), interest coverage ratios, debt yield ratios, and forward loan-to-value ratios (LTV) based on net operating income (NOI). The toolkit evaluates refinancing feasibility under alternative market scenarios and supports end-to-end scenario execution from a YAML (YAML Ain't Markup Language) configuration file parsed with 'yaml'. Includes helpers for sensitivity analysis, covenant diagnostics, and reproducible vignettes.
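The credit metrics listed above follow the usual definitions (reference formulas; the package's conventions for timing and aggregation may differ):

\[ \mathrm{DSCR} = \frac{\mathrm{NOI}}{\text{annual debt service}}, \qquad \text{debt yield} = \frac{\mathrm{NOI}}{\text{loan balance}}, \qquad \mathrm{LTV} = \frac{\text{loan balance}}{\text{property value}}. \]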
Extends ACER ConQuest through a family of functions designed to improve graphical outputs and help with advanced analysis (e.g., differential item functioning). Allows R users to call ACER ConQuest from within R and read ACER ConQuest System Files (generated by the command `put` <https://conquestmanual.acer.org/s4-00.html#put>). Requires ACER ConQuest version 5.40 or later. A demonstration version can be downloaded from <https://shop.acer.org/acer-conquest-5.html>.
This package provides a lightweight data validation and testing toolkit for R. Its guiding philosophy is that adding code-based data checks to users' existing workflows should be both quick and intuitive. The suite of functions included therefore mirrors the common data checks many users already perform by hand or by eye. Additionally, the checkthat package is optimized to work within tidyverse data manipulation pipelines.
This package provides a set of functions to manage CRAN-like repositories efficiently.
An interactive document on the topic of cluster analysis using the rmarkdown and shiny packages. Runtime examples are provided in the package function as well as at <https://analyticmodels.shinyapps.io/ClusterAnalysis/>.
CHAP-GWAS (Chromosomal Haplotype-Integrated Genome-Wide Association Study) provides a dynamically adaptive framework for genome-wide association studies (GWAS) that integrates chromosome-scale haplotypes with single nucleotide polymorphism (SNP) analysis. The method identifies and extends haplotype variants based on their phenotypic associations rather than predefined linkage blocks, enabling high-resolution detection of quantitative trait loci (QTL). By leveraging long-range phased haplotype information, CHAP-GWAS improves statistical power and offers a more comprehensive view of the genetic architecture underlying complex traits.