Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
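For example, a minimal Python sketch of calling the API (the base URL is a placeholder for this instance, and the response is assumed to be a JSON list of packages):

    import requests

    # Placeholder host; substitute the address of this search instance.
    BASE_URL = "https://example.org"

    # search is the query, page selects the page, limit caps items per page.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
        timeout=10,
    )
    resp.raise_for_status()

    # Assumed to be a JSON list of matching packages.
    packages = resp.json()
    print(len(packages), "packages on this page")

    # Pagination details (e.g. the number of pages) arrive in the response
    # headers; inspect them to see the exact header names this instance uses.
    print(dict(resp.headers))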
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements a novel predictive model, Partially Interpretable Estimators (PIE), which jointly trains an interpretable model and a black-box model to achieve high predictive performance as well as partial model interpretability. See Wang, Yang, Li, and Wang (2021) <doi:10.48550/arXiv.2105.02410>.
Supports maximum likelihood inference for the Pearson VII distribution with shape parameter 3/2 and free location and scale parameters. This distribution is relevant when estimating the velocity of processive motor proteins with random detachment.
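As a generic illustration (a Python/scipy sketch, not this package's interface), the free location and scale parameters can be fitted by numerical maximum likelihood; the density used below, 1/(2*scale) * (1 + z^2)^(-3/2) with z = (x - loc)/scale, is the Pearson VII density with shape 3/2:

    import numpy as np
    from scipy import optimize

    def neg_log_lik(params, x):
        """Negative log-likelihood of Pearson VII with shape fixed at 3/2."""
        loc, log_scale = params
        scale = np.exp(log_scale)              # keeps the scale positive
        z = (x - loc) / scale
        return -np.sum(-np.log(2.0 * scale) - 1.5 * np.log1p(z ** 2))

    # Toy data: a scaled, shifted t-distribution with 2 df, which is exactly
    # a Pearson VII distribution with shape parameter 3/2.
    rng = np.random.default_rng(0)
    x = 1.3 * rng.standard_t(df=2, size=500) + 4.0

    fit = optimize.minimize(neg_log_lik, x0=[np.median(x), 0.0], args=(x,))
    loc_hat, scale_hat = fit.x[0], np.exp(fit.x[1])
    print(loc_hat, scale_hat)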
This package provides a novel pseudo-value regression approach for differential co-expression network analysis in expression data, which can incorporate additional clinical variables into the model. Because it is a direct regression modeling approach to differential network analysis, it is computationally amenable for most users. The full methodological details can be found in Ahn S et al. (2023) <doi:10.1186/s12859-022-05123-w>.
Cluster analysis via nonparametric density estimation is performed. Operationally, the kernel method is used throughout to estimate the density. Diagnostic methods for evaluating the quality of the clustering are available. The package also includes a routine to estimate the probability density function obtained by the kernel method, given a set of data with arbitrary dimensions.
Historic Pell grant data as provided by the US Department of Education. This package contains data on how much Pell grant funding was awarded by each institution in each year. The raw data can be downloaded from <https://www2.ed.gov/finaid/prof/resources/data/pell-institution.html>.
Plots both fixed and random effects of linear mixed (multilevel) models in a single spaghetti plot. The package allows users to visualize the effect of a predictor on a criterion across different levels of a grouping variable. Additionally, confidence intervals can be displayed for fixed effects. Because of how predicted values of random effects are calculated, only models with one random intercept and/or one random slope can be plotted. Confidence intervals and predicted values of fixed effects are computed using the ggpredict function from the ggeffects package. Lüdecke, D. (2018) <doi:10.21105/joss.00638>.
This package provides a suite of likelihood ratio test-based methods for use in pharmacovigilance, along with various testing and post-processing functions.
Allows users to find a piecewise linear regression approximation to a given continuous univariate function within a specified error tolerance. Methods based on Warwicker and Rebennack (2025) "Efficient continuous piecewise linear regression for linearising univariate non-linear functions" <doi:10.1080/24725854.2023.2299809>.
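As a simplified illustration of the general idea (a greedy refinement sketch in Python, not the regression method of the paper), breakpoints can be added where the interpolation error is largest until the tolerance is met:

    import numpy as np

    def piecewise_linear_approx(f, lo, hi, tol, n_grid=1001):
        """Approximate f on [lo, hi] by linear interpolation between
        breakpoints, inserting a breakpoint at the worst-error grid point
        until the maximum absolute error drops below tol."""
        grid = np.linspace(lo, hi, n_grid)
        values = f(grid)
        breaks = [0, n_grid - 1]              # indices of current breakpoints
        while True:
            approx = np.interp(grid, grid[breaks], values[breaks])
            errors = np.abs(values - approx)
            worst = int(np.argmax(errors))
            if errors[worst] <= tol:
                return grid[breaks], values[breaks]
            breaks = sorted(set(breaks + [worst]))

    xs, ys = piecewise_linear_approx(np.sin, 0.0, 2 * np.pi, tol=0.01)
    print(len(xs), "breakpoints")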
Search CRAN metadata about packages by keyword, popularity, recent activity, package name, and more. Uses the R-hub search server (see <https://r-pkg.org>) and the CRAN metadata database, which contains information about CRAN packages. Note that this is _not_ a CRAN project.
This package provides a thin wrapper over PLINK 2's core libraries, offering an R interface for reading .pgen files. A minimal .pvar loader is also included. Chang et al. (2015) <doi:10.1186/s13742-015-0047-8>.
This package creates and manages a provenance graph corresponding to the provenance created by the rdtLite package, which collects provenance from R scripts. rdtLite is available on CRAN. The provenance format is an extension of the W3C PROV JSON format (<https://www.w3.org/Submission/2013/SUBM-prov-json-20130424/>). The extended JSON provenance format is described in <https://github.com/End-to-end-provenance/ExtendedProvJson>.
Detects markers of politeness in English natural language. This package allows researchers to easily visualize and quantify politeness between groups of documents. It combines prior research on the linguistic markers of politeness. We thank the Spencer Foundation, the Hewlett Foundation, and Harvard's Institute for Quantitative Social Science for support.
Visualizes the coverage depth of a complete plastid genome as well as the equality of its inverted repeat regions in relation to the circular, quadripartite genome structure and the location of individual genes. For more information, please see Gruenstaeudl and Jenke (2020) <doi:10.1186/s12859-020-3475-0>.
Deduplicates datasets by retaining the most complete and informative records. Identifies duplicated entries based on a specified key column, calculates completeness scores for each row, and compares values within groups. When differences between duplicates exceed a user-defined threshold, records are split into unique IDs; otherwise, they are coalesced into a single, most complete entry. Returns a list containing the original duplicates, the split entries, and the final coalesced dataset. Useful for cleaning survey or administrative data where duplicated IDs may reflect minor data entry inconsistencies.
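The same idea can be sketched generically (a Python/pandas illustration, not this package's interface), with a completeness score per row and a disagreement threshold that decides between splitting and coalescing:

    import pandas as pd

    def dedupe(df, key, threshold=0):
        """Coalesce duplicates of each key into the most complete record,
        or split them under suffixed IDs when they disagree on more than
        threshold columns."""
        out = []
        for key_value, group in df.groupby(key, sort=False):
            if len(group) == 1:
                out.append(group)
                continue
            # Completeness score: number of non-missing values per row.
            completeness = group.notna().sum(axis=1)
            # Columns on which the duplicates genuinely disagree.
            disagreements = sum(
                group[col].dropna().nunique() > 1
                for col in group.columns if col != key
            )
            if disagreements > threshold:
                # Too different: keep every row under a unique ID.
                split = group.copy()
                split[key] = [f"{key_value}_{i}" for i in range(1, len(group) + 1)]
                out.append(split)
            else:
                # Similar enough: start from the most complete row and fill
                # its gaps from the remaining duplicates.
                ordered = group.loc[completeness.sort_values(ascending=False).index]
                merged = ordered.iloc[0]
                for _, row in ordered.iloc[1:].iterrows():
                    merged = merged.combine_first(row)
                out.append(merged.to_frame().T)
        return pd.concat(out, ignore_index=True)

    # Two records share the ID "a" and agree wherever both are non-missing,
    # so they are coalesced into one complete row.
    df = pd.DataFrame({"id": ["a", "a", "b"],
                       "age": [30, None, 25],
                       "city": [None, "Rome", "Oslo"]})
    print(dedupe(df, key="id"))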
Large-scale gene expression studies allow gene network construction to uncover associations among genes. This package is developed for estimating and testing partial correlation graphs with prior information incorporated.
This package provides functions to simulate point prevalence studies (PPSs) of healthcare-associated infections (HAIs) and to convert prevalence to incidence in steady-state setups. Companion package to the preprint Willrich et al., From prevalence to incidence - a new approach in the hospital setting, <doi:10.1101/554725>, where the methods are explained in detail.
This package uses a mixed integer mathematical programming (MIP) approach for building and solving multi-action planning problems, where the goal is to find an optimal combination of management actions that abate threats in an efficient way while accounting for spatial aspects, thus optimizing the connectivity and conservation effectiveness of the prioritized units and of the deployed actions. The package is capable of handling different commercial (gurobi, CPLEX) and non-commercial (symphony, CBC) MIP solvers. The Gurobi optimization solver can be installed following the comprehensive instructions in the gurobi installation vignette of the prioritizr package (available at <https://prioritizr.net/articles/gurobi_installation_guide.html>). Alternatively, the CPLEX optimization solver can be obtained from the IBM CPLEX web page (available at <https://www.ibm.com/es-es/products/ilog-cplex-optimization-studio>). Additionally, the rcbc R package (available at <https://github.com/dirkschumacher/rcbc>) can be used to obtain solutions using the CBC optimization software (<https://github.com/coin-or/Cbc>). The methods used in the package refer to Salgado-Rojas et al. (2020) <doi:10.1016/j.ecolmodel.2019.108901>, Beyer et al. (2016) <doi:10.1016/j.ecolmodel.2016.02.005>, Cattarino et al. (2015) <doi:10.1371/journal.pone.0128027>, and Watts et al. (2009) <doi:10.1016/j.envsoft.2009.06.005>. See the prioriactions website for more information, documentation, and examples.
Perform permutation-based hypothesis testing for randomized experiments as suggested in Ludbrook & Dudley (1998) <doi:10.2307/2685470> and Ernst (2004) <doi:10.1214/088342304000000396>, introduced in Pham et al. (2022) <doi:10.1016/j.chemosphere.2022.136736>.
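As a conceptual illustration of permutation-based testing (a generic Python sketch, not this package's interface), a two-sample test of a difference in means looks like this:

    import numpy as np

    def permutation_test(x, y, n_perm=10_000, seed=0):
        """Two-sided permutation test for a difference in group means:
        repeatedly reshuffle group labels and compare the observed
        difference with the permutation distribution."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        observed = np.mean(x) - np.mean(y)
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            diff = np.mean(perm[:len(x)]) - np.mean(perm[len(x):])
            if abs(diff) >= abs(observed):
                count += 1
        return (count + 1) / (n_perm + 1)   # add-one correction

    x = np.array([4.2, 5.1, 6.3, 5.8])
    y = np.array([3.9, 4.0, 4.4, 3.7])
    print(permutation_test(x, y))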
Sample size calculations for practical equivalence trial design with a time to event endpoint.
This package provides a set of concise and efficient tools for statistical production; it can also be used for data management. In statistical production you deal with complex data and need to control your process at each step of your work, and concise functions are very helpful because you do not hesitate to use them. The following functions are included in the package. dup checks duplicates. miss checks missing values. tac computes a contingency table of all columns. toc compares two tables, spotting significant deviations. chi2_find compares columns within a data.frame, spotting related categories (a more complex function).
This package implements a unified interface for benchmarking meta-analytic publication bias correction methods through simulation studies (see Bartoš et al., 2025, <doi:10.48550/arXiv.2510.19489>). It provides 1) predefined data-generating mechanisms from the literature, 2) functions for running meta-analytic methods on simulated data, 3) pre-simulated datasets and pre-computed results for reproducible benchmarks, and 4) tools for visualizing and comparing method performance.
Read depth data from genotyping-by-sequencing (GBS) or restriction site-associated DNA sequencing (RAD-seq) are imported and used to make Bayesian probability estimates of genotypes in polyploids or diploids. The genotype probabilities, posterior mean genotypes, or most probable genotypes can then be exported for downstream analysis. polyRAD is described by Clark et al. (2019) <doi:10.1534/g3.118.200913>, and the Hind/He statistic for marker filtering is described by Clark et al. (2022) <doi:10.1186/s12859-022-04635-9>. A variant calling pipeline for highly duplicated genomes is also included and is described by Clark et al. (2020, Version 1) <doi:10.1101/2020.01.11.902890>.
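The underlying Bayesian step can be sketched generically (a conceptual Python illustration with a plain binomial read model, not polyRAD's actual model): given alt and total read counts at a marker, compute a posterior over allele dosage:

    import numpy as np
    from scipy import stats

    def genotype_posterior(alt_reads, total_reads, ploidy=4, error=0.01, prior=None):
        """Posterior over allele dosage 0..ploidy given read counts, using a
        binomial read model with a sequencing-error floor on allele fractions."""
        dosages = np.arange(ploidy + 1)
        # Expected alt-allele fraction per dosage, bounded away from 0 and 1.
        p_alt = np.clip(dosages / ploidy, error, 1 - error)
        lik = stats.binom.pmf(alt_reads, total_reads, p_alt)
        prior = np.full(ploidy + 1, 1.0 / (ploidy + 1)) if prior is None else np.asarray(prior)
        post = lik * prior
        return post / post.sum()

    # 7 alt reads out of 20 in a tetraploid: dosages 1 (25%) and 2 (50%)
    # carry most of the posterior mass.
    print(genotype_posterior(alt_reads=7, total_reads=20).round(3))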
Translates beliefs into prior information in the form of Beta and Gamma distributions. It can be used to generate priors on the prevalence of disease, the sensitivity/specificity of diagnostic tests, and any other binomial experiment.
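For instance (a generic scipy sketch, not this package's functions), a Beta prior on prevalence can be elicited from a best guess and an upper bound by solving for the parameters numerically:

    from scipy import optimize, stats

    def beta_prior_from_beliefs(mode, upper, upper_prob=0.95):
        """Find a Beta(a, b) prior whose mode matches a best guess and whose
        upper_prob quantile matches a stated upper bound."""
        def b_from_a(a):
            # For a, b > 1 the Beta mode is (a - 1) / (a + b - 2); solve for b.
            return 1 + (a - 1) * (1 - mode) / mode

        def gap(a):
            return stats.beta.ppf(upper_prob, a, b_from_a(a)) - upper

        a = optimize.brentq(gap, 1.0 + 1e-6, 1e4)
        return a, b_from_a(a)

    # Belief: prevalence is most likely around 10% and almost surely below 25%.
    a, b = beta_prior_from_beliefs(mode=0.10, upper=0.25)
    print(round(a, 2), round(b, 2))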
This package provides a tool, grammar, and standard to represent and exchange R package source code as text files. Converts one or more source packages to a text file and restores the package structures from the file.