Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
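For example, a minimal query from Python using the requests library might look like the sketch below; the base URL and the exact pagination header names are placeholders, so substitute the real host and inspect the returned headers for the actual names.

# Minimal sketch of querying the package search API described above.
# BASE_URL is a placeholder; replace it with the real host.
import requests

BASE_URL = "https://example.org"  # placeholder, not the actual host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

packages = resp.json()   # matching packages
print(resp.headers)      # pagination information is returned in the headers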
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Using a Gaussian copula approach, this package generates simulated data mimicking a target real dataset. It supports normal, Poisson, empirical, and DESeq2 (negative binomial with size factors) marginal distributions. It uses a low-rank plus diagonal covariance matrix to efficiently generate omics-scale data. Methods are described in: Yang, Grant, and Brooks (2025) <doi:10.1101/2025.01.31.634335>.
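As a rough illustration of the underlying idea (not the package's interface; all names and parameter values below are made up), a Gaussian copula with a low-rank plus diagonal latent covariance can be sketched in a few lines of Python:

# Illustrative sketch: latent Gaussian draws with covariance W W' + diag(d),
# pushed through the normal CDF and Poisson quantile function to get
# copula-correlated counts. Purely conceptual; values are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, k = 200, 50, 3                      # samples, features, latent rank

W = rng.normal(size=(p, k)) * 0.5         # low-rank loadings
d = rng.uniform(0.5, 1.5, size=p)         # diagonal (noise) variances

Z = rng.normal(size=(n, k)) @ W.T + rng.normal(size=(n, p)) * np.sqrt(d)
Z /= np.sqrt((W**2).sum(axis=1) + d)      # standardize each margin

U = stats.norm.cdf(Z)                     # copula: uniform margins
mu = rng.uniform(1, 20, size=p)           # target Poisson means
counts = stats.poisson.ppf(U, mu)         # Poisson marginals, copula-correlated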
Solves quadratic programming problems using Richard L. Dykstra's cyclic projection algorithm. The routine allows for a combination of equality and inequality constraints. See Dykstra (1983) <doi:10.1080/01621459.1983.10477029> for details.
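For intuition, the sketch below applies Dykstra's cyclic projection to a toy problem: projecting a point onto the intersection of two convex sets, which corresponds to the quadratic program min ||x - x0||^2 subject to those constraints. The sets and iteration count are illustrative; this is not the package's interface.

# Dykstra's cyclic projection onto an intersection of convex sets.
import numpy as np

def project_nonneg(x):                      # projection onto {x : x >= 0}
    return np.maximum(x, 0.0)

def project_halfspace(x, a, b):             # projection onto {x : a.x <= b}
    slack = a @ x - b
    return x - (slack / (a @ a)) * a if slack > 0 else x

def dykstra(x0, projections, n_iter=500):
    x = x0.copy()
    increments = [np.zeros_like(x0) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = proj(x + increments[i])
            increments[i] = x + increments[i] - y
            x = y
    return x

a, b = np.array([1.0, 1.0]), 1.0
x0 = np.array([2.0, -0.5])
x_star = dykstra(x0, [project_nonneg, lambda x: project_halfspace(x, a, b)])
print(x_star)   # closest point to x0 with x >= 0 and x1 + x2 <= 1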
Computational tools for meta-analysis of diagnostic accuracy tests. Bootstrap-based methods for computing confidence intervals for the AUC of the summary ROC curve, as well as related AUC-based inference methods, are available (Noma et al. (2021) <doi:10.1080/23737484.2021.1894408>).
Double constrained correspondence analysis (dc-CA) analyzes (multi-)trait (multi-)environment ecological data by using the vegan package and native R code. Throughout, the two-step algorithm of ter Braak et al. (2018) is used. This algorithm combines and extends community- (sample-) and species-level analyses, i.e. the usual community weighted means (CWM)-based regression analysis and the species-level analysis of species-niche centroids (SNC)-based regression analysis. The two steps use canonical correspondence analysis to regress the abundance data onto the traits, and (weighted) redundancy analysis to regress the CWM of the orthonormalized traits onto the environmental predictors. The function dc_CA() has an option to divide the abundance data of a site by the site total, giving equal site weights. This division has the advantage that the multivariate analysis corresponds to an unweighted (multi-trait) community-level analysis, instead of being weighted. The first step of the algorithm uses vegan::cca(). The second step uses wrda(), or vegan::rda() if the site weights are equal. This version has a predict() function. For details see ter Braak et al. 2018 <doi:10.1007/s10651-017-0395-x> and ter Braak & van Rossum 2025 <doi:10.1016/j.ecoinf.2025.103143>.
This package provides tools for converting and imputing date values to the ISO 8601 standard format and for reconciling differences between two versions of a data set. The package automatically detects date patterns within data frame columns and converts them to consistent ISO-formatted dates, with optional imputation of missing day or month components based on user-defined rules. It also includes functionality to identify inserted, deleted, and updated records, as well as column- and value-level changes, when comparing old and new versions of a data frame. Only one date format may be applied within a single column.
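A rough sketch of the general idea (converting mixed date strings to ISO 8601 and imputing a missing day component) is given below in Python; the candidate formats and the first-of-month imputation rule are assumptions for illustration, not the package's own defaults.

# Illustrative only: try a few candidate formats, convert to ISO 8601,
# and impute the day when the matched format has no day component.
from datetime import datetime

def to_iso(value, formats=("%d/%m/%Y", "%Y-%m-%d", "%m-%Y"), impute_day=1):
    """Return an ISO 8601 date string, imputing the day when absent."""
    for fmt in formats:
        try:
            parsed = datetime.strptime(value, fmt)
        except ValueError:
            continue
        if "%d" not in fmt:                      # day missing: impute it
            parsed = parsed.replace(day=impute_day)
        return parsed.date().isoformat()
    return None                                   # unrecognized pattern

print(to_iso("03/07/2021"))   # -> 2021-07-03
print(to_iso("07-2021"))      # -> 2021-07-01 (day imputed)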
Detection of runs of homozygosity and of heterozygosity in diploid genomes using two methods: sliding windows (Purcell et al. (2007) <doi:10.1086/519795>) and consecutive runs (Marras et al. (2015) <doi:10.1111/age.12259>).
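The consecutive-runs idea can be illustrated with a short sketch that scans genotype calls and reports homozygous stretches of at least a minimum length; this is purely conceptual and ignores the allowed heterozygous or missing calls, marker distances, and densities that the actual methods handle.

# Conceptual sketch of "consecutive runs" detection of homozygosity.
def homozygosity_runs(genotypes, min_len=5):
    """genotypes: sequence of 0/1/2 allele counts; 1 means heterozygous."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g in (0, 2):                       # homozygous call
            start = i if start is None else start
        else:                                 # heterozygous: close any open run
            if start is not None and i - start >= min_len:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes) - 1))
    return runs

print(homozygosity_runs([0, 0, 2, 0, 0, 1, 2, 2, 2, 2, 2, 2], min_len=5))
# -> [(0, 4), (6, 11)]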
Plan optimal sample size allocation and go/no-go decision rules for phase II/III drug development programs with time-to-event, binary or normally distributed endpoints, assuming either fixed treatment effects or a prior distribution for the treatment effect, using methods from Kirchner et al. (2016) <doi:10.1002/sim.6624> and Preussler (2020). Optimal is meant in the sense of maximal expected utility, where the utility is a function taking into account the expected cost and benefit of the program. Extensions to more complex settings are possible, including bias correction (Preussler S et al. (2020) <doi:10.1186/s12874-020-01093-w>), multiple phase III trials (Preussler et al. (2019) <doi:10.1002/bimj.201700241>), multi-arm trials (Preussler et al. (2019) <doi:10.1080/19466315.2019.1702092>), and multiple endpoints (Kieser et al. (2018) <doi:10.1002/pst.1861>).
Clustered or multilevel data structures are common in the assessment of differential item functioning (DIF), particularly in the context of large-scale assessment programs. This package allows users to implement extensions of the Mantel-Haenszel DIF detection procedures in the presence of multilevel data based on the work of Begg (1999) <doi:10.1111/j.0006-341X.1999.00302.x>, Begg & Paykin (2001) <doi:10.1080/00949650108812115>, and French & Finch (2013) <doi:10.1177/0013164412472341>.
This package provides a deep neural network model with a monotonic increasing single index function tailored for periodontal disease studies. The residuals are assumed to follow a skewed T distribution, a skewed normal distribution, or a normal distribution. More details can be found at Liu, Huang, and Bai (2024) <doi:10.1016/j.csda.2024.108012>.
Implements dynamic linear models outlined in Shumway and Stoffer (2025) <doi:10.1007/978-3-031-70584-7>. Two model structures for data smoothing and forecasting are considered. The specific models proposed will be added once the manuscript is published.
This package provides a systems biology tool developed to repurpose drugs via a subpathway (SP) crosstalk network. The operation modes include 1) calculating centrality scores of SPs in the context of gene expression data to reflect the influence of SP crosstalk, 2) evaluating drug-disease reverse association based on disease- and drug-induced SPs weighted by the SP crosstalk, and 3) identifying candidate cancer drugs through perturbation analysis. Several functions are also provided to visualize the results.
This package provides a systems biology tool developed to repurpose drugs via a drug-drug functional similarity network. DrugSim2DR first predicts drug-drug functional similarity in the context of a specific disease, then uses the similarity to construct a weighted drug similarity network. Finally, it applies a network propagation algorithm on the network to identify drugs with significant target abnormalities as candidate drugs.
Solves ordinary and delay differential equations, where the objective function is written in either R or C. Suitable only for non-stiff equations, the solver uses a Dormand-Prince method that allows interpolation of the solution at any point. This approach is as described by Hairer, Norsett and Wanner (1993) <ISBN:3540604529>. Support is also included for iterating difference equations.
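The same Dormand-Prince 5(4) pair with continuous output is what SciPy's solve_ivp uses for its "RK45" method, so the general approach of solving a non-stiff equation and then interpolating the solution at arbitrary points can be sketched as follows (this illustrates the technique, not this package's interface):

# Solve a non-stiff ODE with a Dormand-Prince method and interpolate
# the solution at arbitrary times via dense output.
import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, y, r=1.5, K=10.0):
    return r * y * (1 - y / K)

sol = solve_ivp(logistic, t_span=(0, 10), y0=[0.1],
                method="RK45", dense_output=True)

t_query = np.linspace(0, 10, 5)
print(sol.sol(t_query))       # solution interpolated at arbitrary times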
We provide a set of functions for replicating the results of the Monte Carlo simulations and the empirical application of Jiang et al. (2022). In particular, we provide functions for generating the three types of random data described in that paper, as well as all the estimation strategies. Detailed information about the data generation process and estimation strategies can be found in Jiang et al. (2022) <doi:10.48550/arXiv.2201.13004>.
Tools, methods and processes for the management of analysis workflows. These lightweight solutions facilitate structuring R&D activities. These solutions were developed to comply with Good Documentation Practice (GDP), with ALCOA+ principles as proposed by the U.S. FDA, and with FAIR principles as discussed by Jacobsen et al. (2017) <doi:10.1162/dint_r_00024>.
Distributed Online Mean Tests (Domean) is a tool designed to efficiently process and analyze distributed datasets. It enables users to perform mean tests in an online, distributed manner, making it highly suitable for large-scale data analysis. By leveraging advanced computational techniques, Domean ensures robust and scalable solutions for statistical analysis, particularly in scenarios where data is dispersed across multiple nodes or sources. This package is ideal for researchers and practitioners working with high-dimensional data, providing a flexible and efficient framework for mean testing. The philosophy of Domean is described in Guo (2025) <doi:10.1016/j.physa.2024.130308>.
This package provides a collection of supervised discretization algorithms, which can also be grouped into top-down and bottom-up approaches.
Implements a function-on-scalar regression model in which the response function is dichotomized and observed sparsely. The package provides smooth estimates of the functional regression coefficients and principal components for the dichotomized functional response regression (dfrr) model.
Allows the computation of clustering coefficients for directed and weighted networks using different approaches, including clustering coefficients that are not available in the igraph package. A description of clustering coefficients can be found in "Directed clustering in weighted networks: a new perspective", Clemente, G.P., Grassi, R. (2017), <doi:10.1016/j.chaos.2017.12.007>.
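As a neighbouring illustration only, networkx provides Fagiolo's (2007) directed and weighted clustering coefficient, which differs from the Clemente & Grassi (2017) coefficients described above in how edge weights enter the definition; a short example:

# Directed (and weighted) clustering coefficients in networkx.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("b", "c", 2.0), ("c", "a", 0.5), ("a", "c", 1.5),
])

print(nx.clustering(G))                    # unweighted directed clustering
print(nx.clustering(G, weight="weight"))   # weighted directed clustering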
Functionality for manipulating values of associative maps. The package is a dependency for mvp-type packages that use the STL map class: it traps a plausible idiom that is ill-defined (implementation-specific) and returns an informative error rather than a possibly incorrect result. To cite the package in publications please use Hankin (2022) <doi:10.48550/ARXIV.2210.03856>.
DataSHIELD is an infrastructure and series of R packages that enables the remote and non-disclosive analysis of sensitive research data. This DataSHIELD Interface implementation is for analyzing datasets that live in the current R session. Its primary purpose is to support lightweight DataSHIELD analysis package development.
An R package for iterative and batched record linkage, and for applying epidemiological case definitions. diyar can be used for deterministic and probabilistic record linkage, or for multistage record linkage combining both approaches. It features nested match criteria and mechanisms to address missing data and conflicting matches during stepwise record linkage. Case definitions are implemented by assigning records to groups based on match criteria such as person or place, and on overlapping times or durations of events, e.g. sample collection dates or periods of hospital stays. Matching records are assigned a unique group ID. Index and duplicate records can then be removed or analysed further as required.
Using the Theory of Belief Functions for evidence calculus. Basic probability assignments, or mass functions, can be defined on the subsets of a set of possible values and combined. A mass function can be extended to a larger frame. Marginalization, i.e. reduction to a smaller frame, can also be done. These features can be combined to analyze small belief networks and take into account situations where information cannot be satisfactorily described by probability distributions.
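A tiny sketch of the combination step, Dempster's rule applied to two mass functions defined on subsets of a common frame, is shown below; it only illustrates the calculus described above and is not the package's API.

# Dempster's rule of combination for mass functions on frozenset-valued focal
# elements; conflicting mass is discarded and the rest renormalized.
def combine(m1, m2):
    joint, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                joint[C] = joint.get(C, 0.0) + a * b
            else:
                conflict += a * b
    return {C: v / (1.0 - conflict) for C, v in joint.items()}

frame = frozenset({"rain", "sun"})
m1 = {frozenset({"rain"}): 0.6, frame: 0.4}          # evidence 1
m2 = {frozenset({"sun"}): 0.3, frame: 0.7}           # evidence 2
print(combine(m1, m2))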
Estimates latent variables of public opinion cross-nationally and over time from sparse and incomparable survey data. DCPO uses a population-level graded response model with country-specific item bias terms. Sampling is conducted with Stan. References: Solt (2020) <doi:10.31235/osf.io/d5n9p>.