Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
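For instance, a minimal Python sketch of calling this endpoint with the requests library (the host below is a placeholder, and the JSON body shape and exact pagination header names are assumptions, since only the query parameters are documented here):

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's own host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

print(dict(resp.headers))  # pagination info (number of pages, etc.) is returned here
print(resp.json())         # assumed: the body is JSON describing the matching packages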
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An implementation of algorithms for estimating the graphical lasso regularization parameter, as described in Pedro Cisneros-Velarde, Alexander Petersen and Sang-Yun Oh (2020) <http://proceedings.mlr.press/v108/cisneros20a.html>.
This package provides tools for basic and advanced cancer statistics and graphics. It groups individual data, merges registry data and population data, and calculates age-specific rates, age-standardized rates, cumulative risk, and estimated annual percentage rate with standard errors. It creates graphics across variables and time, such as age-specific trends, bar charts and period-cohort trends.
Generates polygon straight skeletons and 3D models. Provides functions to create and visualize interior polygon offsets, 3D beveled polygons, and 3D roof models.
Perform robust estimation and inference in platform trials and other master protocol trials. Yuhan Qian, Yifan Yi, Jun Shao, Yanyao Yi, Gregory Levin, Nicole Mayer-Hamblett, Patrick J. Heagerty, Ting Ye (2025) <doi:10.48550/arXiv.2411.12944>.
Robust methods for high-dimensional data, in particular linear model selection techniques based on least angle regression and sparse regression. Specifically, the package implements robust least angle regression (Khan, Van Aelst & Zamar, 2007; <doi:10.1198/016214507000000950>), (robust) groupwise least angle regression (Alfons, Croux & Gelper, 2016; <doi:10.1016/j.csda.2015.02.007>), and sparse least trimmed squares regression (Alfons, Croux & Gelper, 2013; <doi:10.1214/12-AOAS575>).
This package provides functionality for carrying out sample size estimation and power calculation in Respondent-Driven Sampling.
This package implements the methodology of "Cannings, T. I. and Samworth, R. J. (2017) Random-projection ensemble classification, J. Roy. Statist. Soc., Ser. B. (with discussion), 79, 959--1035". The random projection ensemble classifier is a general method for classification of high-dimensional data, based on careful combination of the results of applying an arbitrary base classifier to random projections of the feature vectors into a lower-dimensional space. The random projections are divided into non-overlapping blocks, and within each block the projection yielding the smallest estimate of the test error is selected. The random projection ensemble classifier then aggregates the results of applying the base classifier on the selected projections, with a data-driven voting threshold to determine the final assignment.
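As a rough illustration of the procedure described above (not the package's own interface), here is a simplified Python sketch: logistic regression stands in for the arbitrary base classifier, a single validation split estimates the test error, and a fixed 0.5 voting threshold replaces the data-driven one.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def random_projection(p, d):
    # Gaussian random projection matrix from R^p to R^d
    return rng.normal(size=(p, d)) / np.sqrt(d)

def rp_ensemble_predict(X, y, X_new, d=2, n_blocks=20, block_size=10, threshold=0.5):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    votes = np.zeros(len(X_new))
    for _ in range(n_blocks):
        best_err, best_proj, best_clf = np.inf, None, None
        for _ in range(block_size):
            # within each block, keep the projection with the smallest estimated test error
            A = random_projection(X.shape[1], d)
            clf = LogisticRegression().fit(X_tr @ A, y_tr)
            err = 1.0 - clf.score(X_val @ A, y_val)
            if err < best_err:
                best_err, best_proj, best_clf = err, A, clf
        votes += best_clf.predict(X_new @ best_proj)
    # aggregate the selected projections by voting
    return (votes / n_blocks >= threshold).astype(int)

# toy data: 200 points in 50 dimensions whose class depends on two coordinates
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(rp_ensemble_predict(X, y, X[:5]))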
TRUST4 or MiXCR is used to identify the clonotypes. The goal of rTCRBCRr is to process the results from these clonotyping tools and analyze clonotype repertoire metrics based on chain names and IGH isotypes. The manuscript is still in preparation for publication; references describing the methods in this package will be added later.
Ensemble model for classification, regression and unsupervised learning, based on a forest of unpruned, randomized binary decision trees. Each tree is grown by sampling, with replacement, a set of variables at each node. Each cut-point is generated randomly, according to the continuous Uniform distribution. For each tree, data are either bootstrapped or subsampled. The unsupervised mode introduces clustering, dimension reduction and variable importance, using a three-layer engine. Random Uniform Forests are mainly aimed at lowering the correlation between trees (or tree residuals), providing a deep analysis of variable importance, and allowing native distributed and incremental learning.
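A simplified Python sketch of this node-splitting rule, sampling candidate variables with replacement and drawing each cut-point uniformly over the variable's observed range, might look as follows (illustration only, not the package's implementation):

import numpy as np

rng = np.random.default_rng(0)

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def random_uniform_split(X, y, mtry=3):
    best = None
    for j in rng.integers(0, X.shape[1], size=mtry):    # variables sampled with replacement
        cut = rng.uniform(X[:, j].min(), X[:, j].max())  # uniform random cut-point
        left = X[:, j] <= cut
        if left.all() or not left.any():
            continue
        score = left.mean() * gini(y[left]) + (1 - left.mean()) * gini(y[~left])
        if best is None or score < best[0]:
            best = (score, j, cut)
    return best  # (weighted impurity, variable index, cut-point)

X = rng.normal(size=(100, 5))
y = (X[:, 2] > 0).astype(int)
print(random_uniform_split(X, y))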
Work with the PhyloPic Web Service (<http://api-docs.phylopic.org/v2/>) to fetch silhouette images of organisms. Includes functions for adding silhouettes to both base R plots and ggplot2 plots.
The Echo Nest <http://the.echonest.com> is the industry's leading music intelligence company, providing developers with a deep understanding of music content and music fans. This package can be used to access artist data, including songs, blogs, news, reviews, etc. Song data, including audio summary, style, danceability, tempo, etc., can also be accessed.
This package provides a tree bootstrap method for estimating uncertainty in respondent-driven samples (RDS). Quantiles are estimated by multilevel resampling in a way that preserves the dependencies of, and accounts for the high variability of, the RDS process.
Stochastic Dominance (SD) is the classical way of comparing two random prospects using their distribution functions. Almost Stochastic Dominance (ASD) has been developed to cover SD failures due to extreme utility functions. This package focuses on classical and heuristic methods for testing first- and second-order SD and ASD given the probability mass functions (PMFs) of the random prospects. The goal is to apply these methods easily, efficiently, and effectively to real-world datasets. For more details see Hanoch and Levy (1969) <doi:10.2307/2296431>, Leshno and Levy (2002) <doi:10.1287/mnsc.48.8.1074.169>, and Tzeng et al. (2012) <doi:10.1287/mnsc.1120.1616>.
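As an illustration of the classical definitions, a small Python sketch testing whether one PMF dominates another at first or second order, on a common, unit-spaced support:

import numpy as np

def dominates(pmf_a, pmf_b, order=1):
    # first order: F_A(x) <= F_B(x) everywhere, strictly somewhere
    F_a, F_b = np.cumsum(pmf_a), np.cumsum(pmf_b)
    if order == 2:
        # second order: cumulated CDFs are compared (unit-spaced support assumed)
        F_a, F_b = np.cumsum(F_a), np.cumsum(F_b)
    return bool(np.all(F_a <= F_b) and np.any(F_a < F_b))

# two prospects on the support {0, 1, 2, 3}
a = np.array([0.1, 0.2, 0.3, 0.4])
b = np.array([0.4, 0.3, 0.2, 0.1])
print(dominates(a, b, order=1), dominates(a, b, order=2))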
Sends texts to the <https://www.receptiviti.com> API to be scored, and facilitates the creation of custom norms and local results databases.
Estimates robust rank-based fixed effects and predicts robust random effects in two- and three-level nested random effects models. The methodology is described in Bilgic & Susmann (2013) <https://journal.r-project.org/archive/2013/RJ-2013-027/>.
Rcmdr menu support for many of the functions in the HH package. The focus is on menu items for functions we use in our introductory courses.
Provides estimation and data generation tools for the quantile generalized beta regression model. For details, see Bourguignon, Gallardo and Saulo <arXiv:2110.04428>. The package also provides tools to perform covariate selection.
Parameter estimation and linear regression models for the reliability distribution families reviewed by Almalki & Nadarajah (2014) <doi:10.1016/j.ress.2013.11.010>, using Generalized Additive Models for Location, Scale and Shape (GAMLSS) by Rigby & Stasinopoulos (2005) <doi:10.1111/j.1467-9876.2005.00510.x>.
We generate random variables following the general Marchenko-Pastur and Tracy-Widom distributions. We compute limits and distributions of eigenvalues and generalized components of spiked covariance matrices. We estimate all population eigenvalues of the spiked covariance matrix model, provide tests of the population covariance matrix, and perform matrix denoising for the signal-plus-noise model.
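For intuition, a short numpy illustration of the Marchenko-Pastur law: the sample-covariance eigenvalues of pure-noise data concentrate on the interval [(1 - sqrt(c))^2, (1 + sqrt(c))^2], where c = p/n.

import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500                       # aspect ratio c = p / n = 0.25
X = rng.normal(size=(n, p))
S = X.T @ X / n                        # sample covariance (population covariance = I)
eig = np.linalg.eigvalsh(S)

c = p / n
print(eig.min(), eig.max())                            # empirical edges
print((1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2)    # theoretical MP support edges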
Circular / ring buffers in R and C. There are several different buffers here, with implementations that represent different trade-offs.
Modeling and plotting functions for Reliability Growth Analysis (RGA). Models include the Duane (1962) <doi:10.1109/TA.1964.4319640>, Non-Homogeneous Poisson Process (NHPP) by Crow (1975) (No. AMSAATR138), Piecewise Weibull NHPP by Guo et al. (2010) <doi:10.1109/RAMS.2010.5448029>, and Piecewise Weibull NHPP with Change Point Detection based on the segmented package by Muggeo (2024) <https://cran.r-project.org/package=segmented>.
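As a rough illustration of the first of these models (and not of the package's API), the Duane relationship, with cumulative failures N(t) roughly proportional to lambda * t^beta, can be fitted by least squares on the log-log scale; the data below are invented for the example.

import numpy as np

t = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])   # cumulative test times (toy data)
N = np.array([3.0, 6.0, 10.0, 16.0, 26.0])            # cumulative failure counts (toy data)

# log N(t) = log(lambda) + beta * log(t), fitted by ordinary least squares
beta, log_lambda = np.polyfit(np.log(t), np.log(N), 1)
print("beta =", beta, "lambda =", np.exp(log_lambda))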
STG is a method for feature selection in neural networks. The procedure is based on a probabilistic relaxation of the l0 norm of the features, i.e. the count of selected features. The framework simultaneously learns either a nonlinear regression or classification function while selecting a small subset of features. Read more: Yamada et al. (2020) <https://proceedings.mlr.press/v119/yamada20a.html>.
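A minimal numpy sketch of the Gaussian-based stochastic gate relaxation from Yamada et al. (2020), shown only to illustrate the idea; the full method learns the gate parameters jointly with a regression or classification network.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.5
mu = np.array([0.8, 0.1, -0.5, 0.3])     # per-feature gate parameters (learned in practice)

eps = rng.normal(scale=sigma, size=mu.shape)
z = np.clip(mu + eps, 0.0, 1.0)          # stochastic gates in [0, 1], multiplied onto features

expected_l0 = norm.cdf(mu / sigma).sum() # expected number of open gates: a differentiable l0 surrogate
print(z, expected_l0)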
Use JSON templates to create folder and file structures for data science projects. Includes customized templates and accepts your own as JSON files.
This package implements two methods of estimating runs scored in a softball scenario: (1) theoretical expectation using discrete Markov chains and (2) empirical distribution using multinomial random simulation. Scores are based on player-specific input probabilities (out, single, double, triple, walk, and homerun). Optional inputs include probability of attempting a steal, probability of succeeding in an attempted steal, and an indicator of whether a player is "fast" (e.g. the player could stretch home). These probabilities may be calculated from common player statistics that are publicly available on teams' webpages. Scores are evaluated based on a nine-player lineup and may be used to compare lineups, evaluate base scenarios, and compare the offensive potential of individual players. Manuscript forthcoming. See Bukiet & Harold (1997) <doi:10.1287/opre.45.1.14> for implementation of discrete Markov chains.
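A heavily simplified Python sketch of the simulation approach: each plate appearance draws an outcome from the batter's probabilities, runners advance by the number of bases of the hit, walks advance only forced runners, and steals and "fast" runners are ignored (the package models those details; the lineup below is invented for the example).

import numpy as np

rng = np.random.default_rng(0)
OUTCOMES = ["out", "single", "double", "triple", "walk", "homerun"]
BASES_GAINED = {"single": 1, "double": 2, "triple": 3, "homerun": 4}

def simulate_game(lineup_probs, innings=7):
    # lineup_probs: nine probability vectors over OUTCOMES, in batting order
    runs, batter = 0, 0
    for _ in range(innings):
        outs, bases = 0, [0, 0, 0]            # occupancy of 1st, 2nd, 3rd base
        while outs < 3:
            outcome = rng.choice(OUTCOMES, p=lineup_probs[batter % 9])
            batter += 1
            if outcome == "out":
                outs += 1
            elif outcome == "walk":
                if bases[0]:                  # a walk advances only forced runners
                    if bases[1]:
                        if bases[2]:
                            runs += 1
                        bases[2] = 1
                    bases[1] = 1
                bases[0] = 1
            else:                             # hit: batter and all runners advance
                bases = [1] + bases           # batter reaches base...
                for _ in range(BASES_GAINED[outcome] - 1):
                    bases = [0] + bases       # ...and extra bases push everyone along
                runs += sum(bases[3:])        # anyone pushed past third scores
                bases = bases[:3]
    return runs

# toy lineup: nine identical average hitters
p = [0.65, 0.18, 0.06, 0.01, 0.08, 0.02]
games = [simulate_game([p] * 9) for _ in range(2000)]
print(np.mean(games))                         # estimated expected runs per game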