Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
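For example, here is a minimal Python sketch of calling this endpoint. The base URL is a placeholder, and because the exact pagination header names are not listed above, the example simply prints every response header:

    import requests

    # Placeholder host; substitute the server that provides this API.
    BASE_URL = "https://example.org"

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination information is returned in the response headers; the exact
    # header names are not documented above, so print them all.
    for name, value in resp.headers.items():
        print(f"{name}: {value}")

    # The matching packages come back in the response body (assumed to be JSON).
    print(resp.json())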
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a collection of functions dedicated to simulating staggered entry platform trials whereby the treatment under investigation is a combination of two active compounds. In order to obtain approval for this combination therapy, superiority of the combination over the two active compounds and superiority of the two active compounds over placebo need to be demonstrated. A more detailed description of the design can be found in Meyer et al. <DOI:10.1002/pst.2194> and a manual in Meyer et al. <arXiv:2202.02182>.
This package provides a tool for causal meta-analysis. It implements the aggregation formulas and inference methods proposed in Berenfeld et al. (2025) <doi:10.48550/arXiv.2505.20168>. Users can input aggregated data across multiple studies and compute causally meaningful aggregated effects of their choice (risk difference, risk ratio, odds ratio, etc.) under user-specified population weighting. The built-in function camea() obtains precise variance estimates for these effects and compares them to a classical meta-analysis aggregate, the random-effects model, as implemented in the metafor package <https://CRAN.R-project.org/package=metafor>.
Data analysis often requires coding, especially when data are collected through interviews, observations, or questionnaires. As a result, code counting and data preparation are essential steps in the analysis process. Analysts may need to count the codes in a text (tokenization, counting of pre-established codes, computing the line-by-line co-occurrence matrix) and prepare the data (e.g., min-max normalization, Z-score, robust scaling, Box-Cox transformation, and non-parametric bootstrap). For the Box-Cox transformation (Box & Cox, 1964, <https://www.jstor.org/stable/2984418>), the optimal lambda is determined using the log-likelihood method. The non-parametric bootstrap resamples the data randomly with replacement. Two random number generators are also included: a Lehmer congruential generator for the uniform distribution and a Box-Muller generator for the normal distribution. This package is intended for educational purposes.
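As an illustration of the underlying math only (not this package's R interface), here is a minimal Python sketch of a Lehmer congruential generator, the Box-Muller transform, and min-max normalization:

    import math

    def lehmer(seed, m=2**31 - 1, a=48271):
        """Lehmer (multiplicative congruential) generator yielding uniforms in (0, 1)."""
        x = seed
        while True:
            x = (a * x) % m
            yield x / m

    def box_muller(u1, u2):
        """Box-Muller transform: two independent uniforms -> two standard normals."""
        r = math.sqrt(-2.0 * math.log(u1))
        return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

    def min_max(xs):
        """Min-max normalization onto [0, 1]."""
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]

    gen = lehmer(seed=42)
    z1, z2 = box_muller(next(gen), next(gen))
    print(z1, z2, min_max([3, 7, 10]))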
Calculates the probabilities of k successes in n trials of a binomial random variable with non-negative correlation across trials. The function takes as inputs the level of correlation or association between trials, the success probability, and the number of trials, plus an optional input specifying the number of bits of precision used in the calculation and an optional input specifying whether the calculation follows Witt (2014) <doi:10.1080/03610926.2012.725148> or Kuk (2004) <doi:10.1046/j.1467-9876.2003.05369.x>. The output is a (trials+1)-dimensional vector containing the likelihoods of 0, 1, ..., trials successes.
This package provides functions to make lifetables and to calculate hazard function estimates using a Poisson regression model with splines. It includes a function to draw a simple flowchart of a cohort study; the function boxesLx() draws boxes of transition rates between states. The package uses Lexis data from the Epi package.
Calculate various cardiovascular disease risk scores from the Framingham Heart Study (FHS), the American College of Cardiology (ACC), and the American Heart Association (AHA) as described in D'Agostino et al. (2008) <doi:10.1161/circulationaha.107.699579>, Goff et al. (2013) <doi:10.1161/01.cir.0000437741.48606.98>, and McClelland et al. (2015) <doi:10.1016/j.jacc.2015.08.035>.
Retrieves historical versions of clinical trial registry entries from <https://ClinicalTrials.gov>. Package functionality and implementation for v 1.0.0 is documented in Carlisle (2022) <DOI:10.1371/journal.pone.0270909>.
Estimates latent-class vector-autoregressive models via the EM algorithm on time-series data for model-based clustering and classification. Includes model selection criteria for choosing the number of lags and the number of clusters.
This package provides a set of fast tools for converting a textual corpus into a set of normalized tables. Users may use the udpipe back end, which has no external dependencies, or a Python back end with spaCy <https://spacy.io>. Exposed annotation tasks include tokenization, part-of-speech tagging, named entity recognition, and dependency parsing.
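For reference, the annotation tasks listed above can be reproduced directly with spaCy in Python. This sketch is not the package's own interface, and it assumes the en_core_web_sm model has been downloaded separately:

    import spacy

    # Requires a model download first, e.g.: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The package converts a textual corpus into normalized tables.")

    # Tokenization, part-of-speech tagging, and dependency parsing.
    for token in doc:
        print(token.text, token.pos_, token.dep_, token.head.text)

    # Named entity recognition.
    for ent in doc.ents:
        print(ent.text, ent.label_)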
Maximum likelihood estimation in respondent driven samples.
Play casino games in the R console, including poker, blackjack, and a slot machine. Try to build your fortune before you succumb to the gambler's ruin!
Geospatial data computation is parallelized by grid, hierarchy, or raster files. Based on the future (Bengtsson, 2024 <doi:10.32614/CRAN.package.future>) and mirai (Gao et al., 2025 <doi:10.32614/CRAN.package.mirai>) parallel back ends, terra (Hijmans et al., 2025 <doi:10.32614/CRAN.package.terra>) and sf (Pebesma et al., 2024 <doi:10.32614/CRAN.package.sf>) functions, as well as convenience functions in the package, can be distributed over multiple threads. The simplest way to parallelize generic geospatial computation is to start from the par_pad_*() functions and then use par_grid(), par_hierarchy(), or par_multirasters(). Virtually any function accepting terra or sf classes can be used in these three parallelization functions. A common raster-vector overlay operation is provided as the function extract_at(), which uses exactextractr (Baston, 2023 <doi:10.32614/CRAN.package.exactextractr>) and offers kernel-weight options for summarizing raster values at vector geometries. Other convenience functions for vector-vector operations, including simple areal interpolation (summarize_aw()) and summation of exponentially decaying weights (summarize_sedc()), are also provided.
Quantitative characterization of the health impacts associated with exposure to chemical mixtures has received considerable attention in current environmental and epidemiological studies. The CompMix package allows practitioners to estimate the health impacts of exposure to chemical mixtures through various statistical approaches, including Lasso, Elastic net, Bayesian kernel machine regression (BKMR), hierNet, Quantile g-computation, Weighted quantile sum (WQS), and Random forest. See Hao W, Cathey A, Aung M, Boss J, Meeker J, Mukherjee B. (2024), "Statistical methods for chemical mixtures: a practitioner's guide" <DOI:10.1101/2024.03.03.24303677>.
This package provides functionality similar to the Microsoft Excel CUMPRINC function <https://support.microsoft.com/en-us/office/cumprinc-function-94a4516d-bd65-41a1-bc16-053a6af4c04d>. It returns the principal remaining at a given month, the principal paid in a month, and the accumulated principal paid by a given month, based on the original loan amount, the monthly interest rate, and the term of the loan.
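As an illustration of the arithmetic involved (not this package's API; the Python function names below are hypothetical), these quantities follow from the standard level-payment annuity formula:

    def monthly_payment(principal, monthly_rate, n_months):
        """Level payment for a fully amortizing loan (standard annuity formula)."""
        return principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_months)

    def balance_after(principal, monthly_rate, n_months, k):
        """Principal remaining after k payments."""
        pmt = monthly_payment(principal, monthly_rate, n_months)
        growth = (1 + monthly_rate) ** k
        return principal * growth - pmt * (growth - 1) / monthly_rate

    def principal_paid_in_month(principal, monthly_rate, n_months, k):
        """Principal portion of payment k: the payment minus interest on the prior balance."""
        pmt = monthly_payment(principal, monthly_rate, n_months)
        return pmt - balance_after(principal, monthly_rate, n_months, k - 1) * monthly_rate

    def cumulative_principal_paid(principal, monthly_rate, n_months, k):
        """Accumulated principal paid through month k."""
        return principal - balance_after(principal, monthly_rate, n_months, k)

    # Example: a 200,000 loan at 0.5% per month over 360 months, after 12 payments.
    print(principal_paid_in_month(200_000, 0.005, 360, 1))
    print(cumulative_principal_paid(200_000, 0.005, 360, 12))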
Method for visualizing proportions between objects of different sizes. The proportions are drawn as circles with different diameters, which makes them ideal for visualizing proportions between planets.
This package provides functions for visualizing, animating, solving, and analyzing the Rubik's cube. It includes data structures for solvable and unsolvable cubes, random moves, random-state scrambles and cubes, 3D displays and animations using OpenGL, patterned cube generation, and lightweight solvers. See Rokicki, T. (2008) <arXiv:0803.3435> for the Kociemba solver.
Implementation of models to analyse compositional microbiome time series, taking into account the interaction between groups of bacteria. The models implemented are described in Creus-Martí et al. (2018, ISBN:978-84-09-07541-6), Creus-Martí et al. (2021) <doi:10.1155/2021/9951817>, and Creus-Martí et al. (2022) <doi:10.1155/2022/4907527>.
Computes 138 standard climate indices at monthly, seasonal, and annual resolution. These indices were selected, based on their direct and significant impacts on target sectors, after a thorough review of the literature in the field of extreme weather events and natural hazards. Overall, the selected indices characterize different aspects of the frequency, intensity, and duration of extreme events, and are derived from a broad set of climatic variables, including surface air temperature, precipitation, relative humidity, wind speed, cloudiness, solar radiation, and snow cover. The 138 indices are classified as follows: Temperature-based indices (42), Precipitation-based indices (22), Bioclimatic indices (21), Wind-based indices (5), Aridity/continentality indices (10), Snow-based indices (13), Cloud/radiation-based indices (6), Drought indices (8), Fire indices (5), and Tourism indices (5).
This package provides data on countries and their main city or agglomeration, together with various distance measures and dummy variables indicating whether two countries are contiguous, share a common language, or have had a colonial relationship. The reference article for these datasets is Mayer and Zignago (2011) <http://www.cepii.fr/CEPII/en/publications/wp/abstract.asp?NoDoc=3877>.
Perform the functional modeling methods of Huang and Wang (2018) <doi:10.1111/biom.12741> to accommodate dependent error in covariates of the proportional hazards model. The adopted measurement error model makes minimal assumptions on the dependence structure, and an instrumental variable is assumed to be available.
Calculation of standard deviation scores and percentiles derived from different reference standards (WHO, UK, Germany, Italy, China, etc.). References for laboratory values in children and adults are also available, e.g., serum lipids, iron-related blood parameters, IGF, and liver enzymes. See the package documentation for the full list.
Plots calibration curves and computes statistics for assessing calibration performance. See De Cock Campo (2023) <doi:10.48550/arXiv.2309.08559> and Van Calster et al. (2016) <doi:10.1016/j.jclinepi.2015.12.005>.
Predicts 3- to 12-month prognosis in Chronic Obstructive Pulmonary Disease (COPD) patients hospitalized for severe exacerbations, as described in Almagro et al. (2014) <doi:10.1378/chest.13-1328>.
This package provides easy-to-use tools to perform bioequivalence assessment in the univariate framework using the methods proposed in Boulaguiem et al. (2023) <doi:10.1101/2023.03.11.532179>.