Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
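For example, the endpoint can be queried from R with the httr package. This is a minimal sketch; the host name below is a placeholder and should be replaced with this site's actual URL:

library(httr)
# Placeholder host; substitute the real site URL.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
results <- content(resp, as = "parsed")  # parsed list of packages for this page
headers(resp)                            # pagination info, e.g. the number of pages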
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Extract and monitor the price and market cap of cryptocurrencies from CoinMarketCap <https://coinmarketcap.com/api/>.
This package provides data science tools for conservation science, including methods for environmental data analysis, humidity calculations, sustainability metrics, engineering calculations, and data visualisation. Supports conservators, scientists, and engineers working with cultural heritage preventive conservation data. The package is motivated by the framework outlined in Cosaert and Beltran et al. (2022) "Tools for the Analysis of Collection Environments" <https://www.getty.edu/conservation/publications_resources/pdf_publications/tools_for_the_analysis_of_collection_environments.html>.
Estimation of counterfactual outcomes for multiple values of continuous interventions at different time points, and plotting of causal dose-response curves. Details are given in Schomaker, McIlleron, Denti, Diaz (2024) <doi:10.48550/arXiv.2305.06645>.
This package provides a collection of functions to pre-process amplification curve data from polymerase chain reaction (PCR) or isothermal amplification reactions. Contains functions to normalize and baseline amplification curves, to detect both the start and end of an amplification reaction, several smoothers (e.g., LOWESS, moving average, cubic splines, Savitzky-Golay), a function to detect false positive amplification reactions and a function to determine the amplification efficiency. Quantification point (Cq) methods include the first (FDM) and second approximate derivative maximum (SDM) methods (calculated by a 5-point stencil) and the cycle threshold method. Data sets of experimental nucleic acid amplification systems ('VideoScan HCU', capillary convective PCR (ccPCR)) and commercial systems are included. Amplification curves were generated by helicase dependent amplification (HDA), ccPCR or PCR, with intercalating dyes (EvaGreen, SYBR Green) and hydrolysis probes (TaqMan) used as detection systems. For more information see: Roediger et al. (2015) <doi:10.1093/bioinformatics/btv205>.
This package provides functions for testing if the covariance structure of 2-dimensional data (e.g. samples of surfaces X_i = X_i(s,t)) is separable, i.e. if covariance(X) = C_1 x C_2. A complete description of the implemented tests can be found in the paper Aston, John A. D.; Pigoli, Davide; Tavakoli, Shahin. Tests for separability in nonparametric covariance operators of random surfaces. Ann. Statist. 45 (2017), no. 4, 1431--1461. <doi:10.1214/16-AOS1495> <https://projecteuclid.org/euclid.aos/1498636862> <arXiv:1505.02023>.
In the context of paid research studies and clinical trials, budget considerations and patient sampling from available populations are subject to inherent constraints. We introduce the CDsampling package, which integrates optimal design theories within the framework of constrained sampling. This package can find both D-optimal approximate and exact allocations for sampling with or without constraints. Additionally, it provides functions to find constrained uniform sampling as a robust sampling strategy with limited model information. Our package offers functions for the computation of the Fisher information matrix under generalized linear models (including the regular linear regression model) and multinomial logistic models. To demonstrate the applications, we also provide a simulated dataset and a real dataset embedded in the package. Yifei Huang, Liping Tong, and Jie Yang (2025) <doi:10.5705/ss.202022.0414>.
This package provides functions for working with code lists and vectors with codes. These are an alternative to factor that keeps track of both the codes and the labels. Methods allow for transforming between codes and labels. Also supports hierarchical code lists.
Implementation of the Control Polygon Reduction and Control Net Reduction methods for finding parsimonious B-spline regression models.
Every research team has its own scripts for data management, statistics and, most importantly, hemodynamic indices. The purpose of this package is to standardize the scripts utilized in clinical research. The hemodynamic indices can be calculated from a long-format dataframe, with options to add periods of interest (trigger-periods) and to delete artifacts with deleter-files. Transfer function analysis (Claassen et al. (2016) <doi:10.1177/0271678X15626425>) and Mx (Czosnyka et al. (1996) <doi:10.1161/01.str.27.10.1829>) can be calculated using this package.
In statistical modeling, multiple models need to be compared based on certain criteria. The method described here uses eight metrics from the AllMetrics package. 'input_df' is the data frame (at least two columns for comparison) containing metric values in different rows of a column (each column denotes a particular model's performance). The first five metrics are expected to be minimal and the last three metrics are expected to be maximal for a model to be considered good. Firstly, each of the first five metrics is searched across every column; the minimum value is denoted as 'MIN' and the other values are denoted as 'NA'. Secondly, each of the last three metrics is searched across every column; the maximum value is denoted as 'MAX' and the other values are denoted as 'NA'. 'output_df' contains the same number of rows (which is 8) and columns (which is the number of models to be compared) as 'input_df'. Values in 'output_df' are the corresponding 'NA', 'MIN' or 'MAX'. Finally, the column containing the minimum number of 'NA' values is denoted as the best column. 'min_NA_col' gives the name of the best column (model). 'min_NA_values' are the corresponding metric values. 'BestColumn_metrics' is the data frame (dimension: 1*8) containing the different metrics of the best column (model). 'best_column_results' is the final result (a list) containing all of these output elements. In the special case where two columns have an equal number of 'NA' values, the column with the fewest 'NA' values in the first five rows is inferred as the best. More details about AllMetrics can be found in Garai (2023) <doi:10.13140/RG.2.2.18688.30723>.
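As an illustration only (not code from the package itself, and with made-up metric values), the MIN/NA/MAX labelling described above can be sketched in base R:

# Two hypothetical models, eight metrics each; rows 1-5 are "smaller is better",
# rows 6-8 are "larger is better". Values are invented for illustration.
input_df <- data.frame(model_A = c(0.12, 0.30, 0.25, 0.40, 0.10, 0.90, 0.85, 0.88),
                       model_B = c(0.15, 0.28, 0.22, 0.45, 0.12, 0.92, 0.80, 0.91))
output_df <- as.data.frame(matrix("NA", nrow(input_df), ncol(input_df)))
names(output_df) <- names(input_df)
for (i in 1:5) output_df[i, which.min(unlist(input_df[i, ]))] <- "MIN"
for (i in 6:8) output_df[i, which.max(unlist(input_df[i, ]))] <- "MAX"
na_counts <- colSums(output_df == "NA")
min_NA_col <- names(output_df)[which.min(na_counts)]  # best model: fewest 'NA' labels
min_NA_values <- input_df[[min_NA_col]]               # its metric values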
This package provides a comprehensive reproducibility framework designed for R and bioinformatics workflows. Automatically captures the entire analysis environment including R session info, package versions, external tool versions ('Samtools', 'STAR', 'BWA', etc.), conda environments, reference genomes, data provenance with smart checksumming for large files, parameter choices, random seeds, and hardware specifications. Generates executable scripts with 'Docker', 'Singularity', and 'renv' configurations. Integrates with workflow managers ('Nextflow', 'Snakemake', 'WDL', 'CWL') to ensure complete reproducibility of computational research workflows.
This package provides tools for connecting to 'CHILDES', an open repository for transcripts of parent-child interaction. For more information on the underlying data, see <https://langcog.github.io/childes-db-website/>.
Applies the change-in-effect estimate method to assess confounding effects in medical and epidemiological research (Greenland & Pearce (2016) <doi:10.1146/annurev-publhealth-031914-122559>). It starts with a crude model including only the outcome and exposure variables. At each of the subsequent steps, the variable that creates the largest change in the effect estimate among the remaining variables is selected. This process is repeated until all variables have been entered into the model (Wang Z. Stata Journal 2007; 7, Number 2, pp. 183-196). Currently, the chest package has functions for linear regression, logistic regression, negative binomial regression, the Cox proportional hazards model and conditional logistic regression.
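To make the stepwise idea concrete, here is a small self-contained sketch of the change-in-effect selection loop on simulated data. It is illustrative only and does not use the chest package's own interface; the covariate names and data-generating model are invented:

set.seed(1)
n <- 500
d <- data.frame(exposure = rbinom(n, 1, 0.5),
                age = rnorm(n, 50, 10),
                sex = rbinom(n, 1, 0.5))
d$outcome <- rbinom(n, 1, plogis(-2 + 0.8 * d$exposure + 0.03 * d$age))
# Exposure effect estimate from a logistic model with a given set of covariates.
effect <- function(covars) {
  f <- reformulate(c("exposure", covars), response = "outcome")
  coef(glm(f, family = binomial, data = d))[["exposure"]]
}
selected  <- character(0)
remaining <- c("age", "sex")
current   <- effect(selected)               # crude exposure effect
while (length(remaining) > 0) {
  changes <- sapply(remaining, function(v) abs(effect(c(selected, v)) - current))
  pick <- names(which.max(changes))         # largest change in the effect estimate
  selected  <- c(selected, pick)
  remaining <- setdiff(remaining, pick)
  current   <- effect(selected)
}
selected                                    # order in which covariates entered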
Converts customer transaction data (ID, purchase date) into an R6 class called customer. The class stores various customer analytics calculations at the customer level. The package also contains functionality to convert data in the R6 class to data.frames that can serve as inputs for various customer analytics models.
The goal of cvsem is to provide functions that allow for comparing Structural Equation Models (SEM) using cross-validation. Users can specify multiple SEMs using lavaan syntax. cvsem computes the Kullback-Leibler (KL) divergence between 1) the model-implied covariance matrix estimated from the training data and 2) the sample covariance matrix estimated from the test data, as described in Cudeck & Browne (1983) <doi:10.18637/jss.v048.i02>. The KL divergence is computed for each of the specified SEMs, allowing the models to be compared based on their prediction errors.
Multiple comparison techniques are typically applied following an F test from an ANOVA to decide which means are significantly different from one another. As an alternative to traditional methods, cluster analysis can be performed to group the means of different treatments into non-overlapping clusters. Treatments in different groups are considered statistically different. Several approaches have been proposed, with varying clustering methods and cut-off criteria. This package implements cluster-based multiple comparisons tests and also provides a visual representation in the form of a dendrogram. Di Rienzo, J. A., Guzman, A. W., & Casanoves, F. (2002) <https://www.jstor.org/stable/1400690>. Bautista, M. G., Smith, D. W., & Steiner, R. L. (1997) <doi:10.2307/1400402>.
Supporting functionality to run caret with spatial or spatial-temporal data. caret is a frequently used package for model training and prediction using machine learning. CAST includes functions to improve spatial or spatial-temporal modelling tasks using 'caret'. It includes the newly suggested Nearest neighbor distance matching cross-validation to estimate the performance of spatial prediction models and allows for spatial variable selection to select suitable predictor variables in view of their contribution to the spatial model performance. CAST further includes functionality to estimate the (spatial) area of applicability of prediction models. Methods are described in Meyer et al. (2018) <doi:10.1016/j.envsoft.2017.12.001>; Meyer et al. (2019) <doi:10.1016/j.ecolmodel.2019.108815>; Meyer and Pebesma (2021) <doi:10.1111/2041-210X.13650>; Milà et al. (2022) <doi:10.1111/2041-210X.13851>; Meyer and Pebesma (2022) <doi:10.1038/s41467-022-29838-9>; Linnenbrink et al. (2024) <doi:10.5194/gmd-17-5897-2024>; Schumacher et al. (2025) <doi:10.5194/gmd-18-10185-2025>. The package is described in detail in Meyer et al. (2026) <doi:10.1007/978-3-031-99665-8_11>.
This package contains functions for solving commonly encountered problems while programming in R. It is intended to provide a lightweight supplement to Base R, and will be useful for almost any R user.
This package provides a companion package to cmstatr <https://cran.r-project.org/package=cmstatr>. cmstatr contains statistical methods that are published in the Composite Materials Handbook, Volume 1 (2012, ISBN: 978-0-7680-7811-4), while cmstatrExt contains statistical methods that are not included in that handbook.
Read and manipulate Camera Trap Data Packages ('Camtrap DP'). Camtrap DP (<https://camtrap-dp.tdwg.org>) is a data exchange format for camera trap data. With camtrapdp you can read, filter and transform data (including to Darwin Core) before further analysis in e.g. 'camtraptor' or 'camtrapR'.
Measuring child development starts by collecting responses to developmental milestones, such as "able to sit" or "says two words". There are many ways to combine such responses into summaries. The package bundles publicly available datasets with individual milestone data for children aged 0-5 years, with the aim of supporting the construction, evaluation, validation and interpretation of methodologies that aggregate milestone data into informative measures of child development.
The Codemeta Project defines a JSON-LD format for describing software metadata, as detailed at <https://codemeta.github.io>. This package provides utilities to generate, parse, and modify codemeta.json files automatically for R packages, as well as tools and examples for working with codemeta.json JSON-LD more generally.
Combines taxonomic classifications of high-throughput 16S rRNA gene sequences with reference proteomes of archaeal and bacterial taxa to generate amino acid compositions of community reference proteomes. Calculates chemical metrics including carbon oxidation state ('Zc'), stoichiometric oxidation and hydration state ('nO2' and 'nH2O'), H/C, N/C, O/C, and S/C ratios, grand average of hydropathicity ('GRAVY'), isoelectric point ('pI'), protein length, and average molecular weight of amino acid residues. Uses precomputed reference proteomes for archaea and bacteria derived from the Genome Taxonomy Database ('GTDB'). Also includes reference proteomes derived from the NCBI Reference Sequence ('RefSeq') database and manual mapping from the RDP Classifier training set to RefSeq taxonomy as described by Dick and Tan (2023) <doi:10.1007/s00248-022-01988-9>. Processes taxonomic classifications in RDP Classifier format or OTU tables in phyloseq-class objects from the Bioconductor package 'phyloseq'.
The main objective of the package is to enter a word of at least two letters, based on which an Iterated Function System with Probabilities is constructed and a two-dimensional fractal containing the chosen word infinitely often is generated via the Chaos Game. Additionally, the package allows projecting the two-dimensional fractal onto several three-dimensional surfaces and transforming the fractal into another fractal with uniform marginals.