Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
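For example, a minimal Python sketch of this call using the requests library (the base URL is a placeholder, and the exact pagination header names and JSON body shape are assumptions to verify against a real response):

import requests

BASE_URL = "https://example.org"  # placeholder host serving this API

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=10,
)
resp.raise_for_status()

# Pagination details are returned in the response headers, not the body.
print({k: v for k, v in resp.headers.items() if k.lower().startswith("x-")})

for pkg in resp.json():  # assuming the body is a JSON array of package records
    print(pkg)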
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Automatically creates separate regression models for different spatial regions. The prediction surface is smoothed using a regional border smoothing method. If regional models are continuous, the resulting prediction surface is continuous across the spatial dimensions, even at region borders. Methodology is described in Wagstaff and Bean (2023) <doi:10.32614/RJ-2023-004>.
This package provides functions for linking and deduplicating data sets. Methods based on a stochastic approach are implemented as well as classification algorithms from the machine learning domain. For details, see our paper "The RecordLinkage Package: Detecting Errors in Data" Sariyar M / Borg A (2010) <doi:10.32614/RJ-2010-017>.
Mixture Composer <https://github.com/modal-inria/MixtComp> is a project to build mixture models for heterogeneous data sets with support for partially missing data. It includes models for real, categorical, count, functional and ranking data. This package contains the minimal R interface to the C++ MixtComp library.
Simple, easy to use, and flexible functionality for recoding variables. It allows for simple piecewise definition of transformations.
Create plots and LaTeX tables that look like SPSS output for use in teaching materials. Rather than copying-and-pasting SPSS output into documents, R code that mocks up SPSS output can be integrated directly into dynamic LaTeX documents with tools such as knitr. Functionality includes statistical techniques that are typically covered in introductory statistics classes: descriptive statistics, common hypothesis tests, ANOVA, and linear regression, as well as box plots, histograms, scatter plots, and line plots (including profile plots).
Peaks Over Threshold (POT), also known as the 'méthode du renouvellement' (renewal method). The distribution for the excesses can be chosen, and heterogeneous data (including historical data or block data) can be used in a Maximum-Likelihood framework.
Computes a variety of statistics for relational event models. Relational event models enable researchers to investigate both exogenous and endogenous factors influencing the evolution of a time-ordered sequence of events. These models are categorized into tie-oriented models (Butts, C., 2008, <doi:10.1111/j.1467-9531.2008.00203.x>), where the probability of a dyad interacting next is modeled in a single step, and actor-oriented models (Stadtfeld, C., & Block, P., 2017, <doi:10.15195/v4.a14>), which first model the probability of a sender initiating an interaction and subsequently the probability of the sender's choice of receiver. The package is designed to compute a variety of statistics that summarize exogenous and endogenous influences on the event stream for both types of models.
This package contains a variety of functions based around regime shift analysis of paleoecological data. Citations: Rodionov() from Rodionov (2004) <doi:10.1029/2004GL019448>; Lanzante() from Lanzante (1996) <doi:10.1002/(SICI)1097-0088(199611)16:11%3C1197::AID-JOC89%3E3.0.CO;2-L>; Hellinger_trans() from Numerical Ecology, Legendre & Legendre (ISBN 9780444538680); rolling_autoc() from Liu, Gao & Wang (2018) <doi:10.1016/j.scitotenv.2018.06.276>. Sample data sets lake_data & lake_RSI were processed from Bush, Silman & Urrego (2004) <doi:10.1126/science.1090795>; sample data set January_PDO is from NOAA: <https://www.ncei.noaa.gov/access/monitoring/pdo/>.
This package performs multiple imputation using chained random forests. The implemented methods can handle missing data in mixed types of variables by using prediction-based or node-based conditional distributions constructed with random forests. For prediction-based imputation, continuous variables can be imputed either from the empirical distribution of out-of-bag prediction errors of random forests or under a normality assumption for those prediction errors, while categorical variables are imputed from predicted probabilities. For node-based imputation, methods based on the conditional distribution formed by the predicting nodes of random forests and on random forest proximity measures are provided. More details of the statistical methods can be found in Hong et al. (2020) <arXiv:2004.14823>.
We implement causal mediation analysis using the methods proposed by Hong (2010) and Hong, Deutsch & Hill (2015) <doi:10.3102/1076998615583902>. It allows the estimation and hypothesis testing of causal mediation effects through ratio of mediator probability weights (RMPW). This strategy conveniently relaxes the assumption of no treatment-by-mediator interaction while greatly simplifying the outcome model specification without invoking strong distributional assumptions. We also implement a sensitivity analysis by extending the RMPW method to assess potential bias in the presence of omitted pretreatment or posttreatment covariates. The sensitivity analysis strategy was proposed by Hong, Qin, and Yang (2018) <doi:10.3102/1076998617749561>.
This package provides a tool to calculate Cardiovascular Risk Scores in large data frames as published in Perez-Vicencio, et al (2024) <doi:10.1136/openhrt-2024-002755>. Cardiovascular risk scores are statistical tools used to assess an individual's likelihood of developing a cardiovascular disease based on various risk factors, such as age, gender, blood pressure, cholesterol levels, and smoking. Here we bring together the six most commonly used in the emergency department. Using 'RiskScorescvd', you can calculate all the risk scores in an extended dataset in seconds. PCE (ASCVD) described in Goff, et al (2013) <doi:10.1161/01.cir.0000437741.48606.98>. EDACS described in Mark DG, et al (2016) <doi:10.1016/j.jacc.2017.11.064>. GRACE described in Fox KA, et al (2006) <doi:10.1136/bmj.38985.646481.55>. HEART described in Mahler SA, et al (2017) <doi:10.1016/j.clinbiochem.2017.01.003>. SCORE2/OP described in SCORE2 working group and ESC Cardiovascular risk collaboration (2021) <doi:10.1093/eurheartj/ehab309>. TIMI described in Antman EM, et al (2000) <doi:10.1001/jama.284.7.835>. SCORE2-Diabetes described in SCORE2-Diabetes working group and ESC Cardiovascular risk collaboration (2023) <doi:10.1093/eurheartj/ehab260>. SCORE2/OP with CKD add-on described in Kunihiro M et al (2022) <doi:10.1093/eurjpc/zwac176>.
Estimates life tables, specifically (crude) death rates and (raw and graduated) death probabilities, using rolling windows in one (e.g., age), two (e.g., age and time) or three (e.g., age, time and income) dimensions. The package can also be utilised for summarising statistics and smoothing continuous variables through rolling windows in other domains, such as estimating averages of self-positioning ideology in political science. Acknowledgements: The authors wish to thank Ministerio de Ciencia, Innovación y Universidades (grant PID2021-128228NB-I00) and Generalitat Valenciana (grants HIECPU/2023/2, Conselleria de Hacienda, Economía y Administración Pública, and CIGE/2023/7, Conselleria de Educación, Cultura, Universidades y Empleo) for supporting this research.
The Radiant Basics menu includes interfaces for probability calculation, central limit theorem simulation, comparing means and proportions, goodness-of-fit testing, cross-tabs, and correlation. The application extends the functionality in 'radiant.data'.
This package performs both classical and robust panel clustering by applying Principal Component Analysis (PCA) for dimensionality reduction and clustering via standard K-Means or Trimmed K-Means. The method is designed to ensure stable and reliable clustering, even in the presence of outliers. It is suitable for analyzing panel data in domains such as economic research, financial time series, healthcare analytics, and social sciences. The package allows users to choose between classical K-Means for standard clustering and Trimmed K-Means for robust clustering, making it a flexible tool for various applications. This package draws on the studies of Rencher (2003), Wang and Lu (2021) <DOI:10.25236/AJBM.2021.031018>, and Cuesta-Albertos et al. (1997) <https://www.jstor.org/stable/2242558?seq=1>.
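As a rough illustration of the classical variant of this workflow (PCA for dimensionality reduction followed by standard K-Means), here is a scikit-learn sketch on synthetic data; it is not the package's own code, and the robust Trimmed K-Means step is omitted because scikit-learn does not provide it:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # synthetic feature matrix (units x variables)

X_std = StandardScaler().fit_transform(X)           # standardize variables
scores = PCA(n_components=2).fit_transform(X_std)   # reduce to 2 components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels[:10])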
Perform robust estimation and inference in platform trials and other master protocol trials. Yuhan Qian, Yifan Yi, Jun Shao, Yanyao Yi, Gregory Levin, Nicole Mayer-Hamblett, Patrick J. Heagerty, Ting Ye (2025) <doi:10.48550/arXiv.2411.12944>.
This package provides a fast implementation of the greedy algorithm for the set cover problem using 'Rcpp'.
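For orientation, the greedy heuristic itself (at each step, pick the set covering the most still-uncovered elements) can be written in a few lines of plain Python; this illustrates the algorithm, not the package's Rcpp implementation:

def greedy_set_cover(universe, sets):
    # repeatedly choose the set that covers the most uncovered elements
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("remaining elements cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4, 5}]))  # [0, 2]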
This package provides a collection of datasets that accompany the forthcoming book "R for Health Care Research".
Pretty fast implementation of the Ramer-Douglas-Peucker algorithm for reducing the number of points on a 2D curve. Urs Ramer (1972), "An iterative procedure for the polygonal approximation of plane curves" <doi:10.1016/S0146-664X(72)80017-0>. David H. Douglas and Thomas K. Peucker (1973), "Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or its Caricature" <doi:10.3138/FM57-6770-U75U-7727>.
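For reference, the underlying recursion (keep the point farthest from the chord between the endpoints if it exceeds a tolerance, then recurse on both halves) can be sketched in plain Python; this illustrates the algorithm, not this package's optimized implementation:

import math

def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # perpendicular distance from p to the line through the endpoints
        if chord == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > epsilon:
        # split at the farthest point and simplify each half
        return rdp(points[: idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(pts, 1.0))  # -> [(0, 0), (2, -0.1), (3, 5), (7, 9)]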
Additional matrix functionality for R including: (1) wrappers for the base matrix function that allow matrices to be created from character strings and lists (the former is especially useful for creating block matrices), (2) better printing of large matrices via the generic "pretty" print function, and (3) a number of convenience functions for users more familiar with other scientific languages like 'Julia', 'Matlab'/'Octave', or 'Python'+'NumPy'.
Implementation of the RESTK algorithm based on Markov's Inequality from Vilardell, Sergi, Serra, Isabel, Mezzetti, Enrico, Abella, Jaume, Cazorla, Francisco J. and Del Castillo, J. (2022). "Using Markov's Inequality with Power-Of-k Function for Probabilistic WCET Estimation". In 34th Euromicro Conference on Real-Time Systems (ECRTS 2022). Leibniz International Proceedings in Informatics (LIPIcs) 231 20:1-20:24. <doi:10.4230/LIPIcs.ECRTS.2022.20>. This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773).
Use the <https://api.nbp.pl/> API through R. Retrieve currency exchange rates and gold price data published by the National Bank of Poland in the form of convenient R objects.
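A hedged sketch of querying the underlying NBP Web API directly with Python (not through this R package); the endpoint paths follow the documentation at <https://api.nbp.pl/> and should be checked against it:

import requests

# current mid exchange rate for EUR from table A
eur = requests.get(
    "https://api.nbp.pl/api/exchangerates/rates/a/eur/",
    params={"format": "json"},
    timeout=10,
).json()
print(eur["code"], eur["rates"][0]["mid"])

# current gold price (PLN per 1 g)
gold = requests.get(
    "https://api.nbp.pl/api/cenyzlota", params={"format": "json"}, timeout=10
).json()
print(gold[0]["cena"])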
Ensmallen is a templated C++ mathematical optimization library (by the MLPACK team) that provides a simple set of abstractions for writing an objective function to optimize. Provided within are various standard and cutting-edge optimizers that include full-batch gradient descent techniques, small-batch techniques, gradient-free optimizers, and constrained optimization. The RcppEnsmallen package includes the header files from the Ensmallen library and pairs the appropriate header files from Armadillo through the RcppArmadillo package. Therefore, users do not need to install Ensmallen or Armadillo to use 'RcppEnsmallen'. Note that Ensmallen is licensed under the 3-Clause BSD license, Armadillo starting from 7.800.0 is licensed under the Apache License 2.0, and 'RcppArmadillo' (the Rcpp bindings/bridge to 'Armadillo') is licensed under the GNU GPL version 2 or later. Thus, 'RcppEnsmallen' is also licensed under similar terms. Note that Ensmallen requires a compiler that supports C++14 and Armadillo 10.8.2 or later.
This package provides tools for downloading and analyzing CDC NHANES data, with a focus on analytical laboratory data.
Implementation of the MEthod based on the Removal Effects of Criteria (MEREC), a new objective weighting method for determining criteria weights for Multiple Criteria Decision Making problems, created by Mehdi Keshavarz-Ghorabaee (2021) <doi:10.3390/sym13040525>. Given a decision matrix, the function returns the MEREC weight vector and all intermediate matrices and vectors used to calculate it.
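A rough NumPy sketch of the MEREC steps as commonly described (simple linear normalization, log-based overall performance, criterion-removal effects, weight normalization); it is written from the published description rather than taken from this package, and should be checked against the paper:

import numpy as np

def merec_weights(X, cost):
    # X: (m alternatives x n criteria) matrix of positive values
    # cost: True where a criterion is non-beneficial (a cost)
    X = np.asarray(X, dtype=float)
    _, n = X.shape
    N = np.where(cost, X / X.max(axis=0), X.min(axis=0) / X)  # normalization
    logs = np.abs(np.log(N))
    S = np.log(1 + logs.mean(axis=1))                                # overall performance
    S_rm = np.log(1 + (logs.sum(axis=1, keepdims=True) - logs) / n)  # criterion j removed
    E = np.abs(S_rm - S[:, None]).sum(axis=0)                        # removal effects
    return E / E.sum()

X = [[450, 8000, 54], [10, 9100, 2], [100, 8200, 31], [220, 9300, 1]]
print(merec_weights(X, cost=np.array([False, True, False])))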