Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
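For example, a minimal sketch of querying this endpoint from R with the httr package; the base URL below is a placeholder, and the pagination header names are not documented here, so the sketch simply prints all response headers:

library(httr)

## Search for packages matching "hello" (base URL is a placeholder; use the
## host serving this site).
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

## The matching packages are returned as JSON in the response body.
packages <- content(resp, as = "parsed")

## Pagination information is in the response headers; inspect them all, since
## the exact header names are not documented here.
print(headers(resp))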
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The aim of the report package is to bridge the gap between R's output and the formatted results contained in your manuscript. This package converts statistical models and data frames into textual reports suited for publication, ensuring standardization and quality in results reporting.
This package provides functions for reading mass spectrometry data in mzXML format.
This package implements the XML-RPC API to NEOS <https://neos-server.org/neos/>, enabling the user to pass optimization problems to NEOS and retrieve results within R.
The Stuttgart Neural Network Simulator (SNNS) is a library containing many standard implementations of neural networks. This package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. Furthermore, the package contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.
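As an illustration, here is a minimal sketch using RSNNS's high-level mlp() function to train a small multilayer perceptron; the data set and parameter choices are purely illustrative:

library(RSNNS)

## One-hot encode the class labels and train a small MLP on the iris data.
inputs  <- iris[, 1:4]
targets <- decodeClassLabels(iris$Species)
fit <- mlp(inputs, targets, size = 5, maxit = 100)

## Predictions are class-membership scores, one column per class.
pred <- predict(fit, inputs)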
Random forest with a variety of additional features for regression, classification, and survival analysis. The features include parallel computing with OpenMP, an embedded model for selecting the splitting variable based on Zhu, Zeng & Kosorok (2015) <doi:10.1080/01621459.2015.1036994>, subject weights, variable weights, tracking of the subjects used in each tree, and more.
This package provides tools to access, search, and manipulate ILO's ilostat database, including bulk download of statistical data, dictionary lookups, and table of contents.
Fits measurement error models using Monte Carlo Expectation Maximization (MCEM). For specific details on the methodology, see: Greg C. G. Wei & Martin A. Tanner (1990), A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms, Journal of the American Statistical Association, 85:411, 699-704 <doi:10.1080/01621459.1990.10474930>. For more examples of measurement error modelling using MCEM, see the R Markdown vignette "refitME R-package tutorial".
Simplifies integration with Amazon Cognito (<https://aws.amazon.com/cognito/>) for R developers, enabling easy management of user authentication, registration, and password flows.
Wrapper for the RSpace Electronic Lab Notebook (<https://www.researchspace.com/>) API. This package provides convenience functions to browse, search, create, and edit your RSpace documents. In addition, it enables filling RSpace templates from R Markdown/Quarto templates or tabular data (e.g., Excel files). This R package is not developed or endorsed by Research Space.
This package provides a series of functions to call AD Model Builder (i.e., compile and run models) from within R, read the results back into R as admb objects, and provide standard accessors (e.g., coef() and vcov()).
The Radiant Basics menu includes interfaces for probability calculation, central limit theorem simulation, comparing means and proportions, goodness-of-fit testing, cross-tabs, and correlation. The application extends the functionality in radiant.data.
For a sequence of event occurrence times, we are interested in finding subsequences in it that are too "regular". We define regular as being significantly different from a homogeneous Poisson process; the departure from the Poisson process is measured using an L1 distance. See Di and Perlman (2007) for more details.
A Bayesian credible interval is interpreted with respect to posterior probability, and this interpretation is far more intuitive than that of a frequentist confidence interval. However, standard highest-density intervals can be wide due to between-subjects variability and tend to hide within-subject effects, rendering their relationship with the Bayes factor less clear in within-subject (repeated-measures) designs. This package addresses the issue by providing within-subject interval estimates for within-subject designs, integrating four methods: the Wei-Nathoo-Masson (2023) <doi:10.3758/s13423-023-02295-1>, the Loftus-Masson (1994) <doi:10.3758/BF03210951>, the Nathoo-Kilshaw-Masson (2018) <doi:10.1016/j.jmp.2018.07.005>, and the Heck (2019) <doi:10.31234/osf.io/whp8t> interval estimates.
This package implements safe policy learning under regression discontinuity designs with multiple cutoffs, based on Zhang et al. (2022) <doi:10.48550/arXiv.2208.13323>. The learned cutoffs are guaranteed to perform no worse than the existing cutoffs in terms of overall outcomes. The rdlearn package also includes features for visualizing the learned cutoffs relative to the baseline and conducting sensitivity analyses.
Access to some of the C level functions of the xts package. In its current state, the package is mostly a proof-of-concept to support adding useful functions, and does not yet add any of its own.
An R implementation of the Reinert text clustering method. For more details about the algorithm see the included vignettes or Reinert (1990) <doi:10.1177/075910639002600103>.
This package performs two-sample comparisons using the restricted mean survival time (RMST) when survival curves end at different time points between groups. It implements a sensitivity approach that allows the threshold timepoint tau to be specified after the longest survival time in the shorter survival group. Two kinds of between-group contrast estimators (the difference in RMST and the ratio of RMST) are computed: Uno et al. (2014) <doi:10.1200/JCO.2014.55.2208>, Uno et al. (2022) <https://CRAN.R-project.org/package=survRM2>, and Ueno and Morita (2023) <doi:10.1007/s43441-022-00484-z>.
Autoencoding Random Forests (RFAE) provide a method to autoencode mixed-type tabular data using Random Forests (RF), which involves projecting the data to a latent feature space of user-chosen dimensionality (usually a lower dimension), and then decoding the latent representations back into the input space. The encoding stage is useful for feature engineering and data visualisation tasks, akin to how principal component analysis (PCA) is used, and the decoding stage is useful for compression and denoising tasks. At its core, RFAE is a post-processing pipeline on a trained random forest model. This means that it can accept any trained RF of ranger object type: RF, URF, or ARF. Because of this, it inherits random forests' robust performance and capacity to seamlessly handle mixed-type tabular data. For more details, see Vu et al. (2025) <doi:10.48550/arXiv.2505.21441>.
Calculates risk differences (or prevalence differences for cross-sectional data) using generalized linear models with automatic link function selection. Provides robust model fitting with fallback methods, support for stratification and adjustment variables, inverse probability of treatment weighting (IPTW) for causal inference, and publication-ready output formatting. Handles model convergence issues gracefully and provides confidence intervals using multiple approaches. Methods are based on approaches described in Mark W. Donoghoe and Ian C. Marschner (2018) "logbin: An R Package for Relative Risk Regression Using the Log-Binomial Model" <doi:10.18637/jss.v086.i09> for robust GLM fitting, Peter C. Austin (2011) "An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies" <doi:10.1080/00273171.2011.568786> for IPTW methods, and standard epidemiological methods for risk difference estimation as described in Kenneth J. Rothman, Sander Greenland and Timothy L. Lash (2008, ISBN:9780781755641) "Modern Epidemiology".
Linear and logistic ridge regression functions. Additionally includes special functions for genome-wide single-nucleotide polymorphism (SNP) data. More details can be found in <doi:10.1002/gepi.21750> and <doi:10.1186/1471-2105-12-372>.
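A minimal sketch of a linear ridge fit using the package's linearRidge() function; the formula and data set are illustrative only:

library(ridge)

## Fit a linear ridge regression on a built-in data set; by default the
## shrinkage parameter lambda is chosen automatically.
fit <- linearRidge(mpg ~ wt + hp + disp, data = mtcars)
summary(fit)
coef(fit)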
Empirical orthogonal teleconnections in R. remote is short for R(-based) EMpirical Orthogonal TEleconnections. It implements a collection of functions to facilitate empirical orthogonal teleconnection analysis. Empirical Orthogonal Teleconnections (EOTs) denote a regression-based approach to decompose spatio-temporal fields into a set of independent orthogonal patterns. They are quite similar to Empirical Orthogonal Functions (EOFs), with EOTs producing less abstract results. In contrast to EOFs, which are orthogonal in both space and time, EOT analysis produces patterns that are orthogonal in either space or time.
Software for genomic prediction with the RR-BLUP mixed model (Endelman 2011, <doi:10.3835/plantgenome2011.08.0024>). One application is to estimate marker effects by ridge regression; alternatively, BLUPs can be calculated based on an additive relationship matrix or a Gaussian kernel.
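For instance, a minimal sketch of estimating marker effects by ridge regression with rrBLUP's mixed.solve() function; the simulated marker data are purely illustrative:

library(rrBLUP)

## Simulate 200 lines genotyped at 1000 markers (coded -1/0/1) and a
## phenotype with additive marker effects plus noise.
M <- matrix(sample(c(-1, 0, 1), 200 * 1000, replace = TRUE), nrow = 200)
u <- rnorm(1000, sd = 0.1)
y <- as.vector(M %*% u + rnorm(200))

## RR-BLUP of marker effects: y = mu + M u + e, with u ~ N(0, I * Vu).
ans <- mixed.solve(y, Z = M)
head(ans$u)   # estimated marker effects
ans$Vu        # estimated marker-effect variance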
Randomization tests for the statistical comparison of i = two or more individual-based, sample-based or coverage-based rarefaction curves. The ecological null hypothesis is that the i samples were all drawn randomly from a single assemblage, with (necessarily) a single underlying species abundance distribution. The biogeographic null hypothesis is that the i samples were all drawn from different assemblages that, nonetheless, share similar species richness and species abundance distributions. Functions are described in L. Cayuela, N.J. Gotelli & R.K. Colwell (2015) <doi:10.1890/14-1261.1>.
For the calculation of sample size or power in a two-group repeated measures design, accounting for attrition and accommodating a variety of correlation structures for the repeated measures; details of the method can be found in the scientific paper: Donald Hedeker, Robert D. Gibbons, Christine Waternaux (1999) <doi:10.3102/10769986024001070>.