Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
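For example, here is a minimal sketch of calling this endpoint from R with the httr package; the host below is a placeholder (use the address this page is served from), and the body is assumed to be JSON:

    library(httr)

    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    content(resp)   # the matching packages, parsed from the JSON body
    headers(resp)   # pagination details, e.g. the total number of pages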
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides the Jester dataset for the recommenderlab package.
This package provides random number generating functions that are much more context-aware than the built-in functions. They are also much safer, since they check for incompatible argument values, and more reproducible.
The goal of rbioapi is to provide a user-friendly and consistent interface to biological databases and services, in a way that insulates the user from the technicalities of web service APIs and creates a unified, easy-to-use interface to biological and medical web services. It currently fully supports Enrichr, JASPAR, miEAA, PANTHER, Reactome, STRING, and UniProt. This is an ongoing project; new databases and services will be added periodically. Feel free to suggest any databases or services you often use.
This package provides a set of functions to perform pathway analysis and meta-analysis from multiple gene expression datasets, as well as visualization of the results. It wraps functionality from the packages described in Ritchie et al. (2015) <doi:10.1093/nar/gkv007>, Love et al. (2014) <doi:10.1186/s13059-014-0550-8>, Robinson et al. (2010) <doi:10.1093/bioinformatics/btp616>, Korotkevich et al. (2016) <doi:10.1101/060012>, Efron et al. (2015) <https://CRAN.R-project.org/package=GSA>, and Gu et al. (2012) <https://CRAN.R-project.org/package=CePa>.
This package provides an efficient CRUD interface for the Airtable API <https://airtable.com/developers/web/api>, supporting batch requests and parallel encoding of large data sets.
This package provides simplified methods for managing classic Rubik's cubes and many of their modifications (such as NxNxN cubes, void cubes, and 8-coloured cubes, so-called octa cubes). It includes functions for handling a special syntax for managing such cubes, and a different approach to plotting 3D cubes without using external libraries (for example, OpenGL).
Robust categorical data analysis based on the theory of C-estimation developed in Welz (2024) <doi:10.48550/arXiv.2403.11954>. For now, the package only implements robust estimation of polychoric correlation as proposed in Welz, Mair and Alfons (2026) <doi:10.1017/psy.2025.10066> and robust estimation of polyserial correlation (Welz, 2026 <doi:10.1017/psy.2026.10091>) with methods for printing and plotting. We will implement further models in future releases. In addition, the package is still experimental, so input arguments and class structure may change in future releases.
Designed for the import, analysis, and visualization of dosimetric and volumetric data in Radiation Oncology, the tools herein enable import of dose-volume histogram information from multiple treatment planning system platforms, as well as 3D structural representations and dosimetric information from DICOM-RT files. These tools also enable subsequent visualization and statistical analysis of these data.
This package provides functions for the calibration of radiocarbon dates, as well as options to calculate different radiocarbon-related timescales (cal BP, cal BC/AD, C14 age, F14C, pMC, D14C) and to estimate the effects of contamination or local reservoir offsets (Reimer and Reimer 2001 <doi:10.1017/S0033822200038339>). The methods follow long-established recommendations such as Stuiver and Polach (1977) <doi:10.1017/S0033822200003672> and Reimer et al. (2004) <doi:10.1017/S0033822200033154>. This package uses the calibration curves from the rintcal data package.
This package automates common transformations used to fulfill parametric assumptions of normality and identifies the best-performing method for the user. It also outputs various normality tests (Thode, 2002 <doi:10.1201/9780203910894>) corresponding to the best-performing method, along with a descriptive statistical report of the input data in its original units (5-number summary and mathematical moments). Lastly, the Rankit, an empirical normal quantile transformation (ENQT) (Solomon & Sawilowsky, 2009 <doi:10.22237/jmasm/1257034080>), is provided to accommodate non-standard use cases and facilitate adoption.
Estimation of both single- and multiple-assignment Regression Discontinuity Designs (RDDs). Provides both parametric (global) and non-parametric (local) estimation choices for both sharp and fuzzy designs, along with power analysis and assumption checks. Introductions to the underlying logic and analysis of RDDs are in Thistlethwaite, D. L., Campbell, D. T. (1960) <doi:10.1037/h0044319> and Lee, D. S., Lemieux, T. (2010) <doi:10.1257/jel.48.2.281>.
The IntCal20 radiocarbon calibration curves (Reimer et al. 2020 <doi:10.1017/RDC.2020.68>) are provided as a data package, together with previous IntCal curves (IntCal13, IntCal09, IntCal04, IntCal98), other curves (e.g., NOTCal04 [van der Plicht et al. 2004], Arnold & Libby 1951, Stuiver & Suess 1966, Pearson & Stuiver 1986) and postbomb curves. Also provided are functions to copy the curves into memory, and to read, query and plot the data underlying the IntCal20 curves.
This package contains all the data sets related to the book written by the package's maintainer.
Enhances the R Optimization Infrastructure ('ROI') package with the alabama solver for solving nonlinear optimization problems.
This package performs both classical and robust panel clustering by applying Principal Component Analysis (PCA) for dimensionality reduction and clustering via standard K-Means or Trimmed K-Means. The method is designed to ensure stable and reliable clustering, even in the presence of outliers. It is suitable for analyzing panel data in domains such as economic research, financial time series, healthcare analytics, and the social sciences. The package allows users to choose between classical K-Means for standard clustering and Trimmed K-Means for robust clustering, making it a flexible tool for various applications. This package benefited from the studies of Rencher (2003), Wang and Lu (2021) <doi:10.25236/AJBM.2021.031018>, and Cuesta-Albertos et al. (1997) <https://www.jstor.org/stable/2242558>.
Efficiently and reliably retrieves Investors Exchange ('IEX') stock and market data using the IEX Cloud API. The platform is offered by Investors Exchange Group (IEX Group). The main goal is to leverage R capabilities, including existing packages, to provide financial and statistical analysis as well as visualization in support of fact-based decisions, and to continuously improve and enhance Riex by applying best practices and staying in tune with users' feedback and requirements. Please make sure to review and acknowledge the Investors Exchange Group (IEX Group) terms and conditions before using Riex (<https://iexcloud.io/terms/>).
ENA (Shaffer, D. W. (2017) Quantitative Ethnography. ISBN: 0578191687) is a method used to identify meaningful and quantifiable patterns in discourse or reasoning. ENA moves beyond traditional frequency-based assessments by examining the structure of co-occurrences, or connections, in coded data. Moreover, compared to other methodological approaches, ENA has the novelty of (1) modeling whole networks of connections and (2) affording both quantitative and qualitative comparisons between different network models. See also Shaffer, D. W., Collier, W., & Ruis, A. R. (2016).
An implementation of a method based on information theory devised for the identification of genes showing a significant variation of expression across multiple conditions. Given expression estimates from any number of RNA-Seq samples and conditions, it identifies genes or transcripts with a significant variation of expression across all the conditions studied, together with the samples in which they are over- or under-expressed. Zambelli et al. (2018) <doi:10.1093/nar/gky055>.
External jars required for the RKEA package.
Numerous functions for cohort-based analyses, either for prediction or causal inference. For causal inference, it includes Inverse Probability Weighting and G-computation for marginal estimation of an exposure effect when confounders are expected. Binary outcomes, times-to-events, competing events, and multi-state data are all handled. For multi-state data, a semi-Markov model with interval censoring may be considered, and the excess mortality related to the disease, compared to reference lifetime tables, can be taken into account. For predictive studies, a set of functions estimates time-dependent receiver operating characteristic (ROC) curves, with the possible consideration of right-censored times-to-events or the presence of confounders. Finally, several functions are available to assess time-dependent ROC curves or survival curves from aggregated data.
Building interactive web applications with R is incredibly easy with shiny. Behind the scenes, shiny builds a reactive graph that can quickly become intertwined and difficult to debug. reactlog (Schloerke 2019) <doi:10.5281/zenodo.2591517> provides visual insight into that black box of shiny reactivity by constructing a directed dependency graph of the application's reactive state at any time point in a reactive recording.
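As a minimal sketch of the typical workflow (based on reactlog's documented usage; the app path below is a placeholder for your own application):

    library(reactlog)
    reactlog_enable()        # tell shiny to record the reactive graph

    # launch your shiny app, interact with it, then stop it
    shiny::runApp("path/to/app")

    shiny::reactlogShow()    # open the recording as an interactive graph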
Robust tests (RW and RF) are provided for testing the equality of two long-tailed symmetric (LTS) means when the variances are unknown and arbitrary. The RW test is a robust version of Welch's two-sample t test, and the RF test is a robust fiducial-based test. Both tests are constructed using the adaptive modified maximum likelihood (AMML) estimators derived by Tiku and Surucu (2009) <doi:10.1016/j.spl.2008.12.001> and Donmez (2010) <https://open.metu.edu.tr/bitstream/handle/11511/19440/index.pdf>.
This package implements techniques for educational resource inspection, selection, and evaluation (RISE) described in Bodily, Nyland, and Wiley (2017) <doi:10.19173/irrodl.v18i2.2952>. Automates the process of identifying learning materials that are not effectively supporting student learning in technology-mediated courses by synthesizing information about access to course content and performance on assessments.
Hybrid Mortality Modelling (HMM) provides a framework in which mortality around "the accident hump" and at very old ages can be modelled under a single model. The graphics code necessary for visualizing the models' output is included here. Specifically, the graphics are based on the assumption that the mortality rates can be expressed as a function of the area under the curve between the crude mortality rate plots and the tangential transform of the force of mortality.