Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
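For example, a minimal Python sketch of calling this endpoint (assumptions: the service returns a JSON body, and the base URL below is a placeholder for this site's address; the pagination header names are not documented here, so the sketch simply prints all response headers):

    import requests

    BASE_URL = "https://example.org"  # placeholder: substitute this site's address

    def search_packages(query, page=1, limit=20):
        """Query the package search API and return the parsed JSON body."""
        response = requests.get(
            f"{BASE_URL}/api/packages",
            params={"search": query, "page": page, "limit": limit},
        )
        response.raise_for_status()
        # Pagination information (e.g. the number of pages) arrives in the
        # response headers, so print them all for inspection.
        print(dict(response.headers))
        return response.json()

    results = search_packages("gcc@10")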
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
These are data sets for the hit TV show, RuPaul's Drag Race. The data currently include episode-level data, contestant-level data, and episode-contestant-level data. This is a work in progress, and a love letter of sorts to RuPaul's Drag Race and the performers who have appeared on the show. This may not be the most productive use of my time, but I have tenure and what are you going to do about it? I think there is at least some value in this package if it allows the show's fandom to learn more about the R programming language through its contents.
This package provides new types of omnibus tests which are generally much more powerful than traditional tests (including the Kolmogorov-Smirnov, Cramer-von Mises and Anderson-Darling tests); see Zhang (2002) <doi:10.1111/1467-9868.00337>.
Generates simulated data representing the LOX drop testing process (also known as impact testing). A simulated process allows for accelerated study of test behavior. Functions are provided to simulate trials, test series, and groups of test series. Functions for creating plots specific to this process are also included. Test attributes and criteria can be set arbitrarily. This work is not endorsed by or affiliated with NASA. See "ASTM G86-17, Standard Test Method for Determining Ignition Sensitivity of Materials to Mechanical Impact in Ambient Liquid Oxygen and Pressurized Liquid and Gaseous Oxygen Environments" <doi:10.1520/G0086-17>.
Generates DNA sequences based on Markov model techniques for matched sequences. This can be generalized to several sequences. The sequences (taxa) are then arranged in an evolutionary tree (phylogenetic tree) depicting how taxa diverge from their common ancestors. The package provides tests and estimation methods for the parameters of different models. Standard phylogenetic methods assume stationarity, homogeneity and reversibility for the Markov processes, and often impose further restrictions on the parameters.
This package provides functions to import multiple files of multiple data file types ('.xlsx', '.xls', '.csv', '.txt') from a given directory into R data frames.
Assists in finding the most suitable thread count for the various data.table routines that support parallel processing.
An interface to DifferentialEquations.jl <https://diffeq.sciml.ai/dev/> from the R programming language. It has unique high performance methods for solving ordinary differential equations (ODE), stochastic differential equations (SDE), delay differential equations (DDE), differential-algebraic equations (DAE), and more. Much of the functionality, including features like adaptive time stepping in SDEs, is unique and allows for multiple orders of magnitude speedup over more common methods. Supports GPUs, with support for CUDA (NVIDIA), AMD GPUs, Intel oneAPI GPUs, and Apple's Metal (M-series chip GPUs). diffeqr attaches an R interface onto the package, allowing seamless use of this tooling by R users. For more information, see Rackauckas and Nie (2017) <doi:10.5334/jors.151>.
Double constrained correspondence analysis (dc-CA) analyzes (multi-)trait (multi-)environment ecological data by using the vegan package and native R code. Throughout, the two-step algorithm of ter Braak et al. (2018) is used. This algorithm combines and extends community- (sample-) and species-level analyses, i.e. the usual community weighted means (CWM)-based regression analysis and the species-level analysis of species-niche centroids (SNC)-based regression analysis. The two steps use canonical correspondence analysis to regress the abundance data onto the traits and (weighted) redundancy analysis to regress the CWM of the orthonormalized traits onto the environmental predictors. The function dc_CA() has an option to divide the abundance data of a site by the site total, giving equal site weights. This division has the advantage that the multivariate analysis corresponds with an unweighted (multi-trait) community-level analysis, instead of being weighted. The first step of the algorithm uses vegan::cca(). The second step uses wrda(), or vegan::rda() if the site weights are equal. This version has a predict() function. For details see ter Braak et al. 2018 <doi:10.1007/s10651-017-0395-x> and ter Braak & van Rossum 2025 <doi:10.1016/j.ecoinf.2025.103143>.
This package creates define.xml documents used for regulatory submissions based on spreadsheet metadata. It can also help create metadata and generate an HTML data explorer.
Improves the concept of multivariate range boxes, which is highly susceptible to outliers and does not consider the distribution of the data. The package uses dynamic range boxes to overcome these problems.
Easy comparison of two tabular data objects in R. Specifically designed to show differences between two sets of data in a useful way that should make it easier to understand the differences, and if necessary, help you work out how to remedy them. Aims to offer a more useful output than all.equal() when your two data sets do not match, but isn't intended to replace all.equal() as a way to test for equality.
This package provides a set of control charts for batch processes based on the VAR model. The package contains the implementation of T2.var and W.var control charts based on VAR model coefficients using the couple vectors theory. At each time instant the VAR coefficients are estimated from a historical in-control dataset and a decision rule is applied for online classification of new batch data. These charts allow efficient online monitoring from the very first time instant. The offline version is available too. In order to evaluate the charts' performance, this package contains functions to generate batch data for offline and online monitoring. See Danilo Marcondes Filho and Marcio Valk (2020) <doi:10.1016/j.ejor.2019.12.038>.
Data whitening is a widely used preprocessing step to remove correlation structure, since statistical models often assume independence. Here we use a probabilistic model of the observed data to apply a whitening transformation. This Gaussian Inverse Wishart Empirical Bayes model substantially reduces computational complexity, and regularizes the eigenvalues of the sample covariance matrix to improve out-of-sample performance.
Calculates Distinctiveness Centrality in social networks. For formulas and descriptions, see Fronzetti Colladon and Naldi (2020) <doi:10.1371/journal.pone.0233276>.
Likelihood-based inference for skewed count distributions, typically the degree distributions used in network modeling. "degreenet" is a part of the "statnet" suite of packages for network analysis. See Jones and Handcock <doi:10.1098/rspb.2003.2369>.
This package provides a drop-in replacement for 'dplyr', powered by DuckDB for performance. Offers convenient utilities for working with in-memory and larger-than-memory data while retaining full dplyr compatibility.
Implements weighted higher-order initialization and angle-based iteration for multi-way spherical clustering under the degree-corrected tensor block model. See Jiaxin Hu and Miaoyan Wang (2023) <doi:10.1109/TIT.2023.3239521>.
Plots dependency logos from a set of aligned input sequences.
Applies dynamic structural equation models to time-series data with generic and simplified specification for simultaneous and lagged effects. Methods are described in Thorson et al. (2024) "Dynamic structural equation models synthesize ecosystem dynamics constrained by ecological mechanisms."
What is funnier than a dad joke? A dad joke in R! This package utilizes the API for <https://icanhazdadjoke.com> and returns dad jokes from several API endpoints.
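For context, a minimal Python sketch of a request against the public icanhazdadjoke.com API that this package wraps (the Accept and User-Agent headers and the returned "joke" field follow that API's public documentation; the contact address is a placeholder, and exactly how the R package exposes the result is not described here):

    import requests

    headers = {
        "Accept": "application/json",  # ask the API for a JSON response
        "User-Agent": "example-script (you@example.org)",  # placeholder contact, as the API requests
    }

    response = requests.get("https://icanhazdadjoke.com/", headers=headers)
    response.raise_for_status()
    payload = response.json()  # contains "id", "joke", and "status"
    print(payload["joke"])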
This package provides tools for detecting XOR-like patterns in variable pairs in two-class data sets. Includes visualizations for pattern exploration and reporting capabilities with both text and HTML output formats.
All datasets and functions required for the examples and exercises of the book "Data Science for Psychologists" (by Hansjoerg Neth, Konstanz University, 2025, <doi:10.5281/zenodo.7229812>), freely available at <https://bookdown.org/hneth/ds4psy/>. The book and corresponding courses introduce principles and methods of data science to students of psychology and other biological or social sciences. The ds4psy package primarily provides datasets, but also functions for data generation and manipulation (e.g., of text and time data) and graphics that are used in the book and its exercises. All functions included in ds4psy are designed to be explicit and instructive, rather than efficient or elegant.
Distributed Online Mean Tests is a powerful tool designed to efficiently process and analyze distributed datasets. It enables users to perform mean tests in an online, distributed manner, making it highly suitable for large-scale data analysis. By leveraging advanced computational techniques, Domean ensures robust and scalable solutions for statistical analysis, particularly in scenarios where data is dispersed across multiple nodes or sources. This package is ideal for researchers and practitioners working with high-dimensional data, providing a flexible and efficient framework for mean testing. The philosophy of Domean is described in Guo G. (2025) <doi:10.1016/j.physa.2024.130308>.
Implementation of the Decorrelated Local Linear estimator proposed in <arXiv:1907.12732>. It constructs the confidence interval for the derivative of the function of interest under the high-dimensional sparse additive model.