Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
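For example, a quick way to query this endpoint from R (the base URL below is a placeholder; substitute the address where this service is hosted):

    library(httr)
    # List packages matching "hello", 20 per page; the host name is a placeholder.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    headers(resp)  # pagination details (e.g. the total number of pages)
    content(resp)  # the matching packages for this page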
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Takes in images in .jpg, .jpeg, or .png format and creates a colour palette of the most frequent colours used in the image. Also provides some custom colour palettes.
This package implements the cointegration/co-trending rank selection algorithm of Guo and Shintani (2013), "Consistent co-trending rank selection when both stochastic and nonlinear deterministic trends are present", The Econometrics Journal 16: 473-483 <doi:10.1111/j.1368-423X.2012.00392.x>. Numbered examples correspond to the February 2011 preprint <http://www.fas.nus.edu.sg/ecs/events/seminar/seminar-papers/05Apr11.pdf>.
When taking online surveys, participants sometimes respond to items without regard to their content. These types of responses, referred to as careless or insufficient effort responding, constitute significant problems for data quality, leading to distortions in data analysis and hypothesis testing, such as spurious correlations. The R package careless provides solutions designed to detect such careless / insufficient effort responses by allowing easy calculation of indices proposed in the literature. It currently supports the calculation of longstring, even-odd consistency, psychometric synonyms/antonyms, Mahalanobis distance, and intra-individual response variability (also termed inter-item standard deviation). For a review of these methods, see Curran (2016) <doi:10.1016/j.jesp.2015.07.006>.
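As a brief, hedged illustration of the kind of call this enables (a sketch only: the data frame below is hypothetical, and the helper names longstring(), irv() and mahad() reflect the indices listed above; check the package documentation for exact signatures):

    library(careless)
    # Hypothetical survey data: 100 respondents answering 20 Likert-type items (1-5).
    set.seed(1)
    surveys <- as.data.frame(matrix(sample(1:5, 100 * 20, replace = TRUE), nrow = 100))
    longstring(surveys)  # longest run of identical consecutive answers per respondent
    irv(surveys)         # intra-individual response variability (inter-item standard deviation)
    mahad(surveys)       # Mahalanobis distance of each response pattern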
This package contains three maps: 1) US states, 2) US counties, and 3) countries of the world.
Colorful Data Frames in the terminal. The new class does not change the behaviour of any of the objects, but adds a style definition and a print method. Using ANSI escape codes, it colors the terminal output of data frames. Some column types (such as p-values and identifiers) are automatically recognized.
Perform Nonlinear Mixed-Effects (NLME) Modeling using Certara's NLME-Engine. Access the same Maximum Likelihood engines used in the Phoenix platform, including algorithms for parametric methods, individual analysis, and pooled data analysis. The Quasi-Random Parametric Expectation-Maximization Method (QRPEM) is also supported <https://www.page-meeting.org/default.asp?abstract=2338>. Execution is supported both locally and on remote machines. Remote execution includes support for Linux Sun Grid Engine (SGE), Simple Linux Utility for Resource Management (SLURM) grids, Linux and Windows multicore, and individual runs.
This package provides a one-stop shop for intuitive and dependency-free color and palette creation and modification. Includes palettes and functionality from popular packages such as viridis, RColorBrewer, and base R grDevices, as well as ggplot2 plot bindings. Users can generate perceptually uniform and colorblind-friendly palettes, adjust palettes in HSL and RGB color spaces, map color gradients to value ranges, and create color-generating functions.
Weekly notified dengue cases and climate variables in Colombo district, Sri Lanka, from week 52 of 2008 to week 21 of 2014.
Calculations of "EP15-A3 document. A manual for user verification of precision and estimation of bias" CLSI (2014, ISBN:1-56238-966-1).
Converts numbers to continued fractions and back again. A solver for Pell's Equation is provided. The method for calculating roots in continued fraction form appears without published attribution in several places, for example on the page of Professor Emeritus Jonathan Lubin <http://www.math.brown.edu/jlubin/> and in his post on Mathematics Stack Exchange <https://math.stackexchange.com/questions/2215918>, or on Professor Ron Knott's site, e.g. <https://r-knott.surrey.ac.uk/Fibonacci/cfINTRO.html>.
Subset and download data from the EU Copernicus Climate Data Service: <https://cds.climate.copernicus.eu/>. Import information about the Earth's past, present and future climate from Copernicus into R without the need for external software.
Simplifies the execution of command line interface (CLI) tools within isolated and reproducible environments. It enables users to effortlessly manage Conda environments, execute command line tools, handle dependencies, and ensure reproducibility in their data analysis workflows.
Tests, utilities, and case studies for analyzing significance in clustered binary matched-pair data. The central function clust.bin.pair uses one of several tests to calculate a Chi-square statistic. Implemented are the tests of Eliasziw (1991) <doi:10.1002/sim.4780101211>, Obuchowski (1998) <doi:10.1002/(SICI)1097-0258(19980715)17:13%3C1495::AID-SIM863%3E3.0.CO;2-I>, Durkalski (2003) <doi:10.1002/sim.1438>, and Yang (2010) <doi:10.1002/bimj.201000035>, with McNemar (1947) <doi:10.1007/BF02295996> included for comparison. The utility functions nested.to.contingency and paired.to.contingency convert data between various useful formats. Thyroids and psychiatry are the canonical datasets from Obuchowski (1998) and Petryshen (1989) <doi:10.1016/0165-1781(89)90196-0>, respectively.
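A minimal sketch of a call to the central function (the per-cluster counts are made up, and the argument names ak, bk, ck, dk and the method label "yang" are assumptions inferred from the description above rather than verified against the package):

    library(clust.bin.pair)
    # Hypothetical per-cluster 2x2 counts: ak/dk concordant pairs, bk/ck discordant pairs.
    ak <- c(4, 2, 3)
    bk <- c(1, 0, 2)
    ck <- c(0, 1, 1)
    dk <- c(5, 7, 4)
    # Clustered matched-pair chi-square test; "yang" would select the Yang (2010) statistic.
    clust.bin.pair(ak, bk, ck, dk, method = "yang")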
Compute price indices using various hedonic and multilateral methods, including Laspeyres, Paasche, Fisher, and HMTS (Hedonic Multilateral Time series re-estimation with splicing). The central function calculate_price_index() offers a unified interface for running these methods on structured datasets. This package is designed to support index construction workflows for real estate and other domains where quality-adjusted price comparisons over time are essential. The development of this package was funded by Eurostat and Statistics Netherlands (CBS), and carried out by Statistics Netherlands. The HMTS method implemented here is described in Ishaak, Ouwehand and Remøy (2024) <doi:10.1177/0282423X241246617>. For broader methodological context, see Eurostat (2013, ISBN:978-92-79-25984-5, <doi:10.2785/34007>).
Calculates predictions from generalized estimating equations and internally cross-validates them using the logarithmic, quadratic and spherical proper scoring rules; Kung-Yee Liang and Scott L. Zeger (1986) <doi:10.1093/biomet/73.1.13>.
Calculates equitable overload compensation for college instructors based on institutional policies, enrollment thresholds, and regular teaching load limits. Compensation is awarded only for credit hours that exceed the regular load and meet minimum enrollment criteria. When enrollment is below a specified threshold, pay is prorated accordingly. The package prioritizes compensation from high-enrollment courses, or optionally from low-enrollment courses for fairness, depending on the user-defined strategy. It includes tools for flexible policy settings and instructor filtering, and produces clean, audit-ready summary tables suitable for payroll and administrative reporting.
Causal Inference Assistance (CIA) for performing causal inference within the structural causal modelling framework. Structure learning is performed using partition Markov chain Monte Carlo (Kuipers and Moffa, 2017, <doi:10.1080/01621459.2015.1133426>), and several additional functions have been added to help with causal inference.
Supervised learning from a source distribution (with known segmentation into cell sub-populations) to fit a target distribution with unknown segmentation. It relies on regularized optimal transport to directly estimate the different cell population proportions from a biological sample characterized with flow cytometry measurements. It is based on the regularized Wasserstein metric to compare cytometry measurements from different samples, thus accounting for possible mis-alignment of a given cell population across samples (due to technical variability of the measurement technology). This supervised learning technique based on the Wasserstein metric is used to estimate an optimal re-weighting of class proportions in a mixture model. Details are presented in Freulon P, Bigot J and Hejblum BP (2023) <doi:10.1214/22-AOAS1660>.
Routines for solving convex optimization problems with cone constraints by means of interior-point methods. The implemented algorithms are partially ported from CVXOPT, a Python module for convex optimization (see <https://cvxopt.org> for more information).
Implementation of the Contextual Importance and Utility (CIU) concepts for Explainable AI (XAI). A description of CIU can be found in e.g. Främling (2020) <doi:10.1007/978-3-030-51924-7_4>.
This package provides a specialized tool designed for assessing contextual bandit algorithms, particularly those aimed at handling overdispersed and zero-inflated count data. It offers a simulated testing environment that includes various models like Poisson, Overdispersed Poisson, Zero-inflated Poisson, and Zero-inflated Overdispersed Poisson. The package is capable of executing five specific algorithms: Linear Thompson sampling with log transformation on the outcome, Thompson sampling Poisson, Thompson sampling Negative Binomial, Thompson sampling Zero-inflated Poisson, and Thompson sampling Zero-inflated Negative Binomial. Additionally, it can generate regret plots to evaluate the performance of contextual bandit algorithms. This package is based on the algorithms by Liu et al. (2023) <arXiv:2311.14359>.
Continuous glucose monitoring (CGM) systems provide real-time, dynamic glucose information by tracking interstitial glucose values throughout the day. Glycemic variability, also known as glucose variability, is an established risk factor for hypoglycemia (Kovatchev) and has been shown to be a risk factor in diabetes complications. Over 20 metrics of glycemic variability have been identified. Here, we provide functions to calculate glucose summary metrics, glucose variability metrics (as defined in clinical publications), and visualizations of trends in CGM data. Cho P, Bent B, Wittmann A, et al. (2020) <https://diabetes.diabetesjournals.org/content/69/Supplement_1/73-LB.abstract>; American Diabetes Association (2020) <https://professional.diabetes.org/diapro/glucose_calc>; Kovatchev B (2019) <doi:10.1177/1932296819826111>; Kovatchev BP (2017) <doi:10.1038/nrendo.2017.3>; Tamborlane WV, Beck RW, Bode BW, et al. (2008) <doi:10.1056/NEJMoa0805017>; Umpierrez GE, Kovatchev BP (2018) <doi:10.1016/j.amjms.2018.09.010>.
This package provides recent kernel density estimation methods for circular data, including adaptive and higher-order techniques. The implementation is based on recent advances in bandwidth selection and circular smoothing. Key methods include adaptive bandwidth selection methods by Zámečník et al. (2024) <doi:10.1007/s00180-023-01401-0>, complete cross-validation by Hasilová et al. (2024) <doi:10.59170/stattrans-2024-024>, Fourier-based plug-in rules by Tenreiro (2022) <doi:10.1080/10485252.2022.2057974>, and higher-order kernels by Tsuruta & Sagae (2017) <doi:10.1016/j.spl.2017.08.003>.
Cluster analysis with compositional data using the alpha-transformation. Relevant papers include: Tsagris M. and Kontemeniotis N. (2025) <doi:10.48550/arXiv.2509.05945>; Tsagris M.T., Preston S. and Wood A.T.A. (2011) <doi:10.48550/arXiv.1106.1451>; Garcia-Escudero L.A., Gordaliza A., Matran C. and Mayo-Iscar A. (2008) <doi:10.1214/07-AOS515>.