Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
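For example, the endpoint can be queried from R with the httr package (a minimal sketch; the base URL is a placeholder for wherever this site is hosted):

## Minimal sketch of querying the search API from R with httr.
library(httr)

resp <- GET(
  "https://example.org/api/packages",  # placeholder base URL
  query = list(search = "hello", page = 1, limit = 20)
)

content(resp, as = "parsed")  # the matching packages
headers(resp)                 # pagination information is returned here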
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Fit mixed-effects location scale models with spike-and-slab priors on the location random effects to identify units with unusual residual variances. The method is described in detail in Carmo, Williams and Rast (2025) <https://osf.io/sh6ne>.
Implementation of the information matrix test for generalized partial credit models.
An implementation of the Unsupervised Smooth Contour Detection algorithm for digital images as described in the paper: "Unsupervised Smooth Contour Detection" by Rafael Grompone von Gioi and Gregory Randall (2016). The algorithm is explained at <doi:10.5201/ipol.2016.175>.
Imputation of missing values in Indonesian food price data, which is time series data, using the last observation carried forward (LOCF) technique. The technique also imputes values for dates that do not appear in the data at all, so that the regular-interval assumptions of the time series are met.
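As a generic illustration of the LOCF idea (not this package's own interface), the same effect can be sketched in R with the zoo package: expand the series to a complete date grid so missing dates appear, then carry the last observation forward:

## Generic LOCF sketch with zoo; not this package's own API, and the
## prices shown are made up.
library(zoo)

prices <- data.frame(
  date  = as.Date(c("2023-01-01", "2023-01-02", "2023-01-05")),
  price = c(14000, NA, 14500)
)

grid <- data.frame(date = seq(min(prices$date), max(prices$date), by = "day"))
full <- merge(grid, prices, all.x = TRUE)         # missing dates become NA rows
full$price <- na.locf(full$price, na.rm = FALSE)  # carry last value forward
full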
This package provides functions and classes to compute, handle and visualise incidence from dated events for a defined time interval. Dates can be provided in various standard formats. The class 'incidence' is used to store computed incidence and can be easily manipulated, subsetted, and plotted. In addition, log-linear models can be fitted to incidence objects using 'fit'. This package is part of the RECON (<https://www.repidemicsconsortium.org/>) toolkit for outbreak analysis.
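A minimal sketch of that workflow, assuming a vector of onset dates (function and argument names are as recalled from the package documentation and should be treated as assumptions):

## Sketch of the described workflow on simulated onset dates.
library(incidence)

onset <- as.Date("2024-01-01") + sample(0:60, 200, replace = TRUE)

i <- incidence(onset, interval = 7)  # weekly incidence from dated events
f <- fit(i)                          # log-linear model of the epidemic curve
plot(i, fit = f)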
An implementation of generalized linear models (GLMs) for studying relationships among attributes in connected populations, where responses of connected units can be dependent, as introduced by Fritz et al. (2025) <doi:10.1080/01621459.2025.2565851>. igml extends GLMs for independent responses to dependent responses and can be used for studying spillover in connected populations and other network-mediated phenomena.
Generate interactive volcano plots for exploring gene expression data. Built with 'ggplot2', the plots are rendered interactive using 'ggiraph', enabling users to hover over points to display detailed information or click to trigger custom actions.
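The underlying mechanism can be sketched directly with 'ggplot2' and 'ggiraph' (a generic illustration with made-up data and column names, not this package's own plotting function):

## Generic ggplot2 + ggiraph sketch of an interactive volcano plot;
## not this package's own function, and the data are made up.
library(ggplot2)
library(ggiraph)

de <- data.frame(
  gene  = paste0("gene", 1:100),
  logFC = rnorm(100),
  pval  = runif(100)
)

p <- ggplot(de, aes(x = logFC, y = -log10(pval))) +
  geom_point_interactive(aes(tooltip = gene, data_id = gene))

girafe(ggobj = p)  # hovering a point shows the gene name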
Simulate general insurance policies, losses and loss emergence. The functions support deterministic and stochastic policy retention and growth scenarios. Retention and growth rates are percentages relative to the expiring portfolio. Claims are simulated for each policy. This is accomplished either by assuming a frequency distribution per development lag or by generating random wait times until claim emergence and settlement. Loss simulation uses standard loss distributions for claim amounts.
The goal of image2data is to extract images and return them as a data set, especially for teaching data manipulation and data visualization. Basically, the eponymous function takes an image file ('png', 'tiff', 'jpeg', 'bmp') and turns it into a data set, with pixels as rows (subjects) and with columns (variables) giving their coordinate positions (x- and y-axis) and their respective colors (in hex codes). The function can return a complete image or a range of colors (i.e., contour, silhouette). The data can then be manipulated like any data set, either by creating other related variables (to hide the image) or by using it as a genuine toy data set.
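A minimal sketch of the idea, assuming the eponymous function accepts the file path as its first argument (the file name is illustrative):

## Sketch only: the file name, and the assumption that the path is the
## first argument, are illustrative rather than taken from the docs.
library(image2data)

dat <- image2data("picture.png")  # one row per pixel
str(dat)  # expected: x/y coordinates plus a hex colour per pixel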
This package provides methods for estimating causal effects in the presence of interference, as described in B. Saul and M. Hudgens (2017) <doi:10.18637/jss.v082.i02>. Currently it implements the inverse-probability weighted (IPW) estimators proposed by E.J. Tchetgen Tchetgen and T.J. VanderWeele (2012) <doi:10.1177/0962280210386779>.
This package implements the item-based collaborative filtering (IBCF) method proposed by Montesinos-López et al. (2017) <doi:10.1534/g3.117.300309> for continuous phenotypes in the context of plant breeding, where data are collected for various traits studied in various environments.
This package provides a pipeline to annotate chromatography peaks from the IDSL.IPA workflow <doi:10.1021/acs.jproteome.2c00120> with molecular formulas of a prioritized chemical space using an isotopic profile matching approach. The IDSL.UFA workflow only requires mass spectrometry level 1 (MS1) data for formula annotation. The IDSL.UFA method is described in <doi:10.1021/acs.analchem.2c00563>.
The core of the package is cvr2.ipflasso(), an extension of glmnet to be used when the (large) set of available predictors is partitioned into several modalities which potentially differ with respect to their information content in terms of prediction. For example, in biomedical applications patient outcome such as survival time or response to therapy may have to be predicted based on, say, mRNA data, miRNA data, methylation data, CNV data, clinical data, etc. The clinical predictors are on average often much more important for outcome prediction than the mRNA data. The ipflasso method takes this problem into account by using different penalty parameters for predictors from different modalities. The ratio between the different penalty parameters can be chosen from a set of optional candidates by cross-validation or alternatively generated from the input data.
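The idea of modality-specific penalties can be illustrated with 'glmnet' itself, whose penalty.factor argument rescales the penalty per predictor (a sketch of the concept only, not the cvr2.ipflasso() interface):

## Concept sketch using glmnet's penalty.factor; this is not the
## cvr2.ipflasso() interface, just the idea of per-modality penalties.
library(glmnet)

set.seed(1)
x_clin <- matrix(rnorm(100 * 5),  100, 5)    # clinical block
x_mrna <- matrix(rnorm(100 * 50), 100, 50)   # mRNA block
y <- rbinom(100, 1, 0.5)

x  <- cbind(x_clin, x_mrna)
pf <- c(rep(1, ncol(x_clin)), rep(2, ncol(x_mrna)))  # penalise mRNA twice as hard

fit <- cv.glmnet(x, y, family = "binomial", penalty.factor = pf)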
Convert between bookmaker odds and probabilities. Eight different algorithms are available, including basic normalization, Shin's method (Hyun Song Shin (1992) <doi:10.2307/2234526>), and others.
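As a quick illustration of the simplest of these algorithms, basic normalization rescales the inverse odds so that they sum to one (plain base R arithmetic, not this package's interface):

## Basic normalization of decimal odds; plain arithmetic, not the
## package's own interface.
odds <- c(home = 2.10, draw = 3.40, away = 3.80)

inv  <- 1 / odds
prob <- inv / sum(inv)  # removes the bookmaker margin (overround)

round(prob, 3)
sum(prob)  # 1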
Converts character vectors between phonetic representations. Supports IPA (International Phonetic Alphabet), X-SAMPA (Extended Speech Assessment Methods Phonetic Alphabet), and ARPABET (used by the CMU Pronouncing Dictionary).
Allows for the non-parametric estimation of transition intensities in interval-censored multi-state models using the approach of Gomon and Putter (2024) <doi:10.48550/arXiv.2409.07176> or Gu et al. (2023) <doi:10.1093/biomet/asad073>.
Call the data wrappers for Izmir Metropolitan Municipality's Open Data Portal. This will return all datasets provided as .csv or .xlsx files, as well as datasets that require an API key.
This package contains some important regression methods for interval-valued variables. For each method, the fitted values, residuals and some goodness-of-fit measures are available.
Fits a double logistic function to NDVI time series and calculates the instantaneous rate of green-up (IRG) according to methods described in Bischof et al. (2012) <doi:10.1086/667590>.
Distributional regression under stochastic order restrictions for numeric and binary response variables and partially ordered covariates. See Henzi, Ziegel, Gneiting (2020) <arXiv:1909.03725>.
This package provides functions to calculate indices used to score immunoglobulin A (IgA) binding of bacteria in IgA sequencing (IgA-Seq) experiments. This includes the original Kau and Palm indices and more recent methods as described in Jackson et al. (2020) <doi:10.1101/2020.08.19.257501>. Additionally the package contains a function to simulate IgA-Seq data and an example experimental data set for method testing.
This package provides a set of tools for evaluating several measures of case influence for structural equation models.
Volume prediction is one of the challenging tasks in forestry research. This package is a comprehensive toolset designed for the fitting and validation of various linear and nonlinear allometric equations (Linear, Log-Linear, Inverse, Quadratic, Cubic, Compound, Power and Exponential) used in the prediction of conifer tree volume. It is particularly useful for forestry professionals, researchers, and resource managers engaged in assessing and estimating the volume of coniferous trees. The package has been developed using the algorithm of Sharma et al. (2017) <doi:10.13140/RG.2.2.33786.62407>.
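One of the listed forms, the power equation V = a * D^b, can be fitted with base R's nls() as a generic illustration (made-up data and variable names, not this package's interface):

## Generic illustration of fitting a power allometric equation with
## base R's nls(); the data are simulated, not from this package.
set.seed(42)
dbh    <- runif(50, 10, 60)                          # diameter at breast height (cm)
volume <- 0.0004 * dbh^2.3 * exp(rnorm(50, 0, 0.1))  # simulated stem volumes (m^3)

fit <- nls(volume ~ a * dbh^b, start = list(a = 0.001, b = 2))
summary(fit)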
Simple handling of survey data. Smart handling of meta-information like e.g. variable-labels value-labels and scale-levels. Easy access and validation of meta-information. Useage of value labels and values respectively for subsetting and recoding data.