Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
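For example, here is a minimal Python sketch of a paginated query. It assumes the third-party requests library, a placeholder base URL, a JSON response body, and unspecified pagination header names; adjust all of these to the actual service.

import requests  # third-party HTTP client

BASE_URL = "https://example.org"  # placeholder: replace with the host serving this search page

# search: your query; page: page number; limit: items per page
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
resp.raise_for_status()

# Pagination details (e.g. the total number of pages) arrive in the response
# headers; the exact header names are not documented here, so inspect them.
for name, value in resp.headers.items():
    print(name, value)

print(resp.json())  # assuming the body is JSON; adjust if the service returns something else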
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
In streaming data analysis, it is crucial to detect significant shifts in the data distribution or the accuracy of predictive models over time, a phenomenon known as concept drift. The package aims to identify when concept drift occurs and provide methodologies for adapting models in non-stationary environments. It offers a range of state-of-the-art techniques for detecting concept drift and maintaining model performance. Additionally, the package provides tools for adapting models in response to these changes, ensuring continuous and accurate predictions in dynamic contexts. Methods for concept drift detection are described in Tavares (2022) <doi:10.1007/s12530-021-09415-z>.
This package provides interpretable high-dimensional mean comparison methods (HMC). For example, users can apply these methods to assess the difference in gene expression between two treatment groups. It is not a gene-by-gene comparison. Instead, the methods focus on the interplay between features and identify those that are predictive of the group label. The tests are valid frequentist procedures and yield sparse estimates indicating which features contribute to the group differences.
Generates a fit plot for diagnosing misspecification in models of binary dependent variables, and calculates the related heatmap fit statistic described in Esarey and Pierce (2012) <DOI:10.1093/pan/mps026>.
This data-only package was created for distributing data used in the examples of the hglm package.
This package provides a tool for Hierarchical Climate Regionalization applicable to any correlation-based clustering. It adds several features and a new clustering method (called regional linkage) to hierarchical clustering in R (the hclust function in the stats library): data regridding, coarsening spatial resolution, geographic masking, contiguity-constrained clustering, data filtering by mean and/or variance thresholds, data preprocessing (detrending, standardization, and PCA), a faster correlation function with preliminary big-data support, different clustering methods, hybrid hierarchical clustering, multivariate clustering (MVC), cluster validation, visualization of regionalization results, and export of the region map and mean time series to a NetCDF-4 file. The technical details are described in Badr et al. (2015) <doi:10.1007/s12145-015-0221-7>.
Penalized regression for generalized linear models with measurement error (also known as errors-in-variables). The package contains a version of the lasso (L1-penalization) which corrects for measurement error (Sorensen et al. (2015) <doi:10.5705/ss.2013.180>). It also contains an implementation of the Generalized Matrix Uncertainty Selector, which is a version of the (Generalized) Dantzig Selector for the case of measurement error (Sorensen et al. (2018) <doi:10.1080/10618600.2018.1425626>).
This package provides functionality to download and cache files from Hugging Face Hub <https://huggingface.co/models>. Uses the same caching structure so files can be shared between different client libraries.
HTTP request protocols. Implements GET, POST, and multipart POST requests.
This package implements an efficient algorithm for fitting the entire regularization path of support vector machine models with elastic-net penalties using a generalized coordinate descent scheme. The framework also supports SCAD and MCP penalties. It is designed for high-dimensional datasets and emphasizes numerical accuracy and computational efficiency. This package implements the algorithms proposed in Tang, Q., Zhang, Y., & Wang, B. (2022) <https://openreview.net/pdf?id=RvwMTDYTOb>.
An open-source R package for deploying reproducible and flexible labels using layers. The huito package is part of the inkaverse project for developing different procedures and tools used in plant science and experimental designs. Learn more about the inkaverse project at <https://inkaverse.com/>.
S3 functions implementing both statistical and graphical goodness-of-fit measures between observed and simulated values, mainly oriented towards use during the calibration, validation, and application of hydrological models. Missing values in observed and/or simulated values can be removed before computations. Comments, questions, and collaboration of any kind are very welcome.
Generates HIDECAN plots that summarise and combine the results of genome-wide association studies (GWAS) and transcriptomics differential expression analyses (DE), along with manually curated candidate genes of interest. The HIDECAN plot is presented in Angelin-Bonnet et al. (2023) (currently in review).
Testing homogeneity of k multivariate distributions is a classical and challenging problem in statistics, and it becomes even more challenging when the dimension of the data exceeds the sample size. We construct tests for this purpose which are exact level (size) alpha tests based on clustering. These tests are easy to implement and distribution-free in finite sample situations. Under appropriate regularity conditions, these tests have the consistency property in the HDLSS asymptotic regime, where the dimension of the data grows to infinity while the sample size remains fixed. We also consider a multiscale approach, where the results for different numbers of partitions are aggregated judiciously. Details are in Biplab Paul, Shyamal K De and Anil K Ghosh (2021) <doi:10.1016/j.jmva.2021.104897>; Soham Sarkar and Anil K Ghosh (2019) <doi:10.1109/TPAMI.2019.2912599>; William M Rand (1971) <doi:10.1080/01621459.1971.10482356>; Cyrus R Mehta and Nitin R Patel (1983) <doi:10.2307/2288652>; Joseph C Dunn (1973) <doi:10.1080/01969727308546046>; Sture Holm (1979) <doi:10.2307/4615733>; Yoav Benjamini and Yosef Hochberg (1995) <doi:10.2307/2346101>.
Import and classify canopy fish-eye images, estimate angular gap fraction and derive canopy attributes like leaf area index and openness. Additional information is provided in the study by Chianucci F., Macek M. (2023) <doi:10.1016/j.agrformet.2023.109470>.
The Gene Ontology (GO) Consortium <https://geneontology.org/> organizes genes into hierarchical categories based on biological process (BP), molecular function (MF) and cellular component (CC, i.e., subcellular localization). Tools such as GoMiner (see Zeeberg, B.R., Feng, W., Wang, G. et al. (2003) <doi:10.1186/gb-2003-4-4-r28>) can leverage GO to perform ontological analysis of microarray and proteomics studies, typically generating a list of significant functional categories. Microarray studies are usually analyzed with BP, whereas proteomics researchers often prefer CC. To capture the benefit of both of those ontologies, I developed a two-dimensional version of High-Throughput GoMiner ('HTGM2D'). I generate a 2D heat map whose axes are any two of BP, MF, or CC, and the value within a picture element of the heat map reflects the Jaccard metric p-value for the number of genes in common for the corresponding pair.
The Gene Ontology (GO) Consortium <https://geneontology.org/> organizes genes into hierarchical categories based on biological process (BP), molecular function (MF) and cellular component (CC, i.e., subcellular localization). Tools such as GoMiner (see Zeeberg, B.R., Feng, W., Wang, G. et al. (2003) <doi:10.1186/gb-2003-4-4-r28>) can leverage GO to perform ontological analysis of microarray and proteomics studies, typically generating a list of significant functional categories. To capture the benefit of all three ontologies, I developed HTGM3D, a three-dimensional version of GoMiner.
Hadoop InteractiVE facilitates distributed computing via the MapReduce paradigm through R and Hadoop. It provides an easy-to-use interface to Hadoop, the Hadoop Distributed File System (HDFS), and Hadoop Streaming.
This package provides functions to conduct robust inference in difference-in-differences and event study designs by implementing the methods developed in Rambachan & Roth (2023) <doi:10.1093/restud/rdad018>, "A More Credible Approach to Parallel Trends" [Previously titled "An Honest Approach..."]. Inference is conducted under a weaker version of the parallel trends assumption. Uniformly valid confidence sets are constructed based upon conditional confidence sets, fixed-length confidence sets and hybridized confidence sets.
Enables chat completion and text annotation with local and OpenAI <https://openai.com/> language models, supporting batch processing, multiple annotators, and consistent output formats.
This package contains functions to construct high-dimensional orthogonal maximin distance designs in two, four, eight, and sixteen levels from rotating the Kronecker product of sub-Hadamard matrices.
This package simplifies the use of the HPZone API. Most of the annoying and labor-intensive parts of the interface are handled by wrapper functions. Note that the API and its details are not publicly available. Information can be found at <https://www.ggdghorkennisnet.nl/groep/726-platform-infectieziekte-epidemiologen/documenten/map/9609> for those with access.
Create dynamic, data-driven text. Given two values, a list of talking points is generated and can be combined using string interpolation. Based on the glue package.
Translation between experimental null hypotheses, hypothesis matrices, and contrast matrices as used in linear regression models. The package is based on the method described in Schad et al. (2019) <doi:10.1016/j.jml.2019.104038> and Rabe et al. (2020) <doi:10.21105/joss.02134>.
Simulate and analyze hierarchical composite endpoints. Includes implementation for the kidney hierarchical composite endpoint as defined in Heerspink HL et al (2023) "Development and validation of a new hierarchical composite end point for clinical trials of kidney disease progression" (Journal of the American Society of Nephrology 34 (2): 2025-2038, <doi:10.1681/ASN.0000000000000243>). Win odds, also called Wilcoxon-Mann-Whitney or success odds, is the main analysis method. Other win statistics (win probability, win ratio, net benefit) are also implemented in the univariate case, provided there is no censoring. The win probability analysis is based on the Brunner-Munzel test and uses the DeLong-DeLong-Clarke-Pearson variance estimator, as described by Brunner and Konietschke (2025) in "An unbiased rank-based estimator of the Mann-Whitney variance including the case of ties" (Statistical Papers 66 (1): 20, <doi:10.1007/s00362-024-01635-0>). Includes implementation of a new Wilson-type, compatible confidence interval for the win odds, as proposed by Schüürhuis, Konietschke, Brunner (2025) in "A new approach to the nonparametric Behrens-Fisher problem with compatible confidence intervals" (Biometrical Journal 67 (6), <doi:10.1002/bimj.70096>). Stratification and covariate adjustment are performed based on the methodology presented by Koch GG et al. in "Issues for covariance analysis of dichotomous and ordered categorical data from randomized clinical trials and non-parametric strategies for addressing them" (Statistics in Medicine 17 (15-16): 1863-92). For a review, see Gasparyan SB et al (2021) "Adjusted win ratio with stratification: Calculation methods and interpretation" (Statistical Methods in Medical Research 30 (2): 580-611, <doi:10.1177/0962280220942558>).