Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
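For example, the endpoint can be queried from R with the httr package; this is only a sketch, and the base URL below is a placeholder that must be replaced with this site's address:

    library(httr)
    base <- "https://example.org"   # placeholder; use this site's address
    resp <- GET(paste0(base, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)    # the matching packages for the requested page
    headers(resp)    # pagination information, e.g. the total number of pages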
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Interact with the application programming interface for the web annotation service Hypothes.is (see <http://hypothes.is> for more information). Allows users to download data about public annotations and to create, retrieve, update, and delete their own annotations.
HAR files are generated and consumed by GEMPACK applications. This package reads/writes HAR files (and SL4 files) directly using basic R functions.
Ridge regression provides biased estimators of the regression parameters with lower variance. The HDBRR ("High Dimensional Bayesian Ridge Regression") function fits Bayesian ridge regression without MCMC, using the SVD or QR decomposition for the posterior computation.
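As a generic illustration of why the SVD helps here (a sketch only, not the HDBRR package's interface): with X = U D V', the ridge estimate can be computed without inverting X'X + lambda*I, and for many lambda values at once.

    ## Generic sketch of ridge estimation via the SVD (illustrative only).
    set.seed(1)
    n <- 50; p <- 200
    X <- matrix(rnorm(n * p), n, p)
    y <- X[, 1:5] %*% rep(1, 5) + rnorm(n)
    lambda <- 2
    s <- svd(X)                                  # X = U D V'
    beta_svd <- s$v %*% ((s$d / (s$d^2 + lambda)) * crossprod(s$u, y))
    ## Same estimate via the direct (and slower for large p) formula:
    beta_dir <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
    max(abs(beta_svd - beta_dir))                # ~ 0, up to rounding error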
This is a dummy package demonstrating how to interface to a jar file that resides inside an R package.
This package provides functions for fitting various penalized parametric and semi-parametric mixture cure models with different penalty functions, testing for a significant cure fraction, and testing for sufficient follow-up as described in Fu et al. (2022) <doi:10.1002/sim.9513> and Archer et al. (2024) <doi:10.1186/s13045-024-01553-6>. False discovery rate controlled variable selection is provided using model-X knockoffs.
Perform statistical writership analysis of scanned handwritten documents. Webpage provided at: <https://github.com/CSAFE-ISU/handwriter>.
This package performs iterative extrapolation of species haplotype accumulation curves using a nonparametric stochastic (Monte Carlo) optimization method for assessment of specimen sampling completeness based on the approach of Phillips et al. (2015) <doi:10.1515/dna-2015-0008>, Phillips et al. (2019) <doi:10.1002/ece3.4757> and Phillips et al. (2020) <doi:10.7717/peerj-cs.243>. HACSim outputs a number of useful summary statistics of sampling coverage ("Measures of Sampling Closeness"), including an estimate of the likely required sample size (along with confidence intervals at the desired level) necessary to recover a given number/proportion of observed unique species haplotypes. Any genomic marker can be targeted to assess likely required specimen sample sizes for genetic diversity assessment. The method is particularly well-suited to assess sampling sufficiency for DNA barcoding initiatives. Users can also simulate their own DNA sequences according to various models of nucleotide substitution. A Shiny app is also available.
Estimates treatment effects using covariate adjustment methods in Randomized Clinical Trials (RCT) motivated by higher-order influence functions (HOIF). Provides point estimates, oracle bias, variance, and approximate variance for HOIF-adjusted estimators. For methodology details, see Zhao et al. (2024) <doi:10.48550/arXiv.2411.08491> and Gu et al. (2025) <doi:10.48550/arXiv.2512.20046>.
Raster-based flood modelling internally using 'hyd1d', an R package to interpolate 1d water level and gauging data. The package computes flood extent and duration through strategies originally developed for 'INFORM', an 'ArcGIS'-based hydro-ecological modelling framework. It does not provide a full physical hydraulic modelling algorithm, but a simplified, near real-time GIS approach to flood extent and duration modelling. Computationally demanding annual flood durations have already been computed, and the data products were published by Weber (2022) <doi:10.1594/PANGAEA.948042>.
Plot an R package's recursive dependency graph and tabulate the number of unique downstream dependencies added by top-level dependencies. This helps R package developers identify which of their declared dependencies add the most downstream dependencies in order to prioritize them for removal if needed. Uses graph stress minimization adapted from Schoch (2023) <doi:10.21105/joss.05238> and originally reported in Gansner et al. (2004) <doi:10.1007/978-3-540-31843-9_25>.
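A rough, base-R-only sketch of a similar tabulation (not this package's own interface; the package chosen below is arbitrary and a CRAN mirror must be reachable):

    ## Count recursive dependencies behind each top-level dependency,
    ## using tools::package_dependencies() from base R.
    db  <- available.packages(repos = "https://cloud.r-project.org")
    pkg <- "ggplot2"                               # illustrative choice
    top <- tools::package_dependencies(pkg, db = db,
                                       which = c("Depends", "Imports"))[[pkg]]
    top <- intersect(top, rownames(db))            # keep CRAN packages only
    rec <- tools::package_dependencies(top, db = db,
                                       which = c("Depends", "Imports"),
                                       recursive = TRUE)
    sort(sapply(rec, length), decreasing = TRUE)   # heaviest dependencies first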
Hospital data analysis workflow tools, modeling, and automations. This package provides many useful tools for reviewing common administrative hospital data, including average length of stay, readmission rates, and average net pay amounts by service line. The aim is to provide a simple and consistent verb framework that takes the guesswork out of routine analyses.
Read, plot, manipulate and process hydro-meteorological data records (with special features for Argentinian and Chilean datasets).
The Ljung-Box test is one of the most important tests for time series diagnostics and model selection. The Hassani SACF (Sum of the Sample Autocorrelation Function) Theorem, however, indicates that the sum of the sample autocorrelation function is always fixed for any stationary time series of arbitrary length. This package demonstrates the sensitivity of the Ljung-Box test to the number of lags used in the test, and therefore the test should be applied with extra caution. The Hassani SACF Theorem is described in Hassani and Yeganegi (2019) <doi:10.1016/j.physa.2018.12.028>.
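A quick numerical check of the fixed-sum property (with the usual biased ACF estimator, the sample autocorrelations over all n - 1 lags sum to -1/2, whatever the series):

    ## The sample ACF summed over all n - 1 lags equals -1/2,
    ## regardless of the series (here arbitrary white noise).
    set.seed(1)
    x <- rnorm(200)
    r <- acf(x, lag.max = length(x) - 1, plot = FALSE)$acf[-1]  # drop lag 0
    sum(r)    # -0.5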
Apply an adaptation of the SuperFastHash algorithm to any R object. Hash whole R objects or, for vectors or lists, hash R objects to obtain a set of hash values that is stored in a structure equivalent to the input. See <http://www.azillionmonkeys.com/qed/hash.html> for a description of the hash algorithm.
This package provides a suite of routines for the hyperdirichlet distribution and reified Bradley-Terry; supersedes the hyperdirichlet package; uses disordR discipline <doi:10.48550/ARXIV.2210.03856>. To cite in publications please use Hankin 2017 <doi:10.32614/rj-2017-061>, and for Generalized Plackett-Luce likelihoods use Hankin 2024 <doi:10.18637/jss.v109.i08>.
This package provides a set of tools to analyze and visualize the relationships between host-associated microbiomes of hybrid organisms and those of their progenitor species. Though not necessary, installing the microViz package is recommended as a check for phyloseq objects. To install microViz from R Universe use the following command: install.packages("microViz", repos = c(davidbarnett = "https://david-barnett.r-universe.dev", getOption("repos"))). To install microViz from GitHub use the following commands: install.packages("devtools") followed by devtools::install_github("david-barnett/microViz").
Given a database of previous treatment/placebo estimates, their standard errors and sample sizes, the program calculates a significance criterion and a power estimate that take into account the among-trial variation.
Perform Hi-C data differential analysis based on pixel-level differential analysis and a post hoc inference strategy to quantify signal in clusters of pixels. Clusters of pixels are obtained through a connectivity-constrained two-dimensional hierarchical clustering.
In high-dimensional settings: Estimate the number of distant spikes based on the Generalized Spiked Population (GSP) model. Estimate the population eigenvalues, angles between the sample and population eigenvectors, correlations between the sample and population PC scores, and the asymptotic shrinkage factors. Adjust the shrinkage bias in the predicted PC scores. Dey, R. and Lee, S. (2019) <doi:10.1016/j.jmva.2019.02.007>.
Offers a convenient way to compute parameters in the framework of the theory of vocational choice introduced by J.L. Holland (1997). A comprehensive summary of this theory of vocational choice is given in Holland, J.L. (1997). Making vocational choices: A theory of vocational personalities and work environments. Lutz, FL: Psychological Assessment.
This package provides a toolkit for the analysis and management of data for genes in the so-called "Human Leukocyte Antigen" (HLA) region. Functions extract reference data from the Anthony Nolan HLA Informatics Group/ImmunoGeneTics HLA GitHub repository (ANHIG/IMGTHLA) <https://github.com/ANHIG/IMGTHLA>, validate Genotype List (GL) Strings, convert between UNIFORMAT and GL String Code (GLSC) formats, translate HLA alleles and GLSCs across ImmunoPolymorphism Database (IPD) IMGT/HLA Database release versions, identify differences between pairs of alleles at a locus, generate customized, multi-position sequence alignments, trim and convert allele-names across nomenclature epochs, and extend existing data-analysis methods.
The HURRECON model estimates wind speed, wind direction, enhanced Fujita scale wind damage, and duration of EF0 to EF5 winds as a function of hurricane location and maximum sustained wind speed. Results may be generated for a single site or an entire region. Hurricane track and intensity data may be imported directly from the US National Hurricane Center's HURDAT2 database. For details on the original version of the model written in Borland Pascal, see: Boose, Chamberlin, and Foster (2001) <doi:10.1890/0012-9615(2001)071[0027:LARIOH]2.0.CO;2> and Boose, Serrano, and Foster (2004) <doi:10.1890/02-4057>.
A seed germinates through the physical process of water uptake by the dry seed, driven by the difference in water potential between the seed and the water. There is seed-to-seed variability in the base seed water potential. Hence, there is a need for a distribution such that a viable seed germinates if and only if the soil water potential exceeds its base seed water potential. This package estimates the stress tolerance and uniformity parameters of the seed lot for germination under various temperatures by using the hydro-time model of counts of germinated seeds under various water potentials. The distribution of the base seed water potential is considered to follow the Normal, Logistic and Extreme value distributions. The estimated proportion of germinated seeds, along with the estimates of the stress and uniformity parameters, is obtained using a generalised linear model. A significance test of the above parameters within and between temperatures is also performed in the analysis. Details can be found in Kebreab and Murdoch (1999) <doi:10.1093/jxb/50.334.655> and Bradford (2002) <https://www.jstor.org/stable/4046371>.
Aimed at applying the Harvest classification tree algorithm, a modified version of the classic classification tree. The harvested tree has the advantage of deleting redundant rules, leading to a simpler and more efficient tree model. It was first used in the drug discovery field, but it also performs well on other kinds of data, especially when the region of a class is disconnected. This package also improves the basic harvest classification tree algorithm by extending it to both continuous and categorical variables. To learn more about the harvest classification tree algorithm, see <http://www.stat.ubc.ca/Research/TechReports/techreports/220.pdf>.