Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
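For example, the endpoint can be queried from R with the httr package (the host below is a placeholder for this site's address):

```r
library(httr)

# Query the package search API: first page, 20 items per page.
resp <- GET("https://example.org/api/packages",  # placeholder host
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

content(resp)   # the matching packages, parsed from the response body
headers(resp)   # pagination details are reported here
```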
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Compute house price indexes and series using a variety of different methods and models common throughout the real estate literature. Evaluate index goodness based on accuracy, volatility and revision statistics. Background on basic model construction for repeat sales models can be found at: Case and Quigley (1991) <https://ideas.repec.org/a/tpr/restat/v73y1991i1p50-58.html> and for hedonic pricing models at: Bourassa et al (2006) <doi:10.1016/j.jhe.2006.03.001>. The package author's working paper on the random forest approach to house price indexes can be found at: <http://www.github.com/andykrause/hpi_research>.
To improve performance for HTTP API clients, httpcache provides simple tools for caching and cache invalidation. It includes the HTTP verb functions GET, PUT, PATCH, POST, and DELETE, which are drop-in replacements for those in the httr package. These functions are cache-aware and provide default settings for cache invalidation suitable for RESTful APIs; the package also enables custom cache-management strategies. Finally, httpcache includes a basic logging framework to facilitate the measurement of HTTP request time and cache performance.
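A minimal sketch of that drop-in usage (the URL is a placeholder):

```r
library(httpcache)

# First request goes over the network; the response is cached.
r1 <- GET("https://app.example.com/api/datasets")  # placeholder URL

# A repeated identical request is answered from the cache.
r2 <- GET("https://app.example.com/api/datasets")

# PUT/PATCH/POST/DELETE invalidate affected entries under the default
# RESTful rules; the whole cache can also be dropped explicitly.
clearCache()
```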
This package provides functions for fitting various penalized parametric and semi-parametric mixture cure models with different penalty functions, testing for a significant cure fraction, and testing for sufficient follow-up as described in Fu et al (2022) <doi:10.1002/sim.9513> and Archer et al (2024) <doi:10.1186/s13045-024-01553-6>. False discovery rate controlled variable selection is provided using model-X knockoffs.
This package provides a forecasting method that efficiently maps vast numbers of (scalar-valued) signals into an aggregate density forecast in a time-varying and computationally fast manner. The method proceeds in two steps: First, it transforms a predictive signal into a density forecast and, second, it combines the resulting candidate density forecasts into an ultimate aggregate density forecast. For a detailed explanation of the method, please refer to Adaemmer et al. (2025) <doi:10.1080/07350015.2025.2526424>.
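As a rough illustration of the two-step idea (a generic linear opinion pool with fixed weights, not the paper's time-varying combination scheme):

```r
# Step 1: map each scalar signal to a Gaussian candidate density;
# step 2: pool the candidates into one aggregate density on a grid.
# Generic sketch only; the weights here are fixed, not time-varying.
aggregate_density <- function(grid, signals, weights, sd = 1) {
  cand <- sapply(signals, function(m) dnorm(grid, mean = m, sd = sd))
  as.vector(cand %*% (weights / sum(weights)))
}

grid <- seq(-5, 5, by = 0.1)
f_agg <- aggregate_density(grid, signals = c(-1, 0, 2), weights = c(1, 2, 1))
```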
This package provides functions for designing phase II clinical trials adjusting for the heterogeneity of the population using known subgroups or historical controls.
Can be used for paternity and maternity assignment and outperforms conventional methods where closely related individuals occur in the pool of possible parents. The method compares the genotypes of offspring with any combination of potential parents and scores the number of mismatches of these individuals at bi-allelic genetic markers (e.g. Single Nucleotide Polymorphisms). It elaborates on a prior exclusion method based on the Homozygous Opposite Test (HOT; Huisman 2017 <doi:10.1111/1755-0998.12665>) by introducing the additional exclusion criterion HIPHOP (Homozygous Identical Parents, Heterozygous Offspring are Precluded; Cockburn et al., in revision). Potential parents are excluded if they have more mismatches than can be expected due to genotyping error and mutation; thereby one can identify the true genetic parents and detect situations where one (or both) of the true parents is not sampled. The hiphop package can deal with (a) the case where there is contextual information about the parentage of the mother (i.e. a female has been seen to be involved in reproductive tasks such as nest building) but paternity is unknown (e.g. due to promiscuity), and (b) the case where both parents need to be assigned because there is no contextual information on which female laid the eggs and which male fertilized them (e.g. a polygynandrous mating system where multiple females and males deposit young in a common nest, or organisms with external fertilisation that breed in aggregations). For details: Cockburn, A., Penalba, J.V., Jaccoud, D., Kilian, A., Brouwer, L., Double, M.C., Margraf, N., Osmond, H.L., van de Pol, M. and Kruuk, L.E.B. (in revision). HIPHOP: improved paternity assignment among close relatives using a simple exclusion method for bi-allelic markers. Molecular Ecology Resources, DOI to be added upon acceptance.
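As an illustration of the two exclusion criteria (not the package's actual interface), with genotypes coded as 0/1/2 copies of the reference allele:

```r
# Hypothetical helpers illustrating the exclusion logic over loci
# coded 0/1/2 (copies of the reference allele); NA = missing genotype.

# HOT mismatch: offspring and candidate parent are homozygous for
# opposite alleles at a locus (0 vs 2, or 2 vs 0).
hot_mismatches <- function(offspring, parent) {
  sum(abs(offspring - parent) == 2, na.rm = TRUE)
}

# HIPHOP mismatch: both candidate parents are homozygous for the same
# allele while the offspring is heterozygous.
hiphop_mismatches <- function(offspring, dam, sire) {
  sum(offspring == 1 & dam == sire & dam != 1, na.rm = TRUE)
}
```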
Implemented here are procedures for fitting hierarchical generalized linear models (HGLM). It can be used for linear mixed models and generalized linear mixed models with random effects for a variety of links and a variety of distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the mean model. As statistical models, HGLMs were initially developed by Lee and Nelder (1996) <https://www.jstor.org/stable/2346105?seq=1>. We provide an implementation (Ronnegard, Alam and Shen 2010) <https://journal.r-project.org/archive/2010-2/RJournal_2010-2_Roennegaard~et~al.pdf> following Lee, Nelder and Pawitan (2006) <ISBN: 9781420011340> with algorithms extended for spatial modeling (Alam, Ronnegard and Shen 2015) <https://journal.r-project.org/archive/2015/RJ-2015-017/RJ-2015-017.pdf>.
Perform high-dimensional feature selection in the presence of a survival outcome. Based on feature selection methods and different survival analyses, it obtains the best markers with optimal threshold levels according to their effect on disease progression and produces the most consistent levels according to those threshold values. The methodology is based on Sonabend et al (2021) <doi:10.1093/bioinformatics/btab039> and Bhattacharjee et al (2021) <arXiv:2012.02102>.
The haversine is a function used to calculate the distance between a pair of latitude and longitude points while accounting for the assumption that the points are on a spherical globe. This package provides a fast, data-frame-compatible haversine function. For the first publication on the haversine calculation see Joseph de Mendoza y Ríos (1795) <https://books.google.cat/books?id=030t0OqlX2AC> (in Spanish).
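The calculation itself is compact; a plain R version (not the package's optimized, data-frame-aware function) looks like this:

```r
# Great-circle distance via the haversine formula; inputs in decimal
# degrees, output in kilometres. Illustrative implementation only.
haversine_km <- function(lat1, lon1, lat2, lon2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 +
    cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(sqrt(pmin(1, a)))
}

haversine_km(52.52, 13.405, 48.8566, 2.3522)  # Berlin to Paris, roughly 878 km
```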
Hard drive data: a class of data objects allowing easy importation and manipulation of out-of-memory data sets. The data sets are located on disk but behave as if they were in memory; the syntax for manipulation is similar to data.table. Operations are performed chunk-wise behind the scenes.
Ridge regression provides biased estimators of the regression parameters with lower variance. The HDBRR ("High Dimensional Bayesian Ridge Regression") function fits Bayesian ridge regression without MCMC, using the SVD or QR decomposition for the posterior computation.
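For intuition, the SVD route to a ridge point estimate looks like the sketch below (generic ridge algebra, not HDBRR's Bayesian posterior computation):

```r
# Ridge solution via the SVD: with X = U diag(d) V', the estimator
# (X'X + lambda I)^(-1) X'y equals V diag(d / (d^2 + lambda)) U'y.
ridge_svd <- function(X, y, lambda) {
  s <- svd(X)
  s$v %*% ((s$d / (s$d^2 + lambda)) * crossprod(s$u, y))
}

set.seed(1)
X <- matrix(rnorm(50 * 200), 50, 200)   # n << p, the high-dimensional case
y <- X[, 1] - X[, 2] + rnorm(50)
beta_hat <- ridge_svd(X, y, lambda = 10)
```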
Fit, summarize and plot sinusoidal hysteretic processes using: two-step simple harmonic least squares, ellipse-specific non-linear least squares, the direct method, geometric least squares or linear least squares. See Yang, F. and A. Parkhurst, "Efficient Estimation of Elliptical Hysteresis with Application to the Characterization of Heat Stress" <DOI:10.1007/s13253-015-0213-6>.
Simulate and analyze hierarchical composite endpoints. Includes implementation for the kidney hierarchical composite endpoint as defined in Heerspink HL et al (2023) "Development and validation of a new hierarchical composite end point for clinical trials of kidney disease progression" (Journal of the American Society of Nephrology 34 (2): 2025-2038, <doi:10.1681/ASN.0000000000000243>). Win odds, also called Wilcoxon-Mann-Whitney or success odds, is the main analysis method. Other win statistics (win probability, win ratio, net benefit) are also implemented in the univariate case, provided there is no censoring. The win probability analysis is based on the Brunner-Munzel test and uses the DeLong-DeLong-Clarke-Pearson variance estimator, as described by Brunner and Konietschke (2025) in "An unbiased rank-based estimator of the Mann-Whitney variance including the case of ties" (Statistical Papers 66 (1): 20, <doi:10.1007/s00362-024-01635-0>). Includes implementation of a new Wilson-type, compatible confidence interval for the win odds, as proposed by Schüürhuis, Konietschke, Brunner (2025) in "A new approach to the nonparametric Behrens-Fisher problem with compatible confidence intervals" (Biometrical Journal 67 (6), <doi:10.1002/bimj.70096>). Stratification and covariate adjustment are performed based on the methodology presented by Koch GG et al. in "Issues for covariance analysis of dichotomous and ordered categorical data from randomized clinical trials and non-parametric strategies for addressing them" (Statistics in Medicine 17 (15-16): 1863-92). For a review, see Gasparyan SB et al (2021) "Adjusted win ratio with stratification: Calculation methods and interpretation" (Statistical Methods in Medical Research 30 (2): 580-611, <doi:10.1177/0962280220942558>).
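In the uncensored univariate case, these win statistics reduce to counts over all treatment-control pairs, as in this generic sketch (not the package's interface):

```r
# Win statistics for two uncensored samples; higher values win.
win_stats <- function(treat, control) {
  wins   <- sum(outer(treat, control, ">"))
  losses <- sum(outer(treat, control, "<"))
  ties   <- sum(outer(treat, control, "=="))
  n      <- length(treat) * length(control)
  c(win_probability = (wins + 0.5 * ties) / n,
    win_ratio       = wins / losses,
    net_benefit     = (wins - losses) / n,
    win_odds        = (wins + 0.5 * ties) / (losses + 0.5 * ties))
}

win_stats(treat = c(5, 7, 9), control = c(4, 6, 6))
```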
Inference of chromosome-length haplotypes using a few haploid gametes of an individual. The gamete genotype data may be generated from various platforms, including genotyping arrays and sequencing, even at low coverage. Hapi simply takes genotype data of known hetSNPs in single gamete cells as input and reports the high-resolution haplotypes as well as the confidence of each phased hetSNP. The package also includes a module allowing downstream analyses and visualization of identified crossovers in the gametes.
Generalized additive models with a numeric hyper column tabulated on a common grid. Sign-adjustment based on the correlation of model prediction and a selected slice of the hyper column. Visualization of the integrand surface over the hyper column.
High throughput toxicokinetics ("HTTK") is the combination of 1) chemical-specific in vitro measurements or in silico predictions and 2) generic mathematical models, to predict absorption, distribution, metabolism, and excretion by the body. HTTK methods have been described by Pearce et al. (2017) (<doi:10.18637/jss.v079.i04>) and Breen et al. (2021) (<doi:10.1080/17425255.2021.1935867>). Here we provide examples (vignettes) applying HTTK to solve various problems in bioinformatics, toxicology, and exposure science. In accordance with Davidson-Fritz et al. (2025) (<doi:10.1371/journal.pone.0321321>), whenever a new HTTK model is developed, the code to generate the figures evaluating that model is added as a new vignette.
Read PLINK 1.9 binary datasets (BED/BIM/FAM) and generate the CSV files required by the Erasmus MC HIrisPlex / HIrisPlex-S webtool <https://hirisplex.erasmusmc.nl/>. It maps PLINK alleles to the webtool's required rsID_Allele columns (0/1/2/NA). No external tools (e.g., PLINK CLI) are required.
An algorithm for flexible conditional density estimation based on application of pooled hazard regression to an artificial repeated measures dataset constructed by discretizing the support of the outcome variable. To facilitate flexible estimation of the conditional density, the highly adaptive lasso, a non-parametric regression function shown to estimate càdlàg (RCLL) functions at a suitably fast convergence rate, is used. The use of pooled hazards regression for conditional density estimation as implemented here was first described by Díaz and van der Laan (2011) <doi:10.2202/1557-4679.1356>. Building on the conditional density estimation utilities, non-parametric inverse probability weighted (IPW) estimators of the causal effects of additive modified treatment policies are implemented, using conditional density estimation to estimate the generalized propensity score. Non-parametric IPW estimators based on this can be coupled with undersmoothing of the generalized propensity score estimator to attain the semi-parametric efficiency bound (per Hejazi, Díaz, and van der Laan <doi:10.48550/arXiv.2205.05777>).
This package provides methods for data engineering in the human resources (HR) corporate domain. Designed for HR analytics practitioners and workforce-oriented data sets.
Implementation of the Hysteretic and Gatekeeping Depressions Model (HGDM) which calculates variable connected/contributing areas and resulting discharge volumes in prairie basins dominated by depressions ("sloughs" or "potholes"). The small depressions are combined into a single "meta" depression which explicitly models the hysteresis between the storage of water and the connected/contributing areas of the depressions. The largest (greater than 5% of the total depressional area) depression (if it exists) is represented separately to model its gatekeeping, i.e. the blocking of upstream flows until it is filled. The methodology is described in detail in Shook and Pomeroy (2025, <doi:10.1016/j.jhydrol.2025.132821>).
Work with model files (setup, input, output) from the hydrological catchment model HYPE: Streamlined file import and export, standard evaluation plot routines, diverse post-processing and aggregation routines for hydrological model analysis. The HYPEtools package is also archived at <doi:10.5281/zenodo.7627955> and can be cited in publications with Brendel et al. (2024) <doi:10.1016/j.envsoft.2024.106094>.
This package implements various tools for storing and analyzing hypergraphs. It handles basic undirected, unweighted hypergraphs and provides various ways of creating hypergraphs from a number of representations, as well as converting between graphs and hypergraphs.
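One simple representation stores a hypergraph as a named list of vertex sets; clique expansion is then a straightforward conversion to an ordinary graph (an illustrative encoding, not this package's classes):

```r
# An undirected, unweighted hypergraph as a list of hyperedges.
h <- list(e1 = c("a", "b", "c"),
          e2 = c("b", "d"),
          e3 = c("a", "d", "e"))

# Clique expansion: connect every pair of vertices that share a hyperedge.
clique_expand <- function(hyperedges) {
  pairs <- do.call(rbind, lapply(hyperedges, function(e) t(combn(sort(e), 2))))
  unique(as.data.frame(pairs, stringsAsFactors = FALSE))
}

clique_expand(h)   # edge list of the resulting graph
```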
This package implements the Clarke-Wright algorithm to find a quasi-optimal solution to the Capacitated Vehicle Routing Problem. See Clarke, G. and Wright, J.R. (1964) <doi:10.1287/opre.12.4.568> for details. The implementation is accompanied by helper functions to inspect its solution.
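The heuristic computes a saving s(i, j) = d(0, i) + d(0, j) - d(i, j) for every customer pair and merges routes in order of decreasing savings while capacity allows. A simplified sketch (single merge orientation, no route reversal, and not this package's API):

```r
# Simplified Clarke-Wright savings heuristic. d: (n+1) x (n+1) distance
# matrix with row/column 1 as the depot; demand: length-n vector;
# cap: vehicle capacity. Returns routes as vectors of customer indices.
clarke_wright <- function(d, demand, cap) {
  n <- nrow(d) - 1
  routes <- as.list(2:(n + 1))      # start with one route per customer
  load <- demand                    # current load of each route
  route_of <- seq_len(n)            # route membership per customer

  pairs <- t(combn(2:(n + 1), 2))
  savings <- d[1, pairs[, 1]] + d[1, pairs[, 2]] - d[cbind(pairs[, 1], pairs[, 2])]
  pairs <- pairs[order(savings, decreasing = TRUE), , drop = FALSE]

  for (k in seq_len(nrow(pairs))) {
    i <- pairs[k, 1]; j <- pairs[k, 2]
    ri <- route_of[i - 1]; rj <- route_of[j - 1]
    if (ri == rj) next                              # already on the same route
    if (tail(routes[[ri]], 1) != i) next            # i must end its route
    if (head(routes[[rj]], 1) != j) next            # j must start its route
    if (load[ri] + load[rj] > cap) next             # capacity check
    routes[[ri]] <- c(routes[[ri]], routes[[rj]])   # append j's route to i's
    load[ri] <- load[ri] + load[rj]
    route_of[routes[[rj]] - 1] <- ri
    routes[[rj]] <- integer(0)
  }
  Filter(length, routes)
}
```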
The h-index and h-alpha are bibliometric indicators. This package provides functions to simulate how these indicators may develop over time for a given set of researchers and to visualize the simulation data. The implementation is based on the Stata ado h-index and is described in more detail in Bornmann et al. (2019) <arXiv:1905.11052>.
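The indicator itself is simple: h is the largest number such that h of a researcher's papers have at least h citations each. In R:

```r
# h-index from a vector of per-paper citation counts (illustrative;
# the package's focus is simulating how the index develops over time).
h_index <- function(citations) {
  sum(sort(citations, decreasing = TRUE) >= seq_along(citations))
}

h_index(c(10, 8, 5, 4, 3))   # 4: four papers with at least 4 citations each
```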