Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
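For example, combining the parameters above with the @ version syntax, a request for the second page of results for gcc@10, 50 items at a time, would look like this:
GET /api/packages?search=gcc@10&page=2&limit=50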
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
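Assuming channels.scm uses the standard Guix channel declaration form, an entry might look like the following minimal sketch; the name and URL are placeholders for your own channel, and the file itself is the authority on the exact fields it expects:

(channel
 (name 'my-channel)                           ; placeholder: your channel's name
 (url "https://example.org/my-channel.git"))  ; placeholder: your channel's git URL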
This package provides a forecasting method that efficiently maps vast numbers of (scalar-valued) signals into an aggregate density forecast in a time-varying and computationally fast manner. The method proceeds in two steps: First, it transforms a predictive signal into a density forecast and, second, it combines the resulting candidate density forecasts into an ultimate aggregate density forecast. For a detailed explanation of the method, please refer to Adaemmer et al. (2025) <doi:10.1080/07350015.2025.2526424>.
This package provides a data set of the Portuguese NHS hospitals.
Function to identify haplotypes within QTL (Quantitative Trait Loci). A haplotype is a combination of SNPs (Single Nucleotide Polymorphisms) within the QTL. This function groups together all individuals of a population with the same haplotype. Each group contains individuals with the same allele at each SNP, whether or not there is missing data. Thus, haplotyper groups individuals that, if imputed, would have a non-zero probability of having the same alleles across the entire sequence of SNPs. Moreover, haplotyper calculates this probability from relative frequencies.
Work with model files (setup, input, output) from the hydrological catchment model HYPE: Streamlined file import and export, standard evaluation plot routines, diverse post-processing and aggregation routines for hydrological model analysis. The HYPEtools package is also archived at <doi:10.5281/zenodo.7627955> and can be cited in publications with Brendel et al. (2024) <doi:10.1016/j.envsoft.2024.106094>.
This package provides a collection of reweighted marginal hypothesis tests for clustered data, based on reweighting methods of Williamson, J., Datta, S., and Satten, G. (2003) <doi:10.1111/1541-0420.00005>. The tests in this collection are clustered analogs to well-known hypothesis tests in the classical setting, and are appropriate for data with cluster- and/or group-size informativeness. The syntax and output of functions are modeled after common, recognizable functions native to R. Methods used in the package refer to Gregg, M., Datta, S., and Lorenz, D. (2020) <doi:10.1177/0962280220928572>, Nevalainen, J., Oja, H., and Datta, S. (2017) <doi:10.1002/sim.7288>, Dutta, S. and Datta, S. (2015) <doi:10.1111/biom.12447>, Lorenz, D., Datta, S., and Harkema, S. (2011) <doi:10.1002/sim.4368>, Datta, S. and Satten, G. (2008) <doi:10.1111/j.1541-0420.2007.00923.x>, and Datta, S. and Satten, G. (2005) <doi:10.1198/016214504000001583>.
This package provides estimation of the structure and parameters, sampling methods, and structural plots of Hierarchical Archimedean Copulae (HAC).
Generates HIDECAN plots that summarise and combine the results of genome-wide association studies (GWAS) and transcriptomics differential expression analyses (DE), along with manually curated candidate genes of interest. The HIDECAN plot is presented in Angelin-Bonnet et al. (2023) (currently in review).
Calculates interval estimates for the parameters of heteroscedastic linear regression models using the bootstrap-t (Wild Bootstrap) and double bootstrap-t (Wild Bootstrap). It is also possible to calculate confidence intervals using the percentile bootstrap and double percentile bootstrap. The package also provides a function to consistently estimate the covariance matrix of the parameters of linear regression models with heteroscedasticity of unknown form. The bootstrap methods exported by the package are based on the master's thesis of the first author, available at <https://raw.githubusercontent.com/prdm0/hcci/master/references/dissertacao_mestrado.pdf>. The hcci package in previous versions was cited in the book VINOD, Hrishikesh D. Hands-on Intermediate Econometrics Using R: Templates for Learning Quantitative Methods and R Software. 2022, p. 441, ISBN 978-981-125-617-2 (hardcover). The simple bootstrap schemes are based on the work of Cribari-Neto F. and Lima M. G. (2009) <doi:10.1080/00949650801935327>, while the double bootstrap schemes for the parameters that index linear models with heteroscedasticity of unknown form are based on the work of Beran (1987) <doi:10.2307/2336685>. The use of the bootstrap to calculate interval estimates in regression models with heteroscedasticity of unknown form from a weighting of the residuals was proposed by Wu (1986) <doi:10.1214/aos/1176350142>. This bootstrap scheme is known as the weighted or wild bootstrap.
Simulates stochastic hybrid models for transmission of infectious diseases in dynamic networks. It is a metapopulation model in which each node in the network is a sub-population and disease spreads within and among nodes, combining two approaches: the stochastic simulation algorithm (<doi:10.1146/annurev.physchem.58.032806.104637>) and an individual-based approach, respectively. The equations that model spread within nodes are customizable, and there are two link types among nodes: migration and influence (commuting). More information in Fernando S. Marques, Jose H. H. Grisi-Filho, Marcos Amaku et al. (2020) <doi:10.18637/jss.v094.i06>.
Estimation of high-dimensional multi-response regression with heterogeneous noise under the heterogeneous group square-root Lasso penalty. For details see: Ren, Z., Kang, Y., Fan, Y. and Lv, J. (2018) <arXiv:1606.03803>.
The HBV hydrological model (Bergström, S. and Lindström, G. (2015) <doi:10.1002/hyp.10510>) has been split into modules to allow the user to build his/her own model. This version was developed by the author at IANIGLA-CONICET (Instituto Argentino de Nivologia, Glaciologia y Ciencias Ambientales - Consejo Nacional de Investigaciones Cientificas y Tecnicas) for hydroclimatic studies in the Andes. HBV.IANIGLA incorporates routines for clean and debris-covered glacier melt simulations.
An implementation of Random Forest-based two-sample tests as introduced in Hediger, Michel, and Naef (2022).
Mediation analysis is used to identify and quantify intermediate effects from factors that intervene in the observed relationship between an exposure/predictor variable and an outcome. We use a Bayesian adaptive lasso method to account for the hierarchical structures and high-dimensional exposures or mediators.
This data-only package was created for distributing data used in the examples of the hglm package.
The seed germination process starts with water uptake by the seed and ends with the protrusion of the radicle and plumule under varying temperatures and soil water potentials. Hydrotime is a way to describe the relationship between water potential and seed germination rates at given germination percentages. One important consideration before applying hydrotime modeling of germination percentages is the proportion of viable seeds that could germinate under saturated conditions. This package can be used to apply correction factors at various water potentials before estimating hydrotime model parameters such as stress tolerance and uniformity. Three different distributions, namely the Gaussian, Logistic, and Extreme value distributions, have been considered to fit the model to the seed germination time course. Details can be found in Bradford (2002) <https://www.jstor.org/stable/4046371> and Bradford and Still (2004) <https://www.jstor.org/stable/23433495>.
This package provides a utility to quickly obtain clean and tidy men's basketball play-by-play data. Provides functions to access live play-by-play and box score data from ESPN <https://www.espn.com>, with shot locations when available. It is also a full wrapper for the NBA Stats API <https://www.nba.com/stats/>, as well as a scraping and aggregating interface for Ken Pomeroy's men's college basketball statistics website <https://kenpom.com>. It provides users with an active subscription the capability to scrape the website tables and analyze the data for themselves.
In some cases you will have data in a histogram format, where you have a vector of all possible observations, and a vector of how many times each observation appeared. You could expand this into a single 1D vector, but this may not be advisable if the counts are extremely large. HistDat allows for the calculation of summary statistics without the need for expanding your data.
By binding R functions and the Highmaps <https://www.highcharts.com.cn/products/highmaps> chart library, the hchinamap package provides a simple way to map China and its provinces. The map of China drawn by this package contains the complete Chinese territory, especially the Nine-dotted line, South Tibet, Hong Kong, Macao, and Taiwan.
Simulate and analyze hierarchical composite endpoints. Includes implementation for the kidney hierarchical composite endpoint as defined in Heerspink HL et al. (2023) "Development and validation of a new hierarchical composite end point for clinical trials of kidney disease progression" (Journal of the American Society of Nephrology 34 (2): 2025-2038, <doi:10.1681/ASN.0000000000000243>). Win odds, also called Wilcoxon-Mann-Whitney or success odds, is the main analysis method. Other win statistics (win probability, win ratio, net benefit) are also implemented in the univariate case, provided there is no censoring. The win probability analysis is based on the Brunner-Munzel test and uses the DeLong-DeLong-Clarke-Pearson variance estimator, as described by Brunner and Konietschke (2025) in "An unbiased rank-based estimator of the Mann-Whitney variance including the case of ties" (Statistical Papers 66 (1): 20, <doi:10.1007/s00362-024-01635-0>). Includes implementation of a new Wilson-type, compatible confidence interval for the win odds, as proposed by Schüürhuis, Konietschke, Brunner (2025) in "A new approach to the nonparametric Behrens-Fisher problem with compatible confidence intervals" (Biometrical Journal 67 (6), <doi:10.1002/bimj.70096>). Stratification and covariate adjustment are performed based on the methodology presented by Koch GG et al. in "Issues for covariance analysis of dichotomous and ordered categorical data from randomized clinical trials and non-parametric strategies for addressing them" (Statistics in Medicine 17 (15-16): 1863-92). For a review, see Gasparyan SB et al. (2021) "Adjusted win ratio with stratification: Calculation methods and interpretation" (Statistical Methods in Medical Research 30 (2): 580-611, <doi:10.1177/0962280220942558>).
Used for predicting a genotype's allelic state at a specific locus/QTL/gene. This is accomplished by using both a genotype matrix and a separate file which has categorizations about loci/QTL/genes of interest for the individuals in the genotypic matrix. A training population can be created from a panel of individuals who have been previously screened for specific loci/QTL/genes, and this previous screening could be summarized into a category. Using the categorization of individuals which have been genotyped using a genome-wide marker platform, a model can be trained to predict what category (haplotype) an individual belongs in based on their genetic sequence in the region associated with the locus/QTL/gene. These trained models can then be used to predict the haplotype of a locus/QTL/gene for individuals which have been genotyped with a genome-wide platform yet not genotyped for the specific locus/QTL/gene. This package is based on work done by Winn et al. 2021. For more specific information on this method, refer to <doi:10.1007/s00122-022-04178-w>.
We provide a collection of various classical tests and the latest normal-reference tests for comparing high-dimensional mean vectors, including the two-sample and general linear hypothesis testing (GLHT) problems. For the two-sample problem, some existing tests are included [see Bai, Zhidong, and Hewa Saranadasa. (1996) <https://www.jstor.org/stable/24306018>; Chen, Song Xi, and Ying-Li Qin. (2010) <doi:10.1214/09-aos716>; Srivastava, Muni S., and Meng Du. (2008) <doi:10.1016/j.jmva.2006.11.002>; Srivastava, Muni S., Shota Katayama, and Yutaka Kano. (2013) <doi:10.1016/j.jmva.2012.08.014>]. Normal-reference tests for the two-sample problem are also included [see Zhang, Jin-Ting, Jia Guo, Bu Zhou, and Ming-Yen Cheng. (2020) <doi:10.1080/01621459.2019.1604366>; Zhang, Jin-Ting, Bu Zhou, Jia Guo, and Tianming Zhu. (2021) <doi:10.1016/j.jspi.2020.11.008>; Zhang, Liang, Tianming Zhu, and Jin-Ting Zhang. (2020) <doi:10.1016/j.ecosta.2019.12.002>; Zhang, Liang, Tianming Zhu, and Jin-Ting Zhang. (2023) <doi:10.1080/02664763.2020.1834516>; Zhang, Jin-Ting, and Tianming Zhu. (2022) <doi:10.1080/10485252.2021.2015768>; Zhang, Jin-Ting, and Tianming Zhu. (2022) <doi:10.1007/s42519-021-00232-w>; Zhu, Tianming, Pengfei Wang, and Jin-Ting Zhang. (2023) <doi:10.1007/s00180-023-01433-6>]. For the GLHT problem, some existing tests are included [see Fujikoshi, Yasunori, Tetsuto Himeno, and Hirofumi Wakaki. (2004) <doi:10.14490/jjss.34.19>; Srivastava, Muni S., and Yasunori Fujikoshi. (2006) <doi:10.1016/j.jmva.2005.08.010>; Yamada, Takayuki, and Muni S. Srivastava. (2012) <doi:10.1080/03610926.2011.581786>; Schott, James R. (2007) <doi:10.1016/j.jmva.2006.11.007>; Zhou, Bu, Jia Guo, and Jin-Ting Zhang. (2017) <doi:10.1016/j.jspi.2017.03.005>]. Normal-reference tests for the GLHT problem are also included [see Zhang, Jin-Ting, Jia Guo, and Bu Zhou. (2017) <doi:10.1016/j.jmva.2017.01.002>; Zhang, Jin-Ting, Bu Zhou, and Jia Guo. (2022) <doi:10.1016/j.jmva.2021.104816>; Zhu, Tianming, Liang Zhang, and Jin-Ting Zhang. (2022) <doi:10.5705/ss.202020.0362>; Zhu, Tianming, and Jin-Ting Zhang. (2022) <doi:10.1007/s00180-021-01110-6>; Zhang, Jin-Ting, and Tianming Zhu. (2022) <doi:10.1016/j.csda.2021.107385>].
Calculate the Hopkins statistic to assess the clusterability of data. See Wright (2023) <doi:10.32614/RJ-2022-055>.
Ridge regression provides biased estimators of the regression parameters with lower variance. The HDBRR ("High Dimensional Bayesian Ridge Regression") function fits Bayesian ridge regression without MCMC; it uses the SVD or QR decomposition for the posterior computation.
This package provides a wrapper for the Highcharts library including shortcut functions to plot R objects. Highcharts <https://www.highcharts.com/> is a charting library offering numerous chart types with a simple configuration syntax.