Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
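For example, the endpoint can be queried from a script. A minimal sketch in Python, assuming the requests library, a placeholder base URL, and conventionally named pagination headers (the real host and header names may differ; inspect the response):

import requests

BASE_URL = "https://example.org"  # hypothetical host; substitute this service's real one

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
results = resp.json()  # the payload shape is not documented above; inspect it

# Header names are an assumption; list resp.headers to find the real ones.
pagination = {k: v for k, v in resp.headers.items() if "page" in k.lower()}
print(pagination)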
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a user-friendly tool for fitting Bayesian regression models. It can fit three types of Bayesian models using individual-level, summary-level, and individual-plus-pedigree-level (single-step) data for both genomic prediction/selection (GS) and genome-wide association studies (GWAS). It was designed to estimate joint effects and genetic parameters for a complex trait, including: (1) fixed effects and coefficients of covariates, (2) environmental random effects and their corresponding variances, (3) genetic variance, (4) residual variance, (5) heritability, (6) genomic estimated breeding values (GEBV) for both genotyped and non-genotyped individuals, (7) SNP effect sizes, (8) phenotypic/genetic variance explained (PVE) for single or multiple SNPs, (9) posterior probability of association of a genomic window (WPPA), and (10) posterior inclusion probability (PIP). The functionality is not limited to the above; we will keep enriching the package with more features. References: Lilin Yin et al. (2025) <doi:10.18637/jss.v114.i06>; Meuwissen et al. (2001) <doi:10.1093/genetics/157.4.1819>; Gustavo et al. (2013) <doi:10.1534/genetics.112.143313>; Habier et al. (2011) <doi:10.1186/1471-2105-12-186>; Yi et al. (2008) <doi:10.1534/genetics.107.085589>; Zhou et al. (2013) <doi:10.1371/journal.pgen.1003264>; Moser et al. (2015) <doi:10.1371/journal.pgen.1004969>; Lloyd-Jones et al. (2019) <doi:10.1038/s41467-019-12653-0>; Henderson (1976) <doi:10.2307/2529339>; Fernando et al. (2014) <doi:10.1186/1297-9686-46-50>.
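Schematically, the individual-level models in this family take the standard whole-genome regression form (notation ours, not necessarily the package's; the prior placed on the SNP effects is what distinguishes the different Bayesian models):

y = X\beta + Zu + M\alpha + \varepsilon,

where y is the phenotype vector, X\beta the fixed effects, Zu the environmental random effects, M the genotype matrix with SNP effects \alpha, and \varepsilon the residual.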
Tests the homogeneity of stratum effects in stratified paired binary data.
This package provides a set of functions to estimate the haziness of an image based on its RGB bands. It returns a haze factor, varying from 0 to 1, as a metric of fogginess and cloudiness. The package also provides functions to estimate brightness, darkness, and contrast rasters of an RGB image. It can be used for several applications, such as inferring weather conditions and conducting environmental studies by interpreting digital images.
Binary segmentation methods for detecting and estimating multiple change-points in the mean or second-order structure of high-dimensional time series as described in Cho and Fryzlewicz (2014) <doi:10.1111/rssb.12079> and Cho (2016) <doi:10.1214/16-EJS1155>.
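As an illustration of the general technique (plain univariate binary segmentation for a change in mean, not the high-dimensional procedures of the cited papers), a minimal sketch in Python; the threshold is a common rule of thumb, not the papers' choice:

import numpy as np

def cusum(x):
    # |CUSUM| statistic at every candidate split point of a 1-D segment
    n = len(x)
    b = np.arange(1, n)
    left = np.cumsum(x)[:-1]   # prefix sums of x[:b]
    total = x.sum()
    stat = np.sqrt(b * (n - b) / n) * (left / b - (total - left) / (n - b))
    return np.abs(stat)

def binseg(x, threshold, offset=0):
    # recursively collect change points whose statistic clears the threshold
    if len(x) < 2:
        return []
    stat = cusum(x)
    b = int(np.argmax(stat)) + 1
    if stat[b - 1] <= threshold:
        return []
    return (binseg(x[:b], threshold, offset) + [offset + b]
            + binseg(x[b:], threshold, offset + b))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])  # shift at t = 100
print(binseg(x, threshold=3 * np.sqrt(np.log(len(x)))))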
This package provides a modern, idiomatic, header-only C++ interface for 'libhdf5'. The original software can be found at <https://github.com/highfive-devs/highfive/>.
This package provides a broad collection of datasets focused on health, biomechanics, and human motion. It includes clinical, physiological, and kinematic information from diverse sources, covering aspects such as surgery outcomes, vital signs, rheumatoid arthritis, osteoarthritis, accelerometry, gait analysis, motion sensing, and biomechanics experiments. Designed for researchers, analysts, and students, the package facilitates exploration and analysis of data related to health monitoring, physical activity, and rehabilitation.
An implementation of the modelling and reporting features described in reference textbook and guidelines (Briggs, Andrew, et al. Decision Modelling for Health Economic Evaluation. Oxford Univ. Press, 2011; Siebert, U. et al. State-Transition Modeling. Medical Decision Making 32, 690-700 (2012).): deterministic and probabilistic sensitivity analysis, heterogeneity analysis, time dependency on state-time and model-time (semi-Markov and non-homogeneous Markov models), etc.
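At its core, a homogeneous Markov cohort model of this kind propagates state occupancy through a transition matrix; a minimal sketch in Python with made-up states and probabilities (the package's models add costs, utilities, and time dependency on top of this):

import numpy as np

# Hypothetical three-state model: Healthy, Sick, Dead (rows sum to 1).
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])

occupancy = np.array([1.0, 0.0, 0.0])  # the whole cohort starts Healthy
for _ in range(10):                    # run 10 model cycles
    occupancy = occupancy @ P
print(np.round(occupancy, 3))          # state occupancy after 10 cycles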
HIGHT (HIGh security and light weigHT) is a block cipher developed to provide confidentiality in computing environments that demand low power consumption and light weight, such as RFID (Radio-Frequency Identification) and USN (Ubiquitous Sensor Network) devices, and in mobile environments with the same constraints, such as smartphones and smart cards. It has a simple structure built only from basic arithmetic operations, XOR, and circular shifts on 8-bit units. The algorithm was designed to balance security and efficiency within a very simple structure suited to constrained environments, compared to the earlier 128-bit encryption algorithm SEED. In December 2010, it became an ISO (International Organization for Standardization) standard. The detailed procedure is described in Hong et al. (2006) <doi:10.1007/11894063_4>.
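To give a flavor of those 8-bit building blocks, here is a sketch of HIGHT's two auxiliary functions F0 and F1 in Python; the rotation amounts are quoted from memory of the cited specification and should be verified against Hong et al. (2006); this is not a full cipher implementation:

def rol8(x, r):
    # circular left shift of an 8-bit value by r positions
    return ((x << r) | (x >> (8 - r))) & 0xFF

def f0(x):
    # F0(x) = (x <<< 1) xor (x <<< 2) xor (x <<< 7), per the spec (unverified)
    return rol8(x, 1) ^ rol8(x, 2) ^ rol8(x, 7)

def f1(x):
    # F1(x) = (x <<< 3) xor (x <<< 4) xor (x <<< 6), per the spec (unverified)
    return rol8(x, 3) ^ rol8(x, 4) ^ rol8(x, 6)

print(hex(f0(0x5A)), hex(f1(0x5A)))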
Simulates stochastic hybrid models for the transmission of infectious diseases in dynamic networks. It is a metapopulation model in which each node of the network is a sub-population; disease spreads within nodes and among them, combining two approaches respectively: the stochastic simulation algorithm (<doi:10.1146/annurev.physchem.58.032806.104637>) and an individual-based approach. The equations that model spread within nodes are customizable, and there are two link types among nodes: migration and influence (commuting). More information in Fernando S. Marques, Jose H. H. Grisi-Filho, Marcos Amaku et al. (2020) <doi:10.18637/jss.v094.i06>.
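For intuition about the within-node dynamics, a minimal Gillespie-style stochastic simulation of a single SIR sub-population in Python; the SIR structure, rates, and parameter values are illustrative assumptions, not this package's customizable equations:

import numpy as np

def gillespie_sir(beta, gamma, s, i, r, t_max, rng):
    # exact stochastic simulation: exponential waiting times, one event per step
    t, n = 0.0, s + i + r
    while t < t_max and i > 0:
        rate_inf = beta * s * i / n  # S + I -> 2I
        rate_rec = gamma * i         # I -> R
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return t, s, i, r

rng = np.random.default_rng(42)
print(gillespie_sir(beta=0.3, gamma=0.1, s=990, i=10, r=0, t_max=100, rng=rng))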
Self-reported health, happiness, attitudes, and other statuses or perceptions are often subject to biases that may come from different sources. For example, the evaluation of an individual's own health may depend on previous medical diagnoses, functional status, and symptoms and signs of illness, as well as lifestyle behaviors, including contextual social, gender-specific, age-specific, linguistic, and other cultural factors (Jylha 2009 <doi:10.1016/j.socscimed.2009.05.013>; Oksuzyan et al. 2019 <doi:10.1016/j.socscimed.2019.03.002>). The hopit package offers versatile functions for analyzing different self-reported ordinal variables and for helping to estimate their biases. Specifically, the package provides a function to fit a generalized ordered probit model that regresses original self-reported status measures on two sets of independent variables (King et al. 2004 <doi:10.1017/S0003055403000881>; Jurges 2007 <doi:10.1002/hec.1134>; Oksuzyan et al. 2019 <doi:10.1016/j.socscimed.2019.03.002>). The first set of variables (e.g., health variables) included in the regression are individual statuses and characteristics that are directly related to the self-reported variable. In the case of self-reported health, these could be chronic conditions, mobility level, difficulties with daily activities, performance on grip strength tests, anthropometric measures, and lifestyle behaviors. The second set of independent variables (threshold variables) is used to model cut-points between adjacent self-reported response categories as functions of individual characteristics, such as gender, age group, education, and country (Oksuzyan et al. 2019 <doi:10.1016/j.socscimed.2019.03.002>). The model helps to adjust for specific socio-demographic and cultural differences in how continuous latent health is projected onto the ordinal self-rated measure. The fitted model can be used to calculate an individual predicted latent status variable, a latent index, and standardized latent coefficients, and it makes it possible to reclassify a categorical status measure that has been adjusted for inter-individual differences in reporting behavior.
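The structure of the model can be sketched as follows (notation ours, written from the description above; the increasing parameterization of the cut-points is a common choice, not necessarily the package's exact one):

H_i^* = x_i^\top \beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0, 1),
y_i = k \iff \alpha_{k-1}(z_i) < H_i^* \le \alpha_k(z_i), \qquad \alpha_k(z_i) = \alpha_{k-1}(z_i) + \exp(z_i^\top \gamma_k),

where x_i holds the health variables, z_i the threshold variables, and the \alpha_k(z_i) are the individual-specific cut-points between adjacent response categories.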
Computes scores and ranks candidates according to voting rules that elect the candidate with the highest median grade. Based on "Tie-breaking the highest median: alternatives to the majority judgment", A. Fabre, Social Choice & Welfare (forthcoming as of 2020). The paper is available here: <https://github.com/bixiou/highest_median/raw/master/Tie-breaking%20Highest%20Median%20-%20Fabre%202019.pdf>. Functions to plot the voting profiles can be found on GitHub: <https://github.com/bixiou/highest_median/blob/master/packages_functions_data.R>.
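As a toy illustration of the basic rule (without the paper's tie-breaking refinements), a sketch in Python: each candidate's grades are reduced to their median, and the candidate with the highest median wins; ties are broken here naively by the share of grades above the median, which is only one of the alternatives the paper compares:

import statistics

def highest_median(profiles):
    # profiles: dict mapping candidate -> list of numeric grades
    def key(candidate):
        grades = profiles[candidate]
        med = statistics.median_low(grades)  # lower median, as usual in voting
        above = sum(g > med for g in grades) / len(grades)
        return (med, above)                  # naive tie-break, for illustration only
    return max(profiles, key=key)

profiles = {
    "A": [5, 5, 3, 3, 1],  # median grade 3
    "B": [4, 4, 4, 2, 2],  # median grade 4 -> wins
}
print(highest_median(profiles))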
Estimates frictional constants for hydraulic analysis of rivers. This HYDRaulic ROughness CALculator (HYDROCAL) was previously developed as a spreadsheet tool and accompanying documentation by McKay and Fischenich (2011, <https://erdc-library.erdc.dren.mil/jspui/bitstream/11681/2034/1/CHETN-VII-11.pdf>).
An algorithm for time series analysis that leverages hidden Markov models, cluster analysis, and mixture distributions to segment data, detect patterns and predict future sequences.
Fits latent space models for single networks and hierarchical latent space models for ensembles of networks as described in Sweet, Thomas & Junker (2013).
Offers methods for visualizing, modelling, and forecasting high-dimensional functional time series, also known as functional panel data. Documentation about hdftsa is provided via the paper by Cristian F. Jimenez-Varon, Ying Sun and Han Lin Shang (2024, <doi:10.1080/10618600.2024.2319166>).
Compute house price indexes and series using a variety of methods and models common throughout the real estate literature. Evaluate index goodness based on accuracy, volatility, and revision statistics. Background on basic model construction for repeat-sales models can be found in Case and Quigley (1991) <https://ideas.repec.org/a/tpr/restat/v73y1991i1p50-58.html>, and for hedonic pricing models in Bourassa et al. (2006) <doi:10.1016/j.jhe.2006.03.001>. The package author's working paper on the random forest approach to house price indexes can be found at <http://www.github.com/andykrause/hpi_research>.
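For orientation, the classic repeat-sales regression (in the Bailey, Muth and Nourse form underlying the repeat-sales literature cited above; notation ours) relates the log price change of a property i sold at times s < t to period dummies:

\log P_{i,t} - \log P_{i,s} = \sum_{\tau} \delta_\tau D_{i,\tau} + \varepsilon_i, \qquad D_{i,\tau} = \begin{cases} 1 & \tau = t \\ -1 & \tau = s \\ 0 & \text{otherwise,} \end{cases}

and the index level in period \tau is recovered as \exp(\delta_\tau).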
This package provides a novel searching scheme for the tuning parameter in high-dimensional penalized regression. We propose a new estimate of the regularization parameter based on an estimated lower bound on the proportion of false null hypotheses (Meinshausen and Rice (2006) <doi:10.1214/009053605000000741>). The bound is estimated by applying the empirical null distribution of the higher criticism statistic, a second-level significance test, which is constructed from dependent p-values obtained by a multi-split regression and aggregation method (Jeng, Zhang and Tzeng (2019) <doi:10.1080/01621459.2018.1518236>). The tuning parameter estimate in the penalized regression is then chosen to correspond to the estimated lower bound on the proportion of false null hypotheses. Several penalized regression methods are provided within the multi-split algorithm.
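For reference, the higher criticism statistic underlying this scheme, in the form introduced by Donoho and Jin (2004) (the exact variant used by the package may differ), is computed from the ordered p-values p_{(1)} \le \dots \le p_{(n)}:

HC_n = \max_{1 \le i \le \alpha_0 n} \sqrt{n} \, \frac{i/n - p_{(i)}}{\sqrt{p_{(i)} (1 - p_{(i)})}},

where \alpha_0 \in (0, 1] restricts the maximization to the smallest p-values.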
An R port of the hashids library. hashids generates YouTube-like hashes from integers or vectors of integers. Hashes generated from integers are relatively short, unique, and non-sequential. hashids can be used to generate unique ids for URLs and to hide database row numbers from the user. By default, hashids avoids generating common English curse words by preventing certain letters from appearing next to each other. hashids are not one-way: it is easy to encode an integer into a hashid and to decode a hashid back into an integer.
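The round-trip is easy to demonstrate with the original Python hashids library (shown here because this entry describes an R port; the R API will differ):

from hashids import Hashids  # pip install hashids

hashids = Hashids(salt="this is my salt")

hashid = hashids.encode(123)      # a short string, e.g. something like 'NkK9'
numbers = hashids.decode(hashid)  # -> (123,)
print(hashid, numbers)

# Vectors of integers round-trip the same way.
print(hashids.decode(hashids.encode(1, 2, 3)))  # -> (1, 2, 3)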
Efficient tools for parsing and standardizing Australian addresses from textual data. It utilizes optimized algorithms to accurately identify and extract components of addresses, such as street names, types, and postcodes, especially for large batched data in contexts where sending addresses to internet services may be slow or inappropriate. The core functionality is built on fast string processing techniques to handle variations in address formats and abbreviations commonly found in Australian address data. Designed for data scientists, urban planners, and logistics analysts, the package facilitates the cleaning and normalization of address information, supporting better data integration and analysis in urban studies, geography, and related fields.
Nonparametric cumulative-incidence based estimation of the ratios of sub-hazard ratios to cause-specific hazard ratios using the approach from Ng et al. (2020).
This package provides a utility to quickly obtain clean and tidy men's basketball play-by-play data. It offers functions to access live play-by-play and box score data from ESPN <https://www.espn.com>, with shot locations when available. It is also a full wrapper for the NBA Stats API <https://www.nba.com/stats/>, as well as a scraping and aggregating interface for Ken Pomeroy's men's college basketball statistics website <https://kenpom.com>; users with an active subscription can scrape the site's tables and analyze the data for themselves.
This package contains functions for hidden Markov models with observations having extra zeros, as defined in the following two publications: Wang, T., Zhuang, J., Obara, K. and Tsuruoka, H. (2016) <doi:10.1111/rssc.12194>; Wang, T., Zhuang, J., Buckby, J., Obara, K. and Tsuruoka, H. (2018) <doi:10.1029/2017JB015360>. The observed response variable is either univariate or bivariate Gaussian conditional on the presence of events, and the extra zeros mean that the response takes the value zero when nothing is happening. Hence the response is modelled as a mixture of a Bernoulli variable and a continuous variable: if the Bernoulli variable takes the value 1, the response is Gaussian; if it takes the value 0, the response is zero as well. The package includes functions for simulation, parameter estimation, goodness-of-fit, the Viterbi algorithm, and plotting the classified 2-D data. Some of the functions in the package are based on those of the R package HiddenMarkov by David Harte. This updated version includes an example dataset and R code examples showing how to transform the data into the objects needed by the main functions, and several functions have been made faster.
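In symbols, the emission distribution in each hidden state i is a zero-inflated Gaussian (univariate case, notation ours):

f(y_t \mid S_t = i) = (1 - p_i) \, \delta_0(y_t) + p_i \, \phi(y_t; \mu_i, \sigma_i^2),

where p_i is the state-specific probability that an event is present, \delta_0 a point mass at zero, and \phi the Gaussian density.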
Implementation of an S4 class for sets and multisets of numbers. The implementation is based on the hash table from the package 'hash'. Quick operations are possible when the set is a dynamic object. The implementation is discussed in detail in Ceoldo and Wit (2023) <arXiv:2304.09809>.
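The underlying idea (constant-time membership and multiplicity queries via a hash table) is easy to sketch; in Python here, since the package itself is an R/S4 implementation:

from collections import defaultdict

class MultiSet:
    # multiset of numbers backed by a hash table: O(1) insert, remove, count
    def __init__(self):
        self._counts = defaultdict(int)

    def insert(self, x):
        self._counts[x] += 1

    def remove(self, x):
        if self._counts[x] > 0:
            self._counts[x] -= 1

    def count(self, x):
        return self._counts[x]

ms = MultiSet()
for v in (3.14, 3.14, 2.71):
    ms.insert(v)
print(ms.count(3.14), ms.count(1.0))  # 2 0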
This package provides functions to build and use equal-area hexagonal discrete global grids using the Snyder ISEA projection (Snyder 1992 <doi:10.3138/27H7-8K88-4882-1752>). Implements the ISEA discrete global grid system (Sahr, White and Kimerling 2003 <doi:10.1559/152304003100011090>). Includes a fast C++ core for projection and aperture quantization, and 'sf'/'terra'-compatible R wrappers for grid generation and coordinate assignment. Output is compatible with dggridR for interoperability.