Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
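For example, a minimal Python sketch of one paginated request (the base URL is a placeholder for the host serving this form, and the exact pagination header names depend on the service):

import requests

BASE_URL = "https://example.org"  # placeholder; replace with the host serving this site

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
resp.raise_for_status()

# Pagination details (e.g. the total number of pages) come back in the
# response headers; inspect them to decide whether to request the next page.
print(dict(resp.headers))
print(resp.json())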
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains one function for drawing Piper diagrams (also called Piper-Hill diagrams) of water analyses for major ions.
It is used to construct run sequences with minimum level changes for a half replicate of a two-level factorial run order. The experimenter can save time and resources by minimizing the number of changes in the levels of each individual factor and, therefore, the total number of changes. It consists of the function minimal_hrtlf(). This technique can be applied to any half replicate of a two-level factorial run order where the number of factors is greater than two. In Design of Experiments (DOE) theory, the two levels of a factor can be represented as integers, e.g. -1 for low and 1 for high. The user is expected to enter the total number of factors to be considered in the experiment, and minimal_hrtlf() provides the required run sequence for that number of factors. The output also gives the number of level changes of each factor along with the total number of changes in the run sequence. Due to restricted randomization, the minimally changed run sequences of a half replicate of a two-level factorial run order will be affected by trend effects. The output therefore also provides the Trend Factor value of the run order, which lies between 0 and 1; the higher the value, the smaller the influence of trend effects on the run order.
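As a rough illustration of the bookkeeping described above (not the package's own code), a small Python sketch that counts level changes per factor, and in total, for a run order coded with -1/1 levels:

# Hypothetical run order for three factors, coded as -1 (low) / 1 (high)
run_order = [
    (-1, -1,  1),
    (-1,  1,  1),
    ( 1,  1, -1),
    ( 1, -1, -1),
]

n_factors = len(run_order[0])
# For each factor, count how often its level differs between consecutive runs
changes_per_factor = [
    sum(prev[f] != curr[f] for prev, curr in zip(run_order, run_order[1:]))
    for f in range(n_factors)
]
total_changes = sum(changes_per_factor)
print(changes_per_factor, total_changes)  # [1, 2, 1] 4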
Hypergeometric Intersection distributions are a broad group of distributions that describe the probability of picking intersections when drawing independently from two (or more) urns containing variable numbers of balls belonging to the same n categories. <arXiv:1305.0717>.
Quantifying similarity between high-dimensional single-cell samples is challenging and usually requires some simplifying hypotheses. By transforming the high-dimensional space into a high-dimensional grid, the number of cells in each sub-space of the grid becomes characteristic of a given sample. Using a Hilbert curve, each sample can be visualized as a simple density plot, and the distance between samples can be calculated from the distribution of cells using the Jensen-Shannon distance. Bins that correspond to significant differences between samples can be identified using a simple bootstrap procedure.
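As a rough sketch of the distance step (not the package's API), two samples can be compared from their per-bin cell counts with SciPy:

import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical cell counts per grid bin for two samples on the same grid
counts_a = np.array([120, 30, 0, 5, 80, 15], dtype=float)
counts_b = np.array([100, 45, 2, 0, 60, 40], dtype=float)

# jensenshannon() normalizes the counts to probability distributions and
# returns the Jensen-Shannon distance (the square root of the divergence)
print(jensenshannon(counts_a, counts_b, base=2))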
Calculate clinical scores for hidradenitis suppurativa (HS), a dermatologic disease. The scores are typically used for evaluation of efficacy in clinical trials. The scores are not commonly used in clinical practice. The specific scores implemented are Hidradenitis Suppurativa Clinical Response (HiSCR) (Kimball, et al. (2015) <doi:10.1111/jdv.13216>), Hidradenitis Suppurativa Area and Severity Index Revised (HASI-R) (Goldfarb, et al. (2020) <doi:10.1111/bjd.19565>), hidradenitis suppurativa Physician Global Assessment (HS PGA) (Marzano, et al. (2020) <doi:10.1111/jdv.16328>), and the International Hidradenitis Suppurativa Severity Score System (IHS4) (Zouboulis, et al. (2017) <doi:10.1111/bjd.15748>).
Holistic Multimodel Domain Analysis (HMDA) is a robust and transparent framework designed for exploratory machine learning research, aiming to enhance the process of feature assessment and selection. HMDA addresses key limitations of traditional machine learning methods by evaluating the consistency across multiple high-performing models within a fine-tuned modeling grid, thereby improving the interpretability and reliability of feature importance assessments. Specifically, it computes Weighted Mean SHapley Additive exPlanations (WMSHAP), which aggregate feature contributions from multiple models based on weighted performance metrics. HMDA also provides confidence intervals to demonstrate the stability of these feature importance estimates. This framework is particularly beneficial for analyzing complex, multidimensional datasets common in health research, supporting reliable exploration of mental health outcomes such as suicidal ideation, suicide attempts, and other psychological conditions. Additionally, HMDA includes automated procedures for feature selection based on WMSHAP ratios and performs dimension reduction analyses to identify underlying structures among features. For more details see Haghish (2025) <doi:10.13140/RG.2.2.32473.63846>.
Calculate and visualize Healthy Eating Index (HEI) scores from National Health and Nutrition Examination Survey 24-hour dietary recall data utilizing three methods recommended by the National Cancer Institute (2024) <https://epi.grants.cancer.gov/hei/hei-methods-and-calculations.html#:~:text=To%20use%20the%20simple%20HEI,the%20total%20scores%20across%20individuals.>. Effortlessly analyze HEI scores across different demographic groups and years.
Hospital machine learning and AI data analysis workflow tools, modeling, and automations. This library provides many useful tools for reviewing common administrative hospital data, including predicting length of stay and readmissions. The aim is to provide a simple and consistent verb framework that takes the guesswork out of these analyses.
Hadamard matrix based statistical designs are of immense importance as the resultant designs carry various desirable characterizing properties. Constructing Partially Balanced Incomplete Block Designs (PBIBDs) using the Kronecker product of incidence matrices of Balanced Incomplete Block (BIB) and Partially Balanced Incomplete Block (PBIB) designs is well documented in the literature. Here, we have constructed Incomplete Block Designs (IBDs) based on Hadamard matrices and Kronecker products of Hadamard matrices.
Compute 21 summary measures of health inequality and their corresponding confidence intervals for ordered and non-ordered dimensions using disaggregated data. Measures for ordered dimensions (e.g., Slope Index of Inequality, Absolute Concentration Index) also accept individual and survey data.
Test the significance of coefficients in high dimensional generalized linear models.
The theoretical covariance between pairs of markers is calculated from either paternal haplotypes and maternal linkage disequilibrium (LD) or vice versa. A genetic map is required. Grouping of markers is based on the correlation matrix and a representative marker is suggested for each group. Employing the correlation matrix, the optimal sample size can be derived for association studies based on a SNP-BLUP approach. The implementation relies on paternal half-sib families and biallelic markers. If maternal half-sib families are used, the roles of sire/dam are swapped. Multiple families can be considered. Wittenburg, Bonk, Doschoris, Reyer (2020) "Design of Experiments for Fine-Mapping Quantitative Trait Loci in Livestock Populations" <doi:10.1186/s12863-020-00871-1>. Carlson, Eberle, Rieder, Yi, Kruglyak, Nickerson (2004) "Selecting a maximally informative set of single-nucleotide polymorphisms for association analyses using linkage disequilibrium" <doi:10.1086/381000>.
An algorithm for time series analysis that leverages hidden Markov models, cluster analysis, and mixture distributions to segment data, detect patterns and predict future sequences.
Tests for a treatment effect using surrogate marker information accounting for heterogeneity in the utility of the surrogate. Details are described in Parast et al (2022) <arXiv:2209.08315>.
Predict hatch and emergence timing for a wide range of wild fishes using the effective value framework (Sparks et al. (2019) <DOI:10.1139/cjfas-2017-0468>). hatchR offers users access to established phenological models and the flexibility to incorporate custom parameterizations using external datasets.
By analyzing time series, it is possible to observe significant changes in the behavior of observations that frequently characterize events. Events present themselves as anomalies, change points, or motifs. In the literature, there are several methods for detecting events. However, searching for a suitable time series method is a complex task, especially considering that the nature of events is often unknown. This work presents Harbinger, a framework for integrating and analyzing event detection methods. Harbinger contains several state-of-the-art methods described in Salles et al. (2020) <doi:10.5753/sbbd.2020.13626>.
Simple tools for converting columns to new data types. Intuitive functions for columns with missing values.
This package provides functions to implement a hierarchical approach which is designed to perform joint analysis of summary statistics using the framework of Mendelian Randomization or transcriptome analysis. Reference: Lai Jiang, Shujing Xu, Nicholas Mancuso, Paul J. Newcombe, David V. Conti (2020). "A Hierarchical Approach Using Marginal Summary Statistics for Multiple Intermediates in a Mendelian Randomization or Transcriptome Analysis." bioRxiv <doi:10.1101/2020.02.03.924241>.
The HistData package provides a collection of small data sets that are interesting and important in the history of statistics and data visualization. The goal of the package is to make these available, both for instructional use and for historical research. Some of these present interesting challenges for graphics or analysis in R.
Decode elements of the Australian Higher Education Information Management System (HEIMS) data for clarity and performance. HEIMS is the record system of the Department of Education, Australia to record enrolments and completions in Australia's higher education system, as well as a range of relevant information. For more information, including the source of the data dictionary, see <http://heimshelp.education.gov.au/sites/heimshelp/dictionary/pages/data-element-dictionary>.
Statistical functions used in the French HydroPortail <https://hydro.eaufrance.fr/>. This includes functions to estimate distributions, quantile curves and uncertainties, along with various other utilities. Technical details are available (in French) in Renard (2016) <https://hal.inrae.fr/hal-02605318>.
The Tweedie lasso model implements an iteratively reweighted least squares (IRLS) strategy that incorporates a blockwise majorization descent (BMD) method for efficiently computing solution paths of the (grouped) lasso and the (grouped) elastic net methods.
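For orientation only (the objective actually used by the package may differ in its grouping and weights), the ungrouped elastic-net-penalized problem takes the generic form

\min_{\beta} \; \ell(\beta) + \lambda \left( \alpha \lVert \beta \rVert_1 + \frac{1-\alpha}{2} \lVert \beta \rVert_2^2 \right),

where \ell(\beta) is the negative Tweedie log-likelihood; grouped variants replace the \ell_1 term with a sum of groupwise \ell_2 norms.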
Offers methods for visualizing, modelling, and forecasting high-dimensional functional time series, also known as functional panel data. Documentation about hdftsa is provided via the paper by Cristian F. Jimenez-Varon, Ying Sun and Han Lin Shang (2024, <doi:10.1080/10618600.2024.2319166>).
This package provides a shiny application which allows you to perform single- and multi-omics analyses using your own omics datasets. After the upload of the omics datasets and a metadata file, single-omics analysis is performed for feature selection and dataset reduction. These reduced datasets are used for pairwise and multi-omics analyses, where automatic tuning is done to identify correlations between the datasets - the end goal of the recommended Holomics workflow. Methods used in the package were implemented in the package mixOmics by Florian Rohart, Benoît Gautier, Amrit Singh and Kim-Anh Lê Cao (2017) <doi:10.1371/journal.pcbi.1005752> and are described there in further detail.