Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
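For example, from R with the httr package (the host below is a placeholder for wherever this site is served):

    library(httr)
    # Query the package search API; replace the placeholder host with this site's URL.
    resp <- GET("https://<this-site>/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the packages matching the query on this page
    headers(resp)   # pagination information, e.g. the total number of pages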
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
H(x) is the h-index for the past x years. Here, h(x) of a scientist, department, etc. can be calculated from the Excel file exported from a Web of Science citation report of a search. Also calculated are the year of first publication, the total number of publications, and the sum of times cited for the specified period. Therefore, for h-10, the year of first publication, total number of publications, and sum of times cited in the past 10 years are calculated. Note: the Excel file has to first be saved in .csv format.
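As a minimal illustration of the h-index itself (plain R, not this package's exported functions): the h-index is the largest h such that at least h publications have at least h citations each.

    # Generic h-index calculation from a vector of citation counts.
    h_index <- function(citations) {
      citations <- sort(citations, decreasing = TRUE)
      sum(citations >= seq_along(citations))
    }
    h_index(c(10, 8, 5, 4, 3))  # 4: four papers have at least 4 citations each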
This package provides a collection of functions for sampling and simulating 3D surfaces and objects and estimating metrics like rugosity, fractal dimension, convexity, sphericity, circularity, second moments of area and volume, and more.
Hadamard matrix based statistical designs are of immense importance, as the resultant designs carry various desirable characterizing properties. Constructing Partially Balanced Incomplete Block Designs (PBIBDs) using the Kronecker product of incidence matrices of Balanced Incomplete Block (BIB) and Partially Balanced Incomplete Block (PBIB) designs is well documented in the literature. Here, we have constructed Incomplete Block Designs (IBDs) based on Hadamard matrices and the Kronecker product of Hadamard matrices.
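As a small illustration of the Kronecker-product idea behind these constructions (base R only, not this package's interface), the Kronecker product of two Hadamard matrices is again a Hadamard matrix:

    # Kronecker product of two 2x2 Hadamard matrices gives a 4x4 Hadamard matrix.
    H2 <- matrix(c(1, 1, 1, -1), nrow = 2)
    H4 <- kronecker(H2, H2)
    all(H4 %*% t(H4) == 4 * diag(4))  # TRUE: the rows are mutually orthogonal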
Performs high-dimensional feature selection in the presence of a survival outcome. Based on a feature selection method and different survival analyses, it obtains the best markers with optimal threshold levels according to their effect on disease progression and produces the most consistent levels according to those threshold values. The methodology is based on Sonabend et al. (2021) <doi:10.1093/bioinformatics/btab039> and Bhattacharjee et al. (2021) <arXiv:2012.02102>.
Estimates frictional constants for hydraulic analysis of rivers. This HYDRaulic ROughness CALculator (HYDROCAL) was previously developed as a spreadsheet tool and accompanying documentation by McKay and Fischenich (2011, <https://erdc-library.erdc.dren.mil/jspui/bitstream/11681/2034/1/CHETN-VII-11.pdf>).
Structural handling of Finnish identity codes (natural persons and organizations); extract information, check ID validity, and run diagnostics.
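As a generic illustration of the validity check for a personal identity code (a hypothetical helper, not this package's API): the control character is derived from the nine digits of the birth date and individual number, taken modulo 31 and mapped to a fixed character table.

    # Hypothetical helper for a code of the form DDMMYYCZZZQ, where C is the
    # century sign, ZZZ the individual number, and Q the control character.
    check_pin <- function(pin) {
      alphabet <- "0123456789ABCDEFHJKLMNPRSTUVWXY"
      digits <- paste0(substr(pin, 1, 6), substr(pin, 8, 10))
      r <- as.numeric(digits) %% 31
      substr(alphabet, r + 1, r + 1) == substr(pin, 11, 11)
    }
    check_pin("010101-0101")  # TRUE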
This algorithm is described in detail in the paper "Hedging Forecast Combinations With an Application to the Random Forest" by Beck et al. (2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5032102>. The package provides a function hedgedrf() that can be used to train a Hedged Random Forest model on a dataset, and a function predict.hedgedrf() that can be used to make predictions with the model.
This package provides flexible maximum likelihood estimation and inference for Hidden Markov Models (HMMs) and Hidden Semi-Markov Models (HSMMs), as well as the underlying systems in which they operate. The package supports a wide range of observation and dwell-time distributions, offering a flexible modelling framework suitable for diverse practical data. Efficient implementations of the forward-backward and Viterbi algorithms are provided via Rcpp for enhanced computational performance. Additional functionality includes model simulation, residual analysis, non-initialised estimation, local and global decoding, calculation of diverse information criteria, computation of confidence intervals using parametric bootstrap methods, numerical covariance matrix estimation, and comprehensive visualisation functions for interpreting the data-generating processes inferred from the models. Methods follow standard approaches described by Guédon (2003) <doi:10.1198/1061860032030>, Zucchini and MacDonald (2009, ISBN:9781584885733), and O'Connell and Højsgaard (2011) <doi:10.18637/jss.v039.i04>.
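As a generic illustration of the Viterbi (global decoding) step mentioned above (plain base R, not this package's exported interface):

    # Viterbi algorithm for an m-state HMM.
    # delta: initial state distribution (length m); Gamma: m x m transition matrix;
    # probs: n x m matrix with probs[t, j] = density of observation t under state j.
    viterbi <- function(delta, Gamma, probs) {
      n <- nrow(probs)
      xi <- matrix(0, n, ncol(probs))
      xi[1, ] <- delta * probs[1, ]
      xi[1, ] <- xi[1, ] / sum(xi[1, ])
      for (t in 2:n) {
        xi[t, ] <- apply(xi[t - 1, ] * Gamma, 2, max) * probs[t, ]
        xi[t, ] <- xi[t, ] / sum(xi[t, ])  # rescale to avoid numerical underflow
      }
      path <- numeric(n)
      path[n] <- which.max(xi[n, ])
      for (t in (n - 1):1) path[t] <- which.max(Gamma[, path[t + 1]] * xi[t, ])
      path
    }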
Clustering of high dimensional data with a Hidden Markov Model on Variable Blocks (HMM-VB), fitted via the Baum-Welch algorithm. Clustering is performed by the Modal Baum-Welch algorithm (MBW), which finds modes of the density function. See Lin Lin and Jia Li (2017) <https://jmlr.org/papers/v18/16-342.html>.
Fast, model-agnostic implementation of different H-statistics introduced by Jerome H. Friedman and Bogdan E. Popescu (2008) <doi:10.1214/07-AOAS148>. These statistics quantify interaction strength per feature, feature pair, and feature triple. The package supports multi-output predictions and can account for case weights. In addition, several variants of the original statistics are provided. The shape of the interactions can be explored through partial dependence plots or individual conditional expectation plots. DALEX explainers, meta learners ('mlr3', 'tidymodels', 'caret'), and most other models work out of the box.
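As a minimal numeric illustration of the pairwise statistic (plain R, not this package's interface): given centered partial dependence values of a feature pair and of the two single features evaluated on a common grid, the squared H-statistic is the share of the joint effect's variability not captured by the two main effects.

    # Friedman-Popescu pairwise H^2 from centered partial dependence values.
    pairwise_H2 <- function(pd_jk, pd_j, pd_k) {
      sum((pd_jk - pd_j - pd_k)^2) / sum(pd_jk^2)
    }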
This package contains one function for drawing Piper diagrams (also called Piper-Hill diagrams) of water analyses for major ions.
This package provides a set of tools supporting more flexible heatmaps. The graphics are grid-like but use the old (base) graphics system. The main function is heatmap.n2(), which is a wrapper around the various functions constructing individual parts of the heatmap, such as sidebars, picket plots, and legends. The function supports zooming and splitting, i.e., having (unlimited) small heatmaps underneath each other in one plot, all derived from the same data set, e.g., clustered and ordered by a supervised clustering method.
Objective: Implement new methods for detecting change points in high-dimensional time series data. These new methods can be applied to non-Gaussian data, account for spatial and temporal dependence, and detect a wide variety of change-point configurations, including changes near the boundary and changes in close proximity. Additionally, this package helps address the 'small n, large p' problem, which occurs in many research contexts. This problem arises when a dataset contains changes that are visually evident but do not rise to the level of statistical significance due to the small number of observations and large number of parameters. The problem is overcome by treating the dimensions as a whole and scaling the test statistic only by its standard deviation, rather than scaling each dimension individually. Due to the computational complexity of the functions, the package runs best on datasets with a relatively large number of attributes but no more than a few hundred observations.
The classical Markowitz's mean-variance portfolio formulation ignores heavy tails and skewness. High-order portfolios use higher order moments to better characterize the return distribution. Different formulations and fast algorithms are proposed for high-order portfolios based on the mean, variance, skewness, and kurtosis. The package is based on the papers: R. Zhou and D. P. Palomar (2021). "Solving High-Order Portfolios via Successive Convex Approximation Algorithms." <arXiv:2008.00863>. X. Wang, R. Zhou, J. Ying, and D. P. Palomar (2022). "Efficient and Scalable High-Order Portfolios Design via Parametric Skew-t Distribution." <arXiv:2206.02412>.
Implementation of multiple approaches to perform inference in high-dimensional models.
This package provides functions to view files in raw binary form, as in a hex editor. It also provides functions to specify and read arbitrary binary formats.
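As a generic illustration of the idea (base R only, not this package's functions), the first bytes of a file can be shown in hexadecimal with readBin():

    # Read the first n bytes of a file and format them as one hex-dump line.
    hex_view <- function(path, n = 16) {
      bytes <- readBin(path, what = "raw", n = n)
      paste(sprintf("%02X", as.integer(bytes)), collapse = " ")
    }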
This package provides a broad collection of datasets focused on health, biomechanics, and human motion. It includes clinical, physiological, and kinematic information from diverse sources, covering aspects such as surgery outcomes, vital signs, rheumatoid arthritis, osteoarthritis, accelerometry, gait analysis, motion sensing, and biomechanics experiments. Designed for researchers, analysts, and students, the package facilitates exploration and analysis of data related to health monitoring, physical activity, and rehabilitation.
This package provides functions for basic hydraulic calculations related to water flow in circular pipes, both flowing full (under pressure) and partially full (gravity flow), and in trapezoidal open channels. For pressure flow this includes friction loss calculations by solving the Darcy-Weisbach equation for head loss, flow or diameter, plotting a Moody diagram, matching a pump characteristic curve to a system curve, and solving for flows in a pipe network using the Hardy-Cross method. The Darcy-Weisbach friction factor is calculated using the Colebrook (or Colebrook-White) equation, the basis of the Moody diagram, the original citation being Colebrook (1939) <doi:10.1680/ijoti.1939.13150>. For gravity flow, the Manning equation is used, again solving for missing parameters. The derivation of and solutions using the Darcy-Weisbach equation and the Manning equation are outlined in many fluid mechanics texts, such as Finnemore and Maurer (2024, ISBN:978-1-264-78729-6). Some gradually- and rapidly-varied flow functions are included. For the Manning equation solutions, this package uses modifications of original code from the iemisc package by Irucka Embry.
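As a worked illustration of the Colebrook equation that underlies the Moody diagram (plain base R, not this package's solvers): the friction factor f satisfies 1/sqrt(f) = -2*log10(ks/(3.7*D) + 2.51/(Re*sqrt(f))) and can be found numerically.

    # Solve the Colebrook equation for the Darcy-Weisbach friction factor.
    # Re: Reynolds number, ks: roughness height (m), D: pipe diameter (m).
    colebrook_f <- function(Re, ks, D) {
      resid <- function(f) 1 / sqrt(f) + 2 * log10(ks / (3.7 * D) + 2.51 / (Re * sqrt(f)))
      uniroot(resid, c(0.005, 0.1))$root
    }
    f <- colebrook_f(Re = 1e5, ks = 0.00015, D = 0.05)  # approximately 0.027
    # Head loss then follows from Darcy-Weisbach: hf = f * (L / D) * V^2 / (2 * g),
    # with g the gravitational acceleration.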
An important environmental impact on running water ecosystems is caused by hydropeaking - the discontinuous release of turbine water because of peaks in energy demand. An event-based algorithm is implemented to detect flow fluctuations, referring to increase events (IC) and decrease events (DC). For each event, a set of parameters related to the fluctuation intensity is calculated. The framework is introduced in Greimel et al. (2016) "A method to detect and characterize sub-daily flow fluctuations" <doi:10.1002/hyp.10773> and can be used to identify different fluctuation types according to the potential source: e.g., sub-daily flow fluctuations caused by hydropeaking, rainfall, or snow and glacier melt. This is a companion to the package 'hydroroute', which is used to detect and follow hydropower plant-specific hydropeaking waves at the sub-catchment scale and to describe how hydropeaking flow parameters change along the longitudinal flow path, as proposed and validated in Greimel et al. (2022).
Harriet was Charles Darwin's pet tortoise (possibly). harrietr implements functions to manipulate distance matrices and phylogenetic trees to make them easier to plot with ggplot2 and to manipulate using tidyverse tools.
Pre-made models that can be rapidly tailored to various chemicals and species using chemical-specific in vitro data and physiological information. These tools allow incorporation of chemical toxicokinetics ("TK") and in vitro-in vivo extrapolation ("IVIVE") into bioinformatics, as described by Pearce et al. (2017) (<doi:10.18637/jss.v079.i04>). Chemical-specific in vitro data characterizing toxicokinetics have been obtained from relatively high-throughput experiments. The chemical-independent ("generic") physiologically-based ("PBTK") and empirical (for example, one compartment) "TK" models included here can be parameterized with in vitro data or in silico predictions which are provided for thousands of chemicals, multiple exposure routes, and various species. High throughput toxicokinetics ("HTTK") is the combination of in vitro data and generic models. We establish the expected accuracy of HTTK for chemicals without in vivo data through statistical evaluation of HTTK predictions for chemicals where in vivo data do exist. The models are systems of ordinary differential equations that are developed in MCSim and solved using compiled (C-based) code for speed. A Monte Carlo sampler is included for simulating human biological variability (Ring et al., 2017 <doi:10.1016/j.envint.2017.06.004>) and propagating parameter uncertainty (Wambaugh et al., 2019 <doi:10.1093/toxsci/kfz205>). Empirically calibrated methods are included for predicting tissue:plasma partition coefficients and volume of distribution (Pearce et al., 2017 <doi:10.1007/s10928-017-9548-7>). These functions and data provide a set of tools for using IVIVE to convert concentrations from high-throughput screening experiments (for example, Tox21, ToxCast) to real-world exposures via reverse dosimetry (also known as "RTK") (Wetmore et al., 2015 <doi:10.1093/toxsci/kfv171>).
Detection of haplotype patterns that include single nucleotide polymorphisms (SNPs) and non-contiguous haplotypes that are associated with a phenotype. Methods for implementing HTRX are described in Yang Y, Lawson DJ (2023) <doi:10.1093/bioadv/vbad038> and Barrie W, Yang Y, Irving-Pease E.K, et al (2024) <doi:10.1038/s41586-023-06618-z>.
Helper functions designed to make dynamically generating R Markdown documents easier by providing a simple and tidy way to create report pieces, shape them to your data, and combine them for exporting into a single R Markdown document.
Hospital time series data analysis workflow tools, modeling, and automations. This package provides many useful tools for reviewing common administrative hospital time series data, such as average length of stay and readmission rates. The aim is to provide a simple and consistent verb framework that takes the guesswork out of these analyses.