Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
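For example, a minimal sketch of calling this endpoint from Python with the requests library; the base URL below is a placeholder, and the exact pagination header names are not documented here, so the sketch simply prints all response headers:

import requests

BASE_URL = "https://example.org"  # placeholder: replace with this site's base URL

# Query the package search API with the documented parameters.
resp = requests.get(
    BASE_URL + "/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())

# Pagination information (such as the number of pages) is returned in the
# response headers; the exact header names are not documented here, so
# simply inspect them all.
for name, value in resp.headers.items():
    print(f"{name}: {value}")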
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Offers methods for visualizing, modelling, and forecasting high-dimensional functional time series, also known as functional panel data. Documentation about hdftsa is provided via the paper by Cristian F. Jimenez-Varon, Ying Sun and Han Lin Shang (2024, <doi:10.1080/10618600.2024.2319166>).
The HURRECON model estimates wind speed, wind direction, enhanced Fujita scale wind damage, and duration of EF0 to EF5 winds as a function of hurricane location and maximum sustained wind speed. Results may be generated for a single site or an entire region. Hurricane track and intensity data may be imported directly from the US National Hurricane Center's HURDAT2 database. For details on the original version of the model written in Borland Pascal, see: Boose, Chamberlin, and Foster (2001) <doi:10.1890/0012-9615(2001)071[0027:LARIOH]2.0.CO;2> and Boose, Serrano, and Foster (2004) <doi:10.1890/02-4057>.
Computes diagnostics for linear regression when treatment effects are heterogeneous. The output of hettreatreg represents ordinary least squares (OLS) estimates of the effect of a binary treatment as a weighted average of the average treatment effect on the treated (ATT) and the average treatment effect on the untreated (ATU). The program estimates the OLS weights on these parameters, computes the associated model diagnostics, and reports the implicit OLS estimate of the average treatment effect (ATE). See Sloczynski (2019), <http://people.brandeis.edu/~tslocz/Sloczynski_paper_regression.pdf>.
This package provides tools for estimating sample sizes primarily based on heritability, while also considering additional parameters such as statistical power and fold change. The package normalizes heritability values according to trait-specific heritability and classification to enhance accuracy in sample size estimation.
Given a database of previous treatment/placebo estimates, their standard errors, and sample sizes, the program calculates a significance criterion and a power estimate that take into account the among-trial variation.
Reliability Analysis and Maintenance Optimization using Hidden Markov Models (HMM). HMMs are used to model the state of a system that is not directly observable; instead, certain indicators (signals) of the true situation are provided via a control system. A hidden Markov model can provide key information about the system's dependability, such as the reliability of the system and related measures. An estimation procedure is implemented based on the Baum-Welch algorithm. Classical structures such as K-out-of-N systems and shock models are illustrated. Finally, the maintenance of the system is considered in the HMM context, and two functions for new preventive maintenance strategies are provided. Maintenance efficiency is measured in terms of expected cost. Methods are described in Gamiz, Limnios, and Segovia-Garcia (2023) <doi:10.1016/j.ejor.2022.05.006>.
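As a rough illustration of the underlying idea (not the package's interface), the following Python/NumPy sketch runs a forward filter over a toy two-state model, turning a sequence of observed signals into the probability that the hidden system state is degraded; all probabilities are made-up illustrative values:

import numpy as np

# Toy two-state system: hidden state 0 = "working", 1 = "degraded".
# Signals from the control system: 0 = "normal reading", 1 = "alarm".
A = np.array([[0.95, 0.05],    # transition probabilities between hidden states
              [0.10, 0.90]])
B = np.array([[0.90, 0.10],    # emission probabilities: P(signal | hidden state)
              [0.30, 0.70]])
pi = np.array([0.99, 0.01])    # initial state distribution

signals = [0, 0, 1, 1, 1, 0, 1, 1]   # observed indicator sequence

# Forward filter: P(hidden state at time t | signals up to time t).
alpha = pi * B[:, signals[0]]
alpha /= alpha.sum()
for s in signals[1:]:
    alpha = (alpha @ A) * B[:, s]
    alpha /= alpha.sum()

print("P(system degraded | observed signals) =", alpha[1])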
Hedgehog will eat all your bugs. Hedgehog is a property-based testing package in the spirit of QuickCheck. With Hedgehog, one can test properties of their programs against randomly generated input, providing far superior test coverage compared to unit testing. One of the key benefits of Hedgehog is integrated shrinking of counterexamples, which allows one to quickly find the cause of bugs, given salient examples when incorrect behaviour occurs.
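The same workflow (random inputs plus automatic shrinking of failing cases) can be sketched in Python with the hypothesis library rather than Hedgehog itself; the buggy reverse function below is a contrived example:

from hypothesis import given, strategies as st

def broken_reverse(xs):
    # Contrived bug: long lists are silently left unreversed.
    return xs[::-1] if len(xs) < 5 else xs

# Property: after reversing, the first element ends up last.
@given(st.lists(st.integers(), min_size=1))
def test_first_element_becomes_last(xs):
    assert broken_reverse(xs)[-1] == xs[0]

# Run under pytest: hypothesis generates random lists, finds a failing case,
# and shrinks it to a small counterexample (here, a five-element list whose
# first and last entries differ) -- the integrated shrinking described above.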
Facilitates building topology-preserving maps for data analysis.
Identifying labeled compounds in a 13C-tracer experiment in a non-targeted fashion is a cumbersome process. This package facilitates such analyses by providing high-level quality-control plots, deconvoluting and evaluating spectra, and performing a multitude of tests in an automatic fashion. The main idea is to use changing intensity ratios of ion pairs from peak lists generated with xcms as candidates and to evaluate those automatically against base peak chromatograms and spectral information within the raw measurement data. The functionality is described in Hoffmann et al. (2018) <doi:10.1021/acs.analchem.8b00356>.
This package implements Data Envelopment Analysis (DEA) with a hyperbolic orientation using a non-linear programming solver. It enables flexible estimations with weight restrictions, non-discretionary variables, and a generalized distance function. Additionally, it allows for the calculation of slacks and super-efficiency scores. The methods are detailed in Öttl et al. (2023), <doi:10.1016/j.dajour.2023.100343>. Furthermore, the package provides a non-linear profitability estimation built upon the DEA framework.
Joint models have been widely used to study the associations between longitudinal biomarkers and a survival outcome. However, existing joint models only consider one or a few longitudinal biomarkers and cannot deal with high-dimensional longitudinal biomarkers. This package can be used to fit our recently developed penalized joint model that can handle high-dimensional longitudinal biomarkers. Specifically, an adaptive lasso penalty is imposed on the parameters for the effects of the longitudinal biomarkers on the survival outcome, which allows for variable selection. Also, our algorithm is computationally efficient, which is based on the Gaussian variational approximation method.
In some cases you will have data in histogram format: a vector of all possible observations and a vector of how many times each observation appeared. You could expand this into a single one-dimensional vector, but this may not be advisable if the counts are extremely large. HistDat allows summary statistics to be calculated without the need to expand your data.
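The idea can be sketched in a few lines of Python (this shows only the general technique, not the HistDat interface): summary statistics are computed directly from the value/count pairs without materializing the expanded vector.

import numpy as np

# Histogram-format data: every possible observation and how often it occurred.
values = np.array([1.0, 2.0, 3.0, 4.0])
counts = np.array([10, 250_000, 4_000_000, 7])   # expanding this would be wasteful

n = counts.sum()
mean = np.sum(values * counts) / n
variance = np.sum(counts * (values - mean) ** 2) / (n - 1)   # sample variance

print(n, mean, np.sqrt(variance))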
Cross-species identification of novel gene candidates using the NCBI web service is provided. Further, sets of miRNA target genes can be identified by using the targetscan.org API.
This package provides a data set of the Portuguese NHS hospitals.
Method and tool for generating hybrid time series forecasts using an error remodeling approach. These forecasting approaches use a recursive technique: the linearity of the series is modeled with a linear method (e.g., ARIMA, Theta, etc.), and the residuals of the linear forecaster are then modeled (forecasted) with non-linear neural networks (e.g., ANN, ARNN, etc.). The hybrid architectures comprise three steps: first, the linear patterns of the series are forecasted; next, the errors are re-modeled; finally, the forecasts from both steps are combined to produce the final output. The method additionally provides confidence intervals as needed. Ten different models can be implemented using this package, which generates different types of hybrid error-correction models for time series forecasting based on the algorithms of Zhang (2003), Chakraborty et al. (2019), Chakraborty et al. (2020), Bhattacharyya et al. (2021), Chakraborty et al. (2022), and Bhattacharyya et al. (2022) <doi:10.1016/S0925-2312(01)00702-0> <doi:10.1016/j.physa.2019.121266> <doi:10.1016/j.chaos.2020.109850> <doi:10.1109/IJCNN52387.2021.9533747> <doi:10.1007/978-3-030-72834-2_29> <doi:10.1007/s11071-021-07099-3>.
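As a rough Python sketch of this three-step recipe under simple assumptions (ARIMA for the linear part and a small autoregressive neural network on the residuals, both chosen here for illustration only; this does not reproduce the package's specific models or its confidence intervals):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + 5 * np.sin(np.arange(300) / 10)  # toy series
h, p = 12, 5   # forecast horizon and residual lag order (illustrative choices)

# Step 1: model the linear patterns of the series and forecast them.
linear = ARIMA(y, order=(1, 1, 1)).fit()
linear_fc = linear.forecast(steps=h)

# Step 2: re-model the in-sample residuals with a non-linear learner
# (autoregression on the last p residuals).
res = linear.resid
X = np.column_stack([res[i:len(res) - p + i] for i in range(p)])
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X, res[p:])

# Forecast the residuals recursively h steps ahead.
window = list(res[-p:])
res_fc = []
for _ in range(h):
    nxt = ann.predict(np.array(window[-p:]).reshape(1, -1))[0]
    res_fc.append(nxt)
    window.append(nxt)

# Step 3: combine the linear forecast with the forecasted residuals.
hybrid_fc = linear_fc + np.array(res_fc)
print(hybrid_fc)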
This package provides functions implementing change point detection methods using the maximum pairwise Bayes factor approach. Additionally, the package includes tools for generating simulated datasets for comparing and evaluating change point detection techniques.
An RStudio Addin for Hippie Expand (AKA Hippie Code Completion or Cyclic Expand Word). This type of completion searches for matching tokens within the user's current source editor file, regardless of file type. By searching only within the current source file, hippie offers a fast way to identify and insert completions that appear around the user's cursor.
Events from individual hydrologic time series are extracted, and events are matched across multiple time series. The package has been applied in studies such as Wasko and Guo (2022) <doi:10.1002/hyp.14563> and Mohammadpour Khoie, Guo and Wasko (2025) <doi:10.1016/j.envsoft.2025.106521>.
Inference concerning equilibrium and random mating in autopolyploids. Methods are available to test for equilibrium and random mating at any even ploidy level (>2) in the presence of double reduction at biallelic loci. For autopolyploid populations in equilibrium, methods are available to estimate the degree of double reduction. We also provide functions to calculate genotype frequencies at equilibrium, or after one or several rounds of random mating, given rates of double reduction. The main function is hwefit(). This material is based upon work supported by the National Science Foundation under Grant No. 2132247. The opinions, findings, and conclusions or recommendations expressed are those of the author and do not necessarily reflect the views of the National Science Foundation. For details of these methods, see Gerard (2023a) <doi:10.1111/biom.13722> and Gerard (2023b) <doi:10.1111/1755-0998.13856>.
This package implements the Brakerski-Fan-Vercauteren (BFV, 2012) <https://eprint.iacr.org/2012/144>, Brakerski-Gentry-Vaikuntanathan (BGV, 2014) <doi:10.1145/2633600>, and Cheon-Kim-Kim-Song (CKKS, 2016) <https://eprint.iacr.org/2016/421.pdf> schemes for Fully Homomorphic Encryption. The included vignettes demonstrate the encryption procedures.
This package provides tools for computing the HUM (Hypervolume Under the Manifold) value to estimate a feature's ability to discriminate between class labels, and for visualizing the ROC curve for two or three class labels (Natalia Novoselova, Cristina Della Beffa, Junxi Wang, Jialiang Li, Frank Pessler, Frank Klawonn (2014) <doi:10.1093/bioinformatics/btu086>).
Human names are complicated and nonstandard things. Humaniformat, which is based on Anthony Ettinger's humanparser project (https://github.com/chovy/humanparser), provides functions for parsing human names, making a best-guess attempt to distinguish sub-components such as prefixes, suffixes, middle names and salutations.
This package provides a system for identifying diseases or events from healthcare databases and preparing data for epidemiological studies. It includes capabilities not supported by SQL, such as matching strings with stringr-style regular expressions, and can compute comorbidity scores (Quan et al. (2005) <doi:10.1097/01.mlr.0000182534.19832.83>) directly on a database server. The implementation is based on dbplyr with full tidyverse compatibility.
This package provides access to Uber's H3 library for geospatial indexing via h3-js, its JavaScript transpilation, <https://github.com/uber/h3-js> and V8 <https://github.com/jeroen/v8>.