Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
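For example, a minimal Python call to this endpoint might look like the sketch below. The base URL is a placeholder and the shape of the JSON body is an assumption; only the query parameters above are documented.

```python
# Minimal sketch of querying the package search API (base URL is a placeholder).
import requests

resp = requests.get(
    "https://example.org/api/packages",          # replace with the real host
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.headers)       # pagination information is returned in the response headers
for pkg in resp.json():   # assumes the body is a JSON list of matching packages
    print(pkg)
```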
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analysis of Surface Plasmon Resonance (SPR) and Biolayer Interferometry data, with automations for high-throughput SPR. This version of the package fits the 1:1 binding model, with and without bulkshift, and offers optional local or global Rmax fitting. The user must provide a sample sheet and a Carterra output file in Carterra's current format; a utility function converts from Carterra's old output format. The user may run a custom pipeline or use the provided Runscript, which produces a PDF file containing the fitted Rmax, ka, kd and standard errors, a plot of the sensorgram and fits, and a plot of residuals. The script also produces a .csv file with all of the relevant parameters for each spot on the SPR chip.
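For illustration, a minimal sketch of fitting a 1:1 Langmuir binding model to a single sensorgram is shown below (Python/scipy). This is not the package's code; the single-concentration setup, noise level, and function names are assumptions.

```python
# Sketch: fit ka, kd, Rmax of a 1:1 binding model to a simulated sensorgram.
import numpy as np
from scipy.optimize import curve_fit

def one_to_one(t, ka, kd, rmax, conc, t_dissoc):
    """Piecewise 1:1 Langmuir model: association up to t_dissoc, then dissociation."""
    occ = rmax * conc * ka / (conc * ka + kd)
    r_assoc = occ * (1 - np.exp(-(conc * ka + kd) * t))
    r0 = occ * (1 - np.exp(-(conc * ka + kd) * t_dissoc))
    r_dissoc = r0 * np.exp(-kd * (t - t_dissoc))
    return np.where(t <= t_dissoc, r_assoc, r_dissoc)

rng = np.random.default_rng(0)
t = np.linspace(0, 600, 601)
conc, t_dissoc = 50e-9, 300.0                       # 50 nM analyte, 300 s association
noisy = one_to_one(t, 1e5, 1e-3, 120.0, conc, t_dissoc) + rng.normal(0, 1.0, t.size)
popt, pcov = curve_fit(lambda t, ka, kd, rmax: one_to_one(t, ka, kd, rmax, conc, t_dissoc),
                       t, noisy, p0=[5e4, 5e-3, 100.0])
print("ka, kd, Rmax:", popt, "std. errors:", np.sqrt(np.diag(pcov)))
```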
Hard drive data: a class of data allowing easy importation and manipulation of out-of-memory data sets. The data sets are located on disk but behave as if they were in memory; the manipulation syntax is similar to that of data.table. Operations are performed chunk-wise behind the scenes.
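The chunk-wise idea can be sketched in a few lines of Python with pandas. This is a generic illustration of the technique, not this package's API; the file name and column are hypothetical.

```python
# Sketch of chunk-wise aggregation over an on-disk CSV that need not fit in memory.
import numpy as np
import pandas as pd

pd.DataFrame({"value": np.arange(1_000_000)}).to_csv("big_file.csv", index=False)  # demo file

total, count = 0.0, 0
for chunk in pd.read_csv("big_file.csv", chunksize=100_000):  # read 100k rows at a time
    total += chunk["value"].sum()
    count += len(chunk)
print("mean of 'value':", total / count)   # identical to the in-memory result
```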
This data-only package was created for distributing data used in the examples of the hglm package.
Computes the ACMIF test and Bonferroni-adjusted p-value of interaction in two-factor studies. Produces the corresponding interaction plot, analysis of variance tables, and p-values from several other tests of non-additivity.
This package implements methods developed by Ding, Feller, and Miratrix (2016) <doi:10.1111/rssb.12124> <arXiv:1412.5000>, and Ding, Feller, and Miratrix (2018) <doi:10.1080/01621459.2017.1407322> <arXiv:1605.06566> for testing whether there is unexplained variation in treatment effects across observations, and for characterizing the extent of the explained and unexplained variation in treatment effects. The package includes wrapper functions implementing the proposed methods, as well as helper functions for analyzing and visualizing the results of the test.
This package provides functions for combining model outputs (e.g. predictions or estimates) from multiple models into an aggregated ensemble model output.
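As a generic illustration of what combining model outputs means (not this package's interface), an equal-weight and a weighted ensemble average of predictions might look like:

```python
# Combine predictions from multiple models into one ensemble prediction.
import numpy as np

preds = np.array([
    [0.10, 0.80, 0.55],   # model 1 predictions for three observations
    [0.20, 0.70, 0.60],   # model 2
    [0.15, 0.90, 0.50],   # model 3
])
equal_weight = preds.mean(axis=0)                      # simple ensemble mean
weights = np.array([0.5, 0.3, 0.2])                    # e.g. skill-based weights
weighted = np.average(preds, axis=0, weights=weights)  # weighted ensemble
print(equal_weight, weighted)
```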
Implements Hierarchical Bayesian Small Area Estimation models using the brms package as the computational backend. The modeling framework follows the methodological foundations described in area-level models. This package is designed to facilitate a principled Bayesian workflow, enabling users to conduct prior predictive checks, model fitting, posterior predictive checks, model comparison, and sensitivity analysis in a coherent and reproducible manner. It supports flexible model specifications via brms and promotes transparency in model development, aligned with the recommendations of modern Bayesian data analysis practices, implementing methods described in Rao and Molina (2015) <doi:10.1002/9781118735855>.
Statistical analysis of static chamber concentration data for trace gas flux estimation.
This package provides access to Uber's H3 geospatial indexing system via h3lib <https://CRAN.R-project.org/package=h3lib>. h3r is designed to mimic the H3 Application Programming Interface (API) <https://h3geo.org/docs/api/indexing/>, so that any function in the API is also available in h3r.
Helper functions for creating reproducible hexagon stickers purely in R.
This package provides a modern idiomatic header-only C++ interface for libhdf5. Original software can be found at <https://github.com/highfive-devs/highfive/>.
Hospital machine learning and AI data analysis workflow tools, modeling, and automations. This library provides many useful tools for reviewing common administrative hospital data, including predicting length of stay and readmissions. The aim is to provide a simple and consistent verb framework that takes the guesswork out of the workflow.
Identifies regime changes in streamflow runoff not explained by variations in precipitation. The package builds a flexible set of Hidden Markov Models of annual, seasonal or monthly streamflow runoff with precipitation as a predictor. Suites of models can be built for a single site, ranging from one to three states, each with differing combinations of error models and auto-correlation terms. The most parsimonious model is easily identified by AIC and is useful for understanding catchment drought non-recovery: Peterson TJ, Saft M, Peel MC & John A (2021) <doi:10.1126/science.abd5085>.
Given one or multiple paths to files produced by a PULSE multi-channel or a PULSE one-channel system (<https://electricblue.eu/pulse>) from a single experiment: [1] check pulse files for inconsistencies and read/merge all data, [2] split across time windows, [3] interpolate and smooth to optimize the dataset, [4] compute the heart rate frequency for each channel/window, and [5] facilitate quality control, summarising and plotting. Heart rate frequency is calculated using the Automatic Multi-scale Peak Detection algorithm proposed by Felix Scholkmann and team. For more details see Scholkmann et al (2012) <doi:10.3390/a5040588>. Check original code at <https://github.com/ig248/pyampd>. ElectricBlue is a non-profit technology transfer startup creating research-oriented solutions for the scientific community (<https://electricblue.eu>).
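A simplified, deterministic sketch of the AMPD idea is given below in Python. The detrending step and the randomized scalogram weighting of the published algorithm are omitted, so this is illustrative only, not the package's implementation.

```python
# Simplified sketch of Automatic Multi-scale Peak Detection (Scholkmann et al. 2012).
import numpy as np

def ampd_peaks(x):
    n = len(x)
    max_scale = n // 2 - 1
    # lms[k-1, i] is True when x[i] is larger than both neighbours at distance k
    lms = np.zeros((max_scale, n), dtype=bool)
    for k in range(1, max_scale + 1):
        lms[k - 1, k:n - k] = (x[k:n - k] > x[:n - 2 * k]) & (x[k:n - k] > x[2 * k:])
    gamma = lms.sum(axis=1)           # number of local maxima found at each scale
    lam = int(np.argmax(gamma)) + 1   # scale with the most local maxima (dominant period)
    return np.flatnonzero(lms[:lam].all(axis=0))  # maxima at every scale up to lam

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)  # ~1.5 Hz "heartbeat"
peaks = ampd_peaks(signal)
print("detected beats:", len(peaks), "=> frequency ~", len(peaks) / 10, "Hz")
```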
Evaluates Higher Order Assortativity of complex networks defined through objects of class igraph from the package of the same name. The package also returns results for directed and weighted graphs. References: Arcagni, A., Grassi, R., Stefani, S., & Torriero, A. (2017) <doi:10.1016/j.ejor.2017.04.028>; Arcagni, A., Grassi, R., Stefani, S., & Torriero, A. (2021) <doi:10.1016/j.jbusres.2019.10.008>; Arcagni, A., Cerqueti, R., & Grassi, R. (2023) <doi:10.48550/arXiv.2304.01737>.
We use the Alternating Direction Method of Multipliers (ADMM) for parameter estimation in high-dimensional, single-modality mediation models. To improve the sensitivity and specificity of estimated mediation effects, we offer the sure independence screening (SIS) function for dimension reduction. The available penalty options include Lasso, Elastic Net, Pathway Lasso, and Network-constrained Penalty. The methods employed in the package are based on Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). <doi:10.1561/2200000016>, Fan, J., & Lv, J. (2008) <doi:10.1111/j.1467-9868.2008.00674.x>, Li, C., & Li, H. (2008) <doi:10.1093/bioinformatics/btn081>, Tibshirani, R. (1996) <doi:10.1111/j.2517-6161.1996.tb02080.x>, Zhao, Y., & Luo, X. (2022) <doi:10.4310/21-sii673>, and Zou, H., & Hastie, T. (2005) <doi:10.1111/j.1467-9868.2005.00503.x>.
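For reference, the generic ADMM lasso updates from Boyd et al. (2011) look like the sketch below (Python). This is the textbook scheme, not the package's mediation-specific estimator or its SIS screening step; the data and penalty values are arbitrary.

```python
# Generic ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the linear solve
    Atb = A.T @ b
    for _ in range(n_iter):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))              # x-update (ridge-like solve)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                                         # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
beta = np.zeros(20); beta[:3] = [3.0, -2.0, 1.5]              # sparse ground truth
b = A @ beta + 0.1 * rng.normal(size=100)
print(np.round(admm_lasso(A, b, lam=5.0), 2))
```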
Raster-based flood modelling internally using hyd1d, an R package to interpolate 1d water level and gauging data. The package computes flood extent and duration through strategies originally developed for INFORM, an ArcGIS-based hydro-ecological modelling framework. It does not provide a full, physical hydraulic modelling algorithm, but a simplified, near real time GIS approach for flood extent and duration modelling. Computationally demanding annual flood durations have been computed already and data products were published by Weber (2022) <doi:10.1594/PANGAEA.948042>.
Initializes a class that obtains API credentials and provides a method to use those credentials to make GET requests to the Hakai API server. Usage instructions are documented at <https://hakaiinstitute.github.io/hakai-api/>.
This package provides an interface to HDFql <https://www.hdfql.com/> and helper functions for reading data from and writing data to HDF5 files. HDFql provides a high-level language for managing HDF5 data that is platform independent. For more information, see the reference manual <https://www.hdfql.com/resources/HDFqlReferenceManual.pdf>.
Template R package with minimal setup to use Rust code in R without hacks or frameworks. Includes basic examples of importing cargo dependencies, spawning threads and passing numbers or strings from Rust to R. Cargo crates are automatically vendored in the R source package to support offline installation. The GitHub repository for this package has more details and also explains how to set up CI. This project was first presented at eRum 2018 to showcase R-Rust integration <https://jeroen.github.io/erum2018/>; for a real-world use case, see the gifski package on CRAN.
Offers efficient algorithms for fitting regularization paths for lasso or elastic-net penalized regression models with Huber loss, quantile loss or squared loss. Reference: Congrui Yi and Jian Huang (2017) <doi:10.1080/10618600.2016.1256816>.
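To illustrate the kind of objective being solved, here is a generic proximal-gradient sketch for a Huber-loss lasso in Python. The package's own path algorithm is far more efficient; the data, lambda, and delta values here are arbitrary.

```python
# Sketch: minimize (1/n) * sum_i huber(y_i - x_i @ beta) + lam * ||beta||_1 via ISTA.
import numpy as np

def huber_grad(r, delta=1.0):
    return np.clip(r, -delta, delta)            # derivative of the Huber loss in r

def huber_lasso(X, y, lam, delta=1.0, n_iter=500):
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, delta) / n
        beta = beta - step * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lam * step, 0.0)  # soft threshold
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 2.0 - X[:, 1] + rng.standard_t(df=3, size=200)   # heavy-tailed noise
print(np.round(huber_lasso(X, y, lam=0.1), 2))
```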
Estimates the shape and volume of high-dimensional datasets and performs set operations: intersection / overlap, union, unique components, inclusion test, and hole detection. Uses a stochastic geometry approach to high-dimensional kernel density estimation, support vector machine delineation, and convex hull generation. Applications include modeling trait and niche hypervolumes and species distribution modeling.
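A flavour of the convex-hull component can be sketched with scipy; this is a generic illustration, and the kernel density and support vector machine boundaries the description mentions are not shown.

```python
# Convex-hull volume and point-inclusion test for a point cloud.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(2)
cloud = rng.normal(size=(500, 3))                # 500 observations of 3 "traits"
hull = ConvexHull(cloud)
print("hull volume:", hull.volume)

tri = Delaunay(cloud[hull.vertices])             # triangulate the hull vertices
queries = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
inside = tri.find_simplex(queries) >= 0          # inside the hull iff inside some simplex
print("inside hull:", inside)                    # expected: [True, False]
```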
This package implements the method developed by Cao and Kosorok (2011) for the significance analysis of thousands of features in high-dimensional biological studies. It is an asymptotically valid data-driven procedure to find critical values for rejection regions controlling the k-familywise error rate, false discovery rate, and the tail probability of false discovery proportion.
This package implements Data Envelopment Analysis (DEA) with a hyperbolic orientation using a non-linear programming solver. It enables flexible estimations with weight restrictions, non-discretionary variables, and a generalized distance function. Additionally, it allows for the calculation of slacks and super-efficiency scores. The methods are detailed in Öttl et al. (2023), <doi:10.1016/j.dajour.2023.100343>. Furthermore, the package provides a non-linear profitability estimation built upon the DEA framework.