Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
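For example, using the 'httr' package (a sketch only; <host> is a placeholder for this site's own host name):

    library(httr)
    resp <- GET("https://<host>/api/packages",    # <host> is a placeholder
                query = list(search = "hello", page = 1, limit = 20))
    headers(resp)    # pagination information, e.g. the number of pages
    content(resp)    # the matching packages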
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Makes it easy to download a large number of files, such as PDF and CSV files, while automatically throttling requests, reporting progress, and skipping files that have already been downloaded.
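The general pattern looks like this minimal base-R sketch (the URLs and folder are hypothetical; this is not the package's own interface):

    urls <- c("https://example.org/a.pdf", "https://example.org/b.csv")  # hypothetical
    dir.create("downloads", showWarnings = FALSE)
    for (i in seq_along(urls)) {
      dest <- file.path("downloads", basename(urls[i]))
      if (file.exists(dest)) next            # adjust for files already downloaded
      message("Downloading ", i, " of ", length(urls))
      download.file(urls[i], dest, mode = "wb")
      Sys.sleep(2)                           # slow down requests between files
    }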
Perform statistical writership analysis of scanned handwritten documents with a Shiny app for 'handwriter'.
Focuses on data processing and visualization in hydrology and climate forecasting. Main functions include data extraction, data downscaling, data resampling, gap filling of precipitation records, bias correction of forecasting data, flexible time series plotting, and spatial map generation. It is a good pre-processing and post-processing tool for hydrological and hydraulic modellers.
Testing homogeneity of k multivariate distributions is a classical and challenging problem in statistics, and it becomes even more challenging when the dimension of the data exceeds the sample size. We construct some tests for this purpose which are exact level (size) alpha tests based on clustering. These tests are easy to implement and distribution-free in finite sample situations. Under appropriate regularity conditions, these tests have the consistency property in the HDLSS asymptotic regime, where the dimension of the data grows to infinity while the sample size remains fixed. We also consider a multiscale approach, where the results for different numbers of partitions are aggregated judiciously. Details are in Biplab Paul, Shyamal K De and Anil K Ghosh (2021) <doi:10.1016/j.jmva.2021.104897>; Soham Sarkar and Anil K Ghosh (2019) <doi:10.1109/TPAMI.2019.2912599>; William M Rand (1971) <doi:10.1080/01621459.1971.10482356>; Cyrus R Mehta and Nitin R Patel (1983) <doi:10.2307/2288652>; Joseph C Dunn (1973) <doi:10.1080/01969727308546046>; Sture Holm (1979) <doi:10.2307/4615733>; Yoav Benjamini and Yosef Hochberg (1995) <doi:10.2307/2346101>.
Perform hierarchical Bayesian Aldrich-McKelvey scaling using Hamiltonian Monte Carlo via 'Stan'. Aldrich-McKelvey ('AM') scaling is a method for estimating the ideological positions of survey respondents and political actors on a common scale using positional survey data. The hierarchical versions of the Bayesian AM model included in this package outperform other versions both in terms of yielding meaningful posterior distributions for respondent positions and in terms of recovering true respondent positions in simulations. The package contains functions for preparing data, fitting models, extracting estimates, plotting key results, and comparing models using cross-validation. The original version of the default model is described in Bølstad (2024) <doi:10.1017/pan.2023.18>.
Allows evaluating the Higher Order Assortativity of complex networks defined through objects of class igraph from the package of the same name. The package also returns results for directed and weighted graphs. References: Arcagni, A., Grassi, R., Stefani, S., & Torriero, A. (2017) <doi:10.1016/j.ejor.2017.04.028>; Arcagni, A., Grassi, R., Stefani, S., & Torriero, A. (2021) <doi:10.1016/j.jbusres.2019.10.008>; Arcagni, A., Cerqueti, R., & Grassi, R. (2023) <doi:10.48550/arXiv.2304.01737>.
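For orientation, the classical first-order degree assortativity that these higher-order measures generalize can be computed with igraph directly (a minimal sketch on a random graph, not this package's own functions):

    library(igraph)
    g <- sample_gnp(100, 0.05)                 # random graph for illustration
    assortativity_degree(g, directed = FALSE)  # first-order degree assortativity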
Hard drive data: a class of data allowing the easy importation/manipulation of out-of-memory data sets. The data sets are located on disk but look as if they were in memory; the syntax for manipulation is similar to 'data.table'. Operations are performed chunk-wise behind the scenes.
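The chunk-wise idea can be illustrated in base R: stream a file that is too large for memory in fixed-size pieces and keep only per-chunk aggregates (a generic sketch with a hypothetical file and column, not this package's internals):

    con <- file("big.csv", "r")                  # "big.csv" is hypothetical
    header <- strsplit(readLines(con, n = 1), ",")[[1]]
    total <- 0
    repeat {
      chunk <- tryCatch(
        read.csv(con, header = FALSE, nrows = 1e5, col.names = header),
        error = function(e) NULL                 # end of file reached
      )
      if (is.null(chunk) || nrow(chunk) == 0) break
      total <- total + sum(chunk$value)          # per-chunk work; 'value' is hypothetical
    }
    close(con)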
This package provides functions for designing phase II clinical trials adjusting for the heterogeneity of the population using known subgroups or historical controls.
Monthly median home listing, sale price per square foot, and number of units sold for 2984 counties in the contiguous United States from 2008 to January 2016. Additional data sets containing geographical information and links to Wikipedia are also included.
This package performs genetic association analyses of case-parent triad (trio) data with multiple markers. It can also incorporate complete or incomplete control triads, for instance independent control children. Estimation is based on haplotypes, for instance SNP haplotypes, even though phase is not known from the genetic data. Haplin estimates relative risk (RR + conf.int.) and p-value associated with each haplotype. It uses maximum likelihood estimation to make optimal use of data from triads with missing genotypic data, for instance if some SNPs have not been typed for some individuals. Haplin also allows estimation of effects of maternal haplotypes and parent-of-origin effects, particularly appropriate in perinatal epidemiology. Haplin allows special models, like X-inactivation, to be fitted on the X-chromosome. A GxE analysis allows testing interactions between environment and all estimated genetic effects. The models were originally described in "Gjessing HK and Lie RT. Case-parent triads: Estimating single- and double-dose effects of fetal and maternal disease gene haplotypes. Annals of Human Genetics (2006) 70, pp. 382-396".
Estimates the conditional treatment effect for competing risks data in observational studies. The effect is described as a constant difference between the hazard functions given the covariates, but no specific functional forms are assumed for the covariates. See Rava, D. and Xu, R. (2021) <arXiv:2112.09535>.
This package implements the method developed by Cao and Kosorok (2011) for the significance analysis of thousands of features in high-dimensional biological studies. It is an asymptotically valid data-driven procedure to find critical values for rejection regions controlling the k-familywise error rate, false discovery rate, and the tail probability of false discovery proportion.
Implementation of the Hysteretic and Gatekeeping Depressions Model (HGDM), which calculates variable connected/contributing areas and resulting discharge volumes in prairie basins dominated by depressions ("sloughs" or "potholes"). The small depressions are combined into a single "meta" depression which explicitly models the hysteresis between the storage of water and the connected/contributing areas of the depressions. The largest depression (greater than 5% of the total depressional area), if it exists, is represented separately to model its gatekeeping, i.e. the blocking of upstream flows until it is filled. The methodology is described in detail in Shook and Pomeroy (2025, <doi:10.1016/j.jhydrol.2025.132821>).
This package provides a generic function and a set of methods to calculate highest density intervals for a variety of classes of objects which can specify a probability density distribution, including MCMC output, fitted density objects, and functions.
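Conceptually, for a unimodal vector of MCMC draws the highest density interval is the shortest interval containing the desired probability mass. A minimal base-R sketch of that computation (the helper name hdi_sample is hypothetical, not this package's generic):

    hdi_sample <- function(x, credMass = 0.95) {
      x <- sort(x)
      n <- length(x)
      m <- ceiling(credMass * n)             # points the interval must cover
      widths <- x[m:n] - x[1:(n - m + 1)]    # width of every candidate interval
      i <- which.min(widths)                 # the shortest one is the HDI
      c(lower = x[i], upper = x[i + m - 1])
    }
    hdi_sample(rgamma(1e5, shape = 2))       # e.g. on skewed draws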
H3 is a hexagonal hierarchical spatial index developed by Uber <https://h3geo.org/>. This package exposes the source code of H3 (written in 'C') to routines that are callable through 'R'.
Provides functionality to manage, clean and match high-frequency trades and quotes data, calculate various liquidity measures, estimate and forecast volatility, detect price jumps, and investigate microstructure noise and intraday periodicity. A detailed vignette can be found in the open-access paper "Analyzing Intraday Financial Data in R: The highfrequency Package" by Boudt, Kleen, and Sjoerup (2022, <doi:10.18637/jss.v104.i08>).
Used to traverse graphs, applying DFS and BFS to obtain the path from a node to each leaf node. Depth-first traversal (DFS) is a recursive algorithm for searching all the vertices of a graph or tree data structure; traversal means visiting all the nodes of a graph. Breadth-first traversal (BFS) is used to search a tree or graph data structure for a node that meets a set of criteria: it starts at the tree's root (or a chosen graph node) and visits all nodes at the current depth level before moving on to the nodes at the next depth level. It also provides the reachability matrix between the nodes. The implementation references Baruch Awerbuch (1985) <doi:10.1016/0020-0190(85)90083-3>.
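A minimal BFS over an adjacency list in plain R (a generic sketch of the algorithm, not this package's interface):

    bfs <- function(adj, start) {            # adj: named list of neighbor vectors
      visited <- start
      queue <- start
      while (length(queue) > 0) {
        node <- queue[1]; queue <- queue[-1] # dequeue the oldest node
        for (nb in adj[[node]]) {
          if (!(nb %in% visited)) {          # enqueue unseen neighbors
            visited <- c(visited, nb)
            queue <- c(queue, nb)
          }
        }
      }
      visited                                # nodes in breadth-first order
    }
    bfs(list(a = c("b", "c"), b = "d", c = "d", d = character(0)), "a")
    # [1] "a" "b" "c" "d"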
Published meta-analyses routinely present one of the measures of heterogeneity introduced in Higgins and Thompson (2002) <doi:10.1002/sim.1186>. When critiquing articles, it is often helpful to convert to another of those measures. Some conversions are provided here, and confidence intervals are also available.
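For example, H and I^2 are linked by the identity I^2 = 1 - 1/H^2, so these two conversions are one-liners (a sketch of the identities, not this package's functions):

    i2_to_h <- function(I2) sqrt(1 / (1 - I2))   # I^2 given as a proportion
    h_to_i2 <- function(H) 1 - 1 / H^2
    i2_to_h(0.75)   # H = 2
    h_to_i2(2)      # I^2 = 0.75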
When performing multiple imputations, 5-10 imputations are sufficient for obtaining point estimates, but a larger number is needed for proper standard error estimates. This package allows you to calculate how many imputations are needed, following the work of von Hippel (2020) <doi:10.1177/0049124117747303>.
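One published form of von Hippel's quadratic rule is m = 1 + (1/2) * (FMI / cv)^2, where FMI is the fraction of missing information and cv is the target coefficient of variation of the standard error estimate. A sketch under that reading (the helper name is hypothetical, not this package's function):

    imputations_needed <- function(fmi, cv = 0.05) {
      ceiling(1 + 0.5 * (fmi / cv)^2)            # the quadratic rule
    }
    imputations_needed(0.5)   # ~51 imputations for 50% missing information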
Plot an R package's recursive dependency graph and tabulate the number of unique downstream dependencies added by top-level dependencies. This helps R package developers identify which of their declared dependencies add the most downstream dependencies in order to prioritize them for removal if needed. Uses graph stress minimization adapted from Schoch (2023) <doi:10.21105/joss.05238> and originally reported in Gansner et al. (2004) <doi:10.1007/978-3-540-31843-9_25>.
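A rough approximation of that tabulation is possible with base tooling alone (a sketch assuming a CRAN mirror is configured; 'ggplot2' is just an example package, and this is not this package's exact method):

    db <- available.packages()
    top <- tools::package_dependencies("ggplot2", db)[["ggplot2"]]   # top-level deps
    rec <- tools::package_dependencies(top, db, recursive = TRUE)    # their recursive deps
    sort(sapply(rec, length), decreasing = TRUE)  # downstream counts per top-level dep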
Create compressed, interactive HTML (Hypertext Markup Language) reports with embedded Python code, custom JS ('JavaScript') and CSS (Cascading Style Sheets), and wrappers for CanvasXpress plots, networks and more. Based on <https://pypi.org/project/py-report-html/>, its sister project.
Allows users to create time series of tropical storm exposure histories for chosen counties for a number of hazard metrics (wind, rain, distance from the storm, etc.). This package interacts with data available through the hurricaneexposuredata package, which is available in a drat repository. To access this data package, see the instructions at <https://github.com/geanders/hurricaneexposure>. The size of the hurricaneexposuredata package is approximately 20 MB. This work was supported in part by grants from the National Institute of Environmental Health Sciences (R00ES022631), the National Science Foundation (1331399), and a NASA Applied Sciences Program/Public Health Program Grant (NNX09AV81G).
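Assuming the drat repository follows the standard GitHub Pages layout, access typically looks like this sketch (the canonical instructions are at the URL above):

    install.packages("drat")
    drat::addRepo("geanders")                  # assumed GitHub-pages drat repo
    install.packages("hurricaneexposuredata")  # ~20 MB download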
Given one or multiple paths to files produced by a PULSE multi-channel or a PULSE one-channel system (<https://electricblue.eu/pulse>) from a single experiment: [1] check pulse files for inconsistencies and read/merge all data, [2] split across time windows, [3] interpolate and smooth to optimize the dataset, [4] compute the heart rate frequency for each channel/window, and [5] facilitate quality control, summarising and plotting. Heart rate frequency is calculated using the Automatic Multi-scale Peak Detection algorithm proposed by Felix Scholkmann and team. For more details see Scholkmann et al (2012) <doi:10.3390/a5040588>. Check original code at <https://github.com/ig248/pyampd>. ElectricBlue is a non-profit technology transfer startup creating research-oriented solutions for the scientific community (<https://electricblue.eu>).
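Steps [4]-[5] can be pictured with a far simpler detector than AMPD: smooth the trace, locate local maxima, and estimate the rate from inter-peak intervals (an illustrative base-R sketch on synthetic data, not this package's pipeline):

    fs <- 20                                       # sampling frequency in Hz (hypothetical)
    t  <- seq(0, 30, by = 1 / fs)
    x  <- sin(2 * pi * 1.5 * t) + rnorm(length(t), sd = 0.1)  # fake ~1.5 Hz heartbeat
    xs <- stats::filter(x, rep(1 / 5, 5))          # light moving-average smoothing
    pk <- which(diff(sign(diff(xs))) == -2) + 1    # indices of local maxima
    rate_hz <- 1 / mean(diff(t[pk]))               # heart rate frequency estimate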
The HMS (Hierarchic Memetic Strategy) is a composite global optimization strategy consisting of a multi-population evolutionary strategy and some auxiliary methods. The HMS makes use of a dynamically evolving data structure that provides an organization among the component populations. It is a tree with a fixed maximal height and variable internal node degree. Each component population is governed by a particular evolutionary engine. This package provides a simple R implementation with examples of using different genetic algorithms as the population engines. References: J. Sawicki, M. Łoś, M. Smołka, J. Alvarez-Aramberri (2022) <doi:10.1007/s11047-020-09836-w>.