Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
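Below is a minimal sketch of calling this endpoint from Python's standard library. The base URL is a placeholder and the JSON response body is an assumption; only the query parameters (search, page, limit) and the use of response headers for pagination are documented above.

import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org"  # placeholder: substitute this site's base URL

# Build the documented query string: search term, page number, items per page.
params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{BASE_URL}/api/packages?{params}") as response:
    # Pagination information (such as the number of pages) arrives in the headers.
    for name, value in response.getheaders():
        print(name, value)
    packages = json.load(response)  # assumes the body is JSON

print(packages)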
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools to estimate, compare, and visualize healthcare resource utilization using data derived from electronic health records or real-world evidence sources. The package supports pre-index and post-index analysis, patient cohort comparison, and customizable summaries and visualizations for clinical and health economics research. Methods implemented are based on Scott et al. (2022) <doi:10.1080/13696998.2022.2037917> and Xia et al. (2024) <doi:10.14309/ajg.0000000000002901>.
This package implements the high-dimensional two-sample test proposed by Zhang (2019) <http://hdl.handle.net/2097/40235>. It also implements the test proposed by Srivastava, Katayama, and Kano (2013) <doi:10.1016/j.jmva.2012.08.014>. These tests are particularly suitable for high-dimensional data from two populations for which the classical multivariate Hotelling's T-square test fails because the sample sizes are smaller than the dimensionality. In this case, the ZWL and ZWLm tests proposed by Zhang (2019) <http://hdl.handle.net/2097/40235>, available in this package as zwl_test(), provide a reliable and powerful test.
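For context, the classical two-sample Hotelling's T-square statistic mentioned in this description has the standard form

T^2 = \frac{n_1 n_2}{n_1 + n_2} (\bar{x}_1 - \bar{x}_2)^\top S^{-1} (\bar{x}_1 - \bar{x}_2),

where S is the pooled sample covariance matrix; when the dimensionality exceeds n_1 + n_2 - 2, S is singular and T^2 cannot be computed, which is the failure mode the ZWL and ZWLm tests are designed to avoid.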
This package provides a function to assess and test for heterogeneity in the utility of a surrogate marker with respect to a baseline covariate. The main function can be used for either a continuous or a discrete baseline covariate. More details will be available in: Parast, L., Cai, T., and Tian, L. (2021). "Testing for Heterogeneity in the Utility of a Surrogate Marker." Biometrics, in press.
Various functions and algorithms are provided here for solving optimal matching tasks in the context of preclinical cancer studies. Further, various helper and plotting functions are provided for unsupervised and supervised machine learning as well as longitudinal mixed-effects modeling of tumor growth response patterns.
Hadamard-matrix-based statistical designs are of immense importance, as the resultant designs carry various desirable characterizing properties. Constructing Partially Balanced Incomplete Block Designs (PBIBDs) using the Kronecker product of incidence matrices of Balanced Incomplete Block (BIB) and Partially Balanced Incomplete Block (PBIB) designs is well documented in the literature. Here, we have constructed Incomplete Block Designs (IBDs) based on Hadamard matrices and the Kronecker product of Hadamard matrices.
Harriet was Charles Darwin's pet tortoise (possibly). harrietr implements some functions to manipulate distance matrices and phylogenetic trees to make them easier to plot with ggplot2 and to manipulate using tidyverse tools.
Higher order likelihood inference is a promising approach for analyzing small-sample-size data. The holi package provides web applications for higher order likelihood inference. It currently supports linear, logistic, and Poisson generalized linear models through the rstar_glm() function, based on Pierce and Bellio (2017) <doi:10.1111/insr.12232> and the likelihoodAsy package. The package offers two main features: LA_rstar(), which launches an interactive shiny application allowing users to fit models with rstar_glm() through their web browser, and sim_rstar_glm_pgsql(), which streamlines the process of launching a web-based shiny simulation application that saves results to a user-created PostgreSQL database.
Manipulate data through memory-mapped files, as vectors, matrices or arrays. Basic arithmetic functions are implemented, but currently no matrix arithmetic. Can write and read descriptor files for compatibility with the bigmemory package.
Functions, data sets, analyses and examples from the book A Handbook of Statistical Analyses Using R (Brian S. Everitt and Torsten Hothorn, Chapman & Hall/CRC, 2006). The first chapter of the book, entitled "An Introduction to R", is included in this package in its entirety; for all other chapters, a vignette containing all data analyses is available.
Kernel density estimation with a hexagonal grid for bivariate data. A hexagonal grid has many beneficial properties, such as equidistant neighbours and less edge bias, making it better suited to spatial analyses than the more commonly used rectangular grid. Carr, D. B. et al. (1987) <doi:10.2307/2289444>. Diggle, P. J. (2010) <doi:10.1201/9781420072884>. Hill, B. (2017) <https://blog.bruce-hill.com/meandering-triangles>. Jones, M. C. (1993) <doi:10.1007/BF00147776>.
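As a reminder of the estimator involved (the standard bivariate kernel density estimator, here evaluated on a hexagonal rather than rectangular grid of points x, per the description above):

\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_H(x - X_i), \qquad K_H(u) = |H|^{-1/2} K\bigl(H^{-1/2} u\bigr),

where K is a bivariate kernel and H is the bandwidth matrix.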
This package implements hierarchical clustering methods (single linkage, complete linkage, average linkage, and centroid linkage) with stepwise printing and dendrograms for didactic purposes.
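For reference, the four linkage rules define the distance between clusters A and B as follows (standard definitions, not specific to this package's output):

\text{single: } d(A, B) = \min_{a \in A,\, b \in B} d(a, b), \qquad \text{complete: } d(A, B) = \max_{a \in A,\, b \in B} d(a, b),

\text{average: } d(A, B) = \frac{1}{|A||B|} \sum_{a \in A} \sum_{b \in B} d(a, b), \qquad \text{centroid: } d(A, B) = d(\bar{a}, \bar{b}),

where \bar{a} and \bar{b} are the cluster centroids.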
This package implements the Clarke-Wright algorithm to find a quasi-optimal solution to the Capacitated Vehicle Routing Problem. See Clarke, G. and Wright, J.R. (1964) <doi:10.1287/opre.12.4.568> for details. The implementation is accompanied by helper functions to inspect its solution.
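As a generic illustration of the savings idea at the core of the Clarke-Wright heuristic (a sketch with hypothetical coordinates, not this package's implementation): routes serving customers i and j separately from the depot can be merged with saving s(i, j) = d(0, i) + d(0, j) - d(i, j), and merges are attempted in decreasing order of savings while capacity permits.

import itertools
import math

depot = (0.0, 0.0)
customers = {1: (2.0, 1.0), 2: (3.0, 4.0), 3: (-1.0, 2.5)}  # hypothetical coordinates

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Savings from serving i and j on one route instead of two separate depot round trips.
savings = sorted(
    ((dist(depot, customers[i]) + dist(depot, customers[j])
      - dist(customers[i], customers[j]), i, j)
     for i, j in itertools.combinations(customers, 2)),
    reverse=True,
)
for s, i, j in savings:
    print(f"merge {i} and {j}: saving {s:.2f}")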
This package provides a set of tools to create georeferenced hillshade relief raster maps using ray-tracing and other advanced hill-shading techniques. It includes a wrapper function to create a georeferenced, ray-traced hillshade map from a digital elevation model, and other functions that can be used in a rayshader pipeline.
This package provides a tool to format R Markdown with CSS ids for HTML output. The tool may be most helpful for those using Markdown to create reproducible documents. The biggest limitation in formatting is the document author's knowledge of CSS.
In the framework of Symbolic Data Analysis, a relatively new approach to the statistical analysis of multi-valued data, we consider histogram-valued data, i.e., data described by univariate histograms. The methods and basic statistics for histogram-valued data are mainly based on the L2 Wasserstein metric between distributions, i.e., the Euclidean metric between quantile functions. The package contains unsupervised classification techniques, least squares regression and tools for histogram-valued data and for histogram time series. An introductory paper is Irpino, A. and Verde, R. (2015) <doi:10.1007/s11634-014-0176-4>.
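For reference, the L2 Wasserstein distance mentioned above, between two distributions with cumulative distribution functions F and G, is the Euclidean (L2) distance between their quantile functions:

d_{W_2}(F, G) = \left( \int_0^1 \bigl( F^{-1}(t) - G^{-1}(t) \bigr)^2 \, dt \right)^{1/2}.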
This tool identifies hydropeaking events, rapid flow variations induced by the hourly-adjusted electricity market, from raw time-series flow records. The novelty of HEDA is its use of the vector angle, instead of the first-order derivative, to detect change points, which not only largely improves computing efficiency but also accounts for the rate of change of the flow variation. More details: <doi:10.1016/j.jhydrol.2021.126392>.
This package provides a data set of the Portuguese NHS hospitals.
This package provides functions for the estimation, plotting, prediction and cross-validation of hierarchical feature regression models as described in Pfitzinger (2024), "Cluster Regularization via a Hierarchical Feature Regression", Econometrics and Statistics (in press), <doi:10.1016/j.ecosta.2024.01.003>.
This package implements Hierarchical Bayesian Small Area Estimation models using the brms package as the computational backend. The modeling framework follows the methodological foundations of area-level models. The package is designed to facilitate a principled Bayesian workflow, enabling users to conduct prior predictive checks, model fitting, posterior predictive checks, model comparison, and sensitivity analysis in a coherent and reproducible manner. It supports flexible model specifications via brms and promotes transparency in model development, aligned with the recommendations of modern Bayesian data analysis practice, implementing methods described in Rao and Molina (2015) <doi:10.1002/9781118735855>.
Processing, analysis and visualization of Hydrogen-Deuterium eXchange monitored by Mass Spectrometry (HDX-MS) experiments. HaDeX2 introduces a new standardized and reproducible workflow for the analysis of HDX-MS data, including uncertainty propagation, data aggregation and visualization on a 3D structure. Additionally, it covers data exploration, quality control and generation of publication-quality figures. All functionalities are also available in the accompanying shiny app.
This is a data-only package containing commercial domestic flights that departed Houston (IAH and HOU) in 2011.
This package provides functions to view files in raw binary form, as in a hex editor, along with additional functions to specify and read arbitrary binary formats.
This package implements an efficient algorithm for fitting the entire regularization path of quantile regression models with elastic-net penalties using a generalized coordinate descent scheme. The framework also supports SCAD and MCP penalties. It is designed for high-dimensional datasets and emphasizes numerical accuracy and computational efficiency. This package implements the algorithms proposed in Tang, Q., Zhang, Y., & Wang, B. (2022) <https://openreview.net/pdf?id=RvwMTDYTOb>.
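As a sketch of the underlying optimization problem, written here in a standard elastic-net parameterization (the package's exact parameterization may differ), the fitted models minimize a penalized quantile check loss:

\min_{\beta}\ \frac{1}{n} \sum_{i=1}^{n} \rho_\tau\bigl(y_i - x_i^\top \beta\bigr) + \lambda \Bigl( \alpha \lVert \beta \rVert_1 + \tfrac{1 - \alpha}{2} \lVert \beta \rVert_2^2 \Bigr), \qquad \rho_\tau(u) = u \bigl( \tau - \mathbf{1}\{u < 0\} \bigr),

where τ is the quantile level, λ the overall penalty strength, and α the elastic-net mixing weight; the SCAD and MCP options replace the elastic-net penalty term.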
EQ-5D value set estimation can be done using the hybrid model likelihood as described by Oppe and van Hout (2010) <doi:10.1002/hec.3560> and Ramos-Goñi et al. (2017) <doi:10.1097/MLR.0000000000000283>. The package is based on flexmix and, among other things, contains an M-step driver as described by Leisch (2004) <doi:10.18637/jss.v011.i08>. Users can estimate latent classes and address preference heterogeneity. Both uncensored and censored data are supported. Furthermore, heteroscedasticity can be taken into account. It is possible to control for different covariates on the continuous and dichotomous parts of the data, and start values can differ between the expected latent classes.