Allows users to fit a cosinor model using the glmmTMB framework. This extends existing cosinor modeling packages, including 'cosinor' and 'circacompare', by including a wide range of available link functions and the capability to fit mixed models. The cosinor model is described by Cornelissen (2014) <doi:10.1186/1742-4682-11-16>.
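For illustration, the cosinor model y = M + A*cos(2*pi*t/tau - phi) can be linearized and fitted directly with glmmTMB; the sketch below uses simulated data and a fixed 24-hour period, and is not the package's own interface.

    # Linearized cosinor: y = M + b1*cos(w*t) + b2*sin(w*t), where
    # amplitude A = sqrt(b1^2 + b2^2) and acrophase phi = atan2(b2, b1).
    library(glmmTMB)

    set.seed(1)
    t <- runif(200, 0, 48)                               # time in hours
    y <- 10 + 3 * cos(2 * pi * t / 24 - 1) + rnorm(200)  # simulated rhythm
    dat <- data.frame(y = y, t = t)

    w <- 2 * pi / 24                                     # 24-hour period
    fit <- glmmTMB(y ~ cos(w * t) + sin(w * t), data = dat, family = gaussian())

    b <- fixef(fit)$cond
    amplitude <- sqrt(b[2]^2 + b[3]^2)
    acrophase <- atan2(b[3], b[2])                       # in radians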
Workflow tools, models, and automations for analyzing hospital time series data. This library provides tools to review common administrative hospital time series, such as average length of stay and readmission rates. The aim is to provide a simple and consistent verb framework that removes the guesswork from routine analyses.
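As a rough illustration of the kinds of summaries involved, the base-R sketch below computes average length of stay and a 30-day readmission rate on a toy encounter table; the column names are hypothetical, not the package's schema.

    # Toy encounter data; column names are illustrative only.
    enc <- data.frame(
      patient_id = c(1, 1, 2, 3),
      admit      = as.Date(c("2024-01-02", "2024-01-20", "2024-01-05", "2024-01-07")),
      discharge  = as.Date(c("2024-01-06", "2024-01-23", "2024-01-09", "2024-01-08"))
    )

    # Average length of stay in days
    mean(as.numeric(enc$discharge - enc$admit))

    # 30-day readmission: admission within 30 days of the prior discharge
    enc <- enc[order(enc$patient_id, enc$admit), ]
    enc$prev_discharge <- ave(as.numeric(enc$discharge), enc$patient_id,
                              FUN = function(x) c(NA, head(x, -1)))
    enc$readmit30 <- (as.numeric(enc$admit) - enc$prev_discharge) <= 30
    mean(enc$readmit30, na.rm = TRUE)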
This package implements Heckman selection models using a Bayesian approach via Stan and compares the performance of normal, Student's t, and contaminated normal distributions in addressing complexities and selection bias (Heeju Lim, Victor E. Lachos, and Victor H. Lachos, Bayesian analysis of flexible Heckman selection models using Hamiltonian Monte Carlo, 2025, under submission).
This package provides a collection of Irucka Embry's miscellaneous USGS data sets (USGS parameter codes with fixed values, USGS global time zone codes, and US Air Force Global Engineering Weather Data). Irucka created these data sets while working as a Cherokee Nation Technology Solutions (CNTS) contractor for the United States Geological Survey (USGS) and/or as a USGS employee.
This package provides tools specifically designed for analyzing longitudinal microbiome data. It integrates seven functional modules into a systematic framework for microbiome time-series analysis. For more details on inferences involving interspecies interactions, see Fisher (2014) <doi:10.1371/journal.pone.0102451>. The package is also described in an unpublished manuscript.
This package provides a shiny app that supports merging PDF and/or image files, with options for page selection, removal, and rotation. It is a fast, free, and secure alternative to commercial software and online services that require users to sign up, and it avoids the potential risks of uploading files elsewhere.
This package implements the revised Synthetic Matching Algorithm of Kreitmeir, Lane, and Raschky (2025) <doi:10.2139/ssrn.3751162>, building on the original approach of Acemoglu, Johnson, Kermani, Kwak, and Mitton (2016) <doi:10.1016/j.jfineco.2015.10.001>, to estimate the cumulative treatment effect of an event on treated firms' stock returns.
This package provides a collection of functions for sensitivity analysis of model outputs (factor screening, global sensitivity analysis, and robustness analysis), for variable importance measures of data, and for interpretability of machine learning models. Most of the functions must be applied to a scalar output, but several support multi-dimensional outputs.
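For a sense of what factor screening means here, below is a minimal one-at-a-time (OAT) local-screening sketch in base R on a toy model; it is not this package's API, which offers far more refined estimators.

    # One-at-a-time screening of a scalar output via central differences.
    model <- function(x) x[1]^2 + 2 * x[2] + 0.1 * x[3]   # toy model

    base_point <- c(0.5, 0.5, 0.5)
    delta <- 0.1
    effects <- sapply(seq_along(base_point), function(i) {
      hi <- base_point; hi[i] <- hi[i] + delta
      lo <- base_point; lo[i] <- lo[i] - delta
      (model(hi) - model(lo)) / (2 * delta)
    })
    names(effects) <- paste0("x", seq_along(base_point))
    effects   # larger |value| = locally more influential factor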
Read General Transit Feed Specification (GTFS) zip files into a list of R data frames. Perform validation of the data structure against the specification. Analyze headways and frequencies at routes and stops. Create maps and perform spatial analysis on the routes and stops. Please see the GTFS documentation for more detail: <https://gtfs.org/>.
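Since a GTFS feed is a zip archive of comma-separated text tables (routes.txt, stops.txt, trips.txt, ...), the base-R sketch below shows the core idea of reading one into a named list of data frames; "gtfs.zip" is a placeholder path, and the package's own reader adds validation on top.

    # Read a GTFS zip into a named list of data frames, base R only.
    read_gtfs_sketch <- function(zipfile) {
      files <- grep("\\.txt$", unzip(zipfile, list = TRUE)$Name, value = TRUE)
      feed <- lapply(files, function(f)
        read.csv(unz(zipfile, f), stringsAsFactors = FALSE))
      names(feed) <- sub("\\.txt$", "", basename(files))
      feed
    }

    # feed <- read_gtfs_sketch("gtfs.zip")
    # head(feed$stops)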
This package provides tools for analyzing R expressions or blocks of code and determining the dependencies between them. It focuses on R scripts, but can also be used on the bodies of functions. Facilities include summarizing code to get a high-level view of it, determining dependencies between variables, and suggesting code improvements.
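The flavor of this kind of analysis can be seen with tools bundled with R itself; the sketch below uses all.vars() and codetools::findGlobals() to extract variable dependencies, which this package extends considerably.

    # Variables used and defined by an expression:
    e <- quote(y <- a * x + b)
    all.vars(e)          # "y" "a" "x" "b"
    all.vars(e[[3]])     # right-hand side only: "a" "x" "b"

    # Free (undefined) variables in a function body:
    library(codetools)   # shipped with R
    f <- function(x) a * x + b
    findGlobals(f, merge = FALSE)$variables   # "a" "b"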
This package creates dummy columns from columns that contain categorical variables (character or factor types). You can specify which columns to turn into dummies and which to ignore. It also creates dummy rows from character, factor, and Date columns, and offers a significant speed increase over creating dummy variables with model.matrix().
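The model.matrix() baseline mentioned above looks like this in base R; each factor level becomes its own indicator column (the "- 1" drops the intercept so every level gets a column).

    df <- data.frame(color = factor(c("red", "blue", "red", "green")))
    model.matrix(~ color - 1, df)
    #   colorblue colorgreen colorred
    # 1         0          0        1
    # 2         1          0        0
    # 3         0          0        1
    # 4         0          1        0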
The IPC::Run3 module allows you to run a subprocess and redirect stdin, stdout, and/or stderr to files and Perl data structures. It aims to satisfy 99% of the need for using system, qx, and open3 with a simple, extremely Perlish API and none of the bloat and rarely used features of IPC::Run.
Bit-level reading and writing are necessary when dealing with many file formats, e.g., compressed data and binary files. Currently, R connections are manipulated at the byte level. This package wraps existing connections and raw vectors so that it is possible to read bits, bit sequences, unaligned bytes, and low-bit representations of integers.
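Base R already exposes a bit view of raw bytes via rawToBits(), which illustrates the gap this package fills: the sketch below pulls an unaligned 3-bit integer out of a byte stream by hand, something the wrapped connections make incremental.

    bytes <- as.raw(c(0xA5, 0x01))
    bits  <- rawToBits(bytes)        # least-significant bit first per byte
    as.integer(bits[1:8])            # 1 0 1 0 0 1 0 1  (0xA5)

    # Low 3 bits of the stream as an unsigned integer:
    sum(as.integer(bits[1:3]) * 2^(0:2))   # 1 + 0 + 4 = 5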
This package performs change point detection on univariate and multivariate time series (Martínez & Mena, 2014, <doi:10.1214/14-BA878>; Corradin, Danese & Ongaro, 2022, <doi:10.1016/j.ijar.2021.12.019>) and clusters time-dependent data with common change points (Corradin, Danese, KhudaBukhsh & Ongaro, 2026, <doi:10.1007/s11222-025-10756-x>).
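As a point of reference only, and not the Bayesian machinery cited above, a single mean shift in a univariate series can be located with a classical standardized-difference scan in base R:

    set.seed(2)
    x <- c(rnorm(60, mean = 0), rnorm(60, mean = 1.5))
    n <- length(x)
    # Standardized difference of means before/after each candidate split
    stat <- sapply(1:(n - 1), function(i) {
      abs(mean(x[1:i]) - mean(x[(i + 1):n])) / sqrt(1 / i + 1 / (n - i))
    })
    which.max(stat)   # estimated change point (true location: 60)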
Create life tables with a Bayesian approach, which can be very useful for modelling a complex health process when considering multiple predisposing factors and multiple coexisting health conditions. Details for this method can be found in Lynch et al. (2022) <doi:10.1177/00811750221112398> and Zang et al. (2022) <doi:10.1093/geronb/gbab149>.
An aid for manipulating data associated with biomonitoring and bioassessment. Functions cover metric calculation, marking of excluded taxa, subsampling, and multimetric index calculation. Targeted communities are benthic macroinvertebrates, fish, periphyton, and coral. Methods follow the Revised Rapid Bioassessment Protocols (Barbour et al., 1999) <https://archive.epa.gov/water/archive/web/html/index-14.html>.
Easily create color-coded (choropleth) maps in R. No knowledge of cartography or shapefiles needed; go directly from your geographically identified data to a highly customizable map with a single line of code! Supported geographies: U.S. states, counties, census tracts, and zip codes; world countries and sub-country regions (e.g., provinces and prefectures).
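For comparison, here is roughly what such a map costs without the package, done by hand with ggplot2 (map_data() additionally needs the maps package installed); the fill values are random stand-ins for real data.

    library(ggplot2)

    states <- map_data("state")                    # state polygon outlines
    values <- data.frame(region = unique(states$region))
    values$value <- runif(nrow(values))            # placeholder data
    choro <- merge(states, values, by = "region")
    choro <- choro[order(choro$order), ]           # preserve polygon order

    ggplot(choro, aes(long, lat, group = group, fill = value)) +
      geom_polygon(color = "white", linewidth = 0.1) +
      coord_quickmap()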
Various functions to import, verify, process, and plot high-resolution dendrometer data using daily and stem-cycle approaches, as described in Deslauriers et al. (2007) <doi:10.1016/j.dendro.2007.05.003>. For more details about the package, see Van der Maaten et al. (2016) <doi:10.1016/j.dendro.2016.06.001>.
This package provides tools for decomposing Global Value Chain (GVC) participation and value-added trade. It implements the frameworks proposed by Borin and Mancini (2023) <doi:10.1080/09535314.2022.2153221> for source-based and sink-based decompositions, and by Borin, Mancini, and Taglioni (2025) <doi:10.1093/wber/lhaf017> for tripartite and output-based GVC measures.
Uses multiple AUCs to select a combination of predictors when the outcome has multiple (ordered) levels and the focus is discriminating one particular level from the others. This method is most naturally applied to settings where the outcome has three levels (Meisner, A., Parikh, C.R., and Kerr, K.F. (2017) <http://biostats.bepress.com/uwbiostat/paper423/>).
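The building block of the method, an AUC contrasting one outcome level against the rest, is simple to compute in base R (ties counted as 1/2); the combination scores below are simulated stand-ins.

    # AUC = P(score in target level > score in other levels)
    auc <- function(scores_target, scores_other) {
      d <- outer(scores_target, scores_other, "-")
      mean((d > 0) + 0.5 * (d == 0))
    }

    set.seed(1)
    s1 <- rnorm(40, mean = 1)   # scores, level of interest
    s0 <- rnorm(60, mean = 0)   # scores, remaining levels
    auc(s1, s0)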
This package provides tools for exchanging pedigree data between the pedsuite packages and the Familias software for forensic kinship computations (Egeland et al. (2000) <doi:10.1016/s0379-0738(00)00147-x>). These functions were split out from the 'forrel' package to streamline maintenance and provide a lightweight alternative for packages otherwise independent of 'forrel'.
Price comparisons within or between countries provide an overall measure of the relative difference in prices, often denoted as price levels. This package provides index number methods for such price comparisons (e.g., The World Bank, 2011, <doi:10.1596/978-0-8213-9728-2>). Moreover, it contains functions for sampling and characterizing price data.
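Two classical bilateral index numbers of the kind this package generalizes, computed in base R on made-up prices p and quantities q for a base region (0) and a comparison region (1):

    p0 <- c(1.0, 2.0, 3.0); q0 <- c(10, 5, 2)   # base region
    p1 <- c(1.2, 1.9, 3.5); q1 <- c( 9, 6, 2)   # comparison region

    laspeyres <- sum(p1 * q0) / sum(p0 * q0)    # base-period weights
    paasche   <- sum(p1 * q1) / sum(p0 * q1)    # current-period weights
    fisher    <- sqrt(laspeyres * paasche)      # geometric mean of the two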
The pharmaverse is a set of packages that compose multiple pathways through clinical data generation and reporting in the pharmaceutical industry. This package is designed to guide users to our workspaces on GitHub, Slack, and LinkedIn, as well as our website and examples. Learn more about the pharmaverse at <https://pharmaverse.org>.
Corrects the spelling of a given word in English using a modification of Peter Norvig's spell-correct algorithm (see <http://norvig.com/spell-correct.html>), which handles up to three edits. The algorithm tries to find the spelling with the maximum probability of being the intended correction among all candidate corrections of the original word.
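A much-simplified base-R rendering of the candidate-selection idea: filter a toy frequency dictionary by edit distance with adist() and keep the most frequent word. Unlike the real algorithm, this ignores the edit-count tiers, and the dictionary is invented.

    dict <- c(spelling = 120, spewing = 8, dwelling = 40)   # word -> count

    correct <- function(word, dict, max_edits = 3) {
      d <- adist(word, names(dict))[1, ]          # edit distances
      candidates <- names(dict)[d <= max_edits]
      if (length(candidates) == 0) return(word)   # nothing close enough
      candidates[which.max(dict[candidates])]     # most frequent candidate
    }

    correct("speling", dict)   # "spelling"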