Compute covariate-adjusted specificity at a controlled sensitivity level, covariate-adjusted sensitivity at a controlled specificity level, the covariate-adjusted receiver operating characteristic (ROC) curve, or covariate-adjusted thresholds at a controlled sensitivity/specificity level. All statistics can also be computed for specific sub-populations given their covariate values. Methods are described in Ziyi Li, Yijian Huang, Datta Patil, Martin G. Sanda (2021+) "Covariate adjustment in continuous biomarker assessment".
Fit Cox proportional hazards models containing both fixed and random effects. The random effects can have a general form, of which familial interaction (a "kinship" matrix) is a particular special case. Note that the simplest case of a mixed effects Cox model, i.e. a single random per-group intercept, is also called a "frailty" model. The approach is based on Ripatti and Palmgren, Biometrics 2002.
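A minimal frailty-model sketch, assuming the lung data from the survival package (the (1 | inst) term is the single random per-group intercept mentioned above):

    library(survival)
    library(coxme)
    # Fixed effect of age, random intercept per institution (a frailty model)
    fit <- coxme(Surv(time, status) ~ age + (1 | inst), data = lung)
    summary(fit)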
Estimation of Markov generator matrices from discrete-time observations. The implemented approaches comprise diagonal and weighted adjustment of matrix-logarithm-based candidate solutions as in Israel (2001) <doi:10.1111/1467-9965.00114>, as well as a quasi-optimization approach. Moreover, the expectation-maximization algorithm and the Gibbs sampling approach of Bladt and Sorensen (2005) <doi:10.1111/j.1467-9868.2005.00508.x> are included.
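An illustrative sketch of the adjustment-based estimators, assuming (as in the package documentation) that gm() is the central estimation function and that it takes a matrix of relative transition frequencies for these methods:

    library(ctmcd)
    # Relative transition frequencies observed over te = 1 time unit
    tm_rel <- matrix(c(0.90, 0.10, 0.00,
                       0.05, 0.90, 0.05,
                       0.00, 0.10, 0.90), 3, 3, byrow = TRUE)
    gm_da <- gm(tm = tm_rel, te = 1, method = "DA")  # diagonal adjustment
    gm_wa <- gm(tm = tm_rel, te = 1, method = "WA")  # weighted adjustment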
This package provides methods for reading, displaying, processing and writing files originally arranged for the DSSAT-CSM fixed-width format. The DSSAT-CSM cropping system model is described in J.W. Jones, G. Hoogenboom, C.H. Porter, K.J. Boote, W.D. Batchelor, L.A. Hunt, P.W. Wilkens, U. Singh, A.J. Gijsman, J.T. Ritchie (2003) <doi:10.1016/S1161-0301(02)00107-7>.
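A hedged sketch of a read/modify/write round trip; the function names read_filex()/write_filex() are assumed from the package documentation, and the file name is hypothetical:

    library(DSSAT)
    # Read a fixed-width DSSAT experiment file (File X); path is hypothetical
    file_x <- read_filex("UFGA8201.MZX")
    write_filex(file_x, "UFGA8201_copy.MZX")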
Various Expectation-Maximization (EM) algorithms are implemented for item response theory (IRT) models. The package includes IRT models for binary and ordinal responses, along with dynamic and hierarchical IRT models with binary responses. The latter two models are fitted using variational EM. The package also includes variational network and text scaling models. The algorithms are described in Imai, Lo, and Olmsted (2016) <DOI:10.1017/S000305541600037X>.
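A hedged sketch of a binary fit, following the binIRT() example in the package documentation (s109 is the 109th-Senate roll-call object, assumed here to come from the pscl package):

    library(pscl)    # assumed source of the s109 roll-call data
    library(emIRT)
    data(s109)
    rc <- convertRC(s109)             # convert to emIRT's response format
    p  <- makePriors(rc$n, rc$j, 1)   # default priors, one dimension
    s  <- getStarts(rc$n, rc$j, 1)    # starting values
    fit <- binIRT(.rc = rc, .starts = s, .priors = p,
                  .control = list(threads = 1, verbose = FALSE, thresh = 1e-6))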
Flow of funds are financial accounts provided quarterly by the Federal Reserve. The package contains all datasets <https://www.federalreserve.gov/datadownload/Choose.aspx?rel=z1>, tables <https://www.federalreserve.gov/apps/fof/FOFTables.aspx> and descriptions <https://www.federalreserve.gov/apps/fof/Guide/z1_tables_description.pdf>, together with functions to understand series <https://www.federalreserve.gov/apps/fof/SeriesStructure.aspx> and explore them.
The gamma lasso algorithm provides regularization paths corresponding to a range of non-convex cost functions between L0 and L1 norms. As much as possible, usage for this package is analogous to that for the glmnet package (which does the same thing for penalization between L1 and L2 norms). For details see Taddy (2017, JCGS), 'One-Step Estimator Paths for Concave Regularization', <arXiv:1308.5623>.
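For instance, a small simulated path (gamma = 0 recovers the lasso; larger gamma bends the penalty toward L0):

    library(gamlr)
    set.seed(1)
    n <- 100; p <- 20
    x <- matrix(rnorm(n * p), n, p)
    y <- x[, 1] - x[, 2] + rnorm(n)
    fit <- gamlr(x, y, gamma = 2)  # concave gamma-lasso penalty
    plot(fit)                      # regularization path
    coef(fit)                      # coefficients at the AICc-selected segment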
River hydrograph separation and daily runoff time series analysis. Provides various filters to separate baseflow and quickflow. Implements the advanced separation technique of Rets et al. (2022) <doi:10.1134/S0097807822010146>, which uses meteorological data to reveal genetic components of the runoff: ground, rain, thaw and spring (seasonal thaw). Offers high-performance C++17 computation, annually aggregated variables, statistical testing and numerous plotting functions for high-quality visualization.
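A minimal baseflow sketch, assuming gr_baseflow() and its Lyne-Hollick filter parameters as given in the package documentation; Q is a placeholder for a daily discharge vector:

    library(grwat)
    # Q: numeric vector of daily discharge values (placeholder)
    Qbase  <- gr_baseflow(Q, method = "lynehollick", a = 0.925, passes = 3)
    Qquick <- Q - Qbase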
Implementation of some of the formulations for the thermodynamic and transport properties released by the International Association for the Properties of Water and Steam (IAPWS). More specifically, the releases R1-76(2014), R5-85(1994), R6-95(2018), R7-97(2012), R8-97, R9-97, R10-06(2009), R11-24, R12-08, R15-11, R16-17(2018), R17-20 and R18-21 at <https://iapws.org>.
The goal of jetty is to execute R functions and code snippets in an isolated R subprocess within a Docker container and return the evaluated results to the local R session. jetty can install necessary packages at runtime and seamlessly propagates errors and outputs from the Docker subprocess back to the main session. jetty is primarily designed for sandboxed testing and quick execution of example code.
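A minimal sketch, assuming run() is the package's main entry point (as in its README) and that Docker is available locally:

    library(jetty)
    # Evaluate a function in a containerized R subprocess; the summary
    # object is returned to the local session
    jetty::run(function() summary(lm(mpg ~ disp, data = mtcars)))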
Implementation of some unit-level and area-level EBLUP estimators, as well as estimators of their MSE, also under heteroscedasticity. The package further documents the publications Breidenbach and Astrup (2012) <DOI:10.1007/s10342-012-0596-7>, Breidenbach et al. (2016) <DOI:10.1016/j.rse.2015.07.026> and Breidenbach et al. (2018, in press). The vignette further explains the use of the implemented functions.
Shiny apps for the quantitative analysis of images from lateral flow assays (LFAs). The images are segmented and background-corrected, and color intensities are extracted. The apps can be used to import and export intensity data and to calibrate LFAs by means of linear, loess, or GAM models. The calibration models can further be saved and applied to intensity data from new images for determining concentrations.
Print vectors (and data frames) of floating point numbers using a non-scientific format optimized for human readers. Vectors of numbers are rounded using significant digits, aligned at the decimal point, and all zeros trailing the decimal point are dropped. See: Wright (2016). Lucid: An R Package for Pretty-Printing Floating Point Numbers. In JSM Proceedings, Statistical Computing Section. Alexandria, VA: American Statistical Association. 2270-2279.
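For example (the numbers are arbitrary):

    library(lucid)
    x <- c(123.456, 0.00123, 9876.54, 0.000001)
    print(x)  # default printing, drifts into scientific notation
    lucid(x)  # significant digits, aligned at the decimal, no trailing zeros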
Three algorithms for estimating a Markov random field structure. Two of them are an exact version and a simulated annealing version of a penalized maximum conditional likelihood method similar to the Bayesian Information Criterion. These algorithms are described in Frondana (2016) <doi:10.11606/T.45.2018.tde-02022018-151123>. The third one is a greedy algorithm, described in Bresler (2015) <doi:10.1145/2746539.2746631>.
Mica is a server application used to create data web portals for large-scale epidemiological studies or multiple-study consortia. Mica helps studies to provide scientifically robust data visibility and web presence without significant information technology effort. Mica provides a structured description of consortia, studies, annotated and searchable data dictionaries, and data access request management. This Mica client allows data extraction to be performed for reporting purposes.
This package provides functions for working with (grouped) multivariate normal variance mixture distributions (evaluation of distribution functions and densities, random number generation and parameter estimation), including Student's t distribution for non-integer degrees-of-freedom as well as the grouped t distribution and copula with multiple degrees-of-freedom parameters. See <doi:10.18637/jss.v102.i02> for a high-level description of select functionality.
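A brief sketch using the Student t wrappers (convenience interfaces to the underlying normal variance mixture functions):

    library(nvmix)
    set.seed(42)
    P <- matrix(c(1, 0.5, 0.5, 1), 2, 2)      # scale matrix
    X <- rStudent(500, df = 3.5, scale = P)   # sampling with non-integer df
    pStudent(c(1, 1), df = 3.5, scale = P)    # distribution function at (1, 1)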
TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning. Transfer learning can train a model with a smaller dataset, improve generalization, and speed up training.
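A hedged sketch, assuming a working TensorFlow installation; the handle points to a public text-embedding module on tfhub.dev:

    library(tfhub)
    # Wrap a pre-trained embedding module as a Keras layer for transfer learning
    embedding <- layer_hub(
      handle = "https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"
    )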
Descriptive statistics for large data tend to be low resolution in the tails. Whisker odds generate a table of descriptive statistics for large data. These are the same as letter-values, but with an alternative naming of depths that allows for depths beyond 26. For a reference on letter-values, see Heike Hofmann, Hadley Wickham and Karen Kafadar (2017) <doi:10.1080/10618600.2017.1305277>.
HERON is a software package for analyzing peptide binding array data. In addition to identifying significant binding probes, HERON also provides functions for finding epitopes (strings of consecutive peptides within a protein). HERON calculates significance at the probe, epitope, and protein levels by employing meta p-value methods. HERON is designed for obtaining calls at the sample level and calculates fractions of hits for different conditions.
This package performs outlier detection of sequences in a multiple sequence alignment using a bootstrap of predefined distance metrics. Outlier sequences can make downstream analyses unreliable or make the alignments less accurate while they are being constructed. This package implements the OD-seq algorithm proposed by Jehl et al. (2015) <doi:10.1186/s12859-015-0702-1> for aligned sequences, and a variant using string kernels for unaligned sequences.
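A hedged sketch, with msa_obj standing in for an alignment produced by, e.g., the msa package; the argument names and defaults are assumed from the package documentation:

    library(odseq)
    # msa_obj: a multiple sequence alignment, e.g. from msa::msa(sequences)
    calls <- odseq(msa_obj, distance_metric = "affine",
                   B = 1000, threshold = 0.025)  # outlier calls (assumed logical)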
Zoltar is a website that provides a central repository of model forecast results in a standardized format. It supports storing, retrieving, comparing, and analyzing time series forecasts for prediction challenges of interest to the modeling community. This package provides functions for working with the Zoltar API, including connecting and authenticating, getting information about projects, models, and forecasts, deleting and uploading forecast data, and downloading scores.
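A minimal connect-and-list sketch from the documented workflow, with credentials assumed to sit in environment variables:

    library(zoltr)
    conn <- new_connection()  # defaults to the public Zoltar host
    zoltar_authenticate(conn, Sys.getenv("Z_USERNAME"), Sys.getenv("Z_PASSWORD"))
    the_projects <- projects(conn)  # data frame of visible projects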
This package provides the usual distribution functions, maximum likelihood inference and model diagnostics for univariate stationary extreme value mixture models. Kernel density estimation is also provided, including various boundary-corrected kernel density estimation methods and a wide choice of kernels, with a cross-validation likelihood-based bandwidth estimator. Reasonable consistency with the base functions in the evd package is provided, so that users can safely interchange most code.
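For instance, a normal bulk with a GPD upper tail, plus the cross-validated kernel density (component names such as u, sigmau, and xi are assumed from the fitted object):

    library(evmix)
    set.seed(1)
    x <- rnorm(1000)
    fit <- fnormgpd(x)            # normal bulk + GPD tail by MLE
    c(fit$u, fit$sigmau, fit$xi)  # threshold, GPD scale and shape
    fk <- fkden(x)                # kernel density, CV-likelihood bandwidth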
Roswell started out as a command-line tool with the aim of making installing and managing Common Lisp implementations really simple and easy. Roswell has now evolved into a full-stack environment for Common Lisp development, and has many features that make it easy to test, share, and distribute your Lisp applications.
Roswell is still in beta. Despite this, the basic interfaces are stable and not likely to change.
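Typical usage from a shell (a small, hedged sample of the command set; sbcl-bin is one of several installable implementations):

    $ ros install sbcl-bin   # install a Common Lisp implementation
    $ ros run                # start a REPL on the default implementation
    $ ros init hello         # scaffold hello.ros, a Roswell script
    $ ./hello.ros            # run the script directly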
Stanford ATLAS (Advanced Temporal Search Engine) is a powerful tool for constructing cohorts of patients extremely quickly and efficiently. This package is designed to interface directly with an instance of the ATLAS search engine and facilitates API queries and data dumps. A good knowledge of the temporal language is a prerequisite for constructing queries efficiently. More information is available at <https://shahlab.stanford.edu/start>.