Knowledge space theory by Doignon and Falmagne (1999) <doi:10.1007/978-3-642-58625-5> is a set- and order-theoretical framework which proposes mathematical formalisms to operationalize knowledge structures in a particular domain. The kstIO
package provides basic functionality to read and write KST data from/to files, to be used together with the kst, kstMatrix, CDSS, pks, or DAKS packages.
This package provides a suite of helper functions and a collection of datasets used in the book <https://uncovering-data-science.netlify.app>. It is designed to make data science techniques accessible to individuals with minimal coding experience. Inspired by an ancient Persian idiom, the package likens this learning process to "eating the liver of data science," symbolizing deep and immersive engagement with the field.
Builds and interprets multi-response machine learning models using tidymodels syntax. Users can supply a tidy model, and mrIML automates the process of fitting multiple response models to multivariate data and applying interpretable machine learning techniques across them. For more details see Fountain-Jones (2021) <doi:10.1111/1755-0998.13495> and Fountain-Jones et al. (2024) <doi:10.22541/au.172676147.77148600/v1>.
This package includes four methods: DCOL-based K-profiles clustering, non-linear network reconstruction, non-linear hierarchical clustering, and variable selection for generalized additive models. References: Tianwei Yu (2018) <doi:10.1002/sam.11381>; Haodong Liu et al. (2016) <doi:10.1371/journal.pone.0158247>; Kai Wang et al. (2015) <doi:10.1155/2015/918954>; Tianwei Yu et al. (2010) <doi:10.1109/TCBB.2010.73>.
This package contains functions useful for debugging, set operations on vectors, and UTC date and time functionality. It adds a few vector manipulation verbs to the purrr and dplyr packages. It can also generate an R file to install and update packages, simplifying deployment into production. The functions were developed at the data science firm Numeract LLC and are used in several packages and projects.
Different estimators are provided to solve the blind source separation problem for multivariate time series with stochastic volatility, as well as the supervised dimension reduction problem for multivariate time series. Functions based on AMUSE and SOBI are also provided for estimating the dimension of the white noise subspace. The package is fully described in Nordhausen, Matilainen, Miettinen, Virta and Taskinen (2021) <doi:10.18637/jss.v098.i15>.
This package provides functions to compute Wasserstein barycenters of subset posteriors using the swapping algorithm developed by Puccetti, Rüschendorf and Vanduffel (2020) <doi:10.1016/j.jmaa.2017.02.003>. The Wasserstein barycenter is a geometric approach for combining subset posteriors. It allows for parallel and distributed computation of the posterior in case of complex models and/or big datasets, thereby increasing computational speed tremendously.
This package computes the areas under the precision-recall (PR) and ROC curves for weighted (e.g. soft-labeled) and unweighted data. In contrast to other implementations, the interpolation between points of the PR curve is done by a non-linear piecewise function. In addition to the areas under the curves, the curves themselves can also be computed and plotted by a specific S3 method.
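As a brief illustration, the following sketch computes and plots PR and ROC curves with the pr.curve() and roc.curve() functions on simulated classifier scores (the data are purely illustrative):

library(PRROC)
set.seed(1)
fg <- rnorm(100, mean = 1)   # scores of the positive class
bg <- rnorm(100, mean = 0)   # scores of the negative class
pr  <- pr.curve(scores.class0 = fg, scores.class1 = bg, curve = TRUE)
roc <- roc.curve(scores.class0 = fg, scores.class1 = bg, curve = TRUE)
pr$auc.integral   # area under the PR curve (non-linear interpolation)
roc$auc           # area under the ROC curve
plot(pr)          # S3 plot method for the computed curve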
The bit64 package provides serializable S3 atomic 64-bit (signed) integers that can be used in vectors, matrices, arrays and data.frames. Methods are available for coercion from and to logicals, integers, doubles, characters and factors, as well as many elementwise and summary functions. Many fast algorithmic operations such as match and order support interactive data exploration and manipulation and optionally leverage caching.
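A minimal sketch of working with 64-bit integers (the values are chosen only to illustrate that precision beyond what doubles represent exactly is preserved):

library(bit64)
x <- as.integer64("9007199254740993")       # larger than exact double precision allows
x + 1L                                      # elementwise arithmetic stays integer64
v <- as.integer64(c("5000000000", "3", "1"))
sum(v)                                      # summary functions keep 64-bit precision
as.character(x)                             # lossless coercion back to character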
Fits linear models with an endogenous regressor using latent instrumental variable approaches. The methods included in the package are Lewbel's (1997) <doi:10.2307/2171884> higher moments approach, Lewbel's (2012) <doi:10.1080/07350015.2012.643126> heteroscedasticity approach, Park and Gupta's (2012) <doi:10.1287/mksc.1120.0718> joint estimation method using a Gaussian copula, and Kim and Frees's (2007) <doi:10.1007/s11336-007-9008-1> multilevel generalized method of moments approach that deals with endogeneity in a multilevel setting. These are statistical techniques that address the endogeneity problem without requiring external instrumental variables. See the publication related to this package in the Journal of Statistical Software for more details: <doi:10.18637/jss.v107.i03>. Note that version 2.0.0 introduced sweeping changes which greatly improve functionality and usability but break backward compatibility.
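As an illustrative sketch, the latent instrumental variable estimator can be fitted on simulated data with latentIV(); the data-generating process below is hypothetical and only meant to show the formula interface, which needs no external instrument:

library(REndo)
set.seed(1)
n   <- 500
eps <- rnorm(n)
P   <- 2 + 0.5 * eps + rnorm(n)   # endogenous regressor, correlated with the error
y   <- 1 + 3 * P + eps
fit <- latentIV(y ~ P, data = data.frame(y = y, P = P))
summary(fit)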
Latent variable modeling with Principal Component Analysis (PCA) and Partial Least Squares (PLS) are powerful methods for visualization, regression, classification, and feature selection of omics data where the number of variables exceeds the number of samples and where variables are multicollinear. Orthogonal Partial Least Squares (OPLS) makes it possible to separately model the variation correlated (predictive) to the factor of interest and the uncorrelated (orthogonal) variation. While performing similarly to PLS, OPLS facilitates interpretation.
This package provides implementations of PCA, PLS, and OPLS for multivariate analysis and feature selection of omics data. In addition to scores, loadings and weights plots, the package provides metrics and graphics to determine the optimal number of components (e.g. with the R2 and Q2 coefficients), check the validity of the model by permutation testing, detect outliers, and perform feature selection (e.g. with Variable Importance in Projection or regression coefficients).
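A short sketch of the opls() interface, assuming the sacurine example data set shipped with the package:

library(ropls)
data(sacurine)
x <- sacurine$dataMatrix                   # samples x variables
y <- sacurine$sampleMetadata[, "gender"]   # factor of interest
opls(x)                                    # PCA
pls_model <- opls(x, y)                    # PLS-DA with R2/Q2 diagnostics
opls(x, y, predI = 1, orthoI = NA)         # OPLS-DA; orthoI = NA selects the orthogonal components
head(getVipVn(pls_model))                  # Variable Importance in Projection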
Align-GVGD ('A-GVGD') is a method to predict the impact of missense substitutions based on the properties of amino acid side chains and protein multiple sequence alignments <doi:10.1136/jmg.2005.033878>. A-GVGD is an extension of the original Grantham distance to multiple sequence alignments. This package provides an alternative R implementation to the web version found on <http://agvgd.hci.utah.edu/>.
This afthd package provides posterior estimates of the accelerated failure time (AFT) model with MCMC, as well as maximum likelihood estimates of the AFT model without MCMC, for univariate and multivariate analysis of high-dimensional gene expression data. An AFT model with a Bayesian framework for multivariate analysis in high-dimensional data was proposed by Prabhash et al. (2016) <doi:10.21307/stattrans-2016-046>.
Boldness-recalibration maximally spreads out probability predictions while maintaining a user-specified level of calibration, facilitated by the brcal() function. Supporting functions to assess calibration via Bayesian and Frequentist approaches, perform Maximum Likelihood Estimator (MLE) recalibration, apply Linear in Log Odds (LLO) adjustment via any specified parameters, and visualize results are also provided. Methodological details can be found in Guthrie & Franck (2024) <doi:10.1080/00031305.2024.2339266>.
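A hedged sketch of calling brcal() on simulated forecasts; the argument names used here (x for the probability predictions, y for the binary outcomes, t for the target calibration level) are assumptions, so consult the package documentation for the exact interface:

library(BRcal)
set.seed(1)
y <- rbinom(200, 1, 0.5)                                  # observed binary outcomes
x <- pmin(pmax(0.6 * y + 0.4 * runif(200), 0.01), 0.99)   # noisy probability forecasts
fit <- brcal(x = x, y = y, t = 0.95)   # spread forecasts while retaining 95% calibration (assumed arguments)
fit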
Compute covariate-adjusted specificity at a controlled sensitivity level, covariate-adjusted sensitivity at a controlled specificity level, the covariate-adjusted receiver operating characteristic (ROC) curve, or covariate-adjusted thresholds at a controlled sensitivity/specificity level. All statistics can also be computed for specific sub-populations given their covariate values. Methods are described in Ziyi Li, Yijian Huang, Datta Patil, Martin G. Sanda (2021+) "Covariate adjustment in continuous biomarker assessment".
Estimation of Markov generator matrices from discrete-time observations. The implemented approaches comprise diagonal and weighted adjustment of matrix logarithm based candidate solutions as in Israel (2001) <doi:10.1111/1467-9965.00114> as well as a quasi-optimization approach. Moreover, the expectation-maximization algorithm and the Gibbs sampling approach of Bladt and Sorensen (2005) <doi:10.1111/j.1467-9868.2005.00508.x> are included.
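A hedged sketch of the gm() estimation interface, assuming the tm_abs example data of absolute credit-rating transition frequencies that the package documentation uses; the construction of the relative matrix with an absorbing default state follows that example:

library(ctmcd)
data(tm_abs)                                        # absolute transition frequencies over one year
tm_rel <- rbind((tm_abs / rowSums(tm_abs))[1:7, ],  # relative frequencies ...
                c(rep(0, 7), 1))                    # ... with an absorbing default state
gm_da <- gm(tm = tm_rel, te = 1, method = "DA")     # diagonal adjustment of the matrix logarithm
gm_wa <- gm(tm = tm_rel, te = 1, method = "WA")     # weighted adjustment
plot(gm_da)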
Fit Cox proportional hazards models containing both fixed and random effects. The random effects can have a general form, of which familial interactions (a "kinship" matrix) is a particular special case. Note that the simplest case of a mixed effects Cox model, i.e. a single random per-group intercept, is also called a "frailty" model. The approach is based on Ripatti and Palmgren, Biometrics 2002.
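For example, a mixed-effects Cox model with a per-center random intercept can be fitted on the eortc example data that ships with the package:

library(survival)
library(coxme)
data(eortc)
fit <- coxme(Surv(y, uncens) ~ trt + (1 | center), data = eortc)
fit   # fixed treatment effect plus a per-center frailty (random intercept)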
This package provides methods for reading, displaying, processing and writing files originally arranged for the DSSAT-CSM fixed-width format. The DSSAT-CSM cropping system model is described in J.W. Jones, G. Hoogenboom, C.H. Porter, K.J. Boote, W.D. Batchelor, L.A. Hunt, P.W. Wilkens, U. Singh, A.J. Gijsman, J.T. Ritchie (2003) <doi:10.1016/S1161-0301(02)00107-7>.
Various Expectation-Maximization (EM) algorithms are implemented for item response theory (IRT) models. The package includes IRT models for binary and ordinal responses, along with dynamic and hierarchical IRT models with binary responses. The latter two models are fitted using variational EM. The package also includes variational network and text scaling models. The algorithms are described in Imai, Lo, and Olmsted (2016) <DOI:10.1017/S000305541600037X>.
Flow of funds are financial accounts provided quarterly by the Federal Reserve. The package contains all datasets <https://www.federalreserve.gov/datadownload/Choose.aspx?rel=z1>, tables <https://www.federalreserve.gov/apps/fof/FOFTables.aspx> and descriptions <https://www.federalreserve.gov/apps/fof/Guide/z1_tables_description.pdf>, along with functions to understand series <https://www.federalreserve.gov/apps/fof/SeriesStructure.aspx> and explore them.
River hydrograph separation and daily runoff time series analysis. Provides various filters to separate baseflow and quickflow. Implements the advanced separation technique of Rets et al. (2022) <doi:10.1134/S0097807822010146>, which uses meteorological data to reveal genetic components of the runoff: ground, rain, thaw and spring (seasonal thaw). Offers high-performance C++17 computation, annually aggregated variables, statistical testing and numerous plotting functions for high-quality visualization.
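A hedged sketch of baseflow filtering; the gr_baseflow() function name and its method argument are assumptions about the package interface, and the daily discharge series below is synthetic:

library(grwat)
set.seed(1)
Q <- 10 + 5 * sin(seq(0, 4 * pi, length.out = 730)) + rexp(730)  # synthetic daily discharge, m3/s
bf <- gr_baseflow(Q, method = "lynehollick")                     # digital baseflow filter (assumed interface)
qf <- Q - bf                                                     # quickflow as the remainder
head(data.frame(discharge = Q, baseflow = bf, quickflow = qf))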
The gamma lasso algorithm provides regularization paths corresponding to a range of non-convex cost functions between L0 and L1 norms. As much as possible, usage for this package is analogous to that for the glmnet package (which does the same thing for penalization between L1 and L2 norms). For details see Taddy (2017, JCGS), 'One-Step Estimator Paths for Concave Regularization', <arXiv:1308.5623>.
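A minimal sketch mirroring glmnet-style usage on synthetic data (gamma = 0 corresponds to the lasso, while larger gamma values give more concave penalties):

library(gamlr)
set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- drop(x[, 1:3] %*% c(2, -1, 0.5)) + rnorm(n)
fit <- gamlr(x, y, gamma = 2)   # regularization path under a concave (gamma lasso) penalty
plot(fit)                       # path plot, analogous to plot() for glmnet objects
coef(fit)[1:5, ]                # coefficients at the AICc-selected segment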
Implementation of some of the formulations for the thermodynamic and transport properties released by the International Association for the Properties of Water and Steam (IAPWS). More specifically, the releases R1-76(2014), R5-85(1994), R6-95(2018), R7-97(2012), R8-97, R9-97, R10-06(2009), R11-24, R12-08, R15-11, R16-17(2018), R17-20 and R18-21 at <https://iapws.org>.
The goal of jetty is to execute R functions and code snippets in an isolated R subprocess within a Docker container and return the evaluated results to the local R session. jetty can install necessary packages at runtime and seamlessly propagate errors and output from the Docker subprocess back to the main session. jetty is primarily designed for sandboxed testing and quick execution of example code.
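A hedged sketch of the intended workflow; the run() function and its func/image arguments are assumptions modelled on callr-style interfaces, so the actual jetty API should be checked before use (a working Docker installation is also required):

library(jetty)
result <- run(
  func  = function() summary(lm(mpg ~ wt, data = mtcars))$coefficients,  # evaluated inside the container
  image = "r-base"                                                       # hypothetical image specification
)
result   # evaluated results are returned to the local session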