The package provides commands to define enumerable items with a number and a long name, which can later be referenced by the long name or just the short form. For instance, "Milestone M1: Specification created" can be defined and later referenced as "M1" or as "M1 ('Specification created')". The reference text is derived from the definition and is also rendered as a hyperlink to the definition.
Queries multiple resources, HGNC (2019) <https://www.genenames.org> and limma (2015) <doi:10.1093/nar/gkv007>, to find the correspondence between the evolving nomenclature of human gene symbols, aliases, previous symbols or synonyms and the stable, curated Entrez gene IDs from the NCBI database. This allows fast, accurate and up-to-date matching between human gene expression datasets from various dates and platforms (e.g. gene symbol BRCA1 - ID 672).
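As a toy illustration of the kind of lookup the package automates (the table and the symbol_to_entrez() helper below are hypothetical, not part of the package's API; the only mapping taken from the description is BRCA1 -> 672):

```r
## Hypothetical lookup table, not the package's API: the only mapping taken
## from the description above is gene symbol BRCA1 -> Entrez ID 672.
alias_table <- data.frame(
  symbol    = c("BRCA1", "SOME_OLD_ALIAS"),  # second row is a made-up alias
  entrez_id = c(672, 672)
)

symbol_to_entrez <- function(x, table = alias_table) {
  table$entrez_id[match(x, table$symbol)]
}

symbol_to_entrez("BRCA1")  # 672
```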
Constructs optimal policy trees which provide a rule-based treatment prescription policy. Input is covariate and reward data, where, typically, the rewards will be doubly robust reward estimates. This package aims to construct optimal policy trees more quickly than the existing policytree package and is intended to be used alongside that package. For more details see Cussens, Hatamyar, Shah and Kreif (2025) <doi:10.48550/arXiv.2506.15435>.
Implements meta-analyses for simultaneously estimating individual means with shrinkage, isotonic regression, and pretests. Includes the authors' original implementation of isotonic regression via the pool-adjacent-violators algorithm (PAVA). For the pretest estimator, confidence intervals for the individual means are provided. The methodologies were published in Taketomi et al. (2021) <doi:10.3390/axioms10040267>, Taketomi et al. (2022) <doi:10.3390/a15010026>, and Taketomi et al. (2023-) (under review).
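The pool-adjacent-violators step mentioned above can be sketched in a few lines of R; this is a generic unweighted illustration rather than the package's own implementation (base R's isoreg() gives the same fit):

```r
## Minimal unweighted PAVA sketch: fit a non-decreasing sequence to y.
pava <- function(y) {
  fit <- as.numeric(y)
  wts <- rep(1, length(y))   # block weights (block sizes here)
  i <- 1
  while (i < length(fit)) {
    if (fit[i] > fit[i + 1]) {
      ## pool the two violating blocks into their weighted mean
      pooled <- (wts[i] * fit[i] + wts[i + 1] * fit[i + 1]) / (wts[i] + wts[i + 1])
      fit <- c(fit[seq_len(i - 1)], pooled, fit[-seq_len(i + 1)])
      wts <- c(wts[seq_len(i - 1)], wts[i] + wts[i + 1], wts[-seq_len(i + 1)])
      i <- max(i - 1, 1)     # step back: the new block may violate to the left
    } else {
      i <- i + 1
    }
  }
  rep(fit, times = wts)      # expand blocks back to the original length
}

pava(c(3, 1, 2, 5, 4))
#> [1] 2.0 2.0 2.0 4.5 4.5
```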
Many treatment effect estimators can be written as weighted outcomes. These weights have established use cases, such as checking covariate balance via packages like 'cobalt'. This package takes the original estimator objects and outputs these outcome weights. It builds on the general framework of Knaus (2024) <doi:10.48550/arXiv.2411.11559>. This version is compatible with the grf package and provides an internal implementation of Double Machine Learning.
This package provides a collection of functions that can be used to estimate selection and complementarity effects, sensu Loreau & Hector (2001) <doi:10.1038/35083573>, even in cases where data are only available for a random subset of species (i.e. incomplete sample-level data). A full derivation and explanation of the statistical corrections used here is available in Clark et al. (2019) <doi:10.1111/2041-210X.13285>.
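As a rough sketch of the Loreau & Hector (2001) additive partition that the package builds on (this ignores the incomplete-sampling corrections of Clark et al. 2019; the function name and example numbers are purely illustrative):

```r
## Hedged sketch of the Loreau & Hector (2001) additive partition:
## net biodiversity effect = complementarity + selection.
## M: monoculture yields; Yo: species yields in mixture; RYe: expected
## relative yields (e.g. planted proportions).
lh_partition <- function(M, Yo, RYe) {
  dRY <- Yo / M - RYe                      # deviation from expected relative yield
  N   <- length(M)
  ## population covariance (divide by N), as in the original partition
  cov_pop <- mean(dRY * M) - mean(dRY) * mean(M)
  c(complementarity = N * mean(dRY) * mean(M),
    selection       = N * cov_pop,
    net_effect      = sum(dRY * M))        # equals the sum of the two terms
}

lh_partition(M = c(10, 8), Yo = c(7, 3), RYe = c(0.5, 0.5))
```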
This package implements exact, normally approximated, and sampling-based sensitivity analysis for observational studies with contingency tables. Includes exact (kernel-based), normal approximation, and sequential importance sampling (SIS) methods using Rcpp for computational efficiency. The methods build upon the framework introduced in Rosenbaum (2002) <doi:10.1007/978-1-4757-3692-2> and the generalized design sensitivity framework developed by Chiu (2025) <doi:10.48550/arXiv.2507.17207>.
This package provides a collection of functions to perform Detrended Fluctuation Analysis (the DFA exponent), Guedes et al. (2019) <doi:10.1016/j.physa.2019.04.132>, the detrended cross-correlation coefficient (rhoDCCA), Guedes & Zebende (2019) <doi:10.1016/j.physa.2019.121286>, and the DMCA cross-correlation coefficient and detrended multiple cross-correlation coefficient (DMC), Guedes, Silva-Filho & Zebende (2018) <doi:10.1016/j.physa.2021.125990>, the latter two with a sliding-windows approach.
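A compact, generic DFA sketch (order-1 detrending, non-overlapping windows) may help convey what the DFA exponent measures; it is not this package's implementation, and the window scales are arbitrary illustrative choices:

```r
## Generic DFA-1 sketch: the DFA exponent is the slope of log F(s) versus
## log s, where F(s) is the RMS fluctuation of the integrated, locally
## detrended series in windows of length s.
dfa_exponent <- function(x, scales = c(8, 16, 32, 64)) {
  y <- cumsum(x - mean(x))                 # integrated (profile) series
  Fs <- sapply(scales, function(s) {
    nwin <- floor(length(y) / s)
    res2 <- sapply(seq_len(nwin), function(w) {
      seg <- y[((w - 1) * s + 1):(w * s)]
      t   <- seq_len(s)
      mean(resid(lm(seg ~ t))^2)           # variance around the local linear trend
    })
    sqrt(mean(res2))
  })
  unname(coef(lm(log(Fs) ~ log(scales)))[2])
}

set.seed(1)
dfa_exponent(rnorm(1000))                  # roughly 0.5 for white noise
```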
Allows Bayesian borrowing from a historical dataset for time-to-event data. A flexible baseline hazard function is achieved via a piecewise exponential likelihood with time-varying split points and a smoothing prior on the historical baseline hazards. The method is described in Scott and Lewin (2024) <doi:10.48550/arXiv.2401.06082>, and the software paper is Axillus et al. (2024) <doi:10.48550/arXiv.2408.04327>.
This package provides a collection of methods for quantifying the similarity of two or more datasets, many of which can be used for two- or k-sample testing. It provides newly implemented methods as well as wrapper functions for existing methods that enable calling many different methods in a unified framework. The methods were selected from the review and comparison of Stolte et al. (2024) <doi:10.1214/24-SS149>.
Visualizes variables from descriptive tables produced by descsuppR::buildDescrTbl() using 'ggstatsplot'. It automatically maps each variable to a suitable 'ggstatsplot' plotting function based on the applied or suggested statistical test. Users can override the automatic mapping via a named list of plot specifications. The package supports grouped and ungrouped tables, and forwards additional arguments to the underlying 'ggstatsplot' functions, providing quick, reproducible, and customizable default visualizations for descriptive summaries.
The functions for creating temporary files and directories in the base library are quite limited. The unixutils package contains some good ones, but they aren't portable to Windows. This library just repackages the Cabal implementations of its own temporary file and folder functions so that you can use them without linking against Cabal or depending on it being installed. This is a better maintained fork of the "temporary" package.
Visualization of designs of experiments from the agricolae package with the ggplot2 framework. The user provides an experiment design from the agricolae package and calls the corresponding function, which returns a visualization built from ggplot2-based functions specific to each design. As there are many different designs, each design is handled according to its type. The output can be modified with standard ggplot2 commands or with other packages that provide ggplot2 extensions.
This package provides tools for simulating data generated by direct observation recording. Behavior streams are simulated based on an alternating renewal process, given specified distributions of event durations and interim times. Different procedures for recording data can then be applied to the simulated behavior streams. Functions are provided for the following recording methods: continuous duration recording, event counting, momentary time sampling, partial interval recording, whole interval recording, and augmented interval recording.
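A minimal sketch of the alternating renewal process underlying such simulations, assuming exponential event and interim durations purely for illustration (the function name and defaults are hypothetical, not the package's API):

```r
## Behaviour alternates between events and interim periods, with durations
## drawn from user-specified distributions (exponential here for illustration).
simulate_stream <- function(session_length = 600,
                            mean_event = 10, mean_interim = 30) {
  t <- 0; starts <- numeric(0); ends <- numeric(0)
  in_event <- FALSE
  while (t < session_length) {
    if (in_event) {
      dur    <- rexp(1, rate = 1 / mean_event)
      starts <- c(starts, t)
      ends   <- c(ends, min(t + dur, session_length))
    } else {
      dur <- rexp(1, rate = 1 / mean_interim)
    }
    t        <- t + dur
    in_event <- !in_event
  }
  data.frame(start = starts, end = ends)   # the simulated behavior stream
}

set.seed(1)
head(simulate_stream())
```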
This package provides a way to simulate from the prior distribution of Bayesian trees by Chipman et al. (1998) <DOI:10.2307/2669832>. The prior distribution of Bayesian trees is highly dependent on the design matrix X; therefore, using the hyperparameters suggested by Chipman et al. (1998) <DOI:10.2307/2669832> is not recommended and could lead to unexpected prior distributions. This work is part of my master's thesis (expected 2016).
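A hedged sketch of the node-splitting part of the Chipman et al. (1998) prior, where a node at depth d splits with probability alpha * (1 + d)^(-beta); the dependence on the design matrix X (the splitting-rule prior) is deliberately omitted here, and the alpha and beta values are illustrative only:

```r
## Simulate only the shape of a tree drawn from the splitting prior,
## recording the depths of its terminal nodes.
simulate_tree_depths <- function(alpha = 0.95, beta = 2, depth = 0) {
  if (runif(1) < alpha * (1 + depth)^(-beta)) {
    ## node splits: recurse into two children
    c(simulate_tree_depths(alpha, beta, depth + 1),
      simulate_tree_depths(alpha, beta, depth + 1))
  } else {
    depth                                   # terminal node: return its depth
  }
}

set.seed(1)
table(simulate_tree_depths())               # depths of the terminal nodes
```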
Monitor and trace changes in clustering solutions of accumulating datasets at successive time points. Clusters can undergo external and internal transitions at succeeding time points. External transitions comprise survived, merged, split, disappeared, and newly emerged candidates, whereas internal transitions cover changes in the location and cohesion of the surviving clusters. The package uses the MONIC framework developed by Spiliopoulou, Ntoutsi, Theodoridis, and Schult (2006) <doi:10.1145/1150402.1150491>.
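As a loose illustration of the overlap idea behind such transition detection (the thresholds and labels below are made up for the example and are not MONIC's exact definitions):

```r
## Toy overlap-based transition labelling between two clusterings of the
## same items at successive time points; thresholds are illustrative.
cluster_transitions <- function(old, new) {
  tab     <- table(old = old, new = new)   # shared items per pair of clusters
  overlap <- tab / rowSums(tab)            # share of each old cluster in each new one
  apply(overlap, 1, function(row) {
    if (max(row) > 0.5) {
      "survived"
    } else if (sum(row >= 0.25) >= 2) {
      "split"
    } else {
      "disappeared"
    }
  })
}

cluster_transitions(old = c(1, 1, 1, 2, 2, 3),
                    new = c("A", "A", "A", "A", "B", "B"))
#> clusters 1 and 3 survive, cluster 2 is labelled a split
```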
Flexible framework for coalescent analyses in R. It includes a main function running the MCMC algorithm, auxiliary functions for tree rearrangement, and some functions to compute population genetic parameters. Extended description can be found in Paradis (2020) <doi:10.1201/9780429466700>. For details on the MCMC algorithm, see Kuhner et al. (1995) <doi:10.1093/genetics/140.4.1421> and Drummond et al. (2002) <doi:10.1093/genetics/161.3.1307>.
This package provides tools for Bayesian copula generalized linear models (GLMs). The sampling scheme is based on Pitt, Chan, and Kohn (2006) <doi:10.1093/biomet/93.3.537>. Regression parameters (including coefficients and dispersion parameters) are estimated via the adaptive random walk Metropolis approach developed by Haario, Saksman, and Tamminen (1999) <doi:10.1007/s001800050022>. The prior for the correlation matrix is based on Hoff (2007) <doi:10.1214/07-AOAS107>.
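A generic adaptive random-walk Metropolis sketch in the spirit of Haario, Saksman & Tamminen; it targets an arbitrary log-posterior and is not this package's sampler for copula GLMs (the 2.4^2/d scaling, ridge term, and tuning constants are common illustrative choices):

```r
## Adaptive random-walk Metropolis: after a burn-in, the proposal covariance
## is taken from the chain's own history.
adaptive_rwm <- function(log_post, init, n_iter = 5000, adapt_start = 500) {
  d      <- length(init)
  draws  <- matrix(NA_real_, n_iter, d)
  cur    <- init
  cur_lp <- log_post(cur)
  sigma  <- diag(d) * 0.1                   # initial proposal covariance
  for (i in seq_len(n_iter)) {
    if (i > adapt_start) {                  # adapt using the chain so far
      sigma <- 2.4^2 / d * (cov(draws[seq_len(i - 1), , drop = FALSE]) +
                              1e-6 * diag(d))
    }
    prop    <- cur + drop(rnorm(d) %*% chol(sigma))
    prop_lp <- log_post(prop)
    if (log(runif(1)) < prop_lp - cur_lp) { # Metropolis accept/reject
      cur <- prop; cur_lp <- prop_lp
    }
    draws[i, ] <- cur
  }
  draws
}

## Example: sample from a correlated bivariate normal log-posterior.
lp <- function(th) -0.5 * sum(th * solve(matrix(c(1, 0.8, 0.8, 1), 2), th))
set.seed(1)
samples <- adaptive_rwm(lp, init = c(0, 0))
```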
Test for cluster tendency (clusterability) of a data set. The methods implemented - reducing the data set to a single dimension using principal component analysis or computing pairwise distances, and performing a multimodality test like the Dip Test or Silverman's Critical Bandwidth Test - are described in Adolfsson, Ackerman, and Brownstein (2019) <doi:10.1016/j.patcog.2018.10.026>. Such methods can inform whether clustering algorithms are appropriate for a data set.
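The general recipe can be sketched directly with base R and the diptest package (this is an illustration of the approach, not this package's interface):

```r
## Reduce the data to one dimension, then test that dimension for
## multimodality with Hartigan's dip test (requires the diptest package).
library(diptest)

clusterability_sketch <- function(x) {
  pc1   <- prcomp(x, scale. = TRUE)$x[, 1]   # first principal component
  dists <- as.vector(dist(x))                # alternative: pairwise distances
  list(dip_pc1   = dip.test(pc1)$p.value,
       dip_dists = dip.test(dists)$p.value)  # small p-values suggest multimodality
}

clusterability_sketch(iris[, 1:4])
```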
This package provides functions to construct finite-sample calibrated predictive intervals for Bayesian models, following the approach in Barber et al. (2021) <doi:10.1214/20-AOS1965>. These intervals are calculated efficiently using importance sampling for the leave-one-out residuals. By default, the intervals will also reflect the relative uncertainty in the Bayesian model, using the locally-weighted conformal methods of Lei et al. (2018) <doi:10.1080/01621459.2017.1307116>.
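The package's importance-sampling leave-one-out construction is beyond a short sketch; the much simpler split-conformal interval below only illustrates the basic finite-sample calibration idea and is not the method implemented here:

```r
## Plain split-conformal sketch: fit on one half, calibrate the interval
## width on the held-out residuals of the other half.
split_conformal <- function(x, y, x_new, alpha = 0.1) {
  n    <- length(y)
  idx  <- sample(n, n %/% 2)                       # split: fit vs calibration
  fit  <- lm(y ~ x, data = data.frame(x = x[idx], y = y[idx]))
  res  <- abs(y[-idx] - predict(fit, data.frame(x = x[-idx])))
  k    <- ceiling((1 - alpha) * (length(res) + 1)) # finite-sample quantile rank
  q    <- sort(res)[min(k, length(res))]
  pred <- predict(fit, data.frame(x = x_new))
  cbind(lower = pred - q, upper = pred + q)
}

set.seed(1)
x <- runif(200); y <- 2 * x + rnorm(200, sd = 0.3)
split_conformal(x, y, x_new = c(0.2, 0.8))
```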
This package provides tools for temporal disaggregation, including: (1) High-dimensional and low-dimensional series generation for simulation studies; (2) A toolkit for temporal disaggregation and benchmarking using low-dimensional indicator series as proposed by Dagum and Cholette (2006, ISBN:978-0-387-35439-2); (3) Novel techniques by Mosley, Gibberd, and Eckley (2022, <doi:10.1111/rssa.12952>) for disaggregating low-frequency series in the presence of high-dimensional indicator matrices.
This package performs analysis of split-split plot experiments in both completely randomized and randomized complete block designs. With the results, you can obtain the ANOVA, mean tests, and regression analysis. See <https://www.expstat.com/pacotes-do-r> for more details.
Find similarities between texts using the Smith-Waterman algorithm. The algorithm performs local sequence alignment and determines similar regions between two strings. The Smith-Waterman algorithm is explained in the paper "Identification of common molecular subsequences" by T. F. Smith and M. S. Waterman (1981), available at <doi:10.1016/0022-2836(81)90087-5>. This package implements the same logic for sequences of words and letters instead of molecular sequences.
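A minimal Smith-Waterman scoring sketch over sequences of letters (the same function accepts word sequences, e.g. strsplit(text, " ")[[1]]); the scoring parameters are illustrative and this is not the package's implementation:

```r
## Local alignment score matrix H, initialised to zero, filled left-to-right;
## the best local alignment score is the maximum entry of H.
smith_waterman_score <- function(a, b, match = 2, mismatch = -1, gap = -1) {
  H <- matrix(0, length(a) + 1, length(b) + 1)
  for (i in seq_along(a)) {
    for (j in seq_along(b)) {
      diag_score <- H[i, j] + (if (a[i] == b[j]) match else mismatch)
      H[i + 1, j + 1] <- max(0,
                             diag_score,        # match / mismatch
                             H[i, j + 1] + gap, # gap in b
                             H[i + 1, j] + gap) # gap in a
    }
  }
  max(H)
}

smith_waterman_score(strsplit("GGTTGACTA", "")[[1]],
                     strsplit("TGTTACGG", "")[[1]])
```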
This package provides a wrapper for The Cancer Imaging Archive's REST API. The Cancer Imaging Archive (TCIA) hosts de-identified medical images of cancer available for public download, as well as rich metadata for each image series. TCIA provides a REST API for programmatic access to the data. This package provides simple functions to access each API endpoint. For more information, see <https://github.com/pamelarussell/TCIApathfinder> and TCIA's website.