Includes: (i) tests and visualisations that help the modeller explore time series components and perform decomposition; (ii) modelling shortcuts, such as functions to construct lag matrices and seasonal dummy variables of various forms; (iii) an implementation of the Theta method; (iv) tools to facilitate the design of the forecasting process, such as ABC-XYZ analyses; and (v) "quality of life" functions, such as handling trailing and leading values in time series.
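For orientation, the two modelling shortcuts in (ii) can be illustrated in base R; this is only a sketch of the underlying idea, and the package's own function names and interfaces may differ.

    # lag matrix and monthly seasonal dummies for a built-in series (base R only)
    y <- as.numeric(AirPassengers)
    lags <- 1:3
    lagmat <- sapply(lags, function(k) c(rep(NA, k), head(y, -k)))  # lagged copies of y
    colnames(lagmat) <- paste0("lag", lags)
    season  <- factor(cycle(AirPassengers))                          # month index 1..12
    dummies <- model.matrix(~ season - 1)                            # one dummy per month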
This package provides tools to analyze vaccine coverage data and simulate potential disease outbreak scenarios. It allows users to calculate key epidemiological metrics such as the effective reproduction number (Re), outbreak probabilities, and expected infection counts based on county-level vaccination rates, disease characteristics, and vaccine effectiveness. The package includes historical kindergarten vaccination data for Florida counties and offers functions for generating summary tables, visualizations, and exporting the underlying plot data.
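The core calculation can be sketched with the standard approximation Re = R0 * (1 - coverage * effectiveness); the helper name and inputs below are illustrative assumptions, not the package's API.

    # effective reproduction number from coverage and vaccine effectiveness (illustrative)
    effective_R <- function(R0, coverage, effectiveness) {
      R0 * (1 - coverage * effectiveness)
    }
    effective_R(R0 = 12, coverage = 0.90, effectiveness = 0.97)  # ~1.52 for a measles-like disease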
This package provides a Bayesian method for quantifying the likelihood that a given plasma mutation arises from clonal hematopoiesis or the underlying tumor. It requires sequencing data for the mutation in both plasma and white blood cells, with the number of distinct and mutant reads in each tissue. We implement a Monte Carlo importance sampling method to assess the likelihood that a mutation arises from the tumor relative to a non-tumor origin.
This package provides a set of tools for working with miRNA affinity models (KdModels), efficiently scanning for miRNA binding sites, and predicting target repression. It supports scanning using miRNA seeds, full miRNA sequences (enabling 3' alignment) and KdModels, and includes the prediction of slicing and TDMD sites. Finally, it includes utility and plotting functions (e.g. for the visual representation of miRNA-target alignment).
This package provides the cumulative distribution function (CDF), quantile function, and statistical power calculator for a collection of thresholding Fisher's p-value combination methods, including Fisher's p-value combination method, the truncated product method and, in particular, the soft-thresholding Fisher's p-value combination method, which has been shown to be optimal in certain signal-detection settings. A p-value calculator for the omnibus versions of these tests is also included.
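For reference, the classical members of this family can be written in their standard forms (the soft-thresholding statistic follows the usual TFisher notation; consult the package documentation for the exact parameterisation used):

\[
T_{\mathrm{Fisher}} = -2\sum_{i=1}^{n}\log p_i \sim \chi^2_{2n}\ \text{under } H_0,\qquad
W_{\mathrm{TPM}}(\tau) = \prod_{i=1}^{n} p_i^{\,I(p_i \le \tau)},\qquad
T_{\mathrm{soft}}(\tau) = \sum_{i=1}^{n}\bigl(-2\log p_i + 2\log\tau\bigr)\,I(p_i \le \tau).
\]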
Calculate confidence intervals for alpha and standardized alpha using asymptotic theory or the studentized bootstrap, with or without transformations. Supports the asymptotic distribution-free method of Maydeu-Olivares et al. (2007) <doi:10.1037/1082-989X.12.2.157>, the pseudo-elliptical method of Yuan & Bentler (2002) <doi:10.1007/BF02294845>, and the normal method of van Zyl et al. (1999) <doi:10.1007/BF02296146>, for both coefficient alpha and standardized alpha.
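As a reminder of the two quantities involved, coefficient alpha and standardized alpha can be computed from the item covariance and correlation matrices; the following is a plain base-R illustration on simulated data, not this package's interface.

    set.seed(1)
    X <- matrix(rnorm(500), 100, 5) + rnorm(100)               # 100 respondents, 5 correlated items
    S <- cov(X); R <- cor(X); k <- ncol(X)
    alpha     <- k / (k - 1) * (1 - sum(diag(S)) / sum(S))     # coefficient alpha
    rbar      <- mean(R[lower.tri(R)])                         # mean inter-item correlation
    alpha_std <- k * rbar / (1 + (k - 1) * rbar)               # standardized alpha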
Unifies an inconsistently coded categorical variable between two different time points in accordance with a mapping table. The main rule is to replicate an observation whenever it could be assigned to several categories, and then to use frequencies or statistical methods to approximate the probability of each assignment. This procedure was introduced and implemented in Nasinski, Majchrowska, and Broniatowska (2020) <doi:10.24425/cejeme.2020.134747>.
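A toy base-R illustration of the replication rule (hypothetical data, column names, and weights; the package's own interface may differ):

    map <- data.frame(old = c("A", "A", "B"), new = c("A1", "A2", "B1"),
                      weight = c(0.7, 0.3, 1))          # weights from observed frequencies
    obs <- data.frame(id = 1:2, old = c("A", "B"), value = c(10, 20))
    out <- merge(obs, map, by = "old")                  # ambiguous rows are replicated
    out$weighted_value <- out$value * out$weight        # probability-weighted contribution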
An open-source insurance claim simulation engine sponsored by the Casualty Actuarial Society. It generates individual insurance claims, including open claims, reopened claims, incurred-but-not-reported claims, and future claims. It also includes claim-data fitting functions to help set simulation assumptions, and is useful for claim-level reserving analysis. See Parodi (2013) <https://www.actuaries.org.uk/documents/triangle-free-reserving-non-traditional-framework-estimating-reserves-and-reserve-uncertainty>.
This package provides constructions of series of partially balanced incomplete block designs (PBIB) based on the combinatory method S, introduced by Rezgui et al. (2014) <doi:10.3844/jmssp.2014.45.48>. This package also offers the associated U-type designs. Version 1.1-1 generalizes the approach to designs with v = wnl treatments. It includes various rectangular and generalized rectangular right angular association schemes with 4, 5, and 7 associated classes.
This package provides methods of computerized adaptive testing for survey researchers. See Montgomery and Rossiter (2020) <doi:10.1093/jssam/smz027>. Includes functionality for fitting data with classic item response models, including the latent trait model, the Birnbaum three-parameter model, the graded response model, and the generalized partial credit model. Additionally, includes several ability parameter estimation and item selection routines. During item selection, all calculations are done in compiled C++ code.
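For reference, the Birnbaum three-parameter logistic item response function has the standard form below (plain R, not this package's API):

    # probability of a correct/positive response given ability theta,
    # discrimination a, difficulty b, and guessing parameter c
    p_3pl <- function(theta, a, b, c) c + (1 - c) / (1 + exp(-a * (theta - b)))
    p_3pl(theta = 0.5, a = 1.2, b = 0, c = 0.2)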
Bindings for additional classification models for use with the parsnip package. Models include flavors of discriminant analysis, such as linear (Fisher (1936) <doi:10.1111/j.1469-1809.1936.tb02137.x>), regularized (Friedman (1989) <doi:10.1080/01621459.1989.10478752>), and flexible (Hastie, Tibshirani, and Buja (1994) <doi:10.1080/01621459.1994.10476866>), as well as naive Bayes classifiers (Hand and Yu (2007) <doi:10.1111/j.1751-5823.2001.tb00465.x>).
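A minimal sketch of how such a binding is typically used from parsnip, assuming this package is installed to supply the engine and registers a linear discriminant specification; the constructor name discrim_linear() and the MASS engine are assumptions.

    library(parsnip)
    spec <- discrim_linear() |>
      set_engine("MASS") |>              # Fisher-style LDA via MASS::lda (assumed engine)
      set_mode("classification")
    fitted <- fit(spec, Species ~ ., data = iris)
    predict(fitted, new_data = head(iris))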
Connect to Elasticsearch, a NoSQL database built on the Java Virtual Machine. Interacts with the Elasticsearch HTTP API (<https://www.elastic.co/elasticsearch/>), including functions for setting connection details to Elasticsearch instances, loading bulk data, and searching for documents with both HTTP query variables and JSON-based body requests. In addition, elastic provides functions for interacting with the APIs for indices, documents, nodes, and clusters, an interface to the cat API, and more.
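A short sketch of a typical session; the host, port, index name, and documents are placeholders, and the package documentation should be consulted for full argument lists.

    library(elastic)
    conn <- connect(host = "127.0.0.1", port = 9200)          # set connection details
    docs_bulk(conn, mtcars, index = "cars")                   # bulk-load a data.frame
    Search(conn, index = "cars", q = "cyl:8")                 # HTTP query-string search
    Search(conn, index = "cars",
           body = '{"query": {"match": {"gear": 4}}}')        # JSON-based body request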
Figures, data sets and examples from the book "A practical guide to ecological modelling - using R as a simulation platform" by Karline Soetaert and Peter MJ Herman (2009, Springer). All figures from chapter x can be generated by "demo(chapx)", where x = 1 to 11. The R scripts of the model examples discussed in the book are in the subdirectory "examples", ordered per chapter. Solutions to model projects are in the same subdirectories.
Computational representations of glycan compositions and structures, including details such as linkages, anomers, and substituents. Supports varying levels of monosaccharide specificity (e.g., "Hex" or "Gal") and ambiguous linkages. Provides robust parsing and generation of IUPAC-condensed structure strings. Optimized for vectorized operations on glycan structures, with efficient handling of duplications. As the cornerstone of the glycoverse ecosystem, this package delivers the foundational data structures that power glycomics and glycoproteomics analysis workflows.
Creation of imprecise classification trees, which rely on probability estimation within each node by means of either the imprecise Dirichlet model or the nonparametric predictive inference approach. The splitting variable is selected by the strategy presented in Fink and Crossman (2013) <http://www.sipta.org/isipta13/index.php?id=paper&paper=014.html>, but the original imprecise information gain of Abellan and Moral (2003) <doi:10.1002/int.10143> is also supported.
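For context, the imprecise Dirichlet model replaces a node's point estimate of the class probabilities with an interval; with n_j observations of class j out of N in the node and hyperparameter s, the interval is commonly written as

\[
\bigl[\underline{p}_j,\ \overline{p}_j\bigr] \;=\; \left[\frac{n_j}{N+s},\ \frac{n_j+s}{N+s}\right].
\]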
This package implements transfer learning methods for low-rank matrix estimation. These methods leverage similarity in the latent row and column spaces between the source and target populations to improve estimation in the target population. The methods include the LatEnt spAce-based tRaNsfer lEaRning (LEARNER) method and the direct projection LEARNER (D-LEARNER) method described by McGrath et al. (2024) <doi:10.48550/arXiv.2412.20605>.
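A rough conceptual illustration of the projection idea on simulated matrices (plain R; this is not the package's API and omits the estimators' details):

    set.seed(1)
    U <- matrix(rnorm(50 * 3), 50, 3); V <- matrix(rnorm(40 * 3), 40, 3)
    signal <- U %*% t(V)                                        # shared low-rank structure
    source <- signal + matrix(rnorm(2000, sd = 0.1), 50, 40)    # well-estimated source matrix
    target <- signal + matrix(rnorm(2000, sd = 1.0), 50, 40)    # noisier target data
    s <- svd(source, nu = 3, nv = 3)
    proj <- s$u %*% t(s$u) %*% target %*% s$v %*% t(s$v)        # project onto source row/column subspaces
    c(raw = mean((target - signal)^2), projected = mean((proj - signal)^2))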
This package provides functions for fitting a functional principal component logit regression model in four different situations: using ordinary or filtered functional principal components of the functional predictors, with components entered into the model either according to their explained variability or according to their predictive ability via stepwise methods. The proposed methods were developed in Escabias et al. (2004) <doi:10.1080/10485250310001624738> and Escabias et al. (2005) <doi:10.1016/j.csda.2005.03.011>.
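In all four situations the fitted model takes the generic functional principal component logit form, with the number and choice of components determined either by explained variability or by stepwise prediction criteria:

\[
\operatorname{logit}(\pi_i) \;=\; \beta_0 + \sum_{j=1}^{m}\gamma_j\,\xi_{ij},
\]

where \(\xi_{ij}\) is the score of curve i on the j-th (ordinary or filtered) functional principal component.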
Fits multivariate (Brownian Motion, Early Burst, ACDC, Ornstein-Uhlenbeck and Shifts) models of continuous trait evolution on trees and time series. mvMORPH also provides high-dimensional multivariate comparative tools (linear models using Generalized Least Squares and multivariate tests) based on penalized likelihood. See Clavel et al. (2015) <DOI:10.1111/2041-210X.12420>, Clavel et al. (2019) <DOI:10.1093/sysbio/syy045>, and Clavel & Morlon (2020) <DOI:10.1093/sysbio/syaa010>.
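A minimal sketch of fitting two of these models on a simulated tree and traits, assuming mvBM() and mvOU() accept an ape phylogeny and a trait matrix whose row names match the tip labels:

    library(mvMORPH)
    set.seed(1)
    tree   <- ape::rtree(20)
    traits <- cbind(t1 = ape::rTraitCont(tree), t2 = ape::rTraitCont(tree))  # two continuous traits
    fit_bm <- mvBM(tree, traits, model = "BM1")   # multivariate Brownian Motion
    fit_ou <- mvOU(tree, traits, model = "OU1")   # multivariate Ornstein-Uhlenbeck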
Measure productivity and efficiency using Data Envelopment Analysis (DEA). Available methods include DEA under different technology assumptions, bootstrapping of efficiency scores and calculation of the Malmquist productivity index. Analyses can be performed either in the console or with the provided shiny app. See Banker, R.; Charnes, A.; Cooper, W.W. (1984) <doi:10.1287/mnsc.30.9.1078>, Färe, R.; Grosskopf, S. (1996) <doi:10.1007/978-94-009-1816-0>.
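For reference, the Malmquist productivity index between periods t and t+1 is defined from the distance functions D^t and D^{t+1}, each evaluated by DEA under the chosen technology assumption:

\[
M\bigl(x^{t+1},y^{t+1},x^{t},y^{t}\bigr) \;=\;
\left[\frac{D^{t}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t}\bigl(x^{t},y^{t}\bigr)}\cdot
\frac{D^{t+1}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t+1}\bigl(x^{t},y^{t}\bigr)}\right]^{1/2}.
\]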
Spatiotemporal individual-level models within the Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS) framework are applied to model seasonal infectious disease transmission. This package employs a likelihood-based Monte Carlo Expectation Conditional Maximization (MCECM) algorithm for estimating model parameters. In addition to model fitting and parameter estimation, the package offers functions for calculating AIC using real pandemic data and conducting simulation studies customized to user-specified model configurations.
This package provides a toolkit of tidy data manipulation verbs with data.table as the backend. Combining the syntax elegance of dplyr with the computing performance of data.table, tidyfst intends to provide users with state-of-the-art data manipulation tools with the least pain. This package is an extension of data.table. While enjoying a tidy syntax, it also wraps combinations of efficient functions to facilitate frequently-used data operations.
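A small sketch of the intended style, assuming the dplyr-like verbs carry a _dt suffix (as in filter_dt() and summarise_dt(); check the package reference for exact names and arguments):

    library(tidyfst)
    iris |>
      filter_dt(Sepal.Length > 5) |>
      summarise_dt(avg_width = mean(Sepal.Width), by = Species)  # grouped summary via `by`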
Fit species distribution models (SDMs) using the tidymodels framework, which provides a standardised interface to define models and process their outputs. tidysdm expands tidymodels by providing methods for spatial objects, models and metrics specific to SDMs, as well as a number of specialised functions to process occurrences for contemporary and palaeo datasets. The full functionality of the package is described in Leonardi et al. (2023) <doi:10.1101/2023.07.24.550358>.
This package provides a variety of tools to allow the quantification of videos of the lymphatic vasculature taken under an operating microscope. Lymphatic vessels that have been injected with a variety of blue dyes can be tracked throughout the video to determine their width over time. Code is optimised for efficient processing of multiple large video files. Functions to calculate physiologically relevant parameters and generate graphs from these values are also included.
This package includes pre-processing and quality-control functions that can remove margin events, compensate and transform the data, and that use PeacoQCSignalStability for quality control. The latter first detects peaks in each channel of the flowFrame, then removes anomalies based on the IsolationTree function and the MAD outlier detection method. The package can be used for both flow and mass cytometry data.