Estimation of latent variable models using Bayesian methods. Currently estimates the loglinear cognitive diagnosis model of Henson, Templin, and Willse (2009) <doi:10.1007/s11336-008-9089-5>.
Using numeric or raster data, this package provides functions to calculate the complete water balance, bioclimatic balance, and bioclimatic intensities, and to produce reports for individual locations and multi-layered rasters for spatial analysis.
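A generic monthly bucket water balance in base R, shown only as a simplified Thornthwaite-style illustration; the package's own functions and definitions are more complete:

    # simplified soil-water bucket: storage is capped at a fixed capacity
    water_balance <- function(prec, pet, capacity = 100) {
      storage <- numeric(length(prec)); s <- 0
      for (m in seq_along(prec)) {
        s <- min(max(s + prec[m] - pet[m], 0), capacity)
        storage[m] <- s
      }
      data.frame(month = seq_along(prec), prec = prec, pet = pet, storage = storage)
    }
    water_balance(prec = c(80, 60, 30, 10), pet = c(20, 40, 70, 90))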
Binford's hunter-gatherer data includes more than 200 variables coding aspects of hunter-gatherer subsistence, mobility, and social organization for 339 ethnographically documented groups of hunter-gatherers.
Automated method for doublet detection in flow or mass cytometry data, based on simulating doublets and finding events whose protein expression patterns are similar to the simulated doublets.
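A conceptual base R sketch of the simulation step only (a hypothetical helper, not the package's algorithm): synthetic doublets are built by combining pairs of randomly chosen events, and real events whose expression profiles resemble these synthetic doublets are the ones that would be flagged.

    # combine random pairs of events to build a library of synthetic doublets
    simulate_doublets <- function(expr, n_sim = nrow(expr)) {
      i <- sample(nrow(expr), n_sim, replace = TRUE)
      j <- sample(nrow(expr), n_sim, replace = TRUE)
      expr[i, ] + expr[j, ]   # doublet signal approximated as the sum of two single events
    }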
Datasets for the book entitled "Modelling Survival Data in Medical Research" by Collett (2023) <doi:10.1201/9781003282525>. The datasets provide extensive examples of time-to-event data.
Computer algebra via the SymPy library (<https://www.sympy.org/>). This makes it possible to solve equations symbolically, and to find symbolic integrals, symbolic sums, and other important quantities.
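A hedged illustration that reaches SymPy directly through reticulate rather than through this package's own interface (which wraps the same library):

    library(reticulate)
    sympy <- import("sympy")                      # requires a Python installation with sympy
    x <- sympy$Symbol("x")
    sympy$solve(sympy$sympify("x**2 - 4"), x)     # symbolic roots: -2, 2
    sympy$integrate(sympy$sympify("sin(x)"), x)   # symbolic integral: -cos(x)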
Sample size estimation in cluster (group) randomized trials. Contains traditional power-based methods, empirical smoothing (Rotondi and Donner, 2009), and updated meta-analysis techniques (Rotondi and Donner, 2012).
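For reference, the traditional power-based calculation inflates an individually randomized sample size by the design effect 1 + (m - 1) * ICC; a minimal base R sketch, not this package's functions:

    # per-arm sample size for a two-sample t-test, then inflate by the design effect
    clusters_per_arm <- function(delta, sd, m, rho, alpha = 0.05, power = 0.8) {
      n  <- power.t.test(delta = delta, sd = sd, sig.level = alpha, power = power)$n
      de <- 1 + (m - 1) * rho          # design effect for cluster size m and ICC rho
      ceiling(n * de / m)              # number of clusters needed in each arm
    }
    clusters_per_arm(delta = 0.5, sd = 1, m = 20, rho = 0.05)   # 7 clusters per arm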
Calculate multiple or pairwise dissimilarity for orders q = 0-N (CqN; Chao et al. 2008 <doi:10/fcvn63>) for a set of species assemblages or interaction networks.
This package provides functions that support stable prediction and classification with radiomics data through factor-analytic modeling. For details, see Peeters et al. (2019) <arXiv:1903.11696>.
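A rough base R analogue of the idea, given only as an assumption-laden sketch (the package implements a dedicated factor-analytic model): compress correlated features into factor scores, then classify on the scores.

    set.seed(1)
    f <- matrix(rnorm(200 * 2), 200, 2)                      # two latent factors
    L <- matrix(runif(10 * 2, 0.4, 0.9), 10, 2)              # loadings
    X <- f %*% t(L) + matrix(rnorm(200 * 10, sd = 0.5), 200, 10)
    fa <- factanal(X, factors = 2, scores = "regression")    # stats::factanal
    y  <- rbinom(200, 1, plogis(fa$scores[, 1]))             # toy binary outcome
    fit <- glm(y ~ fa$scores, family = binomial)             # classifier on factor scores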
An efficient algorithm to fit and tune kernel quantile regression models based on the majorization-minimization (MM) method. It can also fit multiple quantile curves simultaneously without crossing.
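For context, the check (pinball) loss that kernel quantile regression minimizes takes only a few lines of base R; this is an illustration, not the package's MM fitting routine:

    check_loss <- function(u, tau) u * (tau - (u < 0))   # rho_tau(u)
    set.seed(1)
    y <- rnorm(100)
    mean(check_loss(y - quantile(y, 0.9), tau = 0.9))    # loss at the empirical 0.9 quantile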
Routines for exploratory and descriptive analysis of functional data, such as depth measurements, atypical curve detection, regression models, supervised classification, unsupervised classification, and functional analysis of variance.
Computes interference color tables and plots customized Michel-Levy or Raith-Sorensen charts. Automatic interpretation of polarized-light microscopy images is still under development and will come soon.
Adds standardized regression coefficients to objects created by 'lm'. Also extends the S3 methods 'print', 'summary' and 'coef' with an additional boolean argument 'standardized', and provides 'xtable' support.
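For reference, standardized slopes can also be obtained in base R by refitting on z-scored variables; this illustrates the quantity being reported, not this package's interface:

    fit  <- lm(mpg ~ wt + hp, data = mtcars)
    zfit <- lm(scale(mpg) ~ scale(wt) + scale(hp), data = mtcars)
    coef(zfit)[-1]    # standardized coefficients corresponding to coef(fit)[-1]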
Implementations of Hurst exponent estimators based on the relationship between wavelet lifting scales and wavelet energy described by Knight et al. (2017) <doi:10.1007/s11222-016-9698-2>.
Applies the methodology of Manuel et al. (submitted) to estimate parameters from matched case-control data with a mismeasured exposure variable that is accompanied by instrumental variables.
Fit (by Maximum Likelihood or MCMC/Bayesian), simulate, and forecast various Markov-Switching GARCH models as described in Ardia et al. (2019) <doi:10.18637/jss.v091.i04>.
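A usage sketch following the workflow described in the JSS article; the function and argument names below are recalled from that article and are assumptions here, so check them against the package documentation:

    library(MSGARCH)
    spec <- CreateSpec(variance.spec     = list(model = c("sGARCH", "sGARCH")),
                       distribution.spec = list(distribution = c("norm", "norm")))
    fit  <- FitML(spec = spec, data = returns)   # 'returns': numeric vector of log-returns
    summary(fit)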
Defines classes and methods to learn models and use them to predict binary outcomes. These are generic tools, but we also include specific examples for many common classifiers.
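A minimal sketch of the kind of workflow involved, using a base R classifier (logistic regression) in place of the package's generic learner and predictor classes:

    set.seed(1)
    d    <- data.frame(y = rbinom(200, 1, 0.5), x1 = rnorm(200), x2 = rnorm(200))
    fit  <- glm(y ~ x1 + x2, data = d, family = binomial)
    prob <- predict(fit, type = "response")    # predicted probability of y = 1
    table(observed = d$y, predicted = as.integer(prob > 0.5))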
This package provides a set of tools to extract bibliographic content from the PubMed database using the NCBI REST API <https://www.ncbi.nlm.nih.gov/home/develop/api/>.
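The underlying NCBI E-utilities service can also be queried directly from base R, as in the hedged sketch below; the package wraps and post-processes this kind of request through its own functions:

    term <- URLencode("CRISPR[Title] AND 2020[PDAT]", reserved = TRUE)
    u    <- paste0("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
                   "?db=pubmed&retmode=json&term=", term)
    res  <- readLines(u, warn = FALSE)   # JSON listing matching PubMed IDs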
Allows running pylint on Python files with an R command or an RStudio addin. The report appears in the RStudio viewer pane as a formatted HTML file.
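A plain base R alternative to the addin, assuming pylint is installed and on the system PATH (this bypasses the package's formatted HTML report):

    run_pylint <- function(file) {
      out <- system2("pylint", args = shQuote(file), stdout = TRUE, stderr = TRUE)
      cat(out, sep = "\n")
      invisible(out)
    }
    run_pylint("script.py")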
This package contains all phrasal verbs listed in <https://www.englishclub.com/ref/Phrasal_Verbs/> as a data frame. Useful for educational purposes as well as for text mining.
An efficient algorithm for estimating piecewise exponential hazard models from right-censored data, useful for reliable power calculation, study design, and event/timeline prediction for study monitoring.
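To illustrate the model itself (not the package's estimation or power routines), piecewise exponential event times can be simulated in base R by inverting the piecewise-linear cumulative hazard:

    rpwexp <- function(n, cuts = c(0, 1, 2), rates = c(0.2, 0.5, 1.0)) {
      stopifnot(length(cuts) == length(rates))
      width  <- c(diff(cuts), Inf)                              # interval widths
      cumhaz <- c(0, cumsum(rates * width))[seq_along(cuts)]    # H(t) at each cut point
      e <- rexp(n)                                              # unit-rate exponential draws
      k <- findInterval(e, cumhaz)                              # interval containing each draw
      cuts[k] + (e - cumhaz[k]) / rates[k]                      # invert the cumulative hazard
    }
    summary(rpwexp(1000))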
This package provides functionality for the prior and posterior projected Polya tree for the analysis of circular data (Nieto-Barajas and Nunez-Antonio (2019) <arXiv:1902.06020>).
Set of tools aimed at wrapping some of the functionalities of the packages tools, utils and codetools into a nicer format so that an IDE can use them.
Selects invalid instruments from among a set of candidate instruments. The algorithm flags potentially invalid instruments and provides an estimate of the causal effect of the exposure on the outcome.
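Once a valid instrument set has been chosen, the causal effect is the usual instrumental-variable estimate; a base R two-stage least squares sketch follows (the selection step, which is the package's contribution, is not shown):

    set.seed(1)
    n  <- 500
    z1 <- rnorm(n); z2 <- rnorm(n)                 # instruments assumed valid here
    u  <- rnorm(n)                                 # unmeasured confounder
    x  <- 0.8 * z1 + 0.5 * z2 + u + rnorm(n)       # exposure
    y  <- 0.3 * x + u + rnorm(n)                   # outcome; true causal effect 0.3
    xhat <- fitted(lm(x ~ z1 + z2))                # first stage
    coef(lm(y ~ xhat))["xhat"]                     # second stage estimate of the effect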