Bayesian Markov renewal mixed models take sequentially observed categorical data with continuous duration times, which may be either state durations or inter-state durations. These models comprehensively analyze the stochastic dynamics of both state transitions and duration times under the influence of multiple exogenous factors and random individual effects. The default setting flexibly models the transition probabilities using Dirichlet mixtures and the duration times using gamma mixtures. The package also provides the flexibility of modeling the categorical sequences with Bayesian Markov mixed models alone, either ignoring the duration times altogether or dividing each duration time into multiples of an additional category in the sequence by a user-specified unit. The package allows extensive inference on the state transition probabilities and the duration times, as well as relevant plots and graphs. It also includes a synthetic data set to demonstrate the expected format of the input data and the utility of the various functions. Methods for Bayesian Markov renewal mixed models are as described in Abhra Sarkar et al. (2018) <doi:10.1080/01621459.2018.1423986> and Yutong Wu et al. (2022) <doi:10.1093/biostatistics/kxac050>.
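A minimal base-R sketch of the kind of data these models target, assuming a three-state chain with gamma-distributed durations (illustrative only, not this package's API):

```r
# Illustrative simulation (not this package's API) of the data format these
# models target: a categorical state sequence plus continuous duration times.
set.seed(1)
states <- c("A", "B", "C")
P <- matrix(c(0.1, 0.6, 0.3,    # transition probability matrix (rows sum to 1)
              0.5, 0.1, 0.4,
              0.4, 0.4, 0.2), nrow = 3, byrow = TRUE)
n <- 20
s <- character(n); s[1] <- "A"
for (i in 2:n) s[i] <- sample(states, 1, prob = P[match(s[i - 1], states), ])
dur <- rgamma(n, shape = 2, rate = 1)   # continuous state duration times
head(data.frame(state = s, duration = dur))
```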
We solve nonlinear least squares problems with optional equality and/or inequality constraints. Nonlinear iterations are globalized with a backtracking method. Linear subproblems are solved by dense QR decomposition from LAPACK, which can limit the size of tractable problems; on the other hand, we avoid the condition-number degradation that occurs in the classical quadratic programming approach. Inequality constraints are treated on each nonlinear iteration with the NNLS method (by Lawson and Hanson). We provide an original function lsi_ln for solving linear least squares problems with inequality constraints in the least-norm sense; thus, if the Jacobian of the problem is rank deficient, a solution can still be provided, although truncation errors are probable in this case. Equality constraints are treated by using a basis of the null space. The user-defined function calculating residuals must return a list containing the residual vector (not its squared sum) and the Jacobian. If the Jacobian is not in the returned list, the package numDeriv is used to calculate a finite-difference version of the Jacobian. The NLSIC method was first published in Sokol et al. (2012) <doi:10.1093/bioinformatics/btr716>.
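A hedged sketch of a residual function of the kind described above, for a model y ≈ a*exp(-b*t); the list field names ('res', 'jacobian') and the commented nlsic() call are assumptions to be checked against the package documentation:

```r
# Hedged sketch of a user-supplied residual function for y ~ a * exp(-b * t).
# The list field names ('res', 'jacobian') and the nlsic() call below are
# assumptions; consult the package documentation for the exact interface.
resfun <- function(par, t, y) {
  a <- par[1]; b <- par[2]
  f <- a * exp(-b * t)
  list(res = f - y,                                          # residual vector
       jacobian = cbind(exp(-b * t), -a * t * exp(-b * t)))  # d(res)/d(par)
}
# Hypothetical usage, with the inequality constraint a >= 0:
# library(nlsic)
# fit <- nlsic(par = c(1, 0.1), r = resfun, t = t, y = y,
#              u = matrix(c(1, 0), nrow = 1), co = 0)
```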
This package provides a framework for specifying spatially, temporally, and spatially-and-temporally varying coefficient models using Generalized Additive Models with smooths. The smooths are parameterised with location, time, and predictor variables. The framework supports investigating the presence and nature of any space-time dependencies in the data by evaluating multiple model forms (specifications) using a Generalized Cross-Validation score. The workflow sequence is to: i) prepare the data by lengthening it so that each observation has a single location and time variable; ii) construct all possible spatial and/or temporal models in which each predictor is specified in different ways; iii) evaluate each model and pick the best one; iv) create the final model; v) calculate the varying coefficient estimates to quantify how the relationships between the target and predictor variables vary over space, time, or space-time; and vi) create maps, time series plots, etc. For more details see: Comber et al (2023) <doi:10.4230/LIPIcs.GIScience.2023.22>, Comber et al (2024) <doi:10.1080/13658816.2023.2270285> and Comber et al (2024) <doi:10.3390/ijgi13120459>.
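As an illustration of the underlying idea (not this package's own API), a space-time varying coefficient can be specified directly in mgcv with a tensor-product smooth interacting location and time with a predictor; all variable names below are invented:

```r
# Illustrative mgcv sketch: a tensor-product smooth over space (X, Y) and time
# t, interacted with predictor x via 'by = x', yields a space-time varying
# coefficient for x.
library(mgcv)
set.seed(1)
dat <- data.frame(X = runif(200), Y = runif(200), t = runif(200), x = rnorm(200))
dat$y <- 2 * dat$x * sin(pi * dat$t) + rnorm(200, sd = 0.2)  # effect of x varies over time
m <- gam(y ~ te(X, Y, t, d = c(2, 1), bs = c("tp", "cr"), k = c(9, 5), by = x),
         data = dat, method = "GCV.Cp")
m$gcv.ubre  # GCV score, the criterion used to compare model specifications
```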
Whole genome single-cell DNA sequencing (scDNA-seq) enables characterization of copy number profiles at the cellular level. This circumvents the averaging effects associated with bulk-tissue sequencing and offers increased resolution and decreased ambiguity in deconvolving cancer subclones and elucidating cancer evolutionary history. scDNA-seq data are, however, sparse, noisy, and highly variable even within a homogeneous cell population, due to the biases and artifacts introduced during the library preparation and sequencing procedure. Here, we propose SCOPE, a normalization and copy number estimation method for scDNA-seq data. The distinguishing features of SCOPE include: (i) utilization of cell-specific Gini coefficients for quality control and for identification of normal/diploid cells, which are further used as negative-control samples in a Poisson latent factor model for normalization; (ii) modeling of GC content bias using an expectation-maximization algorithm embedded in Poisson generalized linear models, which accounts for the different copy number states along the genome; and (iii) a cross-sample iterative segmentation procedure to identify breakpoints that are shared across cells from the same genetic background.
Compiles functions to trim, bin, visualise, and analyse activity/sleep time-series data collected from the Drosophila Activity Monitor (DAM) system (Trikinetics, USA). The following methods were used to compute periodograms - Chi-square periodogram: Sokolove and Bushell (1978) <doi:10.1016/0022-5193(78)90022-X>, Lomb-Scargle periodogram: Lomb (1976) <doi:10.1007/BF00648343>, Scargle (1982) <doi:10.1086/160554> and Ruf (1999) <doi:10.1076/brhm.30.2.178.1422>, and Autocorrelation: Eijzenbach et al. (1986) <doi:10.1111/j.1440-1681.1986.tb00943.x>. Identification of activity peaks is done after using a Savitzky-Golay filter (Savitzky and Golay (1964) <doi:10.1021/ac60214a047>) to smooth raw activity data. Three methods to estimate anticipation of activity are used based on the following papers - Slope method: Fernandez et al. (2020) <doi:10.1016/j.cub.2020.04.025>, Harrisingh method: Harrisingh et al. (2007) <doi:10.1523/JNEUROSCI.3680-07.2007>, and Stoleru method: Stoleru et al. (2004) <doi:10.1038/nature02926>. Rose plots and circular analysis are based on methods from - Batschelet (1981) <ISBN:0120810506> and Zar (2010) <ISBN:0321656865>.
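As a hedged illustration of the Lomb-Scargle approach on irregularly sampled activity data, using the separate 'lomb' package rather than this package's own periodogram functions:

```r
# Hedged illustration with the separate 'lomb' package (an assumption; this
# package ships its own periodogram functions).
library(lomb)
set.seed(1)
tt <- sort(runif(300, 0, 240))                        # hours, irregular sampling
act <- sin(2 * pi * tt / 24) + rnorm(300, sd = 0.5)   # ~24 h activity rhythm
lsp(act, times = tt, type = "period", from = 16, to = 32, ofac = 4)  # peak near 24 h
```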
Positive predictive value (PPV), defined as the conditional probability that the clinical trial assay (CTA) is positive given that the companion diagnostic device (CDx) is positive, is a key performance parameter for evaluating the clinical validity and utility of a companion diagnostic test in clinical bridging studies. When bridging study patients are enrolled based on CTA assay results, binomial-based confidence intervals (CIs) are not appropriate for PPV CI estimation. Bootstrap CIs, which are not restricted by the binomial assumption, may be used for PPV CI estimation only when PPV is not 100%; the bootstrap CI is not valid when PPV is 100%, collapsing to the single value [1, 1]. We proposed a risk ratio-based method for constructing a CI for PPV. By simulation we illustrated that the coverage probability of the proposed CI is close to the nominal value even when PPV is high and negative percent agreement (NPA) is close to 100%. Given the lack of an R package for PPV CI calculation, we developed this publicly available package, along with a Shiny app, to implement the proposed approach and some other existing methods.
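A worked base-R sketch of the setting (not this package's functions): PPV among CDx-positive patients, with a percentile bootstrap CI that collapses to [1, 1] when PPV is 100%:

```r
# Worked sketch (base R): PPV = P(CTA+ | CDx+) with a percentile bootstrap CI.
set.seed(1)
cdx_pos <- 120                     # number of CDx-positive patients enrolled
cta <- rbinom(cdx_pos, 1, 0.97)    # CTA result among CDx+ patients
ppv <- mean(cta)
boot <- replicate(2000, mean(sample(cta, replace = TRUE)))
c(PPV = ppv, quantile(boot, c(0.025, 0.975)))
# When ppv == 1 every resample equals 1 and the bootstrap CI degenerates to
# [1, 1] -- the failure case motivating the risk ratio-based interval.
```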
Obtains lists of files of remote sensing collections for Southern Ocean surface properties. Commonly used data sources of sea surface temperature, sea ice concentration, and altimetry products such as sea surface height and sea surface currents are cached in object storage at the Pawsey Supercomputing Research Centre facility. Patterns of working to retrieve data from these object storage catalogues are described. The catalogues include complete collections of datasets such as Reynolds et al. (2008) "NOAA Optimum Interpolation Sea Surface Temperature (OISST) Analysis, Version 2.1" <doi:10.7289/V5SQ8XB5> and Spreen et al. (2008) "Artist Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) sea ice concentration" <doi:10.1029/2005JC003384>. Future releases will add helpers to identify particular data collections and target specific dates of earth observation data for reading, as well as helpers to retrieve data set citation and provenance details. This work was supported by resources provided by the Pawsey Supercomputing Research Centre with funding from the Australian Government and the Government of Western Australia. This software was developed by the Integrated Digital East Antarctica program of the Australian Antarctic Division.
Hierarchical continuous (and discrete) time state space modelling for linear and nonlinear systems measured by continuous variables, with limited support for binary data. The subject-specific dynamic system is modelled as a stochastic differential equation (SDE) or difference equation; measurement models are typically multivariate normal factor models. Linear mixed-effects SDEs estimated via maximum likelihood and optimization are the default. Nonlinearities (state-dependent parameters) and random effects on all parameters are possible, using either maximum likelihood / maximum a posteriori optimization (with optional importance sampling) or Stan's Hamiltonian Monte Carlo sampling. See <https://github.com/cdriveraus/ctsem/raw/master/vignettes/hierarchicalmanual.pdf> for details. Priors may be used. For a conceptual overview of the hierarchical Bayesian linear SDE approach, see <https://www.researchgate.net/publication/324093594_Hierarchical_Bayesian_Continuous_Time_Dynamic_Modeling>. Exogenous inputs may also be included; for an overview of such possibilities see <https://www.researchgate.net/publication/328221807_Understanding_the_Time_Course_of_Interventions_with_Continuous_Time_Dynamic_Models>. Stan-based functions are not available on 32-bit Windows systems at present. <https://cdriver.netlify.app/> contains some tutorial blog posts.
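A hedged sketch of a minimal bivariate continuous-time model; the argument names below follow the ctsem documentation as recalled here and should be treated as assumptions:

```r
# Hedged sketch; argument names are assumptions -- verify against the manual.
# library(ctsem)
# model <- ctModel(type = "stanct", n.latent = 2, n.manifest = 2,
#                  manifestNames = c("y1", "y2"),
#                  latentNames = c("eta1", "eta2"), LAMBDA = diag(2))
# fit <- ctStanFit(datalong = mydata, ctstanmodel = model, optimize = TRUE)
# summary(fit)
```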
PBIB designs are an important type of incomplete block design with wide application, for example in agricultural experiments, plant breeding, and sample surveys. This package constructs various series of PBIB designs and assists in checking all the necessary conditions of PBIB designs and of the association scheme on which these designs are based. It also assists in calculating the efficiencies of PBIB designs with any number of associate classes. The package also constructs Youden-m square designs, which are row-column designs for the two-way elimination of heterogeneity; the incomplete columns of these Youden-m square designs constitute PBIB designs. With the present functionality, the package will be of use to researchers, helping them to construct PBIB designs, to check whether their PBIB designs and association schemes satisfy the necessary conditions for existence, to calculate the efficiencies of PBIB designs based on any association scheme, and to construct Youden-m square designs for the two-way elimination of heterogeneity. R. C. Bose and K. R. Nair (1939) <http://www.jstor.org/stable/40383923>.
Single-cell Interpretable Tensor Decomposition (scITD) employs the Tucker tensor decomposition to extract multicell-type gene expression patterns that vary across donors/individuals. This tool is geared for use with single-cell RNA-sequencing datasets consisting of many source donors. The method has a wide range of potential applications, including the study of inter-individual variation at the population level, patient sub-grouping/stratification, and the analysis of sample-level batch effects. Each extracted "multicellular process" consists of (A) a multi-cell-type gene loadings matrix and (B) a corresponding donor scores vector indicating the level at which the loadings matrix is expressed in each donor. Additional methods are implemented to aid in selecting an appropriate number of factors and to evaluate the stability of the decomposition. Further tools are provided for downstream analysis, including integration of gene set enrichment analysis and ligand-receptor analysis. Tucker, L.R. (1966) <doi:10.1007/BF02289464>. Unkel, S., Hannachi, A., Trendafilov, N. T., & Jolliffe, I. T. (2011) <doi:10.1007/s13253-011-0055-9>. Zhou, G., & Cichocki, A. (2012) <doi:10.2478/v10175-012-0051-4>.
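To illustrate the underlying Tucker decomposition on a donors x genes x cell-types array, here is a conceptual sketch using the separate 'rTensor' package (scITD provides its own interface):

```r
# Conceptual Tucker sketch with 'rTensor' (not scITD's API): decompose a
# donors x genes x cell-types array into a core tensor and factor matrices.
library(rTensor)
set.seed(1)
arr <- as.tensor(array(rnorm(40 * 30 * 5), dim = c(40, 30, 5)))
dec <- tucker(arr, ranks = c(3, 4, 2))  # core tensor Z plus factor matrices U
dim(dec$U[[1]])                         # 40 x 3: donor scores on 3 factors
```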
Additional nonlinear regression functions using self-start (SS) algorithms. One of the functions is the beta growth function proposed by Yin et al. (2003) <doi:10.1093/aob/mcg029>. There are several other functions with breakpoints (e.g. linear-plateau, plateau-linear, exponential-plateau, plateau-exponential, quadratic-plateau, plateau-quadratic and bilinear), a non-rectangular hyperbola and a bell-shaped curve; twenty-eight (28) new self-start (SS) functions in total. This package also supports the publication 'Nonlinear Regression Models and Applications in Agricultural Research' by Archontoulis and Miguez (2015) <doi:10.2134/agronj2012.0506>, a book chapter with similar material <doi:10.2134/appliedstatistics.2016.0003.c15> and a publication by Oddi et al. (2019) in Ecology and Evolution <doi:10.1002/ece3.5543>. The function nlsLMList uses nlsLM for fitting but is otherwise almost identical to nlme::nlsList. In addition, this release of the package provides functions for conducting simulations for nlme and gnls objects, as well as bootstrapping. These functions are intended to work with the modeling framework of the nlme package. Four vignettes with extended examples are also provided.
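A hedged example of fitting the Yin et al. (2003) beta growth function, assuming the self-start is exported as SSbgf(time, w.max, t.e, t.m) with plain counterpart bgf(); verify against the package documentation:

```r
# Hedged example; the names SSbgf() and bgf() are assumptions to verify.
# library(nlraa)
# set.seed(1)
# dat <- data.frame(time = 1:30)
# dat$y <- bgf(dat$time, w.max = 20, t.e = 25, t.m = 15) + rnorm(30, sd = 0.5)
# fit <- nls(y ~ SSbgf(time, w.max, t.e, t.m), data = dat)
# coef(fit)
```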
An implementation of popular screening methods that are commonly employed in ultra-high and high dimensional data. Through this publicly available package, we provide a unified framework to carry out model-free screening procedures including SIS (Fan and Lv (2008) <doi:10.1111/j.1467-9868.2008.00674.x>), SIRS (Zhu et al. (2011) <doi:10.1198/jasa.2011.tm10563>), DC-SIS (Li et al. (2012) <doi:10.1080/01621459.2012.695654>), MDC-SIS (Shao and Zhang (2014) <doi:10.1080/01621459.2014.887012>), Bcor-SIS (Pan et al. (2019) <doi:10.1080/01621459.2018.1462709>), PC-Screen (Liu et al. (2020) <doi:10.1080/01621459.2020.1783274>), WLS (Zhong et al. (2021) <doi:10.1080/01621459.2021.1918554>), Kfilter (Mai and Zou (2015) <doi:10.1214/14-AOS1303>), MVSIS (Cui et al. (2015) <doi:10.1080/01621459.2014.920256>), PSIS (Pan et al. (2016) <doi:10.1080/01621459.2014.998760>), CAS (Xie et al. (2020) <doi:10.1080/01621459.2019.1573734>), CI-SIS (Cheng and Wang (2023) <doi:10.1016/j.cmpb.2022.107269>) and CSIS (Cheng et al. (2023) <doi:10.1007/s00180-023-01399-5>).
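For intuition, SIS in its simplest form ranks predictors by absolute marginal correlation with the response and keeps the top d; a base-R sketch (not this package's unified interface):

```r
# Base-R sketch of the SIS idea: screen by absolute marginal correlation.
set.seed(1)
n <- 100; p <- 1000
X <- matrix(rnorm(n * p), n, p)
y <- X[, 3] - 2 * X[, 7] + rnorm(n)
w <- abs(cor(X, y))                    # marginal screening statistic
d <- floor(n / log(n))                 # a common choice of screening size
head(order(w, decreasing = TRUE), 10)  # indices 3 and 7 should rank highly
```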
The Genetic Algorithm (GA) is an optimization method from the family of Evolutionary Algorithms. It uses biologically inspired operators such as mutation, crossover, selection and replacement. Because of their global search and robustness abilities, GAs have been widely utilized in machine learning, expert systems, data science, engineering, the life sciences and many other areas of research and business. However, regular GAs need techniques to improve their efficiency in computing time and their performance in finding a global optimum, using adaptation and hybridization strategies. Adaptive GAs (AGAs) increase the convergence speed and success of regular GAs by setting the crossover and mutation probabilities dynamically. Hybrid GAs combine the exploration strength of stochastic GAs with the exact convergence ability of deterministic local search algorithms such as simulated annealing, in addition to other nature-inspired algorithms such as ant colony optimization, particle swarm optimization, etc. The package adana includes a rich working environment with many functions that make it possible to build and run regular GAs, adaptive GAs, hybrid GAs and hybrid adaptive GAs for any kind of optimization problem. Cebeci, Z. (2021, ISBN: 9786254397448).
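A minimal conceptual GA in base R, with selection, arithmetic crossover, Gaussian mutation and elitism (the package's own API is richer and differs):

```r
# Minimal conceptual GA: maximize f(x) = -sum(x^2) over two variables.
set.seed(1)
f <- function(x) -sum(x^2)
pop <- matrix(runif(80, -5, 5), nrow = 40, ncol = 2)
for (gen in 1:100) {
  fit <- apply(pop, 1, f)
  best <- pop[which.max(fit), ]                                     # elite
  mates <- pop[sample(40, 40, replace = TRUE, prob = rank(fit)), ]  # selection
  pop <- 0.5 * (mates + mates[sample(40), ])                        # crossover
  pop <- pop + matrix(rnorm(80, sd = 0.1), 40, 2)                   # mutation
  pop[1, ] <- best                                                  # elitism
}
round(pop[which.max(apply(pop, 1, f)), ], 2)  # near the optimum c(0, 0)
```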
We provide two algorithms for monitoring change points in online matrix-valued time series, under the assumption of a two-way factor structure. The algorithms are based on different calculations of the second moment matrices: one stacks the columns of the matrix observations, while the other uses a more delicate projected approach. It is well known that, in the presence of a change point, a factor model can be rewritten as a model with a larger number of common factors. In turn, this entails that, in the presence of a change point, the number of spiked eigenvalues in the second moment matrix of the data increases. Based on this, we propose two families of procedures - one based on the fluctuations of partial sums, and one based on extreme value theory - to monitor whether the first non-spiked eigenvalue diverges after a point in time in the monitoring horizon, thereby indicating the presence of a change point; see the sketch below. This package also provides some simple functions for detecting and removing outliers, imputing missing entries and testing moments. See He et al. (2021) <doi:10.48550/arXiv.2112.13479> for more details.
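A base-R sketch of the eigenvalue idea on vectorized observations (illustrative, not the package's procedures): after a change point adds a common factor, the (r+1)-th eigenvalue of the running second moment matrix drifts upward:

```r
# Illustrative sketch of the monitoring idea on vectorized observations.
set.seed(1)
p <- 12; r <- 1
L1 <- rnorm(p); L2 <- rnorm(p)
Y <- sapply(1:400, function(t) {
  y <- L1 * rnorm(1) + rnorm(p, sd = 0.3)   # one common factor
  if (t > 200) y <- y + L2 * rnorm(1)       # change point adds a factor
  y
})
tt <- seq(50, 400, by = 10)
eig <- sapply(tt, function(t) {
  M <- tcrossprod(Y[, 1:t]) / t             # running second moment matrix
  eigen(M, symmetric = TRUE, only.values = TRUE)$values[r + 1]
})
plot(tt, eig, type = "l")                   # rises after the change point
```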
Construction and analysis of multivalued zero-sum matrix games over the abstract space of probability distributions, which describe the losses in each scenario of defense versus attack actions. The distributions can be compiled directly from expert opinions or other empirical data (insofar as available). The package implements the methods put forth in the EU project HyRiM (Hybrid Risk Management for Utility Networks), FP7 EU Project Number 608090. The method has been published in Rass, S., König, S., Schauer, S., 2016, "Decisions with Uncertain Consequences - A Total Ordering on Loss-Distributions", PLoS ONE 11, e0168583 <doi:10.1371/journal.pone.0168583>, and applied to advanced persistent threat modeling in Rass, S., König, S., Schauer, S., 2017, "Defending Against Advanced Persistent Threats Using Game-Theory", PLoS ONE 12, e0168675 <doi:10.1371/journal.pone.0168675>. A volume covering the wider range of aspects of risk management, partially based on the theory implemented in the package, is the book edited by S. Rass and S. Schauer, 2018, Game Theory for Security and Risk Management: From Theory to Practice, Springer, <doi:10.1007/978-3-319-75268-6>, ISBN 978-3-319-75267-9.
Implements a general framework to quantitatively infer Community Assembly Mechanisms by Phylogenetic-bin-based null model analysis, abbreviated as iCAMP (Ning et al. 2020) <doi:10.1038/s41467-020-18560-z>. It can quantitatively assess the relative importance of different community assembly processes, such as selection, dispersal, and drift, for both whole communities and each phylogenetic group ('bin'). Each bin usually consists of different taxa from a family or an order. The package also provides functions to implement some other published methods, including neutral taxa percentage (Burns et al. 2016) <doi:10.1038/ismej.2015.142> based on the neutral theory model, and quantification of assembly processes based on entire-community null models ('QPEN', Stegen et al. 2013) <doi:10.1038/ismej.2013.93>. It also includes some handy functions, particularly for big datasets, such as phylogenetic and taxonomic null model analysis at both community and bin levels, between-taxa niche difference and phylogenetic distance calculation, phylogenetic signal tests within phylogenetic groups, midpoint rooting of big trees, etc. Version 1.3.x mainly improved the function for QPEN and added the function icamp.cate() to summarize iCAMP results for different categories of taxa (e.g. core versus rare taxa).
Pooling, backward and forward selection of linear, logistic and Cox regression models in multiply imputed datasets. Backward and forward selection can be done from the pooled model using Rubin's Rules (RR), the D1, D2, D3, D4 and the median p-values method. This is also possible for mixed models. The models can contain continuous, dichotomous, categorical and restricted cubic spline predictors and interaction terms between all these types of predictors. The stability of the models can be evaluated using (cluster) bootstrapping. The package further contains functions to pool model performance measures such as ROC/AUC, reclassification, R-squared, scaled Brier score, the Hosmer-Lemeshow test and calibration plots for logistic regression models. Internal validation can be done across multiply imputed datasets with cross-validation or bootstrapping. The adjusted intercept after shrinkage of pooled regression coefficients can be obtained. Backward and forward selection as part of internal validation is possible. A function to externally validate logistic prediction models in multiply imputed datasets is available, as is a function to compare models. For Cox models a strata variable can be included. Eekhout (2017) <doi:10.1186/s12874-017-0404-7>. Wiel (2009) <doi:10.1093/biostatistics/kxp011>. Marshall (2009) <doi:10.1186/1471-2288-9-57>.
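For reference, Rubin's Rules pooling of m = 5 imputed-data estimates in base R (the package automates this, including within the selection methods above); the numbers are invented:

```r
# Worked base-R example of Rubin's Rules for pooling across imputations.
est <- c(0.52, 0.47, 0.55, 0.49, 0.51)       # coefficient from each imputed fit
u   <- c(0.010, 0.012, 0.011, 0.010, 0.013)  # corresponding squared SEs
m <- length(est)
qbar <- mean(est)                # pooled estimate
ubar <- mean(u)                  # within-imputation variance
b    <- var(est)                 # between-imputation variance
tvar <- ubar + (1 + 1 / m) * b   # total variance
c(pooled = qbar, se = sqrt(tvar))
```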
This package provides a framework of interoperable R6 classes (Chang, 2020, <https://CRAN.R-project.org/package=R6>) for building ensembles of viable models via the pattern-oriented modeling (POM) approach (Grimm et al., 2005, <doi:10.1126/science.1116681>). The package includes classes for encapsulating and generating model parameters, and for managing the POM workflow. The workflow includes: model setup; generating model parameters via Latin hypercube sampling (Iman & Conover, 1980, <doi:10.1080/03610928008827996>); running multiple sampled model simulations; collating summary results; and validating and selecting an ensemble of models that best match known patterns. By default, model validation and selection utilize an approximate Bayesian computation (ABC) approach (Beaumont et al., 2002, <doi:10.1093/genetics/162.4.2025>), although alternative user-defined functionality could be employed. The package includes a spatially explicit demographic population model simulation engine, which incorporates default functionality for density dependence, correlated environmental stochasticity, stage-based transitions, and distance-based dispersal. The user may customize the simulator by defining functionality for translocations, harvesting, mortality, and other processes, as well as defining the sequence order of the simulator processes. The framework could also be adapted for use with other model simulators by utilizing its extendable (inheritable) base classes.
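A sketch of the Latin hypercube step using the separate 'lhs' package (the framework wraps such sampling internally); parameter names and ranges are invented:

```r
# Sketch of Latin hypercube sampling with 'lhs', rescaled to parameter ranges.
library(lhs)
set.seed(1)
unit <- randomLHS(100, 3)   # 100 samples of 3 parameters on [0, 1]
pars <- data.frame(growth    = qunif(unit[, 1], 1.0, 1.5),
                   survival  = qunif(unit[, 2], 0.6, 0.95),
                   dispersal = qbeta(unit[, 3], 2, 5))
head(pars)
```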
While some non-coding RNAs (ncRNAs) are assigned critical regulatory roles, most remain functionally uncharacterized. This presents a challenge whenever an interesting set of ncRNAs needs to be analyzed in a functional context. Transcripts located close by on the genome are often regulated together, and this genomic proximity can hint at a functional association. We present a tool, NoRCE, that performs cis enrichment analysis for a given set of ncRNAs. Enrichment is carried out using the functional annotations of the coding genes located proximal to the input ncRNAs. Other biologically relevant information, such as topologically associating domain (TAD) boundaries, co-expression patterns, and miRNA target prediction information, can be incorporated to conduct a richer enrichment analysis. To this end, NoRCE includes several relevant datasets as part of its data repository, including cell-line-specific TAD boundaries, functional gene sets, and cancer-specific expression data for coding genes and ncRNAs. Additionally, users can utilize custom data files in their investigation. Enrichment results can be retrieved in tabular format or visualized in several different ways. NoRCE is currently available for the following species: human, mouse, rat, zebrafish, fruit fly, worm, and yeast.
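A conceptual sketch of the proximity step using Bioconductor's GenomicRanges (NoRCE's own interface differs); all coordinates and gene names are invented:

```r
# Conceptual sketch: find coding genes within 10 kb of input ncRNAs.
library(GenomicRanges)
ncrna <- GRanges("chr1", IRanges(c(5000, 90000), width = 200))
genes <- GRanges("chr1", IRanges(c(7000, 40000, 91000), width = 1000),
                 gene = c("G1", "G2", "G3"))
hits <- distanceToNearest(ncrna, genes)
genes$gene[subjectHits(hits)][mcols(hits)$distance < 10000]  # neighbours < 10 kb
```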
This package provides functions that facilitate the import and analysis of SNP (single nucleotide polymorphism) and silicodart (presence/absence) data. The main focus is on data generated by DArT (Diversity Arrays Technology); however, data from other sequencing platforms can be used once SNP or related fragment presence/absence data from any source is imported. Genetic datasets are stored in a derived genlight format (package 'adegenet') that allows very compact storage of data and metadata. Functions are available for importing and exporting SNP and silicodart data, and for reporting on and filtering by various criteria (e.g. call rate, heterozygosity, reproducibility, maximum allele frequency). Additional functions are available for visualization (e.g. Principal Coordinates Analysis) and for creating spatial representations using maps. dartR also supports the analysis of third-party software packages such as 'newhybrids', 'structure', 'NeEstimator' and 'blast'. Since version 2.0.3 we have also implemented simulation functions that allow forward simulation of SNP dynamics under different population and evolutionary dynamics. Comprehensive tutorials and support can be found at our GitHub repository: <https://github.com/green-striped-gecko/dartR/>. If you want to cite dartR, you can find the information by typing citation('dartR') in the console.
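A hedged workflow sketch; the function names follow the dartR documentation as recalled here (gl.read.dart(), gl.filter.callrate(), gl.pcoa()) and the file names are placeholders:

```r
# Hedged workflow sketch; verify function names and arguments locally.
# library(dartR)
# gl <- gl.read.dart(filename = "Report_DArT.csv", ind.metafile = "metadata.csv")
# gl <- gl.filter.callrate(gl, method = "loc", threshold = 0.95)  # drop poor loci
# pc <- gl.pcoa(gl)                                               # PCoA
# gl.pcoa.plot(pc, gl)
```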
This package implements a unified method for designing and analysing dose-finding trials in paediatrics while bridging information from adults. The dose range can be calculated under three extrapolation methods (linear, allometry and maturation adjustment), using pharmacokinetic (PK) data; to do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships can be obtained from early-phase adult toxicity and efficacy data at several dose levels. Priors are incorporated into the dose-finding process through Bayesian model selection or adaptive priors, to facilitate adjusting the amount of prior information to the differences between adults and children; this calibrates the model to adjust for misspecification if the adult and paediatric data are very different. Users can supply their own Bayesian model written in Stan code; a template of such a model is provided in the examples of the corresponding R functions in the package. Finally, the package includes a simulation function for a single trial or for multiple trials. These methods are proposed by Petit et al. (2016) <doi:10.1177/0962280216671348>.
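A worked base-R sketch of the allometric extrapolation idea, scaling an adult dose by body weight raised to the conventional exponent 0.75 (illustrative only, not the package's functions):

```r
# Worked sketch of allometric dose scaling from a 70 kg adult reference.
allometric_dose <- function(adult_dose, child_weight, adult_weight = 70) {
  adult_dose * (child_weight / adult_weight)^0.75
}
allometric_dose(adult_dose = 100, child_weight = 20)  # ~39 for a 20 kg child
```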
Develops algorithms for fitting, prediction, simulation and initialization of the following models: (1) the hidden hybrid Markov/semi-Markov model, introduced by Guedon (2005) <doi:10.1016/j.csda.2004.05.033>; (2) nonparametric mixtures of B-splines emissions (Langrock et al., 2015 <doi:10.1111/biom.12282>); (3) the regime-switching regression model (Kim et al., 2008 <doi:10.1016/j.jeconom.2007.10.002>) and the auto-regressive hidden hybrid Markov/semi-Markov model; (4) spline-based nonparametric estimation of additive state-switching models (Langrock et al., 2018 <doi:10.1111/stan.12133>); (5) the robust emission model proposed by Qin et al. (2024) <doi:10.1007/s10479-024-05989-4>; (6) several emission distributions, including mixtures of multivariate normals (which can also handle missing data using an EM algorithm) and multinomial emissions (for modeling polymer or DNA sequences); and (7) tools for predicting future state sequences, computing the score of a new sequence, splitting samples and sequences into train and test sets, computing information measures of the models, computing the residual useful lifetime (reliability), and many other useful tools (for more description, see Amini et al., 2022 <doi:10.1007/s00180-022-01248-x> and its arXiv version <doi:10.48550/arXiv.2109.12489>).
Datasets and functions for the book "Statistiques pour l'économie et la gestion", "Théorie et applications en entreprise", F. Bertrand, Ch. Derquenne, G. Dufrénot, F. Jawadi and M. Maumy, C. Borsenberger editor (2021, ISBN:9782807319448, De Boeck Supérieur, Louvain-la-Neuve). The first chapter of the book is an introduction to statistics and their world. The second chapter deals with univariate exploratory statistics and graphics. The third chapter deals with bivariate and multivariate exploratory statistics and graphics. The fourth chapter is dedicated to data exploration with Principal Component Analysis. The fifth chapter is dedicated to data exploration with Correspondence Analysis. The sixth chapter is dedicated to data exploration with Multiple Correspondence Analysis. The seventh chapter is dedicated to data exploration with automatic clustering. The eighth chapter is an introduction to probability theory and classical probability distributions. The ninth chapter is dedicated to estimation theory and one-sample and two-sample tests. The tenth chapter is dedicated to the Gaussian linear model. The eleventh chapter is an introduction to time series. The twelfth chapter is an introduction to probit and logit models. Various example datasets ship with the package, as well as some new functions.
This package performs the Cram method, a general and efficient approach to simultaneous learning and evaluation using a generic machine learning algorithm. In a single pass of batched data, the method repeatedly trains a machine learning algorithm and tests its empirical performance. Because it utilizes the entire sample for both learning and evaluation, cramming is significantly more data-efficient than sample-splitting. Unlike cross-validation, Cram evaluates the final learned model directly, providing sharper inference aligned with real-world deployment. The method naturally applies to both policy learning and contextual bandits, where decisions are based on individual features to maximize outcomes. The package includes cram_policy() for learning and evaluating individualized binary treatment rules, cram_ml() to train and assess the population-level performance of machine learning models, and cram_bandit() for on-policy evaluation of contextual bandit algorithms. For all three functions, the package provides estimates of the average outcome that would result if the model were deployed, along with standard errors and confidence intervals for these estimates. Details of the method are described in Jia, Imai, and Li (2024) <https://www.hbs.edu/ris/Publication%20Files/2403.07031v1_a83462e0-145b-4675-99d5-9754aa65d786.pdf> and Jia et al. (2025) <doi:10.48550/arXiv.2403.07031>.
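A conceptual base-R sketch of the cramming idea (not the package's API): train on batches 1..t, score each update on batch t+1, and accumulate the improvements, so every observation serves both learning and evaluation:

```r
# Conceptual sketch: a simple mean estimator crammed over 5 batches.
set.seed(1)
y <- rnorm(500, mean = 2)
batch <- rep(1:5, each = 100)
pred_prev <- 0                    # baseline model before seeing any data
gains <- numeric(4)
for (t in 1:4) {
  pred_t <- mean(y[batch <= t])   # model updated with batch t
  test <- y[batch == t + 1]       # evaluated on the next batch
  gains[t] <- mean((test - pred_prev)^2 - (test - pred_t)^2)  # loss improvement
  pred_prev <- pred_t
}
sum(gains)                        # accumulated improvement of the final model
```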