Facilitates the import and analysis of SNP (single nucleotide polymorphism) and SilicoDArT (presence/absence) data. The main focus is on data generated by DArT (Diversity Arrays Technology); however, data from other sequencing platforms can be used once SNP or related fragment presence/absence data from any source is imported. Genetic datasets are stored in a derived genlight format (package 'adegenet'), which allows for very compact storage of data and metadata. Functions are available for importing and exporting SNP and SilicoDArT data, and for reporting on and filtering by various criteria (e.g. 'callrate', 'heterozygosity', 'reproducibility', maximum allele frequency). Additional functions are available for visualization (e.g. Principal Coordinates Analysis) and for creating a spatial representation using maps. dartR.base is the base package of the dartRverse suite of packages. To install the other packages, we recommend installing the dartRverse package, which supports the installation of all packages in the dartRverse. To cite 'dartR', type citation('dartR.base') in the console.
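For illustration, a minimal workflow sketch (assuming the standard dartR function names gl.read.dart(), gl.filter.callrate() and gl.pcoa(); file names are placeholders):

    library(dartR.base)
    gl <- gl.read.dart(filename = "SNP_data.csv", ind.metafile = "metadata.csv")  # import DArT SNP data
    gl <- gl.filter.callrate(gl, method = "loc", threshold = 0.95)                # drop loci with low call rate
    pc <- gl.pcoa(gl)                                                             # Principal Coordinates Analysis
    gl.pcoa.plot(pc, gl)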
Recent gcc and clang compiler versions provide functionality to test for memory violations and other undefined behaviour; this is often referred to as "Address Sanitizer" ('ASAN') and "Undefined Behaviour Sanitizer" ('UBSAN'). The Writing R Extensions manual describes this in some detail in Section 4.3, "Checking Memory Access". This feature has to be enabled in the corresponding binary, e.g. in R itself, which is somewhat involved as it also requires a current compiler toolchain, which is not yet widely available or, in the case of Windows, not available at all (via the common Rtools mechanism). As an alternative, pre-built Docker containers such as the Rocker container r-devel-san or the multi-purpose container r-debug can be used. This package then provides a means of testing the compiler setup, as the known code failures provided in the sample code here should be detected correctly, whereas a default build of R will let the package pass. The code samples are based on the examples from the Address Sanitizer Wiki at <https://github.com/google/sanitizers/wiki>.
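A sketch of how the test functions are meant to be exercised (function names assumed from the package's sample code; on a sanitizer-enabled build of R each call should abort with an ASAN report, while a default build passes silently):

    library(sanitizers)
    stackAddressSanitize(42)  # deliberate out-of-bounds stack read
    heapAddressSanitize(42)   # deliberate out-of-bounds heap read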
This package provides methods for analyzing (cell) motion in two or three dimensions. Available measures include displacement, confinement ratio, autocorrelation, straightness, turning angle, and fractal dimension. Measures can be applied to entire tracks, steps, or subtracks with varying length. While the methodology has been developed for cell trajectory analysis, it is applicable to anything that moves, including animals, people, or vehicles. Some of the methodology implemented in this package was described by: Beauchemin, Dixit, and Perelson (2007) <doi:10.4049/jimmunol.178.9.5505>, Beltman, Maree, and de Boer (2009) <doi:10.1038/nri2638>, Gneiting and Schlather (2004) <doi:10.1137/S0036144501394387>, Mokhtari, Mech, Zitzmann, Hasenberg, Gunzer, and Figge (2013) <doi:10.1371/journal.pone.0080808>, Moreau, Lemaitre, Terriac, Azar, Piel, Lennon-Dumenil, and Bousso (2012) <doi:10.1016/j.immuni.2012.05.014>, Textor, Peixoto, Henrickson, Sinn, von Andrian, and Westermann (2011) <doi:10.1073/pnas.1102288108>, Textor, Sinn, and de Boer (2013) <doi:10.1186/1471-2105-14-S6-S10>, Textor, Henrickson, Mandl, von Andrian, Westermann, de Boer, and Beltman (2014) <doi:10.1371/journal.pcbi.1003752>.
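A short sketch using the package's built-in T cell data (assuming the TCells dataset and the straightness(), displacement() and subtracks() functions shipped with the package):

    library(celltrackR)
    data(TCells)                                      # example two-photon microscopy tracks
    sapply(TCells, straightness)                      # one straightness value per track
    mean(sapply(subtracks(TCells, 5), displacement))  # mean displacement over 5-step subtracks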
This package provides a Bayesian approach to using predictive probability in an ANOVA construct with a continuous normal response, when threshold values must be obtained for the question of interest to be evaluated as successful (Sieck and Christensen (2021) <doi:10.1002/qre.2802>). The Bayesian Mission Mean (BMM) is used to evaluate a question of interest (that is, a mean that randomly selects a combination of factor levels based on their probability of occurring, instead of averaging over the factor levels as in the grand mean). Under this construct, in contrast to a Gibbs sampler (or Metropolis-within-Gibbs sampler), a two-stage sampling method is required. The nested sampler determines the conditional posterior distribution of the model parameters, given Y, and the outside sampler determines the marginal posterior distribution of Y (also commonly called the predictive distribution for Y). This approach provides a sample from the joint posterior distribution of Y and the model parameters, while also accounting for the threshold value that must be obtained in order for the question of interest to be evaluated as successful.
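The two-stage idea can be illustrated with a conjugate toy example (a sketch of sampling the joint posterior of Y and the parameters only, not the package's BMM implementation):

    set.seed(1)
    y_obs <- rnorm(20, mean = 5)   # toy data; known variance 1, flat prior on the mean
    S <- 5000
    mu    <- rnorm(S, mean = mean(y_obs), sd = 1 / sqrt(length(y_obs)))  # stage 1: posterior of the mean
    y_rep <- rnorm(S, mean = mu, sd = 1)                                 # stage 2: predictive draws of Y
    mean(y_rep > 4)                # predictive probability that Y exceeds a threshold of 4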
Takes as input a stable oxygen isotope (d18O) profile measured in growth direction (D) through a shell, plus uncertainties in both variables (d18O_err & D_err). It then models the seasonality in the d18O record by fitting a combination of a growth and a temperature sine wave to year-length chunks of the data (see Judd et al. (2018) <doi:10.1016/j.palaeo.2017.09.034>). This modeling is carried out along a sliding window through the data and yields estimates of the day of the year (Julian Day) and the local growth rate for each data point. Uncertainties in both the modeling routine and the data itself are propagated and pooled to obtain a confidence envelope around the age of each data point in the shell. The end result is a shell chronology consisting of estimated ages of shell formation relative to the annual cycle, with their uncertainties. All formulae in the package serve this purpose, but the user can customize the model (e.g. the number of days in a year and the mineralogy of the shell carbonate) through input parameters.
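The core sine-fitting idea can be sketched in base R (a toy illustration only; the package's actual routine models growth rate and temperature jointly and propagates the stated uncertainties):

    # Fit a seasonal sine to one year-length chunk of a d18O record:
    D <- seq(0, 365, by = 5)
    d18O <- 1.5 * sin(2 * pi * D / 365) + rnorm(length(D), 0, 0.1)
    fit <- nls(d18O ~ A * sin(2 * pi * (D - phi) / 365),
               start = list(A = 1, phi = 0))
    coef(fit)  # amplitude and phase estimates for this chunk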
This package provides a set of exploratory data analysis (EDA) tools for visualizing trends, diagnosing data types for beginner-friendly workflows, and automatically routing to suitable statistical tests or trend exploration models. Includes unified plotting functions for trend lines, grouped boxplots, and comparative scatterplots; automated statistical testing (e.g., t-test, Wilcoxon, ANOVA, Kruskal-Wallis, Tukey, Dunn) with optional effect size calculation; and model-based trend analysis using generalized additive models (GAM) for count data, generalized linear models (GLM) for continuous data, and zero-inflated models (ZIP/ZINB) for count data with potential zero-inflation. Also supports time-window continuity checks, cross-year handling in compare_monthly_cases(), and ARIMA-ready preparation with stationarity diagnostics, ensuring consistent parameter styles for reproducible research and user-friendly workflows. Methods are based on R Core Team (2024) <https://www.R-project.org/>, Wood, S.N. (2017, ISBN:978-1498728331), Hyndman RJ, Khandakar Y (2008) <doi:10.18637/jss.v027.i03>, Simon Jackman (2024) <https://github.com/atahk/pscl/>, Achim Zeileis, Christian Kleiber, Simon Jackman (2008) <doi:10.18637/jss.v027.i08>.
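A hypothetical call to the cross-year helper mentioned above (argument names are illustrative, not confirmed; consult the package help pages):

    # compare monthly case counts across years, with cross-year handling:
    compare_monthly_cases(data = cases_df, date_col = "date", value_col = "count")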
This package provides functions to classify mass spectra in known categories and to determine discriminant mass-to-charge values (m/z). Includes easy-to-use preprocessing pipelines for Matrix-Assisted Laser Desorption Ionisation Time-Of-Flight (MALDI-TOF) mass spectra, methods to select discriminant m/z from labelled libraries, and tools to predict categories (species, phenotypes, etc.) from selected features. Also provides utilities to build design matrices from peak intensities and labels. While this package was developed with the aim of identifying very similar species or phenotypes of bacteria from MALDI-TOF MS, its functions can also be used to classify other categories associated with mass spectra, or mass spectra obtained with other mass spectrometry techniques. Parallelized processing and optional C++-accelerated functions are available (notably to deal with large datasets) from version 0.5.0. If you use this package in your research, please cite the associated publication (<doi:10.1016/j.eswa.2025.128796>). For a comprehensive guide, additional applications, and detailed examples, see <https://github.com/agodmer/MSclassifR_examples>.
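A preprocessing sketch, assuming the SignalProcessing() and PeakDetection() helpers described in the package documentation (input object and default arguments omitted):

    library(MSclassifR)
    spectra <- SignalProcessing(raw_spectra)  # smoothing, baseline correction, normalization
    peaks   <- PeakDetection(spectra)         # peak picking prior to building the design matrix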
This package provides functions for basic hydraulic calculations related to water flow in circular pipes, both flowing full (under pressure) and partially full (gravity flow), and in trapezoidal open channels. For pressure flow this includes friction loss calculations by solving the Darcy-Weisbach equation for head loss, flow or diameter, plotting a Moody diagram, matching a pump characteristic curve to a system curve, and solving for flows in a pipe network using the Hardy-Cross method. The Darcy-Weisbach friction factor is calculated using the Colebrook (or Colebrook-White) equation, the basis of the Moody diagram, the original citation being Colebrook (1939) <doi:10.1680/ijoti.1939.13150>. For gravity flow, the Manning equation is used, again solving for missing parameters. The derivation of and solutions using the Darcy-Weisbach equation and the Manning equation are outlined in many fluid mechanics texts, such as Finnemore and Maurer (2024, ISBN:978-1-264-78729-6). Some gradually- and rapidly-varied flow functions are included. For the Manning equation solutions, this package uses modifications of original code from the iemisc package by Irucka Embry.
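For example, solving the Darcy-Weisbach equation for head loss (argument names as in the package documentation; numeric values are placeholders):

    library(hydraulics)
    darcyweisbach(Q = 0.02, D = 0.1, L = 100, ks = 0.0002,
                  nu = 1.023e-6, units = "SI")  # solves for the missing variable, here head loss
    moody()                                     # draw a Moody diagram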
Self-Consistent Field (SCF) calculation is one of the most important steps in quantum chemistry calculation methods (Ehrenreich, H. & Cohen, M. H. (1959) <doi:10.1103/PhysRev.115.786>). However, the SCF convergence process of Gaussian (M.J. Frisch, G.W. Trucks, H.B. Schlegel et al. (2016) <https://gaussian.com>), the most prevalent software in this area, is hard to monitor, especially while the job is still running; researchers have difficulty knowing whether oscillation has started, wasting time and energy on useless configurations or abandoning jobs that could actually work. SCFMonitor enables users of the Gaussian quantum chemistry calculation software to easily read Gaussian .log files and monitor the SCF convergence and geometry optimization process with little effort and clear, beautiful, and clean outputs. It can generate graphs using tidyverse to let users check SCF convergence and geometry optimization processes in real time. The software supports processing .log files remotely via base::url(). This package is a toolkit for saving researchers time and energy, supporting multiple versions of Gaussian.
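The remote-reading mechanism relies on plain base R connections; a minimal sketch (the URL is a placeholder, and the monitoring itself is done by the package's own functions):

    con <- url("http://example.com/running_job.log")  # base::url() connection to a remote .log file
    log_lines <- readLines(con)
    close(con)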
This package provides functions for forward population genetic simulation in asexual populations, with special focus on cancer progression. Fitness can be an arbitrary function of genetic interactions between multiple genes or modules of genes, including epistasis, order restrictions in mutation accumulation, and order effects. Fitness (including just birth, just death, or both birth and death) can also be a function of the relative and absolute frequencies of other genotypes (i.e., frequency-dependent fitness). Mutation rates can differ between genes, and we can include mutator/antimutator genes (to model mutator phenotypes). Simulating multi-species scenarios and therapeutic interventions, including adaptive therapy, is also possible. Simulations use continuous-time models and can include driver and passenger genes and modules. Also included are functions for: simulating random DAGs of the type found in Oncogenetic Trees, Conjunctive Bayesian Networks, and other cancer progression models; plotting and sampling from single or multiple realizations of the simulations, including single-cell sampling; plotting the parent-child relationships of the clones; generating random fitness landscapes (Rough Mount Fuji, House of Cards, additive, NK, Ising, and Eggbox models) and plotting them.
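A minimal sketch following the package vignette: specify an epistatic fitness landscape for two genes and simulate one trajectory (parameter values are placeholders):

    library(OncoSimulR)
    fe  <- allFitnessEffects(epistasis = c("A" = 0.1, "B" = 0.2, "A : B" = 0.3))
    out <- oncoSimulIndiv(fe, model = "McFL", finalTime = 500)  # McFarland continuous-time model
    plot(out)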
Fits ordinal regression models with elastic net penalty. Supported model families include cumulative probability, stopping ratio, continuation ratio, and adjacent category. These families are a subset of vector GLMs, which belong to a model class we call the elementwise link multinomial-ordinal (ELMO) class. Each family in this class links a vector of covariates to a vector of class probabilities. Each of these families has a parallel form, which is appropriate for ordinal response data, as well as a nonparallel form that is appropriate for an unordered categorical response, or as a more flexible model for ordinal data. The parallel model has a single set of coefficients, whereas the nonparallel model has a set of coefficients for each response category except the baseline category. It is also possible to fit a model with both parallel and nonparallel terms, which we call the semi-parallel model. The semi-parallel model has the flexibility of the nonparallel model, but the elastic net penalty shrinks it toward the parallel model. For details, refer to Wurm, Hanlon, and Rathouz (2021) <doi:10.18637/jss.v099.i06>.
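A small sketch of the semi-parallel fit on simulated data (family and penalty arguments follow the package interface):

    library(ordinalNet)
    x <- matrix(rnorm(100 * 5), 100, 5)
    y <- factor(sample(1:3, 100, replace = TRUE), ordered = TRUE)
    fit <- ordinalNet(x, y, family = "cumulative", link = "logit",
                      parallelTerms = TRUE, nonparallelTerms = TRUE)  # semi-parallel model
    coef(fit)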
Symbolic data analysis methods: importing/exporting data from ASSO XML Files, distance calculation for symbolic data (Ichino-Yaguchi, de Carvalho measure), zoom star plot, 3d interval plot, multidimensional scaling for symbolic interval data, dynamic clustering based on distance matrix, HINoV method for symbolic data, Ichino's feature selection method, principal component analysis for symbolic interval data, decision trees for symbolic data based on optimal split with bagging, boosting and random forest approach (+visualization), kernel discriminant analysis for symbolic data, Kohonen's self-organizing maps for symbolic data, replication and profiling, artificial symbolic data generation. (Milligan, G.W., Cooper, M.C. (1985) <doi:10.1007/BF02294245>, Breiman, L. (1996) <doi:10.1007/BF00058655>, Hubert, L., Arabie, P. (1985) <doi:10.1007/BF01908075>, Ichino, M., & Yaguchi, H. (1994) <doi:10.1109/21.286391>, Rand, W.M. (1971) <doi:10.1080/01621459.1971.10482356>, Breckenridge, J.N. (2000) <doi:10.1207/S15327906MBR3502_5>, Groenen, P.J.F., Winsberg, S., Rodriguez, O., Diday, E. (2006) <doi:10.1016/j.csda.2006.04.003>, Dudek, A. (2007) <doi:10.1007/978-3-540-70981-7_4>).
Efficient sampling of truncated multivariate (scale) mixtures of normals under linear inequality constraints is nontrivial due to the analytically intractable normalizing constant. Meanwhile, traditional methods may be subject to numerical issues, especially when the dimension is high and dependence is strong. Algorithms proposed by Li and Ghosh (2015) <doi:10.1080/15598608.2014.996690> are adopted for overcoming difficulties in simulating truncated distributions. Efficient rejection sampling for simulating the truncated univariate normal distribution is included in the package, which shows superiority in terms of acceptance rate and numerical stability compared to existing methods and R packages. An efficient function for sampling from the truncated multivariate normal distribution subject to convex polytope restriction regions, based on a Gibbs sampler for the conditional truncated univariate distribution, is provided. By extending the sampling method, a function for sampling the truncated multivariate Student's t distribution is also developed. Moreover, the proposed method and computation remain valid for high-dimensional and strong-dependence scenarios. Empirical results in Li and Ghosh (2015) <doi:10.1080/15598608.2014.996690> illustrate the superior performance in terms of various criteria (e.g. mixing and integrated autocorrelation time).
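A sketch of sampling a truncated bivariate normal under box constraints, assuming the rtmvn() interface documented for the package:

    library(tmvmixnorm)
    S <- matrix(c(1, 0.8, 0.8, 1), 2, 2)
    draws <- rtmvn(n = 1000, Mean = c(0, 0), Sigma = S,
                   D = diag(2),                 # linear restriction matrix: lower <= D x <= upper
                   lower = c(-1, -1), upper = c(1, 1), burn = 10)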
Thematic quality indices are provided to facilitate the evaluation and quality control of geospatial data products (e.g. thematic maps, remote sensing classifications, etc.). The indices offered are based on the so-called confusion matrix. This matrix is constructed by comparing the assigned classes or attributes of a set of pairs of positions or objects in the product and the ground truth. In this package it is considered that the classes of the ground truth correspond to the columns and the classes of the product being evaluated correspond to the rows. The package offers two object classes with their methods: ConfMatrix (confusion matrix) and QCCS (Quality Control Columns Set). The ConfMatrix class offers more than 20 methods based on the confusion matrix. The QCCS class offers a different perspective, in which the ground truth is used to fix the values of the column marginals; see Ariza López et al. (2019) <doi:10.3390/app9204240> and Liu et al. (2007) <doi:10.1016/j.rse.2006.10.010> for more details. The package was created with 'R6'.
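A hypothetical construction of a ConfMatrix object (the constructor arguments are assumptions; see the class documentation for the exact fields):

    library(ConfMatrix)
    m <- matrix(c(50, 3, 2, 45), nrow = 2, byrow = TRUE)  # rows = product classes, columns = ground truth
    cm <- ConfMatrix$new(m)                               # then apply any of the 20+ index methods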
Calculation of distances, shortest paths and isochrones on weighted graphs using several variants of Dijkstra's algorithm. Proposed algorithms are unidirectional Dijkstra (Dijkstra, E. W. (1959) <doi:10.1007/BF01386390>), bidirectional Dijkstra (Goldberg, Andrew & Fonseca F. Werneck, Renato (2005) <https://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%20path%20algorithms.pdf>), A* search (P. E. Hart, N. J. Nilsson and B. Raphael (1968) <doi:10.1109/TSSC.1968.300136>), new bidirectional A* (Pijls & Post (2009) <https://repub.eur.nl/pub/16100/ei2009-10.pdf>), Contraction hierarchies (R. Geisberger, P. Sanders, D. Schultes and D. Delling (2008) <doi:10.1007/978-3-540-68552-4_24>), PHAST (D. Delling, A. Goldberg, A. Nowatzyk, R. Werneck (2011) <doi:10.1016/j.jpdc.2012.02.007>). Algorithms for solving the traffic assignment problem are All-or-Nothing assignment, Method of Successive Averages, Frank-Wolfe algorithm (M. Fukushima (1984) <doi:10.1016/0191-2615(84)90029-8>), Conjugate and Bi-Conjugate Frank-Wolfe algorithms (M. Mitradjieva, P. O. Lindberg (2012) <doi:10.1287/trsc.1120.0409>), Algorithm-B (R. B. Dial (2006) <doi:10.1016/j.trb.2006.02.008>).
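A minimal sketch of building a graph and querying shortest-path distances (makegraph() expects a from/to/cost data frame; function names per the package documentation):

    library(cppRouting)
    edges <- data.frame(from = c("a", "a", "b"), to = c("b", "c", "c"), cost = c(1, 3, 1))
    graph <- makegraph(edges, directed = TRUE)
    get_distance_matrix(graph, from = "a", to = c("b", "c"))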
Since their introduction by Bose and Nair (1939) <https://www.jstor.org/stable/40383923>, partially balanced incomplete block (PBIB) designs remain an important class of incomplete block designs. The concept of an association scheme was used by Bose and Shimamoto (1952) <doi:10.1080/01621459.1952.10501161> for the classification of these designs. The constraint of resources always motivates the experimenter to advance from balanced incomplete block designs towards PBIB designs, more specifically towards higher associate class PBIB designs. It is interesting to note that higher associate PBIB designs often perform better than their lower associate counterparts for the same set of parameters v, b, r, k and lambda_i (i = 1, 2, ..., m). This package contains a function named GETD() for generating m-associate (m >= 2) class PBIB designs, along with their parameters (v, b, r, k and lambda_i, i = 1, 2, ..., m), based on the Generalized Triangular (GT) association scheme. It also calculates the information matrix, average variance factor and canonical efficiency factor of the generated design. These designs, besides having good efficiency, require a smaller number of replications and the smallest possible concurrence of treatment pairs.
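A hypothetical call (the exact parameterization of GETD() should be checked in its help page):

    d <- GETD(v = 10)  # generate a GT-scheme PBIB design; argument name illustrative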
Different inference procedures have been proposed in the literature to correct for selection bias that might be introduced with non-random selection mechanisms. One class of methods to correct for selection bias is to apply a statistical model to predict the units not in the sample (super-population modeling). Other studies use calibration or Statistical Matching (statistically matching nonprobability and probability samples). To date, the most relevant methods are based on weighting by Propensity Score Adjustment (PSA). The Propensity Score Adjustment method was originally developed to construct weights by estimating response probabilities and using them in Horvitz-Thompson type estimators. This method is usually applied by combining a non-probability sample with a reference sample to construct propensity models for the non-probability sample. Calibration can be applied afterwards to add information from auxiliary variables. Propensity scores in PSA are usually estimated using logistic regression models. Machine learning classification algorithms can be used as alternatives to logistic regression for estimating propensities. The package NonProbEst implements some of these methods and thus provides a wide range of options for working with data coming from a non-probabilistic sample.
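A hypothetical PSA sketch, assuming the propensities() interface from the package documentation (sample objects and covariate names are placeholders):

    library(NonProbEst)
    p <- propensities(convenience_sample, reference_sample,
                      covariates = c("age", "sex"), algorithm = "glm")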
This package provides a collection of standard factor retention methods in Exploratory Factor Analysis (EFA), making it easier to determine the number of factors. Traditional methods such as the scree plot by Cattell (1966) <doi:10.1207/s15327906mbr0102_10>, Kaiser-Guttman Criterion (KGC) by Guttman (1954) <doi:10.1007/BF02289162> and Kaiser (1960) <doi:10.1177/001316446002000116>, and flexible Parallel Analysis (PA) by Horn (1965) <doi:10.1007/BF02289447> based on eigenvalues from PCA or EFA are readily available. This package also implements several newer methods, such as the Empirical Kaiser Criterion (EKC) by Braeken and van Assen (2017) <doi:10.1037/met0000074>, Comparison Data (CD) by Ruscio and Roche (2012) <doi:10.1037/a0025697>, and Hull method by Lorenzo-Seva et al. (2011) <doi:10.1080/00273171.2011.564527>, as well as some AI-based methods like Comparison Data Forest (CDF) by Goretzko and Ruscio (2024) <doi:10.3758/s13428-023-02122-4> and Factor Forest (FF) by Goretzko and Buhner (2020) <doi:10.1037/met0000262>. Additionally, it includes a deep neural network (DNN) trained on large-scale datasets that can efficiently and reliably determine the number of factors.
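As a point of reference, the Kaiser-Guttman rule itself is a one-liner in base R (an illustration of the criterion, not this package's implementation):

    R <- cor(mtcars)
    sum(eigen(R)$values > 1)  # number of factors retained under the eigenvalue > 1 rule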
Publication bias, the fact that studies identified for inclusion in a meta-analysis do not represent all studies on the topic of interest, is commonly recognized as a threat to the validity of the results of a meta-analysis. One way to explicitly model publication bias is via selection models or weighted probability distributions. In this package we provide implementations of several parametric and nonparametric weight functions. The novelty in Rufibach (2011) is the proposal of a non-increasing variant of the nonparametric weight function of Dear & Begg (1992). The new approach potentially offers more insight into the selection process than other methods, while being more flexible than parametric approaches. To maximize the log-likelihood function proposed by Dear & Begg (1992) under a monotonicity constraint, we use a differential evolution algorithm proposed by Ardia et al. (2010a, b) and implemented in Mullen et al. (2009). In addition, we offer a method to compute a confidence interval for the overall effect size theta, adjusted for selection bias, as well as a function that computes the simulation-based p-value to assess the null hypothesis of no selection, as described in Rufibach (2011, Section 6).
Incorporates functions for image preprocessing, filtering and image recognition. The package takes advantage of RcppArmadillo to speed up computationally intensive functions. The histogram of oriented gradients descriptor is a modification of the findHOGFeatures function of the SimpleCV computer vision platform; the average_hash(), dhash() and phash() functions are based on the ImageHash python library. The Gabor Feature Extraction functions are based on the MATLAB code of the paper "CloudID: Trustworthy cloud-based and cross-enterprise biometric identification" by M. Haghighat, S. Zonouz, M. Abdel-Mottaleb, Expert Systems with Applications, vol. 42, no. 21, pp. 7905-7916, 2015, <doi:10.1016/j.eswa.2015.06.025>. The SLIC and SLICO superpixel algorithms are explained in detail in (i) "SLIC Superpixels Compared to State-of-the-art Superpixel Methods", Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Suesstrunk, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, num. 11, p. 2274-2282, May 2012, <doi:10.1109/TPAMI.2012.120> and (ii) "SLIC Superpixels", Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Suesstrunk, EPFL Technical Report no. 149300, June 2010.
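A short sketch of typical calls (the file path is a placeholder; function names as exported by the package):

    library(OpenImageR)
    img  <- readImage("image.png")
    gray <- rgb_2gray(img)
    hog  <- HOG(gray, cells = 3, orientations = 6)          # histogram of oriented gradients
    h    <- average_hash(gray, hash_size = 8)               # perceptual hash of the grayscale image
    sp   <- superpixels(input_image = img, method = "slic",
                        superpixel = 200, return_labels = TRUE)  # SLIC segmentation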
Neighbour-balanced designs ensure that no treatment is disadvantaged unfairly by its surroundings. The treatment allocation in these designs is such that every treatment appears equally often as a neighbour with every other treatment. Neighbour balanced designs are employed when there is a possibility of neighbour effects from treatments used in adjacent experimental units. In the literature, a vast number of such designs have been developed. This package generates some efficient neighbour balanced block designs which are balanced and partially variance balanced for estimating the contrasts pertaining to direct and neighbour effects, and provides a function for analysing the data obtained from such trials (Azais, J.M., Bailey, R.A. and Monod, H. (1993). "A catalogue of efficient neighbour designs with border plots". Biometrics, 49, 1252-1261; Tomar, J.S., Jaggi, Seema and Varghese, Cini (2005) <doi:10.1080/0266476042000305177>. "On totally balanced block designs for competition effects"). This package contains functions named nbbd1(), nbbd2(), nbbd3(), pnbbd1() and pnbbd2(), which generate neighbour balanced block designs within a specified range of numbers of treatments (v). It contains another function named anlys() for performing the analysis of data generated from such trials.
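A hypothetical call (the argument layout of the generator functions should be checked in their help pages):

    nbbd1(v = 7)  # search for a neighbour balanced block design with 7 treatments; argument name illustrative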
The permubiome R package was created to perform a permutation-based non-parametric analysis on microbiome data for biomarker discovery. This test executes thousands of comparisons in a pairwise manner, after a random shuffling of data into the different groups of study, with a prior selection of the microbiome features with the largest variation among groups. Prior to the permutation test itself, data can be normalized according to different methods proposed to handle microbiome data ('proportions' or 'Anders'). The median-based differences between groups resulting from the multiple simulations are fitted to a normal distribution in order to calculate their significance. A multiple testing correction based on the Benjamini-Hochberg method (fdr) is finally applied to extract the differentially presented features between the groups of your dataset. LATEST UPDATES: v1.1 onwards incorporates a function to parse COLUMN format; v1.2 onwards incorporates the -optimize- function to maximize evaluation of features with the largest inter-class variation; v1.3 onwards includes the -size.effect- function to perform estimation statistics using the bootstrap-coupled approach implemented in the dabestr (>=0.3.0) R package. The current v1.3.2 fixes a bug with "Class" recognition and updates the dabestr function calls.
This package provides an all-in-one solution for automatic classification of sound events using convolutional neural networks (CNN). The main purpose is to provide a sound classification workflow, from annotating sound events in recordings to training and automating model usage in real-life situations. Using the package requires a pre-compiled collection of recordings with sound events of interest and it can be employed for: 1) Annotation: create a database of annotated recordings, 2) Training: prepare training data from annotated recordings and fit CNN models, 3) Classification: automate the use of the fitted model for classifying new recordings. By using automatic feature selection and a user-friendly GUI for managing data and training/deploying models, this package is intended to be used by a broad audience as it does not require specific expertise in statistics, programming or sound analysis. Please refer to the vignette for further information. Gibb, R., et al. (2019) <doi:10.1111/2041-210X.13101> Mac Aodha, O., et al. (2018) <doi:10.1371/journal.pcbi.1005995> Stowell, D., et al. (2019) <doi:10.1111/2041-210X.13103> LeCun, Y., et al. (2012) <doi:10.1007/978-3-642-35289-8_3>.
Systematic conservation prioritization using mixed integer linear programming (MILP). It provides a flexible interface for building and solving conservation planning problems. Once built, conservation planning problems can be solved using a variety of commercial and open-source exact algorithm solvers. By using exact algorithm solvers, solutions can be generated that are guaranteed to be optimal (or within a pre-specified optimality gap). Furthermore, conservation problems can be constructed to optimize the spatial allocation of different management actions or zones, meaning that conservation practitioners can identify solutions that benefit multiple stakeholders. To solve large-scale or complex conservation planning problems, users should install the Gurobi optimization software (available from <https://www.gurobi.com/>) and the gurobi R package (see Gurobi Installation Guide vignette for details). Users can also install the IBM CPLEX software (<https://www.ibm.com/products/ilog-cplex-optimization-studio/cplex-optimizer>) and the cplexAPI R package (available at <https://github.com/cran/cplexAPI>). Additionally, the rcbc R package (available at <https://github.com/dirkschumacher/rcbc>) can be used to generate solutions using the CBC optimization software (<https://github.com/coin-or/Cbc>). For further details, see Hanson et al. (2025) <doi:10.1111/cobi.14376>.
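A minimal sketch of the problem-building interface (the planning-unit and feature objects are placeholders; function names per the package documentation):

    library(prioritizr)
    p <- problem(pu_raster, feature_stack) |>
      add_min_set_objective() |>
      add_relative_targets(0.1) |>    # represent 10% of each feature
      add_binary_decisions() |>
      add_default_solver(gap = 0)
    s <- solve(p)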