Genetic algorithms are a class of optimization algorithms inspired by natural selection and genetics. This package is intended for learning purposes and allows users to optimize functions or parameters by mimicking biological evolution processes such as selection, crossover, and mutation. It is suited to tasks like machine learning parameter tuning, mathematical function optimization, and combinatorial problems that involve finding the best solution in a discrete space.
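As a minimal sketch of the evolutionary loop such a package mimics (plain R, written for illustration; the fitness function, operators, and rates below are assumptions, not this package's API):

    # Toy genetic algorithm maximizing f(x) = -(x - 3)^2 over real-valued genomes.
    fitness <- function(x) -(x - 3)^2
    pop <- runif(50, -10, 10)                      # initial population of 50 genomes
    for (gen in 1:100) {
      fit <- fitness(pop)
      w <- exp(fit - max(fit))                     # selection weights favor fitter genomes
      parents <- pop[sample(50, 50, replace = TRUE, prob = w)]
      pairs <- matrix(parents, ncol = 2)
      a <- runif(25)
      children <- c(a * pairs[, 1] + (1 - a) * pairs[, 2],   # blend crossover
                    (1 - a) * pairs[, 1] + a * pairs[, 2])
      hit <- runif(50) < 0.1                       # mutate ~10% of children
      children[hit] <- children[hit] + rnorm(sum(hit), sd = 0.5)
      pop <- children
    }
    pop[which.max(fitness(pop))]                   # best genome found, near the optimum x = 3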
The curatedMetagenomicData package provides standardized, curated human microbiome data for novel analyses. It includes gene families, marker abundance, marker presence, pathway abundance, pathway coverage, and relative abundance for samples collected from different body sites. The bacterial, fungal, and archaeal taxonomic abundances for each sample were calculated with MetaPhlAn3, and metabolic functional potential was calculated with HUMAnN3. The manually curated sample metadata and standardized metagenomic data are available as (Tree)SummarizedExperiment objects.
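A typical access pattern looks like the following sketch (the study name is one of the package's curated datasets, used here purely as an example):

    library(curatedMetagenomicData)
    # Calling without dryrun = FALSE lists matching resources; with it, the
    # data are fetched as (Tree)SummarizedExperiment objects.
    se <- curatedMetagenomicData("AsnicarF_2017.relative_abundance",
                                 dryrun = FALSE)[[1]]
    # The manually curated per-sample metadata ships as a data frame.
    head(sampleMetadata)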
U-Boot is a bootloader used mostly for ARM boards. It also initializes the board hardware (RAM, etc.).
It allows network booting and uses the device tree from the firmware, allowing the use of overlays. It can act as EFI firmware for the grub-efi-netboot-removable-bootloader. This is a common 64-bit build of U-Boot for all 64-bit-capable Raspberry Pi variants.
This package only contains the file u-boot.bin.
This package implements a novel Bayesian disaggregation framework that combines Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) dimension reduction of prior weight matrices with deterministic Bayesian updating rules. The method provides Markov chain Monte Carlo (MCMC)-free posterior estimation with built-in diagnostic metrics. While based on established PCA (Jolliffe, 2002) <doi:10.1007/b98835> and Bayesian principles (Gelman et al., 2013) <doi:10.1201/b16018>, the specific integration for economic disaggregation represents an original methodological contribution.
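The deterministic, MCMC-free updating is possible because the posterior is available in closed form; as a generic illustration of the principle (a sketch, not necessarily the package's exact rule), a conjugate normal model updates without any sampling:

    \mu_{\text{post}} = \frac{\sigma^2 \mu_0 + \sigma_0^2 \sum_{i=1}^{n} x_i}{\sigma^2 + n\,\sigma_0^2},
    \qquad
    \sigma_{\text{post}}^2 = \frac{\sigma_0^2\,\sigma^2}{\sigma^2 + n\,\sigma_0^2},

for a N(\mu_0, \sigma_0^2) prior and N(\mu, \sigma^2) observations x_1, ..., x_n.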
Utilize the shiny interface for visualizing results from a pyDarwin (<https://certara.github.io/pyDarwin/>) machine learning pharmacometric model search. It generates Goodness-of-Fit plots and summary tables for selected models, allowing users to customize diagnostic outputs within the interface. The underlying R code for generating plots and tables can be extracted for use outside the interactive session. Model diagnostics can also be incorporated into an R Markdown document and rendered in various output formats.
This package provides a user-friendly way to create patient-level prediction models using the Observational Medical Outcomes Partnership Common Data Model. Given a cohort of interest and an outcome of interest, the package can use data in the Common Data Model to build a large set of features. These features can then be used to fit a predictive model with a number of machine learning algorithms. This is further described in Reps (2017) <doi:10.1093/jamia/ocy032>.
This package provides some tabulated data to be referred to in a discussion in a vignette accompanying my upcoming R package 'playWholeHandDriverPassParams'. Beyond that specific purpose, these data may also illustrate some computational approaches that are relevant to card games like hearts or bridge. This package draws on data from Gregory Stoll <https://gregstoll.com/~gregstoll/bridge/math.html> and on details of performing the probability calculations from Jeremy L. Martin <https://jlmartin.ku.edu/~jlmartin/bridge/basics.pdf>.
Simplify your portfolio optimization process by applying a contemporary approach to modeling and solving portfolio problems. While most approaches and packages are rather complicated, this one tries to simplify things and is agnostic with respect to risk measures as well as optimization solvers. Some of the methods implemented are described by Konno and Yamazaki (1991) <doi:10.1287/mnsc.37.5.519>, Rockafellar and Uryasev (2001) <doi:10.21314/JOR.2000.038>, and Markowitz (1952) <doi:10.1111/j.1540-6261.1952.tb01525.x>.
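For reference, the Markowitz (1952) mean-variance problem that underlies this modeling style is

    \min_{w} \; w^\top \Sigma w
    \quad \text{s.t.} \quad \mu^\top w \ge r_{\min}, \quad \mathbf{1}^\top w = 1,

where w holds the portfolio weights, \Sigma is the return covariance matrix, \mu the expected returns, and r_{\min} a target return; being risk-measure- and solver-agnostic means the quadratic objective can be swapped for, e.g., the CVaR formulation of Rockafellar and Uryasev without changing the model description.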
Analyzes and modifies metabolomics raw data (generated using Gas Chromatography-Atmospheric Pressure Chemical Ionization-Mass Spectrometry) to correct overloaded signals, i.e., ion intensities exceeding detector saturation, which lead to cut-off peaks. Data in xcmsRaw format are accepted as input, and mzXML files can be processed alternatively. Overloaded signals are detected automatically and modified using a Gaussian or an Isotopic-Ratio approach. Quality control plots are generated, and corrected data are stored within the original xcmsRaw object or mzXML file, respectively, to allow further processing.
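As a sketch of how the Gaussian approach can work (the exact fitting procedure is an assumption here, stated only for orientation): the unsaturated flanks of a cut-off peak are fitted with a Gaussian,

    I(t) = A \exp\!\left( -\frac{(t - \mu)^2}{2\sigma^2} \right),

and the fitted curve replaces the clipped apex, recovering an estimate of the true maximum intensity.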
This package provides a hypothesis test and a variable selection algorithm for use in time-varying concurrent regression models. The hypothesis test helps the user identify significant covariates within the scope of a time-varying concurrent model, and is accompanied by a plotting function that shows the estimated beta(s) and confidence band(s). The test statistic is based on the amount of area of the estimated beta curve(s) that falls outside the confidence band(s), which the plots also display.
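For orientation, a standard time-varying concurrent regression model of this kind can be written as

    Y(t) = \beta_0(t) + \sum_{j=1}^{p} X_j(t)\,\beta_j(t) + \varepsilon(t),

where the coefficients \beta_j(t) are functions of time; echoing the test statistic described above, a covariate X_j is flagged as significant when a large area of its estimated \hat{\beta}_j(t) curve lies outside its confidence band. (This is the standard formulation; the package's estimator details may differ.)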
This package provides a client library for Vipul's Razor. Vipul's Razor is a distributed, collaborative, spam detection and filtering network. Through user contribution, Razor establishes a distributed and constantly updating catalogue of spam in propagation that is consulted by email clients to filter out known spam. Detection is done with statistical and randomized signatures that efficiently spot mutating spam content. User input is validated through reputation assignments based on consensus on report and revoke assertions which in turn is used for computing confidence values associated with individual signatures.
Supports propensity score-based methods, including matching, stratification, and weighting, for estimating causal treatment effects. It also implements calibration using negative control outcomes to enhance robustness. debiasedTrialEmulation facilitates effect estimation for both binary and time-to-event outcomes, supporting the risk ratio (RR), odds ratio (OR), and hazard ratio (HR) as effect measures. It integrates statistical modeling and visualization tools to assess covariate balance, equipoise, and bias calibration. Additional methods, including approaches to address immortal time bias, information bias, selection bias, and informative censoring, are under development. Users interested in these extended features are encouraged to contact the package authors.
This package provides functions for evaluating and visualizing predictive model performance (specifically: binary classifiers) in the field of customer scoring. These metrics include lift, lift index, gain percentage, top-decile lift, F1-score, expected misclassification cost and absolute misclassification cost. See Berry & Linoff (2004, ISBN:0-471-47064-3), Witten and Frank (2005, ISBN:0-12-088407-0) and Blattberg, Kim & Neslin (2008, ISBN:978-0-387-72578-9) for details. Visualization functions are included for lift charts and gain percentage charts. All metrics that require class predictions offer the possibility to dynamically determine cutoff values for transforming real-valued probability predictions into class predictions.
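For example, top-decile lift is the response rate among the 10% of customers with the highest predicted scores divided by the overall response rate; a minimal sketch in plain R (an illustrative helper, not one of this package's function names):

    # Top-decile lift: response rate in the best-scored 10% vs. the overall rate.
    top_decile_lift <- function(y, p) {
      cutoff <- quantile(p, 0.9)            # score threshold marking the top decile
      mean(y[p >= cutoff]) / mean(y)
    }
    set.seed(1)
    p <- runif(1000)                        # predicted probabilities
    y <- rbinom(1000, 1, p)                 # simulated 0/1 responses
    top_decile_lift(y, p)                   # values > 1 indicate predictive power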
This package implements the calibrated sensitivity analysis approach for matched observational studies. Our sensitivity analysis framework views matched sets as drawn from a super-population. The unmeasured confounder is modeled as a random variable. We combine matching and model-based covariate-adjustment methods to estimate the treatment effect. The hypothesized unmeasured confounder enters the picture as a missing covariate. We adopt a state-of-the-art Expectation Maximization (EM) algorithm to handle this missing covariate problem in generalized linear models (GLMs). As our method also estimates the effect of each observed covariate on the outcome and treatment assignment, we are able to calibrate the unmeasured confounder to observed covariates. Zhang, B. and Small, D. S. (2018) <arXiv:1812.00215>.
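The EM treatment of the missing confounder follows the generic scheme (stated here for orientation): with U the hypothesized unmeasured confounder, the E-step computes the expected complete-data log-likelihood

    Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{U \mid \text{observed data},\, \theta^{(t)}}\!\left[ \log L(\theta;\, \text{observed data}, U) \right],

and the M-step maximizes Q over \theta, iterating until the GLM parameter estimates converge.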
Time series forecasting faces challenges due to the non-stationarity, nonlinearity, and chaotic nature of the data. Traditional deep learning models like the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) process data sequentially but are inefficient for long sequences. To overcome their limitations, we proposed a transformer-based deep learning architecture that utilizes an attention mechanism for parallel processing, enhancing prediction accuracy and efficiency. This package provides user-friendly code implementing that architecture. References: Nayak et al. (2024) <doi:10.1007/s40808-023-01944-7> and Nayak et al. (2024) <doi:10.1016/j.simpa.2024.100716>.
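At the core of the transformer is scaled dot-product attention, which scores all time points in parallel:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V,

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension; unlike the recurrence in RNN/LSTM/GRU models, every time step attends to every other step in a single matrix operation.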
DifferentialRegulation is a method for detecting differentially regulated genes between two groups of samples (e.g., healthy vs. disease, or treated vs. untreated samples) by targeting differences in the balance of spliced and unspliced mRNA abundances obtained from single-cell RNA-sequencing (scRNA-seq) data. From a mathematical point of view, DifferentialRegulation accounts for the sample-to-sample variability and embeds multiple samples in a Bayesian hierarchical model. Furthermore, our method also deals with two major sources of mapping uncertainty: i) ambiguous reads, compatible with both spliced and unspliced versions of a gene, and ii) reads mapping to multiple genes. In particular, ambiguous reads are treated separately from spliced and unspliced reads, while reads that are compatible with multiple genes are allocated to the gene of origin. Parameters are inferred via Markov chain Monte Carlo (MCMC) techniques (Metropolis-within-Gibbs).
Sensitivity analysis for case-control studies in which some cases may meet a narrower definition of being a case than other cases, which meet only a broad definition. The sensitivity analyses are described in Small, Cheng, Halloran and Rosenbaum (2013, "Case Definition and Sensitivity Analysis", Journal of the American Statistical Association, 1457-1468). The functions sens.analysis.mh and sens.analysis.aberrant.rank provide sensitivity analyses based on the Mantel-Haenszel test statistic and the aberrant rank test statistic, as described in Rosenbaum (1991, "Sensitivity Analysis for Matched Case Control Studies", Biometrics); see also Section 1 of Small et al. The function adaptive.case.test provides adaptive inferences as described in Section 5 of Small et al. The function adaptive.noether.brown provides a sensitivity analysis for a matched cohort study based on an adaptive test. The other functions in the package are internal functions.
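These analyses sit in Rosenbaum's sensitivity-analysis framework, in which a parameter \Gamma bounds how much an unmeasured confounder could distort exposure odds within a matched set (the generic formulation, stated for orientation):

    \frac{1}{\Gamma} \le \frac{\pi_i (1 - \pi_j)}{\pi_j (1 - \pi_i)} \le \Gamma,

where \pi_i and \pi_j are exposure probabilities for two units matched on observed covariates; \Gamma = 1 corresponds to no unmeasured confounding, and the analyses report how large \Gamma must become before the study's conclusion changes.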
This package implements a regularized Bayesian estimator that optimizes the estimation of between-group coefficients for multilevel latent variable models by minimizing mean squared error (MSE) and balancing variance and bias. The package provides more reliable estimates in scenarios with limited data, offering a robust solution for accurate parameter estimation in two-level latent variable models. It is designed for researchers in psychology, education, and related fields who face challenges in estimating between-group effects under small sample sizes and low intraclass correlation coefficients. The package includes comprehensive S3 methods for result objects: print(), summary(), coef(), se(), vcov(), confint(), as.data.frame(), dim(), length(), names(), and update() for enhanced usability and integration with standard R workflows. Dashuk et al. (2025a) <doi:10.1017/psy.2025.10045> derived the optimal regularized Bayesian estimator; Dashuk et al. (2025b) <doi:10.1007/s41237-025-00264-7> extended it to the multivariate case; and Luedtke et al. (2008) <doi:10.1037/a0012869> formalized the two-level latent variable framework.
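The variance-bias balance mentioned above is the standard mean-squared-error decomposition,

    \mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2,

so the regularized estimator can accept a small bias in exchange for a larger variance reduction, which is what yields more reliable between-group estimates under small samples and low intraclass correlation.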
This package contains functions to implement automated covariate selection using methods described in the high-dimensional propensity score (HDPS) algorithm of Schneeweiss et al. Covariate adjustment in real-world observational data (RWD) is important for estimating adjusted outcomes, and this can be done using methods such as, but not limited to, propensity score matching, propensity score weighting, and regression analysis. While these methods strive to statistically adjust for confounding, the major challenge is in selecting the potential covariates that can bias the outcome comparison estimates in observational RWD. This is where the utility of automated covariate selection comes in. The functions in this package help to implement the three major steps of automated covariate selection described by Schneeweiss et al. These three functions, in the order of the steps required to execute automated covariate selection, are get_candidate_covariates(), get_recurrence_covariates(), and get_prioritised_covariates(). In addition to these functions, sample real-world data derived from publicly available de-identified medical claims data is included for running examples and for further exploration. The original article describing the algorithm is Schneeweiss et al. (2009) <doi:10.1097/EDE.0b013e3181a663cc>.
Determination of the rainfall-runoff erosivity factor.
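For orientation (assuming the USLE/RUSLE definition is the one intended), the erosivity factor R is commonly the long-term mean annual sum of storm kinetic energy times maximum 30-minute rainfall intensity:

    R = \frac{1}{n} \sum_{j=1}^{n} \sum_{k=1}^{m_j} \left( E \cdot I_{30} \right)_k,

where n is the number of years of record, m_j the number of erosive storms in year j, E a storm's total kinetic energy, and I_{30} its maximum 30-minute intensity.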
Documentation at https://melpa.org/#/run-command-recipes
Documentation at https://melpa.org/#/replace-from-region
This package provides an ESS-like binding to send lines or regions to a REPL from Racket buffers.