Simple result caching in R based on R.cache. The global environment is not considered when caching results, which simplifies moving cache files between multiple instances of R. It relies on more base functions than R.cache does (e.g., cached results are saved and loaded with saveRDS() and readRDS()).
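A minimal sketch of the idea using only base R (this is not the package's own API): results are keyed by a user-supplied string and written to disk with saveRDS()/readRDS(), so nothing from the global environment is captured.

    cache_dir <- file.path(tempdir(), "simple-cache")
    dir.create(cache_dir, showWarnings = FALSE)

    cached <- function(key, expr) {
      # return a stored result if one exists for this key, otherwise compute and save it
      path <- file.path(cache_dir, paste0(key, ".rds"))
      if (file.exists(path)) return(readRDS(path))
      value <- eval(expr)
      saveRDS(value, path)
      value
    }

    # second call returns instantly from the on-disk cache
    slow_sum <- cached("slow_sum", quote({ Sys.sleep(1); sum(1:1e6) }))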
This package provides tools for Genotype by Environment Interaction (GEI) analysis, using statistical models and visualizations to assess genotype performance across environments. It helps researchers explore interaction effects, stability, and adaptability in multi-environment trials, identifying the best-performing genotypes under different conditions. Which wins where!
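For orientation, a base-R sketch of the interaction test that underlies GEI analysis, using made-up data (not this package's API):

    set.seed(42)
    dat <- expand.grid(genotype    = paste0("G", 1:4),
                       environment = paste0("E", 1:3),
                       rep         = 1:3)
    dat$yield <- rnorm(nrow(dat), mean = 10)

    fit <- aov(yield ~ genotype * environment, data = dat)
    summary(fit)  # the genotype:environment term is the GEI effect

    # a quick "which wins where" view
    with(dat, interaction.plot(environment, genotype, yield))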
This package provides a system for personalized exercise plan recommendations for T2D (Type 2 Diabetes) patients based on the primary outcome of HbA1c (Glycated Hemoglobin). You provide the individual's information, and T2DFitTailor details the exercise plan and predicts the intervention's effectiveness.
Allows users to analyze text and classify emotions such as happiness, sadness, anger, fear, and neutrality. It combines text preprocessing, TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction, and Random Forest classification to predict emotions and map them to corresponding emojis for enhanced sentiment visualization.
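A rough sketch of such a pipeline (TF-IDF features fed to a random forest), assuming the tm and randomForest packages are installed; the data and object names are illustrative, not this package's interface:

    library(tm)
    library(randomForest)

    texts  <- c("I am so happy today", "This makes me angry",
                "I feel scared", "Nothing special")
    labels <- factor(c("happiness", "anger", "fear", "neutral"))

    # preprocessing and TF-IDF weighting
    corpus <- VCorpus(VectorSource(texts))
    corpus <- tm_map(corpus, content_transformer(tolower))
    dtm <- DocumentTermMatrix(corpus, control = list(weighting = weightTfIdf))

    # random forest classifier on the TF-IDF matrix
    fit <- randomForest(x = as.matrix(dtm), y = labels)
    predict(fit, as.matrix(dtm))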
Perform a regression analysis, generate a regression table, create a scatter plot, and download the results. It uses stargazer for generating regression tables and ggplot2 for creating plots. With just two lines of code, you can perform a regression analysis, visualize the results, and save the output. It is part of the 'make R easy' project, which aims to give beginners results without having to learn several packages first. Development was assisted by ChatGPT. Reference: Wickham (2016) <doi:10.1007/978-3-319-24277-4>.
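Roughly what happens under the hood, assuming stargazer and ggplot2 are installed (the file name and formula are illustrative, not the package's own interface):

    library(ggplot2)
    model <- lm(mpg ~ wt, data = mtcars)
    stargazer::stargazer(model, type = "text")   # regression table

    p <- ggplot(mtcars, aes(wt, mpg)) +
      geom_point() +
      geom_smooth(method = "lm", se = TRUE)      # fitted regression line
    ggsave("regression_plot.png", p)             # save the output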
Bayesian density estimates for univariate continuous random samples are provided using the Bayesian inference engine paradigm. The engine options are: Hamiltonian Monte Carlo, the no-U-turn sampler, semiparametric mean field variational Bayes, and slice sampling. The methodology is described in Wand and Yu (2020) <arXiv:2009.06182>.
Extend the bigmemory package with various analytics. Functions bigkmeans and binit may also be used with native R objects. For 'tapply'-like functions, the bigtabulate package may also be helpful. For linear algebra support, see 'bigalgebra'. For mutex (locking) support for advanced shared-memory usage, see 'synchronicity'.
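A possible call to bigkmeans() on a native R matrix, assuming the package is installed; argument names are taken to follow the usual kmeans() conventions and may differ slightly:

    library(biganalytics)
    x <- matrix(rnorm(1000 * 3), ncol = 3)      # plain R matrix, no big.matrix needed
    cl <- bigkmeans(x, centers = 4, nstart = 5)
    table(cl$cluster)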
Automated data exploration process for analytic tasks and predictive modeling, so that users can focus on understanding data and extracting insights. The package scans and analyzes each variable and visualizes them with typical graphical techniques. Common data processing methods are also available to treat and format data.
This package provides functions to extract and process data from the FDA Adverse Event Reporting System (FAERS). It facilitates the conversion of raw FAERS data published after 2014Q3 into structured formats for analysis. See Yang et al. (2022) <doi:10.3389/fphar.2021.772768> for related information.
Selective sweeps can be detected using five key population genetics statistics, namely "Pi", "Wattersons_theta", "Tajima_D", "Kelly_ZnS", and "Omega", computed over a specified chromosomal region. It was developed using the concepts of Kern and Schrider (2018) <doi:10.1534/g3.118.200262>.
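As a rough illustration of two of these statistics computed with the pegas and ape packages (not this package's API), assuming both are installed:

    library(ape)    # provides the woodmouse example alignment
    library(pegas)
    data(woodmouse)
    nuc.div(woodmouse)      # nucleotide diversity (Pi)
    tajima.test(woodmouse)  # Tajima's D with approximate p-values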
Second-order summary statistics, the K- and pair-correlation functions, describe interactions in point pattern data. This package provides computations to estimate those statistics for inhomogeneous point processes, using the methods of T. Shaw, J. Møller, and R. Waagepetersen (2020) <doi:10.48550/arXiv.2004.00527>.
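For context, an analogous estimate with the spatstat family (not this package's API), assuming spatstat is installed:

    library(spatstat)
    # inhomogeneous Poisson pattern whose intensity decays in x
    X <- rpoispp(function(x, y) 100 * exp(-3 * x), lmax = 100)
    K <- Kinhom(X)    # inhomogeneous K-function, intensity estimated by kernel smoothing
    g <- pcfinhom(X)  # inhomogeneous pair-correlation function
    plot(K); plot(g)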
This package provides functions to make inference about the standardized mortality ratio (SMR) when evaluating the effect of a screening program. The package is based on methods described in Sasieni (2003) <doi:10.1097/00001648-200301000-00026> and Talbot et al. (2011) <doi:10.1002/sim.4334>.
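A minimal base-R illustration of the SMR itself, with made-up counts (this does not reproduce the package's methods):

    observed <- 45    # observed deaths in the screened cohort
    expected <- 60    # expected deaths from reference rates
    smr <- observed / expected
    smr
    # exact Poisson confidence interval for the SMR
    poisson.test(observed, T = expected)$conf.int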
Matches a data set with semi-structured address data (e.g., street and house number as a concatenated string, misspelled street names, or non-existent house numbers) to a reference index. The methods are specifically designed for German municipalities ('KOR'-community) and German address schemes.
Simulates the lobster catch process in a trap fishery. Factors such as lobster density on the ocean floor, lobster movement, trap saturation, and bait shrinkage rate can be modeled. Details of the methods for modeling those processes can be found in Addison and Bell (1997) <doi:10.1071/MF97169>.
Generates data based on latent factor models. Data can be continuous, polytomous, dichotomous, or mixed. Skews, cross-loadings, wording effects, population errors, and local dependencies can be added. All parameters can be manipulated. Data categorization is based on Garrido, Abad, and Ponsoda (2011) <doi:10.1177/0013164410389489>.
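A base-R sketch of the underlying latent factor model (not this package's API): two correlated factors, simple-structure loadings, and categorization of the continuous responses into ordinal items.

    set.seed(1)
    n <- 500
    lambda <- rbind(matrix(c(.7, 0), 5, 2, byrow = TRUE),   # items 1-5 load on factor 1
                    matrix(c(0, .7), 5, 2, byrow = TRUE))   # items 6-10 load on factor 2
    phi <- matrix(c(1, .3, .3, 1), 2)                       # factor correlation
    f   <- matrix(rnorm(n * 2), n) %*% chol(phi)            # correlated factor scores
    psi <- sqrt(1 - rowSums(lambda^2))                      # unique standard deviations
    x   <- f %*% t(lambda) + sweep(matrix(rnorm(n * 10), n), 2, psi, `*`)
    # 4-point ordinal items via fixed thresholds
    items <- apply(x, 2, cut, breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)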
Macros to generate nimble code from a concise syntax. Included are macros for generating linear modeling code using a formula-based syntax and for building for() loops. For more details, see the nimble manual: <https://r-nimble.org/html_manual/cha-writing-models.html#subsec:macros>.
Sometimes it's useful to know some information about your user in a Shiny app. The available information is: browser name (such as 'Chrome' or 'Safari') and version, device type (mobile or desktop), operating system (such as 'Windows', 'Mac', or 'Android') and version, and browser dimensions.
This gadget allows you to use the recipes package from tidymodels to carry out data preprocessing tasks interactively. Build your recipe by dragging the variables, visually analyze your data to decide which steps to use, add those steps, and preprocess your data.
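Roughly the kind of recipe such a session builds for you, assuming the recipes package is installed (the steps chosen here are only an example):

    library(recipes)
    rec <- recipe(Species ~ ., data = iris) |>
      step_normalize(all_numeric_predictors()) |>
      step_pca(all_numeric_predictors(), num_comp = 2)

    baked <- bake(prep(rec), new_data = NULL)   # preprocessed training data
    head(baked)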
This package implements the Topic Testlet Model (TTM) as described by Xiong et al. (2025) <doi:10.1111/jedm.70001>. The package integrates Latent Dirichlet Allocation (LDA) with the Partial Credit Model to account for local item dependence in testlets using latent topics from student textual responses.
This is a companion package for the text2sdg package. It contains the trained ensemble models needed by the detect_sdg function from the text2sdg package. See Wulff, Meier and Mata (2023) <arXiv:2301.11353> and Meier, Wulff and Mata (2021) <arXiv:2110.05856> for reference.
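An assumed usage pattern, with the text and argument handling kept deliberately simple; check the text2sdg documentation for the exact arguments:

    library(text2sdg)
    docs <- c("Our project promotes affordable and clean energy in rural areas.",
              "We study gender equality in the labor market.")
    hits <- detect_sdg(docs)   # draws on the ensemble models shipped in this data package
    hits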
This package provides functionalities based on the paper "Time Varying Dictionary and the Predictive Power of FED Minutes" (Lima, 2018) <doi:10.2139/ssrn.3312483>. It selects the most predictive terms, which we call the time-varying dictionary, using supervised machine learning techniques such as the lasso and elastic net.
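The underlying term-selection idea, sketched with glmnet on simulated data (not this package's API), assuming glmnet is installed:

    library(glmnet)
    set.seed(1)
    x <- matrix(rnorm(200 * 50), 200, 50)        # stand-in for document-term frequencies
    colnames(x) <- paste0("term", 1:50)
    y <- x[, 1] - 2 * x[, 2] + rnorm(200)        # stand-in for the predicted outcome

    fit <- cv.glmnet(x, y, alpha = 0.5)          # alpha = 1 is lasso, 0 < alpha < 1 elastic net
    coefs <- coef(fit, s = "lambda.min")
    rownames(coefs)[as.vector(coefs != 0)]       # the selected "time-varying dictionary"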
The adductomicsR package processes data generated by the second stage of mass spectrometry (MS2) to identify potentially adducted peptides from spectra that have been corrected for mass drift and retention time drift, and quantifies the levels of mass spectral peaks from first-stage mass spectrometry (MS1) data.
This package manages the rda files of multiple ontologies that are used in the ontoProc package. These ontologies were originally downloaded as OWL or OBO files and converted into rda files. The files were downloaded at various times, but most of them were downloaded on August 8, 2022.
This package provides a post hoc cell type classification tool that fine-tunes cell type annotations generated by any cell type classification procedure using the semi-supervised learning algorithm AdaSampling. The current version of scReClassify supports Support Vector Machine and Random Forest as base classifiers.