Perform a regression analysis, generate a regression table, create a scatter plot, and download the results. The package uses 'stargazer' for the regression tables and 'ggplot2' for the plots, so with just two lines of code you can run a regression, visualize the results, and save the output. It is part of the 'make R easy' project, which aims to let beginners obtain results without having to learn the various underlying packages. ChatGPT was used to assist development. Reference: Wickham (2016) <doi:10.1007/978-3-319-24277-4>.
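A minimal sketch of the two-step workflow this description refers to, written directly against 'stargazer' and 'ggplot2' (the package's own wrapper functions are not named here, so the sketch uses the underlying calls; the mtcars data and output file names are illustrative):

    library(stargazer)
    library(ggplot2)

    # regression analysis and a regression table written to disk
    fit <- lm(mpg ~ wt, data = mtcars)
    stargazer(fit, type = "html", out = "regression_table.html")

    # scatter plot of the same relationship, saved to disk
    p <- ggplot(mtcars, aes(x = wt, y = mpg)) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE)
    ggsave("scatter_plot.png", p)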
Extend the 'bigmemory' package with various analytics. Functions bigkmeans() and binit() may also be used with native R objects. For 'tapply'-like functions, the 'bigtabulate' package may also be helpful. For linear algebra support, see 'bigalgebra'. For mutex (locking) support for advanced shared-memory usage, see 'synchronicity'.
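A minimal sketch of the point that bigkmeans() also accepts native R objects, so no file-backed big.matrix is needed for a small example (the simulated matrix is illustrative, and the package name in library() is assumed from this description):

    library(bigmemory)
    library(biganalytics)   # package name assumed from this description

    x <- matrix(rnorm(1000 * 2), ncol = 2)   # an ordinary R matrix
    fit <- bigkmeans(x, centers = 3)         # k-means clustering on native R data
    fit$size                                 # cluster sizes, as in stats::kmeans()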
Automated data exploration process for analytic tasks and predictive modeling, so that users can focus on understanding data and extracting insights. The package scans and analyzes each variable and visualizes them with typical graphical techniques. Common data processing methods are also available to treat and format data.
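A minimal sketch of the scan-and-visualize workflow described above, assuming the 'DataExplorer'-style interface this description matches (the iris data is illustrative):

    library(DataExplorer)   # package name assumed from this description

    plot_intro(iris)       # dimensions, variable types, and missingness at a glance
    plot_missing(iris)     # per-variable missing-value profile
    plot_histogram(iris)   # distributions of continuous variables
    plot_bar(iris)         # frequencies of discrete variables
    create_report(iris)    # full automated HTML exploration report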
This package provides functions to extract and process data from the FDA Adverse Event Reporting System (FAERS). It facilitates the conversion of raw FAERS data published after 2014Q3 into structured formats for analysis. See Yang et al. (2022) <doi:10.3389/fphar.2021.772768> for related information.
Second-order summary statistics, the K- and pair-correlation functions, describe interactions in point pattern data. This package provides computations to estimate those statistics for inhomogeneous point processes, using the methods of T. Shaw, J. Møller, and R. Waagepetersen (2020) <doi:10.48550/arXiv.2004.00527>.
Selective sweeps can be detected using five important population genetics statistics, "Pi", "Wattersons_theta", "Tajima_D", "Kelly_ZnS", and "Omega", computed over a specified chromosomal region. The package was developed following the concepts of Kern and Schrider (2018) <doi:10.1534/g3.118.200262>.
This package provides functions to make inference about the standardized mortality ratio (SMR) when evaluating the effect of a screening program. The package is based on methods described in Sasieni (2003) <doi:10.1097/00001648-200301000-00026> and Talbot et al. (2011) <doi:10.1002/sim.4334>.
Matches a data set containing semi-structured address data (e.g., street and house number concatenated into a single string, misspelled street names, or non-existent house numbers) to a reference index. The methods are specifically designed for German municipalities ('KOR'-community) and German address schemes.
Simulates the lobster catch process in a trap fishery. Factors such as lobster density on the ocean floor, lobster movement, trap saturation, and bait shrinkage rate can be modeled. Details of the methods for modeling those processes can be found in Addison and Bell (1997) <doi:10.1071/MF97169>.
Generates data based on latent factor models. Data can be continuous, polytomous, dichotomous, or mixed. Skews, cross-loadings, wording effects, population errors, and local dependencies can be added. All parameters can be manipulated. Data categorization is based on Garrido, Abad, and Ponsoda (2011) <doi:10.1177/0013164410389489>.
Macros to generate 'nimble' code from a concise syntax. Included are macros for generating linear modeling code using a formula-based syntax and for building for() loops. For more details, see the 'nimble' manual: <https://r-nimble.org/html_manual/cha-writing-models.html#subsec:macros>.
This gadget allows you to use the 'recipes' package, part of the 'tidymodels' framework, to carry out data preprocessing tasks interactively. Build your recipe by dragging the variables, visually analyze your data to decide which steps to use, add those steps, and preprocess your data.
Sometimes it's useful to know some information about your user in a Shiny app. The available information is: browser name (such as 'Chrome' or 'Safari') and version, device type (mobile or desktop), operating system (such as 'Windows', 'Mac', or 'Android') and version, and browser dimensions.
This package provides functionalities based on the paper "Time Varying Dictionary and the Predictive Power of FED Minutes" (Lima, 2018) <doi:10.2139/ssrn.3312483>. It selects the most predictive terms, which we call the time-varying dictionary, using supervised machine learning techniques such as the lasso and elastic net.
This is a companion package for the text2sdg package. It contains the trained ensemble models needed by the detect_sdg function from the text2sdg package. See Wulff, Meier and Mata (2023) <arXiv:2301.11353> and Meier, Wulff and Mata (2021) <arXiv:2110.05856> for reference.
Bayesian density estimates for univariate continuous random samples are provided using the Bayesian inference engine paradigm. The engine options are: Hamiltonian Monte Carlo, the no-U-turn sampler, semiparametric mean field variational Bayes, and slice sampling. The methodology is described in Wand and Yu (2020) <arXiv:2009.06182>.
Enhances the 'ini' package by adding the ability to interpolate variables. The INI configuration file is read into an R6 ConfigParser object (loosely inspired by Python's ConfigParser module), and the keys can be read, with %(....)s instances interpolated from other included options or outside variables.
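A minimal sketch of the interpolation behaviour described above; the ConfigParser$new()/read()/get() method names are assumed to mirror Python's ConfigParser and their exact signatures may differ in this package:

    library(ConfigParser)   # package name assumed from this description

    writeLines(c(
      "[paths]",
      "base_dir = /data/project",
      "raw_dir = %(base_dir)s/raw"   # interpolated from the base_dir key
    ), "example.ini")

    cfg <- ConfigParser$new()
    cfg$read("example.ini")
    cfg$get("raw_dir", section = "paths")   # expected: "/data/project/raw"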
Allows generating heatmap-like visualisations for data frames. Funky heatmaps can be fine-tuned by providing annotations of the columns and rows, which allows assigning multiple palettes or geometries, or grouping rows and columns together in categories. See Saelens et al. (2019) <doi:10.1038/s41587-019-0071-9>.
S4 classes and methods to deal with fuzzy numbers. They allow computing arbitrary arithmetic operations (e.g., by using the Zadeh extension principle), approximating arbitrary fuzzy numbers by trapezoidal and piecewise linear ones, preparing plots for publications, computing possibility and necessity values for comparisons, etc.
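A minimal sketch of the arithmetic and plotting described above; the TrapezoidalFuzzyNumber() constructor and the 'FuzzyNumbers' package name are assumed from this description:

    library(FuzzyNumbers)   # package name assumed from this description

    A <- TrapezoidalFuzzyNumber(1, 2, 3, 4)
    B <- TrapezoidalFuzzyNumber(2, 3, 4, 5)

    A + B      # arithmetic consistent with the Zadeh extension principle
    plot(A)    # plot of the membership function, suitable for publications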
Read and write Frictionless Data Packages. A Data Package (<https://specs.frictionlessdata.io/data-package/>) is a simple container format and standard to describe and package a collection of (tabular) data. It is typically used to publish FAIR (<https://www.go-fair.org/fair-principles/>) and open datasets.
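A minimal sketch of reading a published Data Package, assuming the read_package()/resources()/read_resource() interface of the 'frictionless' R package (the descriptor path is a placeholder):

    library(frictionless)   # package name assumed from this description

    pkg <- read_package("datapackage.json")       # path or URL of a Data Package descriptor
    resources(pkg)                                # names of the resources it contains
    df <- read_resource(pkg, resources(pkg)[1])   # read one tabular resource as a data frame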
This package provides a variety of improved shrinkage estimators in the area of statistical analysis: unrestricted; restricted; preliminary test; improved preliminary test; Stein; and positive-rule Stein. More details can be found in chapter 7 of Saleh, A. K. Md. E. (2006) <ISBN: 978-0-471-56375-4>.
Statistical tests for validating multispecies coalescent gene tree simulators, using pairwise distances and rooted triple counts. See Allman ES, Baños HD, Rhodes JA (2023). Testing multispecies coalescent simulators using summary statistics. IEEE/ACM Trans Comput Biol Bioinformat 20(2):1613–1618. <doi:10.1109/TCBB.2022.3177956>.
This package provides tools for retrieving and analyzing air quality data from PurpleAir sensors through their API. Functions enable downloading historical measurements, accessing sensor metadata, and managing API request limitations through chunked data retrieval. For more information about the PurpleAir API, see <https://api.purpleair.com/>.
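The package's own function names are not given in this description, so the sketch below shows the kind of raw request it wraps, using 'httr' and assuming the X-API-Key header and /v1/sensors endpoint of the PurpleAir API (the field names in the query are illustrative):

    library(httr)

    # endpoint, header, and field names are assumptions based on PurpleAir's
    # public API documentation; the described package wraps requests like this
    # and additionally handles chunked retrieval and request limits.
    resp <- GET(
      "https://api.purpleair.com/v1/sensors",
      add_headers("X-API-Key" = Sys.getenv("PURPLEAIR_API_KEY")),
      query = list(fields = "name,latitude,longitude")
    )
    str(content(resp, as = "parsed"))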
Farmer, J. and D. Jacobs (2018) <doi:10.1371/journal.pone.0196937>. A multivariate nonparametric density estimator based on the maximum-entropy method. Accurately predicts a probability density function (PDF) for random data using a novel iterative scoring function to determine the best fit without overfitting to the sample.