This package provides functions for extracting text and tables from PDF-based order documents. It implements an n-gram-based approach for identifying the language of an order document and uses the R package pdftools to extract the text. If the PDF contains only an image (because it is a scanned document), the R package tesseract is used for OCR. The package also provides functionality for identifying and extracting order position tables in order documents based on a clustering approach.
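As an illustrative sketch of an n-gram-based language identification step (Python, not the package's own R code; the reference texts and the distance measure are assumptions made for the example):

    from collections import Counter

    def ngram_profile(text, n=3):
        """Relative frequency profile of character n-grams."""
        text = text.lower()
        grams = [text[i:i + n] for i in range(len(text) - n + 1)]
        counts = Counter(grams)
        total = sum(counts.values())
        return {g: c / total for g, c in counts.items()}

    def profile_distance(p, q):
        """Sum of absolute frequency differences over all n-grams seen."""
        keys = set(p) | set(q)
        return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

    # Hypothetical reference profiles built from known-language sample texts.
    references = {
        "de": ngram_profile("Sehr geehrte Damen und Herren, wir bestellen hiermit ..."),
        "en": ngram_profile("Dear Sir or Madam, we hereby order the following items ..."),
    }

    def identify_language(document_text):
        """Return the reference language whose n-gram profile is closest."""
        doc = ngram_profile(document_text)
        return min(references, key=lambda lang: profile_distance(doc, references[lang]))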
This is an R package for the imputation of left-censored data under a compositional approach. The implemented methods consider aspects of relevance for a compositional approach such as scale invariance, subcompositional coherence and preservation of the multivariate relative structure of the data. Built on solid statistical frameworks, it can handle single and varying censoring thresholds, offers consistent treatment of closed and non-closed data, and includes exploratory tools, multiple imputation, Markov chain Monte Carlo (MCMC), robust and non-parametric alternatives, and recent proposals for count data.
This package provides methods to estimate the optimal treatment regime among all linear regimes via smoothed estimation methods, and construct element-wise confidence intervals for the optimal linear treatment regime vector, as well as the confidence interval for the optimal value via wild bootstrap procedures, if the population follows treatments recommended by the optimal linear regime. See more details in: Wu, Y. and Wang, L. (2021), "Resampling-based Confidence Intervals for Model-free Robust Inference on Optimal Treatment Regimes", Biometrics, 77: 465-476, <doi:10.1111/biom.13337>.
Realization of published methods to analyze visual field (VF) progression. Introduction to the plotting methods (designed by author TE) for VF output visualization. A sample dataset for two eyes, each with 10 follow-ups, is included. The VF analysis methods can be found in Musch et al. (1999) <doi:10.1016/S0161-6420(99)90147-1>, Nouri-Mahdavi et al. (2012) <doi:10.1167/iovs.11-9021>, Schell et al. (2014) <doi:10.1016/j.ophtha.2014.02.021>, and Aptel et al. (2015) <doi:10.1111/aos.12788>.
This package provides tools for model selection and model averaging of PerMANOVA models using the Akaike Information Criterion corrected for small sample sizes (AICc) and information-theoretic principles. The package is built around the PERMANOVA analysis from the vegan package and provides a streamlined workflow for generating and comparing models, obtaining model weights, and summarizing results using model averaging approaches. The methods implemented in this package are based on the practical information-theoretic approach described by Burnham, K. P. and Anderson, D. R. (2002) <doi:10.1007/b97636>.
We implement causal decomposition analysis using the methods proposed by Park, Lee, and Qin (2020) and Park, Kang, and Lee (2021+) <arXiv:2109.06940>. This package allows researchers to use the multiple-mediator-imputation, single-mediator-imputation, and product-of-coefficients regression methods to estimate the initial disparity, disparity reduction, and disparity remaining. It also allows inference conditional on baseline covariates. We also implement sensitivity analysis for the causal decomposition analysis using R-squared values as sensitivity parameters (Park, Kang, Lee, and Ma, 2023).
Implementation of the EPA's Ecological Exposure Research Division (EERD) tools (discontinued in 1999) for Probit and Trimmed Spearman-Karber Analysis. Probit and Spearman-Karber methods from Finney's book "Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve", with options for the most accurate results or results identical to the book. Probit and all the tables from Finney's book (code-generated, not copied) with the generating functions included. Control correction: Abbott, Schneider-Orelli, Henderson-Tilton, Sun-Shepard. Toxicity scales: Horsfall-Barratt, Archer, Gauhl-Stover, Fullerton-Olsen, etc.
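For instance, Abbott's control correction expresses mortality observed under treatment relative to mortality already present in the untreated control; a minimal sketch (Python, illustrative only, not the package's interface):

    def abbott_correction(treated_pct, control_pct):
        """Abbott's formula: control-corrected mortality (%) from observed
        treated and control mortality (%)."""
        return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

    # 60% mortality observed under treatment, 10% in the control group:
    print(abbott_correction(60.0, 10.0))  # about 55.6% treatment-attributable mortality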
The general workflow of most imputation methods is quite similar. The aim of this package is to provide parts of this general workflow to make the implementation of imputation methods easier. The heart of an imputation method is normally the model used. These models can be defined using the parsnip package or customized specifications. The rest of an imputation method consists of more technical specifications, e.g. which columns and rows should be used for imputation and in which order. These technical specifications can be set inside the imputation functions.
Prepare objects to implement models over spatial and spacetime domains with the INLA package (<https://www.r-inla.org>). These objects contain data for the cgeneric interface in INLA, enabling fast parallel computations. We implemented the spatial barrier model, see Bakka et al. (2019) <doi:10.1016/j.spasta.2019.01.002>, and some of the spatio-temporal models proposed in Lindgren et al. (2023) <https://www.idescat.cat/sort/sort481/48.1.1.Lindgren-etal.pdf>. Details are provided in the available vignettes and at the URL below.
Guile-Reader is a simple framework for building readers for GNU Guile.
The idea is to make it easy to build procedures that extend Guile’s read procedure. Readers supporting various syntax variants can easily be written, possibly by re-using existing “token readers” of a standard Scheme reader. For example, it is used to implement Skribilo’s R5RS-derived document syntax.
Guile-Reader’s approach is similar to Common Lisp’s “read table”, but hopefully more powerful and flexible (for instance, one may instantiate as many readers as needed).
This package provides a very fast and robust interface to ArcGIS Geocoding Services. It provides capabilities for reverse geocoding, finding address candidates, character-by-character search autosuggestion, and batch geocoding. The public ArcGIS World Geocoder is accessible for free use via arcgisgeocode for all services except batch geocoding. arcgisgeocode also integrates with arcgisutils to provide access to custom locators or private ArcGIS World Geocoder instances hosted on ArcGIS Enterprise. Learn more in the Geocode service API reference <https://developers.arcgis.com/rest/geocode/api-reference/overview-world-geocoding-service.htm>.
Stock, options and futures trading strategies for traders and investors with a bullish outlook are represented here through their graphs. The graphic indicators, strategies, calculations, functions and all the discussions are for academic, research, and educational purposes only, should not be construed as investment advice, and come with absolutely no liability. Guy Cohen ("The Bible of Options Strategies (2nd ed.)", 2015, ISBN: 9780133964028). Zura Kakushadze, Juan A. Serur ("151 Trading Strategies", 2018, ISBN: 9783030027919). John C. Hull ("Options, Futures, and Other Derivatives (11th ed.)", 2022, ISBN: 9780136939979).
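As one example of the kind of payoff graph such bullish strategies produce, the profit profile of a bull call spread (long a call at a lower strike, short a call at a higher strike) can be sketched as follows (Python; strikes and premiums are hypothetical, and this is not the package's own code):

    import numpy as np

    def bull_call_spread_profit(spot, k_long, k_short, premium_paid, premium_received):
        """Profit at expiry of a bull call spread, per unit of the underlying."""
        payoff = np.maximum(spot - k_long, 0) - np.maximum(spot - k_short, 0)
        return payoff - (premium_paid - premium_received)

    spots = np.linspace(80, 120, 5)
    print(bull_call_spread_profit(spots, k_long=95, k_short=105,
                                  premium_paid=6, premium_received=2))
    # Loss is capped at the net premium below the lower strike; profit is capped
    # at (k_short - k_long) minus the net premium above the upper strike.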
Package for the analysis of experiments having two explanatory quantitative variables and one quantitative dependent variable. The experiments can be without repetitions or with a statistical design. Twelve multiple regression models are fitted and response surface graphs are plotted (Hair JF, 2016) <ISBN:13:978-0138132637>.
The main purpose of this package is to generate the structure of the analysis of variance (ANOVA) table for two-phase experiments. The user only needs to input the design and the relationships of the random and fixed factors using Wilkinson-Rogers syntax; the package can then quickly generate the structure of the ANOVA table with the coefficients of the variance components for the expected mean squares. Designs such as the balanced incomplete block design can thus be studied, and the efficiency factors of the fixed effects compared, much more easily.
This package provides a tool for interactive exploration of the results from omics experiments to facilitate novel discoveries from high-throughput biology. The software includes R functions for the bioinformatician to deposit study metadata and the outputs from statistical analyses (e.g. differential expression, enrichment). These results are then exported to an interactive JavaScript dashboard that can be interrogated on the user's local machine or deployed online to be explored by collaborators. The dashboard includes sortable tables, interactive plots including network visualization, and fine-grained filtering based on statistical significance.
Extras and extensions for xaringan slides. Navigate your slides with tile view. Make your slides editable, live! Announce slide changes with subtle tones. Animate slide transitions with animate.css. Add tabbed panels to slides with panelset. Use the Tachyons CSS utility toolkit for rapid slide development. Scribble on your slides. Add a copy button to your code chunks with clipboard. Add a logo or a top or bottom banner to every slide. Broadcast slides to stay in sync with remote viewers. Include yourself in your slides with webcam. Plus a whole lot more!
DoubletFinder identifies doublets by generating artificial doublets from existing scRNA-seq data and defining which real cells preferentially co-localize with artificial doublets in gene expression space. Other DoubletFinder package functions are used for fitting DoubletFinder to different scRNA-seq datasets. For example, ideal DoubletFinder performance in real-world contexts requires optimal pK selection and homotypic doublet proportion estimation. pK selection is achieved using pN-pK parameter sweeps and maxima identification in mean-variance-normalized bimodality coefficient distributions. Homotypic doublet proportion estimation is achieved by finding the sum of squared cell annotation frequencies.
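The homotypic doublet proportion estimate described above reduces to the probability that two randomly drawn cells carry the same annotation; a minimal sketch (Python, assuming a plain vector of cell-type labels rather than the package's own objects):

    from collections import Counter

    def homotypic_doublet_proportion(annotations):
        """Sum of squared cell-annotation frequencies: the probability that two
        randomly drawn cells share an annotation, i.e. form a homotypic doublet."""
        counts = Counter(annotations)
        n = len(annotations)
        return sum((c / n) ** 2 for c in counts.values())

    # Hypothetical annotations for 10 cells from 3 clusters:
    print(homotypic_doublet_proportion(
        ["T", "T", "T", "T", "B", "B", "B", "NK", "NK", "NK"]))
    # 0.4**2 + 0.3**2 + 0.3**2 = 0.34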
The leader clustering algorithm provides a means for clustering a set of data points. Unlike many other clustering algorithms it does not require the user to specify the number of clusters, but instead requires the approximate radius of a cluster as its primary tuning parameter. The package provides a fast implementation of this algorithm in n-dimensions using Lp-distances (with special cases for p=1,2, and infinity) as well as for spatial data using the Haversine formula, which takes latitude/longitude pairs as inputs and clusters based on great circle distances.
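A minimal sketch of the leader algorithm for the spatial case (Python, illustrative only and not the package's implementation; points are (latitude, longitude) pairs in degrees and the radius is in kilometres):

    import math

    def haversine_km(p, q):
        """Great circle distance in km between two (lat, lon) points in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def leader_cluster(points, radius_km):
        """Assign each point to the first leader within radius_km,
        otherwise make the point a new leader."""
        leaders, labels = [], []
        for p in points:
            for i, leader in enumerate(leaders):
                if haversine_km(p, leader) <= radius_km:
                    labels.append(i)
                    break
            else:
                leaders.append(p)
                labels.append(len(leaders) - 1)
        return labels, leaders

The number of clusters is not fixed in advance: it emerges from the chosen radius, which is the algorithm's primary tuning parameter.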
Market odds from Pinnacle, an online sports betting bookmaker (see <https://www.pinnacle.com> for more information). Included are datasets for the Major League Baseball (MLB) 2016 season and the 2016 USA election. These datasets can be used to build models and compare statistical information with the information from prediction markets. The MLB 2016 dataset can be used for sabermetrics analysis and can also be used in conjunction with other popular MLB datasets such as Retrosheets or the Lahman package by merging on GameID.
An application for analysis of adverse events, as described in Chen et al. (2023) <doi:10.3390/cancers15092521>. The required data for the application include demographics, follow-up, adverse event, drug administration and optional tumor measurement data. The app can produce swimmer plots of adverse events, Kaplan-Meier plots and Cox proportional hazards model results for the association of adverse event biomarkers with overall survival and progression-free survival. The adverse event biomarkers include occurrence of grade 3, low grade (1-2), and treatment-related adverse events. Plots and tables of results are downloadable.
Noise Repellent is an LV2 plugin to reduce noise. It has the following features:
Spectral gating and spectral subtraction suppression rule
Adaptive and manual noise threshold estimation
Adjustable noise floor
Adjustable offset of thresholds to perform over-subtraction
Time smoothing and a masking estimation to reduce artifacts
Basic onset detector to avoid suppressing transients
Whitening of the noise floor to mask artifacts and to recover higher frequencies
Option to listen to the residual signal
Soft bypass
Noise profile saved with the session
Exploratory analysis of a database. Using the functions of this package it is possible to filter the data set by detecting atypical values (outliers) and to perform exploratory analysis through visual inspection or dispersion measures. With this package you can explore the structure of your data using several parameters at the same time, combining statistical parameters with different graphics. Finally, this package helps to confirm or reject the hypothesis that your data follow a normal distribution. It is therefore useful for getting a first insight into your data before carrying out statistical analyses.
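As an illustration of this kind of workflow (a sketch in Python rather than the package's own functions; the 3-standard-deviation cutoff and the Shapiro-Wilk test are example choices, not necessarily what the package uses):

    import numpy as np
    from scipy import stats

    def explore(x, z_cutoff=3.0):
        """Flag atypical values, report dispersion measures, and test normality."""
        x = np.asarray(x, dtype=float)
        z = (x - x.mean()) / x.std(ddof=1)
        outliers = x[np.abs(z) > z_cutoff]
        clean = x[np.abs(z) <= z_cutoff]
        w, p = stats.shapiro(clean)        # null hypothesis: data are normal
        return {
            "outliers": outliers,
            "mean": clean.mean(),
            "sd": clean.std(ddof=1),
            "iqr": np.subtract(*np.percentile(clean, [75, 25])),
            "shapiro_p": p,                # small p suggests rejecting normality
        }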
This package performs modeling and forecasting of park visitor counts using social media data and (partial) on-site visitor counts. Specifically, the model is built on an automatic decomposition of the trend and seasonal components of the social media-based park visitor counts, from which short-term forecasts of the visitor counts and percent changes in the visitor counts can be made. A reference for the underlying model that VisitorCounts uses can be found in Russell Goebel, Austin Schmaltz, Beth Ann Brackett, Spencer A. Wood, Kimihiro Noguchi (2023) <doi:10.1002/for.2965>.
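The trend/seasonal decomposition the model builds on can be illustrated with a generic additive decomposition (Python with statsmodels, not the package's own algorithm; the monthly series below is synthetic):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Synthetic monthly "social media post count" series with trend and seasonality.
    idx = pd.date_range("2015-01-01", periods=60, freq="MS")
    counts = pd.Series(
        100 + 0.5 * np.arange(60) + 20 * np.sin(2 * np.pi * np.arange(60) / 12),
        index=idx,
    )

    parts = seasonal_decompose(counts, model="additive", period=12)
    trend, seasonal = parts.trend, parts.seasonal
    # A naive short-term forecast could extend the trend and add back the
    # seasonal component for the corresponding month.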
This package implements Bayesian brain mapping models, including the prior ICA (independent components analysis) model proposed in Mejia et al. (2020) <doi:10.1080/01621459.2019.1679638> and the spatial prior ICA model proposed in Mejia et al. (2022) <doi:10.1080/10618600.2022.2104289>. Both models estimate subject-level brain networks as deviations from known population-level networks, which are estimated using standard ICA algorithms. Both models employ an expectation-maximization algorithm for estimation of the latent brain networks and unknown model parameters. Includes direct support for CIFTI, GIFTI, and NIFTI neuroimaging file formats.