This package provides R bindings to the dockview JavaScript library <https://dockview.dev/>. Create fully customizable grid layouts (docks) in seconds to include in interactive R reports built with R Markdown or Quarto, or in Shiny apps <https://shiny.posit.co/>. In Shiny mode, docks can be modified dynamically from the server function by adding, removing, or moving panels or groups of panels. Choose among 8 stunning themes (dark and light), and serialise the state of a dock to restore it later.
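A hypothetical sketch of how a dock might be assembled in a Shiny UI; the function and argument names below are illustrative assumptions, not the package's confirmed API:

    # HYPOTHETICAL sketch -- dock_view(), panel(), and their arguments are
    # assumed names for illustration; consult the package docs for the real API
    library(shiny)
    ui <- dock_view(
      panels = list(
        panel(id = "plot",  title = "Plot",  content = plotOutput("p")),
        panel(id = "table", title = "Table", content = tableOutput("t"))
      ),
      theme = "dark"  # one of the 8 bundled themes
    )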
Using variational techniques, we address several epidemiological problems, such as decomposing the incidence curve by inverting the renewal equation, as described in Alvarez et al. (2021) <doi:10.1073/pnas.2105112118> and Alvarez et al. (2022) <doi:10.3390/biology11040540>, and estimating the functional relationship between epidemiological indicators. We also propose a learning method for short-term forecasting of the trend of the incidence curve, as described in Morel et al. (2022) <doi:10.1101/2022.11.05.22281904>.
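At the core of this inversion is the renewal equation linking incidence to the time-varying reproduction number; schematically (symbols follow the usual epidemiological convention, not necessarily the papers' exact notation):

    $$ i_t \;=\; R_t \sum_{s \ge 1} \Phi_s \, i_{t-s} $$

where i_t is the incidence on day t, R_t the instantaneous reproduction number, and Phi the serial-interval distribution; the package recovers R_t by inverting this relation.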
This package contains four main functions (i.e., four pieces of furniture): table1(), which produces a well-formatted table of descriptive statistics of the kind commonly presented as Table 1 in research articles; tableC(), which produces a well-formatted table of correlations; tableF(), which provides frequency counts; and washer(), which is helpful for cleaning up data. These furniture-themed functions are designed to simplify common tasks in quantitative analysis. Other data summary and cleaning tools are also available.
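A quick sketch of typical calls (argument style assumed from the function names above; check the package documentation for the exact signatures):

    library(furniture)
    # Table 1-style descriptive statistics, split by transmission type
    table1(mtcars, mpg, hp, wt, splitby = ~am)
    # correlation table for the same variables
    tableC(mtcars, mpg, hp, wt)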
When preparing a presentation or a report, you often need to manage a large number of ggplot figures: changing figure sizes, titles, labels, themes, and so on. Going back to the original code for each change is inconvenient. This package provides a simple way to manage ggplot figures: add figures to a database and update them later through a CLI (command line interface) or GUI (graphical user interface).
This package creates presentation-ready tables summarizing data sets, regression models, and more. The code to create the tables is concise and highly customizable. Data frames can be summarized with any function, e.g., mean(), median(), or even user-written functions. Regression models are summarized and include reference rows for categorical variables. Common regression models, such as logistic regression and Cox proportional hazards regression, are automatically identified, and the tables are pre-filled with appropriate column headers.
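A sketch of typical usage, assuming the tbl_summary()/tbl_regression() entry points and bundled trial data of the gtsummary package, which this description matches:

    library(gtsummary)
    # summary table of the bundled trial data, split by treatment arm
    tbl_summary(trial, by = trt)
    # logistic regression summarized with reference rows and odds ratios
    m <- glm(response ~ age + grade, data = trial, family = binomial)
    tbl_regression(m, exponentiate = TRUE)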
Bayesian inference for bivariate meta-analysis of diagnostic test studies using integrated nested Laplace approximation (INLA). A purpose-built graphical user interface is available. The INLA R package must be installed for successful usage; it can be obtained from <https://www.r-inla.org>. We recommend the testing version, which can be installed by running: install.packages("INLA", repos=c(getOption("repos"), INLA="https://inla.r-inla-download.org/R/testing"), dep=TRUE).
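Once INLA is available, a minimal fit might look like the following; this is a sketch assuming the main fitting function and a bundled example data set follow the package's documented pattern:

    library(meta4diag)
    data(Catheter)              # example diagnostic-test data, assumed bundled
    # bivariate meta-analysis fitted via INLA, with default priors assumed
    res <- meta4diag(data = Catheter)
    summary(res)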
Offers an easy and automated way to scale up individual-level space-use analysis to that of groups. Includes a function from the move package to calculate a dynamic Brownian bridge movement model from movement data for individual animals, as well as functions to visualize and quantify space use for individuals aggregated in groups. Originally written with passive acoustic telemetry in mind, the package also provides functionality to account for unbalanced acoustic receiver array designs, and supports satellite tag data.
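The underlying move call looks roughly like this, following the move package's documented example data (the projection step and window settings are illustrative):

    library(move)
    data(leroy)                                           # example fisher track
    leroy <- spTransform(leroy[1:200, ], center = TRUE)   # project coordinates
    # dynamic Brownian bridge movement model for one individual
    dbbmm <- brownian.bridge.dyn(leroy, dimSize = 85, location.error = 12,
                                 margin = 11, window.size = 31)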
Splits initial strata into refined strata that optimize covariate balance. For more information, please see Brumberg, Small, and Rosenbaum (2024) <doi:10.1093/biomtc/ujae061>. To solve the linear program, the Gurobi commercial optimization software is recommended, but not required. The gurobi R package can be installed following the instructions at <https://docs.gurobi.com/projects/optimizer/en/current/reference/r/setup.html> after claiming your free academic license at <https://www.gurobi.com/academia/academic-program-and-licenses/>.
This package provides a set of functions for building a scoring model from beginning to end, leading the user through an efficient and organized development process and significantly reducing the time spent on data exploration, variable selection, feature engineering, binning, model selection, and other recurrent tasks. The package also offers monotonic and customized binning, scaling capabilities that transform logistic coefficients into points for better business understanding, and functions that calculate and visualize classic performance metrics of a classification model.
Simulation tools to evaluate the long-term effects of salmon management strategies, including combinations of habitat, harvest, and hatchery actions. The stochastic age-structured operating model accommodates complex life histories, including freshwater survival across early life stages, juvenile survival and fishery exploitation in the marine life stage, partial maturity by age class, and fitness impacts of hatchery programs on natural spawning populations. salmonMSE also provides an age-structured conditioning model to develop operating models fitted to data.
The `scorecard` package makes the development of credit risk scorecards easier and more efficient by providing functions for common tasks such as data partitioning, variable selection, WOE binning, scorecard scaling, performance evaluation, and report generation. These functions can also be used in the development of machine learning models. References include: 1. Refaat, M. (2011, ISBN: 9781447511199), Credit Risk Scorecard: Development and Implementation Using SAS. 2. Siddiqi, N. (2006, ISBN: 9780471754510), Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring.
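A condensed sketch of the documented workflow, using the bundled germancredit data:

    library(scorecard)
    data("germancredit")
    dt      <- var_filter(germancredit, y = "creditability")  # variable selection
    dt_list <- split_df(dt, y = "creditability")               # train/test partition
    bins    <- woebin(dt_list$train, y = "creditability")      # WOE binning
    m <- glm(creditability ~ ., family = binomial(),
             data = woebin_ply(dt_list$train, bins))           # logistic model
    card  <- scorecard(bins, m)                                # scale to points
    score <- scorecard_ply(dt_list$test, card)                 # score new data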
This package provides tools for obtaining, processing, and visualizing spectral reflectance data for user-defined land or water surface classes, for visually exploring the wavelengths at which the classes differ. Input should be a shapefile with polygons of surface classes (e.g., different habitat types, crops, or vegetation). Pixel data from the optical bands of the Sentinel-2 L2A satellite mission are obtained through the Google Earth Engine service (<https://earthengine.google.com/>) and used as the source of spectral data.
Fits a semiparametric spatiotemporal model for data with mixed frequencies, specifically where the response variable is observed at a lower frequency than some covariates. The estimation uses an iterative backfitting algorithm that combines a non-parametric smoothing spline for high-frequency data, parametric estimation for low-frequency and spatial neighborhood effects, and an autoregressive error structure. Methodology based on Malabanan, Lansangan, and Barrios (2022) <https://scienggj.org/2022/SciEnggJ%202022-vol15-no02-p90-107-Malabanan%20et%20al.pdf>.
Collection of phylogenetic tree statistics gathered from across the literature. All functions have been written to maximize computation speed. The package includes umbrella functions to calculate all statistics, all balance-associated statistics, or all branching-time-related statistics. Furthermore, the treestats package supports summary-statistic calculations on Ltables, provides speed-improved computation of branching times and Ltable conversion, and includes algorithms to create intermediately balanced trees. A full description can be found in Janzen (2024) <doi:10.1016/j.ympev.2024.108168>.
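A sketch of the umbrella-function usage on a random tree (tree simulated with ape; the function name is assumed from the package's exports):

    library(treestats)
    phy <- ape::rphylo(n = 50, birth = 1, death = 0)  # random 50-tip tree
    stats <- calc_all_stats(phy)   # umbrella call returning all statistics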
This package provides tools for analyzing the relationship between direct prices (based on labor values) and prices of production using Bayesian generalized linear models, panel data methods, partial least squares regression, canonical correlation analysis, and panel vector autoregression. Includes functions for model comparison, out-of-sample validation, and structural break detection. The methods use raw accounting data with explicit temporal structure, following Gomez Julian (2023) <doi:10.17605/OSF.IO/7J8KF> and standard econometric techniques for panel data analysis.
An implementation of the one-sample Wilcoxon Signed Rank test for medians. It includes two functions: W_stat(), which computes the exact probabilities of the Wilcoxon Signed Rank test statistic W, and Wilcox.m.test(), which allows the user to conduct the one-sample Wilcoxon Signed Rank hypothesis test for medians, including a version based on the normal approximation, following the techniques of Bickel and Doksum (1973, ISBN: 013850363X).
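For orientation, base R's wilcox.test() performs the same one-sample signed rank test (without the exact-probability utilities above); a quick sketch:

    # H0: median = 5, tested with the one-sample Wilcoxon Signed Rank test
    x <- c(4.2, 5.1, 3.8, 6.0, 4.9, 5.5)
    wilcox.test(x, mu = 5, exact = TRUE)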
The AHP (Analytic Hierarchy Process) is a multi-criteria decision-making method for choice and outranking problems. The method analyzes the alternatives on each criterion and then provides a global performance score for each alternative in the decision context. What distinguishes this package is the ability to evaluate alternatives using both quantitative data, via numerical representation, and qualitative data, via the Saaty scale, which expresses preference relations between variables through pairwise evaluation.
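To illustrate the underlying computation (a generic AHP step, not this package's API): priorities are commonly obtained as the normalized principal eigenvector of a Saaty pairwise comparison matrix:

    # generic AHP priority weights from a 3x3 Saaty-scale matrix
    A <- matrix(c(1,   3,   5,
                  1/3, 1,   3,
                  1/5, 1/3, 1), nrow = 3, byrow = TRUE)
    w <- Re(eigen(A)$vectors[, 1])  # principal eigenvector
    w <- w / sum(w)                 # normalized priority vector
    round(w, 3)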
Computation of the alpha-shape and alpha-convex hull of a given sample of points in the plane. The concepts of alpha-shape and alpha-convex hull generalize the definition of the convex hull of a finite set of points. The implementation is based on the duality between the Voronoi diagram and the Delaunay triangulation. The package also includes a function that returns the Delaunay mesh of a given sample of points and its dual Voronoi diagram in a single object.
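A sketch of typical calls, assuming the ashape()/ahull()/delvor() entry points of the alphahull package, which this description matches:

    library(alphahull)
    set.seed(1)
    x  <- matrix(runif(200), ncol = 2)  # 100 random points in the unit square
    as <- ashape(x, alpha = 0.2)        # alpha-shape
    ah <- ahull(x, alpha = 0.2)         # alpha-convex hull
    dv <- delvor(x)                     # Delaunay mesh + dual Voronoi diagram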
Implements the "Hit and Run" Markov chain Monte Carlo method for sampling uniformly from convex shapes defined by linear constraints, and the "Shake and Bake" method for sampling from the boundary of such shapes. Includes specialized functions for sampling normalized weights with arbitrary linear constraints. See Tervonen, T., van Valkenhoef, G., Basturk, N., and Postmus, D. (2012) <doi:10.1016/j.ejor.2012.08.026> and van Valkenhoef, G., Tervonen, T., and Postmus, D. (2014) <doi:10.1016/j.ejor.2014.06.036>.
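A minimal sketch, assuming the list-based constraint format (constr/dir/rhs) used by the hitandrun package:

    library(hitandrun)
    # sample uniformly from the triangle x1 + x2 <= 1, x1 >= 0, x2 >= 0
    constr <- list(
      constr = rbind(c(1, 1), c(-1, 0), c(0, -1)),
      dir    = rep("<=", 3),
      rhs    = c(1, 0, 0)
    )
    samples <- hitandrun(constr, n.samples = 1000)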
An implementation of multivariate functional additive mixed models (multiFAMM); see Volkmann et al. (2021, <arXiv:2103.06606>). It builds on established methods for univariate sparse functional regression models and multivariate functional principal component analysis. This package contains the function to run a multiFAMM and some convenience functions useful when working with large models. An additional package on GitHub contains further convenience functions to reproduce the analyses of the corresponding paper (<https://github.com/alexvolkmann/multifammPaper>).
The National Ecological Observatory Network (NEON) provides access to its numerous data products through its REST API, <https://data.neonscience.org/data-api/>. This package provides a high-level user interface for downloading and storing NEON data products. Unlike neonUtilities, this package avoids repeated downloads, provides persistent storage, and improves performance. neonstore can also construct a local duckdb database of stacked tables, making it possible to work with tables that are far too big to fit into memory.
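A sketch following the package README pattern (the product and table IDs shown are NEON's breeding landbird survey):

    library(neonstore)
    neon_download(product = "DP1.10003.001")   # cached locally; re-runs skip files
    birds <- neon_read(table = "brd_countdata-expanded")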
Extend the tidymodels ecosystem <https://www.tidymodels.org/> to enable the creation of predictive models with offset terms. Models with offsets are most useful when working with count data or when fitting an adjustment model on top of an existing model with a prior expectation. The former situation is common in insurance where data is often weighted by exposures. The latter is common in life insurance where industry mortality tables are often used as a starting point for setting assumptions.
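The underlying idea, shown here with base R's glm() rather than this package's tidymodels wrappers: an exposure offset turns a count model into a rate model:

    # Poisson claim-frequency model with log(exposure) as an offset
    claims   <- c(2, 0, 1, 4, 1)
    exposure <- c(1.0, 0.5, 0.8, 2.0, 1.2)   # policy-years
    age      <- c(25, 40, 33, 55, 47)
    glm(claims ~ age + offset(log(exposure)), family = poisson())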
Generates Plus Codes for geometric objects or for data frames that contain them, with the option to specify the precision of the area. The main feature of the package is based on the open-source code developed by Google Inc. in the repository <https://github.com/google/open-location-code/blob/main/java/src/main/java/com/google/openlocationcode/OpenLocationCode.java>. For details about Plus Codes, visit <https://maps.google.com/pluscodes/> or <https://github.com/google/open-location-code>.
Estimates the parameters of spatio-temporal models with censored or missing data using the SAEM algorithm (Delyon et al., 1999), a stochastic approximation of the widely used EM algorithm that is particularly valuable for models in which the E-step lacks a closed-form expression. The package also provides a function to compute the observed information matrix using the method developed by Louis (1982). To assess the performance of the fitted model, case-deletion diagnostics are provided.
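For reference, Louis' (1982) identity expresses the observed information via complete-data quantities; evaluated at the maximum likelihood estimate (where the conditional expectation of the complete-data score vanishes), schematically:

    $$ I_{\mathrm{obs}}(\hat\theta; y) \;=\; \mathbb{E}\big[\, I_c(\hat\theta) \mid y \,\big] \;-\; \operatorname{Var}\big[\, S_c(\hat\theta) \mid y \,\big] $$

where I_c and S_c denote the complete-data information matrix and score.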