Download and process public domain works in the Project Gutenberg collection <https://www.gutenberg.org/>. Includes metadata for all Project Gutenberg works, so that they can be searched and retrieved.
Use GTFS (General Transit Feed Specification) data for routing between nominated start and end stations, for extracting isochrones, and for calculating travel times from any nominated start station to all other stations.
Makes it easy to extract and combine variables from the HILDA (Household, Income and Labour Dynamics in Australia) survey maintained by the Melbourne Institute <https://melbourneinstitute.unimelb.edu.au/hilda>.
R interface to access the web services of the ICES (International Council for the Exploration of the Sea) DATRAS trawl survey database <https://datras.ices.dk/WebServices/Webservices.aspx>.
Datasets and wrapper functions for tidyverse-friendly introductory linear regression, used in "Statistical Inference via Data Science: A ModernDive into R and the Tidyverse" available at <https://moderndive.com/>.
An API wrapper for the Monash University Probabilistic Footy Tipping Competition <https://probabilistic-footy.monash.edu/~footy/index.shtml>. Allows users to submit tips directly to the competition from R.
Implementation of two p-value combination techniques (inverse normal and Fisher methods). A vignette is provided to explain how to perform a meta-analysis from two independent RNA-seq experiments.
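As a hedged, generic illustration of the two techniques (base-R formulas applied to a made-up pair of p-values, not this package's interface):

    # Two p-values for the same gene from two independent experiments
    # (illustrative values only).
    p <- c(0.03, 0.07)

    # Fisher's method: -2 * sum(log(p)) is chi-squared with 2k degrees
    # of freedom under the null hypothesis.
    fisher_p <- pchisq(-2 * sum(log(p)), df = 2 * length(p), lower.tail = FALSE)

    # Inverse normal (Stouffer) method: sum the normal quantiles of
    # (1 - p), scale by sqrt(k), and refer to a standard normal.
    z <- sum(qnorm(1 - p)) / sqrt(length(p))
    inv_normal_p <- pnorm(z, lower.tail = FALSE)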
The Needleman-Wunsch global alignment algorithm can be used to find approximate matches between sample names in different data sets. See Wang et al. (2010) <doi:10.4137/CIN.S5613>.
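A minimal sketch of the underlying dynamic programming, assuming arbitrary match/mismatch/gap scores and a hypothetical nw_score() helper (neither is this package's API):

    # Global alignment score between two sample names (Needleman-Wunsch).
    nw_score <- function(a, b, match = 1, mismatch = -1, gap = -1) {
      x <- strsplit(a, "")[[1]]
      y <- strsplit(b, "")[[1]]
      n <- length(x); m <- length(y)
      S <- matrix(0, n + 1, m + 1)
      S[, 1] <- gap * 0:n                       # leading gaps in b
      S[1, ] <- gap * 0:m                       # leading gaps in a
      for (i in seq_len(n)) {
        for (j in seq_len(m)) {
          s <- if (x[i] == y[j]) match else mismatch
          S[i + 1, j + 1] <- max(S[i, j] + s,         # (mis)match
                                 S[i, j + 1] + gap,   # gap in b
                                 S[i + 1, j] + gap)   # gap in a
        }
      }
      S[n + 1, m + 1]
    }

    nw_score("TCGA-A1-A0SB", "TCGA.A1.A0SB")  # similar names score highly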
Computes probability-scale residuals and residual correlations for continuous, ordinal, binary, count, and time-to-event data, as described in Qi Liu, Bryan Shepherd, and Chun Li (2020) <doi:10.18637/jss.v094.i12>.
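For a continuous outcome the probability-scale residual reduces to 2*F(y) - 1, with F the fitted conditional distribution; a minimal sketch for a Gaussian linear model (generic base R, not this package's interface):

    set.seed(1)
    x <- rnorm(100)
    y <- 1 + 2 * x + rnorm(100)
    fit <- lm(y ~ x)

    # PSR = 2 * F(y) - 1 under the fitted conditional normal distribution.
    psr <- 2 * pnorm(y, mean = fitted(fit), sd = summary(fit)$sigma) - 1
    range(psr)  # probability-scale residuals lie in (-1, 1)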
Simultaneous generation of count data (assuming a Poisson distribution) and continuous data (using Fleishman polynomials). The details of the method are explained in Demirtas et al. (2012) <doi:10.1002/sim.5362>.
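The general idea can be sketched by mapping correlated standard normals to each margin (a hedged illustration; the correlation, lambda, and Fleishman coefficients below are placeholders, not values used by the package):

    library(MASS)                                   # for mvrnorm()
    set.seed(42)
    Z <- mvrnorm(1000, mu = c(0, 0),
                 Sigma = matrix(c(1, 0.5, 0.5, 1), 2))

    # Count margin: quantile mapping of a standard normal to Poisson(3).
    counts <- qpois(pnorm(Z[, 1]), lambda = 3)

    # Continuous margin: Fleishman polynomial a + b*Z + c*Z^2 + d*Z^3; the
    # identity coefficients shown here are just the normal special case,
    # real use would solve for target skewness and kurtosis.
    a <- 0; b <- 1; c2 <- 0; d <- 0
    contin <- a + b * Z[, 2] + c2 * Z[, 2]^2 + d * Z[, 2]^3

    cor(counts, contin)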
Tokenizers break text into pieces that are more usable by machine learning models. Many tokenizers share some preparation steps. This package provides those shared steps, along with a simple tokenizer.
This package provides a series of checks to identify common issues in Study Data Tabulation Model (SDTM) datasets. These checks are intended to be generalizable, actionable, and meaningful for analysis.
This package provides methods for sampling contact matrices from diary data for use in infectious disease modelling, as discussed in Mossong et al. (2008) <doi:10.1371/journal.pmed.0050074>.
This package provides a small set of functions wrapping up the call stack and command line inspection needed to determine a running script's filename from within the script itself.
Two- and three-dimensional morphometric maps of enamel and dentine thickness, together with multivariate analysis, volume calculation of dental materials, and principal component analysis of thickness maps with associated morphometric map variations.
This package contains functions to standardize tracheid profiles using the traditional method (Vaganov) and a new method to standardize tracheidograms based on the relative position of tracheids within tree rings.
This package provides a flexible and streamlined pipeline for formatting, analyzing, and visualizing omics data, regardless of omics type (e.g. transcriptomics, proteomics, metabolomics). The package includes tools for shaping input data into analysis-ready structures, fitting linear or mixed-effect models, extracting key contrasts, and generating a rich variety of ready-to-use publication-quality plots. Designed for transparency and reproducibility across a wide range of study designs, with customizable components for statistical modeling.
This package provides a useful statistical tool for the construction and analysis of Honeycomb Selection Designs. More information about this type of design can be found in: Fasoula V. (2013) <doi:10.1002/9781118497869.ch6>; Fasoula V.A. and Tokatlidis I.S. (2012) <doi:10.1007/s13593-011-0034-0>; Fasoulas A.C. and Fasoula V.A. (1995) <doi:10.1002/9780470650059.ch3>; Tokatlidis I. (2016) <doi:10.1017/S0014479715000150>; Tokatlidis I. and Vlachostergios D. (2016) <doi:10.3390/d8040029>.
Implementation of Principal Component Analysis (PCA) on the torus via density ridge estimation. The main function, ridge_pca(), obtains the relevant density ridge for bivariate sine von Mises and bivariate wrapped Cauchy distribution models and provides the associated scores and variance decomposition. Auxiliary functions for evaluating, fitting, and sampling these models are also provided. The package provides replicability for García-Portugués and Prieto-Tirado (2023) <doi:10.1007/s11222-023-10273-9>.
The data within this package is a panel of four samples, each with 3000 cells: two bone marrow (BM) samples and two cord blood (CB) samples.
Functions implemented in this package allow coercing (i.e. converting) network data between classes provided by other R packages. Currently supported classes are those defined in the network and igraph packages.
This package provides functions for prior and likelihood sensitivity analysis in Bayesian models. It implements methods to determine the sensitivity of the posterior to power-scaling perturbations of the prior and likelihood.
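A hedged sketch of the power-scaling idea on a toy model (not this package's API): the prior (or likelihood) is raised to a power alpha, with alpha = 1 recovering the original posterior and nearby values probing sensitivity.

    # Unnormalized log posterior with a power-scaled prior; the normal
    # likelihood and prior below are toy choices for illustration.
    log_post_alpha <- function(theta, y, alpha = 1) {
      log_lik   <- sum(dnorm(y, mean = theta, sd = 1, log = TRUE))
      log_prior <- dnorm(theta, mean = 0, sd = 10, log = TRUE)
      log_lik + alpha * log_prior
    }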
This package provides some simple method functions for calculating confidence intervals and for distilling pertinent information from potentially complex objects; it is primarily used in conjunction with the extRemes and SpatialVx packages.
This package provides lightweight tools to easily manipulate, clean, transform, and prepare your data for analysis. It also forms the data wrangling backend for the packages in the easystats ecosystem.