This package provides a C++ backend for multivariate phylogenetic comparative models implemented in the R package PCMBase. It can be used in combination with PCMBase to enable fast and parallel likelihood calculation. Implements the pruning likelihood calculation algorithm described in Mitov et al. (2020) <doi:10.1016/j.tpb.2019.11.005>. Uses the SPLITT C++ library for parallel tree traversal described in Mitov and Stadler (2018) <doi:10.1111/2041-210X.13136>.
Computes normalized cycle threshold (Ct) values (delta Ct) from raw quantitative polymerase chain reaction (qPCR) Ct values and conducts tests of significance using t.test(). Plots expression values based on log2(2^(-1*delta delta Ct)) across groups per gene of interest. Methods for the calculation of delta delta Ct and relative expression (2^(-1*delta delta Ct)) values are described in Livak & Schmittgen (2001) <doi:10.1006/meth.2001.1262>.
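As an illustration of the arithmetic behind these values (a minimal base-R sketch using made-up Ct numbers, not the package's own functions):

    # Mean Ct values for a target gene and a reference (housekeeping) gene
    ct_target_treated <- 22.1; ct_ref_treated <- 16.0
    ct_target_control <- 24.3; ct_ref_control <- 16.1
    # delta Ct = Ct(target) - Ct(reference), within each condition
    d_ct_treated <- ct_target_treated - ct_ref_treated   # 6.1
    d_ct_control <- ct_target_control - ct_ref_control   # 8.2
    # delta delta Ct = delta Ct(treated) - delta Ct(control)
    dd_ct <- d_ct_treated - d_ct_control                 # -2.1
    rel_expr <- 2^(-1 * dd_ct)   # relative expression, about 4.3-fold
    log2(rel_expr)               # 2.1, the value plotted across groups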
Empowers users to fuzzily merge data frames with millions or tens of millions of rows in minutes with low memory usage. The package uses the locality-sensitive hashing algorithms developed by Datar, Immorlica, Indyk and Mirrokni (2004) <doi:10.1145/997817.997857> and Broder (1998) <doi:10.1109/SEQUEN.1997.666900> to avoid having to compare every pair of records in each dataset, resulting in fuzzy merges that finish in linear time.
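A rough base-R sketch of the blocking idea (purely illustrative; the column names, helper functions and the single-shingle "signature" are stand-ins for the package's real minhash/LSH machinery):

    # Bucket records by a cheap signature of character shingles, then compare
    # only candidate pairs sharing a bucket -- instead of all n_a * n_b pairs.
    shingles <- function(s, k = 3) {
      s <- gsub("[^a-z0-9]", "", tolower(s))
      substring(s, 1:(nchar(s) - k + 1), k:nchar(s))
    }
    bucket_of <- function(x) {
      vapply(x, function(s) min(shingles(s)), character(1))
    }
    a <- data.frame(name = c("ACME Corp.", "Globex LLC"))
    b <- data.frame(name = c("Acme Corporation", "Initech"))
    a$bucket <- bucket_of(a$name)
    b$bucket <- bucket_of(b$name)
    candidates <- merge(a, b, by = "bucket")  # far fewer pairs than a full cross join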
The Hashery is a tight collection of Hash-like classes. Included are the auto-sorting Dictionary class, the efficient LRUHash, the flexible OpenHash and the convenient KeyHash. Nearly every class is a subclass of the CRUDHash, which defines a CRUD (Create, Read, Update and Delete) model on top of Ruby's standard Hash, making it possible to subclass and augment to fit any specific use case.
Wrapper for the widely used SUNDIALS software (SUite of Nonlinear and DIfferential/ALgebraic Equation Solvers), and more precisely for its CVODES solver. It aims to solve ordinary differential equations (ODE) and, optionally, the accompanying forward sensitivity problem. The wrapper is made R-friendly by allowing custom parameters to be passed to the user's callback functions. Such functions can be written either in R or in C++ (RcppArmadillo flavor). In the case of C++, performance is greatly improved, so this option is highly advisable when performance matters. If provided, the Jacobian matrix can be calculated either in dense or sparse format. In the latter case, the rmumps package is used to solve the corresponding linear systems. Root finding and event management are optional and can be specified as R or C++ functions too. This makes them a very flexible tool for controlling the ODE system during the time-course simulation. The SUNDIALS library was published in Hindmarsh et al. (2005) <doi:10.1145/1089014.1089020>.
Manage dependencies during package development. This can retrieve all dependencies that are used in ".R" files in the "R/" directory, in ".Rmd" files in the "vignettes/" directory and in the roxygen2 documentation of functions. There is a function to update the "DESCRIPTION" file of your package with CRAN packages or any other remote package. All functions to retrieve dependencies of ".R" scripts and ".Rmd" or ".qmd" files can be used independently of package development.
An implementation of the bridge distribution with logit-link in R. In Wang and Louis (2003) <DOI:10.1093/biomet/90.4.765>, such a univariate bridge distribution was derived as the distribution of the random intercept that bridges a marginal logistic regression and a conditional logistic regression. The conditional and marginal regression coefficients are a scalar multiple of each other. This is not the case when the random intercept distribution is Gaussian.
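For orientation, the bridge density is commonly written as below (a hedged sketch; phi denotes the bridge parameter in (0,1), and dbridge_sketch is an invented name, not an export of the package):

    # Density of the logit-link bridge random intercept
    dbridge_sketch <- function(b, phi) {
      sin(pi * phi) / (2 * pi * (cosh(phi * b) + cos(pi * phi)))
    }
    integrate(dbridge_sketch, -Inf, Inf, phi = 0.6)$value  # ~ 1, a proper density
    # With this intercept distribution, the marginal slopes equal the conditional
    # slopes multiplied by phi, which is the scalar relationship mentioned above.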
Run other estimation and simulation software via the nlmixr2 (Fidler et al. (2019) <doi:10.1002/psp4.12445>) interface, including PKNCA, NONMEM and Monolix. While not required, you can get/install the lixoftConnectors package in the Monolix installation, as described at the following url <https://monolixsuite.slp-software.com/r-functions/2024R1/installation-and-initialization>. When lixoftConnectors is available, Monolix can be run directly instead of setting up command line usage.
This package provides functions for the calculation and plotting of synchrony in tree growth from tree-ring width chronologies (TRW index). It combines variance-covariance (VCOV) mixed modelling with functions that quantify the degree to which the TRW chronologies contain a common temporal signal. It also estimates temporal trends in spatial synchrony using a moving window. These methods can also be used with other kinds of ecological variables for which temporal autocorrelation has been corrected.
This algorithm provides a numerical solution to the problem of unconstrained local minimization (or maximization). It is particularly suited to complex problems and is more efficient than Gauss-Newton-like algorithms when starting from points very far from the final minimum (or maximum). Each iteration is parallelized, and convergence relies on a stringent stopping criterion based on the first and second derivatives. See Philipps et al. (2021) <doi:10.32614/RJ-2021-089>.
Data sets from a variety of biological sample matrices, analysed using a number of mass spectrometry-based metabolomic analytical techniques. The example data sets are stored remotely as GitHub releases <https://github.com/aberHRML/metaboData/releases>, which can be accessed from R using the package. The package also includes the abr1 FIE-MS data set from the FIEmspro package <https://users.aber.ac.uk/jhd/> <doi:10.1038/nprot.2007.511>.
Multiple imputation has been shown to be a flexible method to impute missing values by Van Buuren (2007) <doi:10.1177/0962280206074463>. Expanding on this, random forests have been shown by Stekhoven and Buhlmann <arXiv:1105.0828> to be an accurate model for imputing missing values in datasets. They have the added benefits of returning out-of-bag error and variable importance estimates, as well as being simple to run in parallel.
Calculate ocean wave height summary statistics and process data from bottom-mounted pressure sensor data loggers. Derived primarily from MATLAB functions provided by U. Neumeier at <http://neumeier.perso.ch/matlab/waves.html>. Wave number calculation based on the algorithm in Hunt, J. N. (1979, ISSN:0148-9895) "Direct Solution of Wave Dispersion Equation", American Society of Civil Engineers Journal of the Waterway, Port, Coastal, and Ocean Division, Vol 105, pp 457-459.
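For context, the quantity being solved for is the wave number k in the linear dispersion relation omega^2 = g*k*tanh(k*h). A minimal base-R sketch that solves it by plain root finding (not Hunt's direct approximation; the function name is invented):

    # Wave number k (rad/m) from wave period (s) and water depth (m)
    wavenumber_sketch <- function(period, depth, g = 9.81) {
      omega <- 2 * pi / period
      uniroot(function(k) g * k * tanh(k * depth) - omega^2,
              interval = c(1e-6, 10))$root
    }
    wavenumber_sketch(period = 8, depth = 12)  # roughly 0.083 rad/m, i.e. a ~76 m wavelength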
Validate data in data frames, tibble objects, Spark DataFrames, and database tables. Validation pipelines can be made using easily readable, consecutive validation steps. Upon execution of the validation plan, several reporting options are available. User-defined thresholds for failure rates allow for the determination of appropriate reporting actions. Many other workflows are available, including an information management workflow, where the aim is to record, collect, and generate useful information on data tables.
Phenotypic analysis of data coming from high throughput phenotyping (HTP) platforms, including different types of outlier detection, spatial analysis, and parameter estimation. The package is being developed within the EPPN2020 project (<https://cordis.europa.eu/project/id/731013>). Some functions have been created to be used in conjunction with the R package asreml for the ASReml software, which can be obtained upon purchase from VSN International (<https://vsni.co.uk/software/asreml-r/>).
This package provides some easy-to-use functions to extract and visualize the output of multivariate data analyses, including PCA (Principal Component Analysis), CA (Correspondence Analysis), MCA (Multiple Correspondence Analysis), FAMD (Factor Analysis of Mixed Data), MFA (Multiple Factor Analysis) and HMFA (Hierarchical Multiple Factor Analysis) functions from different R packages. It also contains functions for simplifying some clustering analysis steps and provides ggplot2-based elegant data visualization.
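A short usage sketch (assuming the exported helpers fviz_eig() and fviz_pca_biplot(); the PCA itself comes from base R and the iris data is used only for illustration):

    library(factoextra)
    res_pca <- prcomp(iris[, 1:4], scale. = TRUE)  # PCA from base R
    fviz_eig(res_pca)         # scree plot of the variance explained by each component
    fviz_pca_biplot(res_pca)  # ggplot2-based biplot of individuals and variables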
As a successor of the packages BatchJobs and BatchExperiments, this package provides a parallel implementation of the Map function for high performance computing systems managed by various schedulers. A multicore and socket mode allow parallelization on a local machine, and multiple machines can be hooked up via SSH to create a makeshift cluster. Moreover, the package provides an abstraction mechanism to define large-scale computer experiments in a well-organized and reproducible way.
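A hedged sketch of the Map-style workflow, assuming the registry functions of the successor package batchtools (makeRegistry(), batchMap(), submitJobs(), reduceResultsList()):

    library(batchtools)
    reg <- makeRegistry(file.dir = NA)         # temporary registry using the local backend
    batchMap(fun = function(x) x^2, x = 1:10)  # one job per element, like Map()
    submitJobs()                               # dispatch to the configured backend or scheduler
    waitForJobs()
    unlist(reduceResultsList())                # 1, 4, 9, ..., 100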
xwayland-run contains a set of small utilities revolving around running Xwayland and various Wayland compositors headless, namely:
xwayland-run: Spawn an X11 client within its own dedicated Xwayland rootful instance.
wlheadless-run: Run a Wayland client on a set of supported Wayland headless compositors.
xwfb-run: Combination of the above two tools, to be used as a direct replacement for xvfb-run specifically.
This package provides tools to create, validate, and export BioCompute Objects described in King et al. (2019) <doi:10.17605/osf.io/h59uh>. Users can encode information in data frames, and compose BioCompute Objects from the domains defined by the standard. A checksum validator and a JSON schema validator are provided. This package also supports exporting BioCompute Objects as JSON, PDF, HTML, or Word documents, and exporting to cloud-based platforms.
Represents generalized geometric ellipsoids with the "(U,D)" representation. It allows degenerate and/or unbounded ellipsoids, together with methods for linear and duality transformations, and for plotting. Thus ellipsoids are naturally extended to include lines, hyperplanes, points, cylinders, etc. This permits exploration of a variety of statistical issues that can be visualized using ellipsoids, as discussed by Friendly, Fox & Monette (2013), Elliptical Insights: Understanding Statistical Methods Through Elliptical Geometry <doi:10.1214/12-STS402>.
Multivariate outlier detection is performed using invariant coordinates, and the package offers different methods to choose the appropriate components. ICS is a general multivariate technique with many applications in multivariate analysis. ICSOutlier offers a selection of functions for automated detection of outliers in the data, based on a fitted ICS object or by specifying the dataset and the scatters of interest. The current implementation targets data sets with only a small percentage of outliers.
Keras Tuner <https://keras-team.github.io/keras-tuner/> is a hypertuning framework made for humans. It aims at making the life of AI practitioners, hypertuner algorithm creators and model designers as simple as possible by providing them with a clean and easy to use API for hypertuning. Keras Tuner makes moving from a base model to a hypertuned one quick and easy by only requiring you to change a few lines of code.
Integration of the units and errors packages for a complete quantity calculus system for R vectors, matrices and arrays, with automatic propagation, conversion, derivation and simplification of magnitudes and uncertainties. Documentation about units and errors is provided in the papers by Pebesma, Mailund & Hiebert (2016, <doi:10.32614/RJ-2016-061>) and by Ucar, Pebesma & Azcorra (2018, <doi:10.32614/RJ-2018-075>), included in those packages as vignettes; see citation("quantities") for details.
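For orientation, the propagation these packages automate follows the usual first-order rules; a minimal base-R sketch of the arithmetic for a product of two independent measurements (values invented, not the quantities API):

    x <- 2.50; sx <- 0.05   # e.g. a length in metres
    y <- 1.20; sy <- 0.02   # e.g. a rate in 1/s
    z <- x * y              # 3.0, with unit m/s after simplification
    sz <- z * sqrt((sx / x)^2 + (sy / y)^2)  # relative uncertainties add in quadrature
    c(z, sz)                # 3.000 +/- 0.078 (approximately)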
Given independent and identically distributed observations X(1), ..., X(n) from a Generalized Pareto distribution (GPD) with shape parameter gamma in [-1,0], offers several methods to compute estimates of gamma. The estimates are based on the principle of replacing the order statistics by quantiles of a distribution function based on a log-concave density function. This procedure is justified by the fact that the GPD density is log-concave for gamma in [-1,0].
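For reference, the shape property being exploited can be checked directly (a base-R sketch with an invented function name; standard GPD parameterization with scale sigma):

    # GPD density for gamma in [-1, 0); on this range log(f) is concave in x
    dgpd_sketch <- function(x, gamma, sigma = 1) {
      ifelse(x >= 0 & x <= -sigma / gamma,
             (1 / sigma) * (1 + gamma * x / sigma)^(-1 / gamma - 1), 0)
    }
    curve(log(dgpd_sketch(x, gamma = -0.5)), from = 0, to = 1.9)  # visibly concave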