Detects spatial and temporal groups in GPS relocations (Robitaille et al. (2019) <doi:10.1111/2041-210X.13215>). It can be used to convert GPS relocations to gambit-of-the-group format to build proximity-based social networks. In addition, the randomizations function provides data-stream randomization methods suitable for GPS data.
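For orientation, a minimal sketch of the grouping workflow in R; the bundled example data, column names, and thresholds shown here are assumptions for illustration and should be checked against the spatsoc documentation.

```r
library(spatsoc)
library(data.table)

# GPS relocations with ID, X, Y, datetime columns (assumed layout,
# using the example data shipped with the package)
DT <- fread(system.file("extdata", "DT.csv", package = "spatsoc"))
DT[, datetime := as.POSIXct(datetime, tz = "UTC")]

# Group fixes recorded within 5 minutes of each other, then group
# individuals located within 50 m of each other in the same time group
group_times(DT, datetime = "datetime", threshold = "5 minutes")
group_pts(DT, threshold = 50, id = "ID", coords = c("X", "Y"),
          timegroup = "timegroup")
```

The resulting group column added by group_pts() gives the gambit-of-the-group membership from which a proximity-based social network can be built.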
Goodness of fit and forecast evaluation tests for time series models. Includes, among others, the Generalized Method of Moments (GMM) orthogonality test of Hansen (1982), the Nyblom (1989) parameter constancy test, the sign-bias test of Engle and Ng (1993), and a range of tests for value at risk and expected shortfall evaluation.
Recursive partitioning for varying coefficient generalized linear models and ordinal linear mixed models. Special features are coefficient-wise partitioning, non-varying coefficients and partitioning of time-varying variables in longitudinal regression. A description of a part of this package was published by Burgin and Ritschard (2017) <doi:10.18637/jss.v080.i06>.
Rosenpass is free and open-source software based on the latest research in the field of cryptography. It is intended to be used with WireGuard VPN, but can work with all software that uses pre-shared keys. It uses two cryptographic methods (Classic McEliece and Kyber) to secure systems against attacks with quantum computers.
This package analyzes and creates plots of array CGH data. It allows the use of CBS, wavelet-based smoothing, HMM, BioHMM, GLAD, and CGHseg. Most computations are parallelized (either via forking or with clusters, including MPI and socket clusters) and use the ff package for storing data.
This package contains functions to find the gene expression modules that represent the drivers of Kauffman's attractor landscape. The modules are the core attractor pathways that discriminate between different cell types or groups of interest. Each pathway has a set of synexpression groups, which show transcriptionally coordinated changes in gene expression.
This package provides a collection of functions to explore and investigate basic properties of financial returns and related quantities. The covered fields include techniques of exploratory data analysis and the investigation of distributional properties, including parameter estimation and hypothesis testing. In addition, there are several utility functions for data handling and management.
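As a small illustration of the exploratory side, basicStats() summarizes a return series; the simulated data below are a stand-in for real returns, not part of the package.

```r
library(fBasics)

set.seed(1)
returns <- rnorm(250, mean = 0, sd = 0.01)  # simulated stand-in for daily returns

# Summary statistics: mean, standard deviation, skewness, kurtosis, ...
basicStats(returns)
```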
This crate contains utility functions for path, file, and directory handling. It is organized into several main modules:

fsio::path: holds path-related functions and traits.
fsio::file: file utility functions such as read_file, write_file, etc.
fsio::directory: directory-specific utility functions.
This package provides a single key function, Require, which offers rerun-tolerant versions of install.packages and require for CRAN packages, packages no longer on CRAN (i.e., archived), specific package versions, and GitHub packages. This approach is designed for reproducible workflows that are flexible and fast enough to use during development, yet able to build snapshots once a stable package collection is found. As with other functions in a reproducible workflow, this package emphasizes functions that return the same result whether run for the first or a subsequent time, with subsequent runs fast enough that they can be executed every time without undue waiting for the user or developer.
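A hedged sketch of typical calls; the version and GitHub-reference syntax shown is an assumption and should be verified against the Require documentation.

```r
library(Require)

# Rerun-tolerant install-and-load: does nothing if the package is already available
Require("data.table")

# A minimum version and a GitHub package (assumed "account/repo" syntax)
Require(c("data.table (>= 1.14.0)", "PredictiveEcology/reproducible"))
```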
This package is a rewrite, as an R package, of the RAMpath software developed by John McArdle and Steven Boker. In addition to performing regular SEM analysis through the R package lavaan, RAMpath has unique features. First, it can generate path diagrams according to a given model. Second, it can display path tracing rules through path diagrams and decompose total effects into their respective direct and indirect effects, as well as decompose variance and covariance into individual bridges. Furthermore, RAMpath can fit dynamic system models automatically based on latent change scores and generate vector field plots based upon results obtained from a bivariate dynamic system. Starting with version 0.4, RAMpath can conduct power analysis for both univariate and bivariate latent change score models.
Nuclear decay data for dosimetric calculations from the International Commission on Radiological Protection, as published in ICRP Publication 107 (Eckerman, K. and Endo, A., Ann. ICRP 38(3), 2008) <doi:10.1016/j.icrp.2008.10.004> <https://www.icrp.org/publication.asp?id=ICRP%20Publication%20107>. This is a database of the physical data needed in calculations of radionuclide-specific protection and operational quantities. The data is prescribed by the ICRP, the international authority on radiation dose standards, for estimating dose from the intake of or exposure to radionuclides in the workplace and the environment. The database contains information on the half-lives, decay chains, and yields and energies of radiations emitted in nuclear transformations of 1252 radionuclides of 97 elements.
Predicts antimicrobial peptides using random forests trained on n-gram encoded peptides. The implemented algorithm can be accessed from both the command line and a shiny-based GUI. The AmpGram model is too large for CRAN and has to be downloaded separately from the repository <https://github.com/michbur/AmpGramModel>.
Best subset GLM using information criteria or cross-validation, carried out using the leaps algorithm (Furnival and Wilson, 1974) <doi:10.2307/1267601> or complete enumeration (Morgan and Tatar, 1972) <doi:10.1080/00401706.1972.10488918>. Implements PCR and PLS using AIC/BIC. Implements the one-standard-deviation rule for use with the caret package.
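A sketch of a best-subset fit; the Xy layout (predictors followed by the response as the last column) reflects the package convention as I understand it, and the simulated data are purely illustrative.

```r
library(bestglm)

set.seed(1)
X <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
y <- 1 + 2 * X$x1 - X$x3 + rnorm(100)
Xy <- cbind(X, y = y)            # predictors first, response last

fit <- bestglm(Xy, family = gaussian, IC = "BIC")  # exhaustive search via leaps
fit$BestModel                    # the selected subset model
```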
Toolkit for processing and calling interactions in capture Hi-C data. Converts BAM files into counts of reads linking restriction fragments, and identifies pairs of fragments that interact more than expected by chance. Significant interactions are identified by comparing the observed read count to the expected background rate from a count regression model.
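A hedged sketch of the intended workflow; the main function name chicane(), its bam/baits/fragments arguments, the placeholder file paths, and the q.value column are assumptions and should be checked against the package vignette.

```r
library(chicane)

# Placeholder inputs: aligned capture Hi-C reads, baited restriction
# fragments, and the genome-wide restriction fragment map
results <- chicane(
  bam       = "sample.captureHiC.bam",
  baits     = "baits.bed",
  fragments = "digest_fragments.bed"
)

# Keep fragment pairs that interact more than expected under the
# count-regression background model (column name assumed)
significant <- results[results$q.value < 0.05, ]
```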
Performs additional multiple testing procedures beyond p.adjust(), such as the weighted Hochberg procedure (Tamhane, A. C., & Liu, L., 2008) <doi:10.1093/biomet/asn018>, the ICC-adjusted Bonferroni method (Shi, Q., Pavey, E. S., & Carter, R. E., 2012) <doi:10.1002/pst.1514>, and a new correlation-corrected weighted Hochberg for correlated endpoints.
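For comparison, the base-R interface this package extends: p.adjust() maps a vector of raw p-values to adjusted p-values, while the weighted procedures here additionally take per-hypothesis weights (the argument names of the weighted versions are package-specific and not shown).

```r
# Unweighted adjustments available in base R
p <- c(0.001, 0.012, 0.030, 0.047, 0.200)

p.adjust(p, method = "hochberg")    # standard (unweighted) Hochberg
p.adjust(p, method = "bonferroni")  # standard Bonferroni
```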
Provides libraries of standard tables, listings, and graphs (TLGs) used in clinical trials. This package implements a structure to reformat the data with dunlin and to create reporting tables using rtables and tern with standardized input arguments, enabling quick generation of standard outputs. In addition, it provides comprehensive data checks and script generation functionality.
This package provides a concise check of the format of one or multiple input arguments (data type, length, or value). Since multiple input arguments can be tested simultaneously, a lengthy list of checks at the beginning of your function can be avoided, thereby enhancing the readability and maintainability of your code.
Biotracers and stomach content analyses are combined in a Bayesian hierarchical model to estimate a probabilistic topology matrix (all trophic link probabilities) and a diet matrix (all diet proportions). The package relies on the JAGS software and the jagsUI package to run a Markov chain Monte Carlo approximation of the different variables.
The epilogi variable selection algorithm is implemented for the case of continuous response and predictor variables. The relevant paper is: Lakiotaki K., Papadovasilakis Z., Lagani V., Fafalios S., Charonyktakis P., Tsagris M. and Tsamardinos I. (2023). "Automated machine learning for Genome Wide Association Studies". Bioinformatics, 39(9): btad545. <doi:10.1093/bioinformatics/btad545>.
This package provides a toolkit for calculating forest and canopy structural complexity metrics from terrestrial LiDAR (light detection and ranging). References: Atkins et al. 2018 <doi:10.1111/2041-210X.13061>; Hardiman et al. 2013 <doi:10.3390/f4030537>; Parker et al. 2004 <doi:10.1111/j.0021-8901.2004.00925.x>.
Extra geoms and scales for ggplot2, including geom_cloud(), a normal-density cloud replacement for error bars; the transforms ssqrt_trans and pseudolog10_trans, which are log-like but appropriate for negative data; interp_trans() and warp_trans(), which provide scale transforms based on interpolation; and an infix compose operator for scale transforms.
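As a quick illustration, applying a log-like transform to data that cross zero; the simulated data are an assumption for illustration.

```r
library(ggplot2)
library(ggallin)

dat <- data.frame(x = 1:200, y = rnorm(200, sd = 50))  # y spans negative values

ggplot(dat, aes(x, y)) +
  geom_point() +
  scale_y_continuous(trans = pseudolog10_trans)  # log-like, but defined for y <= 0
```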
Offers a convenient way to compute parameters in the framework of the theory of vocational choice introduced by J.L. Holland (1997). A comprehensive summary of this theory of vocational choice is given in Holland, J.L. (1997). Making vocational choices. A theory of vocational personalities and work environments. Lutz, FL: Psychological Assessment.
Method for the calculation of copy numbers and the calling of copy number alterations. The algorithm uses coverage data from amplicon sequencing of a sample cohort as input. The method includes significance assessment and correction for multiple testing, and does not depend on normal DNA controls. Budczies (2016 Mar 15) <doi:10.18632/oncotarget.7451>.
This package provides a collection of shiny applications for the tesselle packages <https://www.tesselle.org/>. It offers applications for archaeological data analysis and visualization. These mainly, but not exclusively, include applications for chronological modelling (e.g. matrix seriation, aoristic analysis) and count data analysis (e.g. diversity measures, compositional data analysis).