Ensmallen is a templated C++ mathematical optimization library (by the MLPACK team) that provides a simple set of abstractions for writing an objective function to optimize. It provides various standard and cutting-edge optimizers, including full-batch gradient descent techniques, small-batch techniques, gradient-free optimizers, and constrained optimization. The RcppEnsmallen package includes the header files from the Ensmallen library and pairs them with the appropriate Armadillo header files through the RcppArmadillo package. Therefore, users do not need to install Ensmallen or Armadillo to use RcppEnsmallen. Note that Ensmallen is licensed under the 3-Clause BSD license, Armadillo (starting from version 7.800.0) is licensed under the Apache License 2.0, and RcppArmadillo (the Rcpp bindings/bridge to Armadillo) is licensed under the GNU GPL version 2 or later; RcppEnsmallen is therefore licensed under similar terms. Ensmallen requires a compiler that supports C++14 and Armadillo 10.8.2 or later.
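As a minimal sketch of how an objective function looks when optimized through RcppEnsmallen (the quadratic objective, function names, and starting values below are illustrative assumptions, not part of the package), C++ code can be compiled from R with Rcpp::sourceCpp():

    library(Rcpp)

    sourceCpp(code = '
    // [[Rcpp::depends(RcppEnsmallen)]]
    #include <RcppEnsmallen.h>

    // Objective f(x) = ||x - 2||^2 in the form Ensmallen expects:
    // EvaluateWithGradient() returns the value and fills the gradient.
    class QuadraticFunction {
     public:
      double EvaluateWithGradient(const arma::mat& x, arma::mat& g) {
        g = 2 * (x - 2.0);
        return arma::accu(arma::square(x - 2.0));
      }
    };

    // [[Rcpp::export]]
    arma::mat optimize_quadratic(arma::mat start) {
      QuadraticFunction f;
      ens::L_BFGS opt;         // a full-batch gradient-based optimizer
      opt.Optimize(f, start);  // start is overwritten with the optimum
      return start;
    }
    ')

    optimize_quadratic(matrix(c(0, 0), nrow = 2))  # approaches (2, 2)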
This package provides tools to process data from experiments conducted in the Barnes Maze. The tools detect the trajectories generated by subjects during trials and extract precise coordinates and relevant statistics from the results, facilitating the analysis and interpretation of the observed behaviors and contributing to a deeper understanding of learning and memory processes in such experiments.
Analysis workflow for finding geographic boundaries of ecological or landscape traits and for comparing the placement of the geographic boundaries of two traits. If the input data are trait values, they are transformed to boundary intensities based on approximate first derivatives across latitude and longitude. The package includes functions to create custom null models based on the input data. The boundary statistics are described in Fortin, Drapeau, and Jacquez (1996) <doi:10.2307/3545584>.
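As a generic illustration of the derivative-based transform described above (not the package's own API; the helper name boundary_intensity and the example data are hypothetical), boundary intensity can be approximated as the magnitude of the finite-difference gradient of trait values over a latitude-longitude grid:

    # Hypothetical sketch: boundary intensity as the gradient magnitude of
    # trait values on a regular lat/lon grid (central differences inside;
    # edge values are rough approximations).
    boundary_intensity <- function(trait, d_lat = 1, d_lon = 1) {
      nr <- nrow(trait); nc <- ncol(trait)
      d_dlat <- (trait[c(2:nr, nr), ] - trait[c(1, 1:(nr - 1)), ]) / (2 * d_lat)
      d_dlon <- (trait[, c(2:nc, nc)] - trait[, c(1, 1:(nc - 1))]) / (2 * d_lon)
      sqrt(d_dlat^2 + d_dlon^2)
    }

    set.seed(1)
    trait <- outer(1:20, 1:20, function(i, j) (j > 10) + rnorm(400, sd = 0.1))
    image(boundary_intensity(trait))  # high intensity along the step at j = 10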
Analysis of preprocessed dramatic texts for literary research. The package provides functions to analyze and visualize information about characters, stage directions, the dramatic structure, and the text itself. The dramatic texts are expected to be in CSV format and can be installed from within the package; sample texts are provided. The package and the reasoning behind it are described in Reiter et al. (2017) <doi:10.18420/in2017_119>.
Imports time series data from the Quandl database <https://data.nasdaq.com/>. The package uses the JSON API at <https://data.nasdaq.com/search>, local caching (via the memoise package) and the tidy format by default. It also allows queries of databases, letting the user see which time series are available for each database id. In short, it is an alternative to the Quandl package, with faster data importation in the tidy/long format.
This package provides tools to sort, edit and prune pedigrees and to extract the inbreeding coefficients and the relationship matrix (includes code for pedigrees from self-pollinated species). The use of pedigree data is central to genetics research within the animal and plant breeding communities to predict breeding values. The relationship matrix between the individuals can be derived from the pedigree structure (Vazquez et al., 2010) <doi:10.2527/jas.2009-1952>.
This package provides a range of functions for the design and analysis of disease surveillance activities. These functions were originally developed for animal health surveillance activities but can be equally applied to aquatic animal, wildlife, plant and human health surveillance activities. Utilities are included for sample size calculation and analysis of representative surveys for disease freedom, risk-based studies for disease freedom and for prevalence estimation. This package is based on Cameron A., Conraths F., Frohlich A., Schauer B., Schulz K., Sergeant E., Sonnenburg J., Staubach C. (2015). R package of functions for risk-based surveillance. Deliverable 6.24, WP 6 - Decision making tools for implementing risk-based surveillance, Grant Number no. 310806, RISKSUR (<https://www.fp7-risksur.eu/sites/default/files/documents/Deliverables/RISKSUR_%28310806%29_D6.24.pdf>). Many of the RSurveillance functions are incorporated into the epitools website: Sergeant, ESG, 2019. Epitools epidemiological calculators. Ausvet Pty Ltd. Available at: <http://epitools.ausvet.com.au>.
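To illustrate the kind of calculation such utilities perform (a generic textbook formula, not a specific RSurveillance function; the helper name n_freedom is hypothetical), the sample size needed to demonstrate disease freedom with confidence 1 - alpha, test sensitivity Se, and design prevalence p* is n = log(alpha) / log(1 - Se * p*):

    # Hypothetical helper: sample size for demonstrating disease freedom.
    # confidence: probability of detecting disease if present at pstar
    # se: test sensitivity; pstar: design prevalence
    n_freedom <- function(confidence = 0.95, se = 0.9, pstar = 0.01) {
      ceiling(log(1 - confidence) / log(1 - se * pstar))
    }

    n_freedom()  # 332 samples under the defaults above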
Implementations of Bayesian parametric, nonparametric and semiparametric procedures for univariate and multivariate time series. The package is based on the methods presented in C. Kirch et al. (2018) <doi:10.1214/18-BA1126>, A. Meier (2018) <https://opendata.uni-halle.de//handle/1981185920/13470> and Y. Tang et al. (2023) <doi:10.48550/arXiv.2303.11561>. It was supported by DFG grants KI 1443/3-1 and KI 1443/3-2.
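A minimal sketch of a nonparametric fit (the gibbs_np() interface with data, Ntotal, and burnin arguments is recalled from the package documentation and should be treated as an assumption; the simulated series is made up):

    library(beyondWhittle)

    set.seed(1)
    x <- arima.sim(model = list(ar = 0.6), n = 256)  # toy AR(1) series
    fit <- gibbs_np(data = x, Ntotal = 5000, burnin = 1000)
    plot(fit)  # posterior spectral density estimate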
Analysis of large datasets of fixed coupon bonds, allowing for irregular first and last coupon periods and various day count conventions. With this package you can compute the yield to maturity, the modified and Macaulay durations, and the convexity of fixed-rate bonds. It provides the function AnnivDates, which can be used to evaluate the quality of the data and to return time-invariant properties and the temporal structure of a bond.
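A minimal sketch of a call to AnnivDates() (the package name BondValuation and the argument names Em for the issue/emission date, Mat for maturity, and CpY for coupons per year are recalled from documentation and should be treated as assumptions):

    library(BondValuation)

    # Anniversary dates and temporal structure of a semiannual bond
    # issued 2015-01-01 and maturing 2025-01-01 (illustrative values).
    AnnivDates(Em = "2015-01-01", Mat = "2025-01-01", CpY = 2)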
Sparklines are small plots (about one line of text high), made popular by Edward Tufte. This package is the interface from R to the LaTeX package sparklines by Andreas Loeffer and Dan Luecking (<http://www.ctan.org/pkg/sparklines>). It can work with Sweave, knitr, or other engines that produce TeX. The package can be used to plot vectors, matrices, data frames, and time series (in ts or zoo format).
This package provides a comprehensive and curated collection of datasets related to the lungs, respiratory system, and associated diseases. This package includes epidemiological, clinical, experimental, and simulated datasets on conditions such as lung cancer, asthma, Chronic Obstructive Pulmonary Disease (COPD), tuberculosis, whooping cough, pneumonia, influenza, and other respiratory illnesses. It is designed to support data exploration, statistical modeling, teaching, and research in pulmonary medicine, public health, environmental epidemiology, and respiratory disease surveillance.
We implement functions to estimate direct and indirect effects and to perform sensitivity analysis for unobserved confounding of these effects, as introduced in Lindmark, de Luna and Eriksson (2018) <doi:10.1002/sim.7620> and Lindmark (2022) <doi:10.1007/s10260-021-00611-4>. The estimation and sensitivity analysis are parametric, based on probit and/or linear regression models. Sensitivity analysis is implemented for unobserved confounding of the exposure-mediator, mediator-outcome and exposure-outcome relationships.
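A minimal sketch of the intended workflow (the sensmediation() entry point and its med.model/out.model/exp.name/med.name arguments are recalled from the package documentation and should be treated as assumptions; the simulated data are made up):

    library(sensmediation)

    set.seed(1)
    dat <- data.frame(x = rbinom(200, 1, 0.5), z = rnorm(200))
    dat$m <- rbinom(200, 1, pnorm(0.5 * dat$x + 0.3 * dat$z))
    dat$y <- 1 + dat$x + 0.8 * dat$m + 0.3 * dat$z + rnorm(200)

    # Probit mediator model and linear outcome model, matching the
    # package's parametric approach.
    med.fit <- glm(m ~ x + z, family = binomial(link = "probit"), data = dat)
    out.fit <- lm(y ~ x + m + z, data = dat)
    fit <- sensmediation(med.model = med.fit, out.model = out.fit,
                         exp.name = "x", med.name = "m")
    summary(fit)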
This Python module provides line-editing functions similar to the default Emacs-style ones of GNU Readline. Unlike the Python standard library's readline module, this one allows access to those capabilities in settings outside of a standard command-line interface. It is especially well suited to interfacing with Urwid, due to a shared syntax for describing key inputs.
Currently, all stateless Readline commands are implemented. Yanking and history are not supported.
With Serverspec, you can write RSpec tests for checking that your servers are configured correctly.
Serverspec tests your servers' actual state by executing commands locally, via SSH, via WinRM, via the Docker API, and so on. So you don't need to install any agent software on your servers, and you can use any configuration management tool: Puppet, Ansible, CFEngine, Itamae, and so on.
But the true aim of Serverspec is to help refactor infrastructure code.
Catalogues of resolution IV regular fractional factorial designs in 128 runs are provided for up to 33 2-level factors. The catalogues are complete, excluding resolution IV designs without 5-letter words, because these do not add value to a search for unblocked clear designs. The previous package version 1.0 with complete catalogues up to 24 runs (24 runs and a namespace added later) can be downloaded from the author's website.
Access to several Numerical Weather Prediction services, both in raster format and as a time series for a location. Currently it works with GFS <https://www.ncei.noaa.gov/products/weather-climate-models/global-forecast>, MeteoGalicia <https://www.meteogalicia.gal/web/modelos/threddsIndex.action>, NAM <https://www.ncei.noaa.gov/products/weather-climate-models/north-american-mesoscale>, and RAP <https://www.ncei.noaa.gov/products/weather-climate-models/rapid-refresh-update>.
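A minimal sketch of fetching a forecast time series for a location (getPoint() and its vars/service arguments, and the MeteoGalicia variable name temp, are recalled from the package documentation and should be treated as assumptions):

    library(meteoForecast)

    # Temperature forecast for a point (longitude, latitude) near
    # Santiago de Compostela from the MeteoGalicia service.
    vals <- getPoint(c(-8.55, 42.88), vars = "temp", service = "meteogalicia")
    plot(vals)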
The goal of statcodelists is to promote the reuse and exchange of statistical information and related metadata by making the internationally standardized SDMX code lists available to R users. SDMX has been published as an ISO International Standard (ISO 17369). The metadata definitions, including the code lists, are updated regularly according to the standard. The authoritative version of the code lists made available in this package is at <https://sdmx.org/?page_id=3215/>.
This package implements an approach to community detection in social networks using association rules learning, described in El-Moussaoui et al. (2021) <doi:10.1007/978-3-030-66840-2_3>. It provides tools for processing graph and rules objects, generating association rules, and detecting communities based on node interactions, and is designed to facilitate advanced research in Social Network Analysis.
Loads and displays images, selectively masks specified background colors, bins pixels by color using either data-dependent or automatically generated color bins, quantitatively measures color similarity among images using one of several distance metrics for comparing pixel color clusters, and clusters images by object color similarity. Uses CIELAB, RGB, or HSV color spaces. Originally written for use with organism coloration (reef fish color diversity, butterfly mimicry, etc.), but easily applicable to any image set.
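A minimal sketch of the end-to-end workflow (imageClusterPipeline() is recalled from the package documentation; the folder path is made up, and the lower/upper bounds are assumed to be RGB ranges masking a green background):

    library(colordistance)

    # Cluster a folder of images by color similarity, ignoring pixels
    # whose RGB values fall between lower and upper (background mask).
    dist_mat <- imageClusterPipeline("path/to/images",
                                     lower = c(0, 0.55, 0),
                                     upper = c(0.24, 1, 0.24))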
The user can directly compute and display false discovery rates from inputted p-values or z-scores under a variety of assumptions. p.fdr() computes FDRs, adjusted p-values and decision reject vectors from inputted p-values or z-values. get.pi0() estimates the proportion of data that are truly null. plot.p.fdr() plots the FDRs, adjusted p-values, and raw p-value points against their rejection threshold lines.
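A minimal sketch tying the three functions together (assuming the package loads as FDRestimation and that p.fdr() and get.pi0() accept a numeric vector through a pvalues argument; plot() is assumed to dispatch to plot.p.fdr()):

    library(FDRestimation)

    set.seed(42)
    p <- c(runif(90), rbeta(10, 1, 50))  # 90 nulls plus 10 true effects

    res <- p.fdr(pvalues = p)  # FDRs, adjusted p-values, reject decisions
    get.pi0(pvalues = p)       # estimated proportion of true nulls
    plot(res)                  # dispatches to plot.p.fdr()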
Check concordance of a vector of mutation impacts with standard dictionaries such as Sequence Ontology (SO) <http://www.sequenceontology.org/>, Mutation Annotation Format (MAF) <https://docs.gdc.cancer.gov/Encyclopedia/pages/Mutation_Annotation_Format_TCGAv2/> or Prediction and Annotation of Variant Effects (PAVE) <https://github.com/hartwigmedical/hmftools/tree/master/pave>. It enables conversion between SO/PAVE and MAF terms and selection of the most severe consequence where multiple ampersand (&)-delimited impacts are given.
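As a generic illustration of selecting the most severe consequence from ampersand-delimited impacts (a hypothetical sketch, not the package's API; the three-term severity ranking and the helper name are made up):

    # Hypothetical sketch: pick the most severe term from "a&b" style
    # annotations using an ordered severity dictionary (most severe first).
    severity_order <- c("stop_gained", "missense_variant", "synonymous_variant")

    most_severe <- function(impact) {
      terms <- strsplit(impact, "&", fixed = TRUE)[[1]]
      terms[which.min(match(terms, severity_order))]
    }

    most_severe("synonymous_variant&missense_variant")  # "missense_variant"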
This package provides functions for testing randomness for a univariate time series with arbitrary distribution (discrete, continuous, mixture of both types) and for testing independence between random variables with arbitrary distributions. The test statistics are based on the multilinear empirical copula and multipliers are used to compute P-values. The test of independence between random variables appeared in Genest, Nešlehová, Rémillard & Murphy (2019) and the test of randomness appeared in Nasri (2022).
Split Knockoff is a data-adaptive variable selection framework for controlling the (directional) false discovery rate (FDR) in structural sparsity settings, where variable selection on a linear transformation of the parameters is of concern. The scheme relaxes the linear subspace constraint to its neighborhood, a device often known as variable splitting in optimization. Simulation experiments can be reproduced following the vignette. Split Knockoffs were first defined in Cao et al. (2021) <doi:10.48550/arXiv.2103.16159>.
Calculates federal and state income taxes in the United States. It acts as a wrapper to the NBER's TAXSIM 35 (<http://taxsim.nber.org/taxsim35/>) tax simulator. TAXSIM 35 conducts the calculations, while usincometaxes prepares the data for TAXSIM 35, sends the data to TAXSIM 35's server or communicates with the WebAssembly file, retrieves the results, and places them into a data frame, all without the user having to manage this process.
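A minimal sketch (assuming the main entry point is taxsim_calculate_taxes() and that the input columns follow TAXSIM's variable names taxsimid, year, mstat, and pwages, with fiitax/siitax among the outputs, all recalled from documentation; the household values are made up):

    library(usincometaxes)

    # One hypothetical single filer with $50,000 in wages in 2020.
    family <- data.frame(taxsimid = 1, year = 2020,
                         mstat = "single", pwages = 50000)

    taxes <- taxsim_calculate_taxes(family)
    taxes[, c("taxsimid", "fiitax", "siitax")]  # federal and state income tax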