Automated backtesting of multiple portfolios over multiple datasets of stock prices in a rolling-window fashion. Intended for researchers and practitioners to backtest a set of different portfolios, as well as for course instructors to assess students' portfolio designs in a fully automated and convenient manner, with results formatted in tables and plots. Each portfolio design is defined as a function that takes as input a window of stock prices and outputs the portfolio weights. Multiple portfolios can be specified as a list of functions or as files in a folder. Multiple datasets can be extracted randomly from different markets, different time periods, and different subsets of the stock universe. The results can then be assessed and ranked in tables according to a number of performance criteria (e.g., expected return, volatility, Sharpe ratio, drawdown, turnover rate, return on investment, and computational time), and plotted as barplots and boxplots.
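As a minimal sketch of the kind of portfolio function described above, here is a 1/N (equally weighted) design; the assumption that the backtesting engine passes the price window as a list carrying an `$adjusted` price matrix is for illustration only and should be checked against the package documentation.

```r
# A simple 1/N portfolio design: receives a window of stock data and
# returns one weight per stock.
uniform_portfolio <- function(dataset, ...) {
  prices <- dataset$adjusted      # assumed slot with adjusted closing prices
  N <- ncol(prices)
  rep(1 / N, N)                   # equal weight on each of the N stocks
}

# Several designs could then be backtested together as a named list:
portfolios <- list("Uniform" = uniform_portfolio)
```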
This package provides functions to read and write neuroimaging data in various file formats, with a focus on FreeSurfer <http://freesurfer.net/> formats. This includes, but is not limited to, the following file formats: 1) MGH/MGZ format files, which can contain multi-dimensional images or other data. Typically they contain time series of three-dimensional brain scans acquired by magnetic resonance imaging (MRI); they can also contain vertex-wise measures of surface morphometry. The MGH format is named after the Massachusetts General Hospital, and the MGZ format is a compressed version of the same format. 2) FreeSurfer morphometry data files in binary curv format. These contain vertex-wise surface measures, i.e., one scalar value for each vertex of a brain surface mesh, typically values such as the cortical thickness or brain surface area at that vertex. 3) Annotation file format, which contains a brain surface parcellation derived from a cortical atlas. 4) Surface file format, which contains a brain surface mesh given as a list of vertices and a list of faces.
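A brief sketch of reading these formats follows; the reader function names (`read.fs.mgh`, `read.fs.curv`, `read.fs.surface`, `read.fs.annot`) are assumed from the package's naming scheme, and the file paths are placeholders.

```r
library(freesurferformats)

# Read a (possibly 4D) MGZ volume, e.g., an MRI time series.
brain <- read.fs.mgh("brain.mgz")            # multi-dimensional array

# Per-vertex morphometry, surface mesh, and parcellation for one hemisphere.
thickness <- read.fs.curv("lh.thickness")    # one scalar per surface vertex
surface   <- read.fs.surface("lh.white")     # list of vertices and faces
annot     <- read.fs.annot("lh.aparc.annot") # cortical atlas labels per vertex
```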
The RNAseqCovarImpute package makes linear model analysis of RNA sequencing read counts compatible with multiple imputation (MI) of missing covariates. A major problem with implementing MI in RNA sequencing studies is that the outcome data must be included in the imputation prediction models to avoid bias. This is difficult in omics studies with high-dimensional data. The first method we developed in the RNAseqCovarImpute package surmounts the problem of high-dimensional outcome data by binning genes into smaller groups that are analyzed pseudo-independently. This method implements covariate MI in gene expression studies by 1) randomly binning genes into smaller groups, 2) creating M imputed datasets separately within each bin, where the imputation predictor matrix includes all covariates and the log counts per million (CPM) for the genes within each bin, 3) estimating gene expression changes using the `limma::voom` followed by `limma::lmFit` functions, separately on each of the M imputed datasets within each gene bin, 4) un-binning the gene sets and stacking the M sets of model results before using the `limma::squeezeVar` function to apply a variance-shrinking Bayesian procedure to each of the M sets of model results, 5) pooling the results with Rubin's rules to produce combined coefficients, standard errors, and P-values, and 6) adjusting P-values for multiplicity to account for the false discovery rate (FDR). A faster method uses principal component analysis (PCA) to avoid binning genes while still retaining outcome information in the MI models. Binning genes into smaller groups requires that the MI and limma-voom analysis be run many times (typically hundreds). The more computationally efficient MI PCA method implements covariate MI in gene expression studies by 1) performing PCA on the log CPM values for all genes using the Bioconductor `PCAtools` package, 2) creating M imputed datasets where the imputation predictor matrix includes all covariates and the optimum number of PCs to retain (e.g., based on Horn's parallel analysis or the number of PCs that account for >80% of the explained variation), 3) conducting the standard limma-voom pipeline with the `voom` followed by `lmFit` followed by `eBayes` functions on each of the M imputed datasets, 4) pooling the results with Rubin's rules to produce combined coefficients, standard errors, and P-values, and 5) adjusting P-values for multiplicity to account for the false discovery rate (FDR).
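To make the limma-voom step concrete, below is a minimal sketch of the fit that would be run on a single imputed dataset; the `counts_matrix` and `imputed_covariates` objects and the design formula are assumptions for illustration, and the package's own wrapper functions handle the binning/PCA, the loop over the M imputations, and the Rubin's-rules pooling described above.

```r
library(limma)
library(edgeR)

# One imputed dataset: a counts matrix (genes x samples) plus covariates
# completed by multiple imputation.
dge <- DGEList(counts = counts_matrix)          # assumed raw count input
dge <- calcNormFactors(dge)
design <- model.matrix(~ exposure + age + sex,  # assumed covariates from MI
                       data = imputed_covariates)

v   <- voom(dge, design)   # log-CPM values with precision weights
fit <- lmFit(v, design)    # gene-wise linear models
fit <- eBayes(fit)         # empirical Bayes variance shrinkage
```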
Landsat satellites collect important data about global forest conditions. Documentation about Landsat's role in forest disturbance estimation is available at <https://landsat.gsfc.nasa.gov/>. Using constrained quadratic B-splines, this package fits an optimal shape-restricted trajectory to a time series of Landsat imagery in order to model annual forest disturbance dynamics in an ecologically sensible manner, assuming one of seven possible "shapes": flat; decreasing; one-jump (decreasing, jump up, decreasing); inverted vee (increasing then decreasing); vee (decreasing then increasing); linear increasing; and double-jump (decreasing, jump up, decreasing, jump up, decreasing). The main routine selects the best shape according to the minimum Bayesian information criterion (BIC) or the cone information criterion (CIC), the latter defined as the log of the estimated predictive squared error. The package also provides parameters summarizing the temporal pattern, including year(s) of inflection, magnitude of change, and pre- and post-inflection rates of growth or recovery. In addition, it contains routines for converting a flat map of disturbance agents into time-series disturbance maps, and a graphical routine that displays the fitted trajectory of the Landsat imagery.
This package provides functions for modeling, comparing, and visualizing photosynthetic light response curves using established mechanistic and empirical models: the rectangular hyperbola (Michaelis-Menten)-based models (eq1, Baly (1935) <doi:10.1098/rspb.1935.0026>; eq2, Kaipiainen (2009) <doi:10.1134/S1021443709040025>; eq3, Smith (1936) <doi:10.1073/pnas.22.8.504>), the hyperbolic-tangent-based models (eq4, Jassby & Platt (1976) <doi:10.4319/LO.1976.21.4.0540>; eq5, Abe et al. (2009) <doi:10.1111/j.1444-2906.2008.01619.x>), the non-rectangular hyperbola model (eq6, Prioul & Chartier (1977) <doi:10.1093/oxfordjournals.aob.a085354>), the exponential-based models (eq8, Webb et al. (1974) <doi:10.1007/BF00345747>; eq9, Prado & de Moraes (1997) <doi:10.1007/BF02982542>), and finally the Ye model (eq11, Ye (2007) <doi:10.1007/s11099-007-0110-5>). Each of these nonlinear least squares models is commonly used to express photosynthetic response under changing light conditions and is well supported in the literature, but distinctions among the mathematical models represent moderately different assumptions about physiology and trait relationships, which ultimately produce different calculated functional trait values. These models were all thoughtfully discussed and curated by Lobo et al. (2013) <doi:10.1007/s11099-013-0045-y> to express the importance of selecting an appropriate model for analysis, and methods were established in Davis et al. (in review) to evaluate the impact of analytical choice in phylogenetic analyses of these function-valued traits. Gas exchange data on 28 wild sunflower species from Davis et al. are included as an example data set.
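For example, the non-rectangular hyperbola (eq6) can be fitted directly with base R's `nls`; the data frame name, the column names (`PARi`, `Photo`), and the starting values below are assumptions for illustration, and the package's own fitting routines should be preferred in practice.

```r
# Non-rectangular hyperbola (Prioul & Chartier 1977):
# A(I) = (phi*I + Amax - sqrt((phi*I + Amax)^2 - 4*theta*phi*I*Amax)) / (2*theta) - Rd
fit <- nls(
  Photo ~ (phi * PARi + Amax -
             sqrt((phi * PARi + Amax)^2 - 4 * theta * phi * PARi * Amax)) /
          (2 * theta) - Rd,
  data  = lrc_data,                                   # assumed: PARi and Photo columns
  start = list(phi = 0.05, Amax = 25, theta = 0.7, Rd = 1)
)
summary(fit)  # phi: quantum yield; Amax: max assimilation; theta: curvature; Rd: dark respiration
```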
This package provides the container for the DockerParallel package.
Provides a Sprockets implementation for the Rails Asset Pipeline.
Documentation at https://melpa.org/#/rope-read-mode
Image data used as examples in the loon R package.
This package enhances the R Optimization Infrastructure ('ROI') package with the lp_solve solver.
This package provides a minimal example package demonstrating the use of mlpack from R via C++ code.
This package provides a fast implementation of the greedy algorithm for the set cover problem using Rcpp.
Enhances the R Optimization Infrastructure ('ROI') package with the alabama solver for solving nonlinear optimization problems.
U-Boot is a bootloader used mostly for ARM boards. It also initializes the boards (RAM, etc.).
Yasnippets for React.
Reads data files acquired by Bruker Daltonics matrix-assisted laser desorption/ionization time-of-flight mass spectrometers of the *flex series.
Facilitates mapping by making Natural Earth map data from <http://www.naturalearthdata.com/> more easily available to R users. It focuses on vector data.
This package provides a GUI for the orloca package as an Rcmdr plug-in. The package deals with continuous planar location problems.
This is a collection of tools that allow medical professionals to calculate appropriate reference ranges (intervals), with confidence intervals around the limits, for diagnostic purposes.
Software releasing made easy and repeatable.
Build regular expressions using grammar and functionality inspired by <https://github.com/VerbalExpressions>. Usage of the %>% pipe operator is encouraged to build expressions in a chain-like fashion.
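A hedged sketch of this chained style follows; the constructor and helper names (`rx()`, `rx_start_of_line()`, `rx_find()`, `rx_maybe()`) follow the verbal-expressions grammar but should be checked against the package's documentation.

```r
library(magrittr)             # provides %>%
library(RVerbalExpressions)   # assumed package name

# Build a pattern matching "http://" or "https://" at the start of a string.
url_pattern <- rx() %>%
  rx_start_of_line() %>%
  rx_find("http") %>%         # literal text
  rx_maybe("s") %>%           # optional "s" for https
  rx_find("://")

grepl(url_pattern, "https://github.com/VerbalExpressions", perl = TRUE)  # expected: TRUE
```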