The gamma lasso algorithm provides regularization paths corresponding to a range of non-convex cost functions between L0 and L1 norms. As much as possible, usage for this package is analogous to that for the glmnet package (which does the same thing for penalization between L1 and L2 norms). For details, see Taddy (2017, JCGS), 'One-Step Estimator Paths for Concave Regularization', <arXiv:1308.5623>.
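A minimal sketch of a typical call, assuming the package's main fitting function gamlr() with its gamma argument controlling the concavity of the penalty:

    library(gamlr)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- rnorm(100)
    fit <- gamlr(x, y, gamma = 2)  # gamma = 0 recovers the lasso; larger
                                   # gamma moves toward L0-like penalties
    plot(fit)   # coefficient paths along the penalty grid
    coef(fit)   # coefficients at the AICc-selected segment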
River hydrograph separation and daily runoff time series analysis. Provides various filters to separate baseflow and quickflow. Implements the advanced separation technique by Rets et al. (2022) <doi:10.1134/S0097807822010146>, which uses meteorological data to reveal genetic components of the runoff: ground, rain, thaw, and spring (seasonal thaw). Features include high-performance C++17 computation, annually aggregated variables, statistical testing, and numerous plotting functions for high-quality visualization.
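A brief sketch of a baseflow filter call; the package name grwat, the function gr_baseflow(), and its method argument are assumptions based on the description above:

    library(grwat)                                # package name assumed
    Q <- runif(365, 1, 10)                        # synthetic daily discharge, m^3/s
    bf <- gr_baseflow(Q, method = "lynehollick")  # hypothetical filter call
    qf <- Q - bf                                  # quickflow is the remainder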
Implementation of some of the formulations for the thermodynamic and transport properties released by the International Association for the Properties of Water and Steam (IAPWS). More specifically, the releases R1-76(2014), R5-85(1994), R6-95(2018), R7-97(2012), R8-97, R9-97, R10-06(2009), R11-24, R12-08, R15-11, R16-17(2018), R17-20 and R18-21 at <https://iapws.org>.
The goal of jetty is to execute R functions and code snippets in an isolated R subprocess within a Docker container and return the evaluated results to the local R session. jetty can install necessary packages at runtime and seamlessly propagate errors and outputs from the Docker subprocess back to the main session. jetty is primarily designed for sandboxed testing and quick execution of example code.
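A hypothetical sketch of evaluating a function in the container and collecting the result; the run() entry point and its signature are assumptions, not confirmed API:

    library(jetty)
    # evaluate inside a Dockerized R subprocess (run() is assumed here)
    result <- run(function() {
      summary(lm(mpg ~ wt, data = mtcars))
    })
    result  # the fitted-model summary, returned to the local session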
Implementation of some unit-level and area-level EBLUP estimators, as well as estimators of their MSE, also under heteroscedasticity. The package further documents the publications Breidenbach and Astrup (2012) <DOI:10.1007/s10342-012-0596-7>, Breidenbach et al. (2016) <DOI:10.1016/j.rse.2015.07.026>, and Breidenbach et al. (2018, in press). The vignette further explains the use of the implemented functions.
Shiny apps for the quantitative analysis of images from lateral flow assays (LFAs). The images are segmented and background-corrected, and color intensities are extracted. The apps can be used to import and export intensity data and to calibrate LFAs by means of linear, loess, or GAM models. The calibration models can further be saved and applied to intensity data from new images for determining concentrations.
Print vectors (and data frames) of floating point numbers using a non-scientific format optimized for human readers. Vectors of numbers are rounded using significant digits, aligned at the decimal point, and all zeros trailing the decimal point are dropped. See: Wright (2016). Lucid: An R Package for Pretty-Printing Floating Point Numbers. In JSM Proceedings, Statistical Computing Section. Alexandria, VA: American Statistical Association. 2270-2279.
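A short example of the formatting difference, using the package's lucid() function:

    library(lucid)
    x <- c(123.4567, 1.234567, 0.001234567)
    print(x)  # base R formatting
    lucid(x)  # rounded to significant digits, decimal-aligned,
              # trailing zeros dropped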
Three algorithms for estimating a Markov random field structure. Two of them are an exact version and a simulated annealing version of a penalized maximum conditional likelihood method similar to the Bayesian Information Criterion. These algorithms are described in Frondana (2016) <doi:10.11606/T.45.2018.tde-02022018-151123>. The third one is a greedy algorithm, described in Bresler (2015) <doi:10.1145/2746539.2746631>.
Mica is a server application used to create data web portals for large-scale epidemiological studies or multiple-study consortia. Mica helps studies to provide scientifically robust data visibility and web presence without significant information technology effort. Mica provides a structured description of consortia, studies, annotated and searchable data dictionaries, and data access request management. This Mica client allows one to perform data extraction for reporting purposes.
This package provides functions for working with (grouped) multivariate normal variance mixture distributions (evaluation of distribution functions and densities, random number generation and parameter estimation), including Student's t distribution for non-integer degrees-of-freedom as well as the grouped t distribution and copula with multiple degrees-of-freedom parameters. See <doi:10.18637/jss.v102.i02> for a high-level description of select functionality.
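A brief sketch using the Student t wrappers for non-integer degrees of freedom; the loc/scale argument convention is assumed from the package's interface:

    library(nvmix)
    P <- diag(2)                              # 2x2 scale matrix
    X <- rStudent(500, df = 3.5, scale = P)   # sampling
    pStudent(c(1, 1), df = 3.5, scale = P)    # distribution function
    dStudent(c(0, 0), df = 3.5, scale = P)    # density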
TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning. Transfer learning can train a model with a smaller dataset, improve generalization, and speed up training.
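A minimal sketch of loading a published module as a Keras layer via layer_hub(); the module handle is an example URL:

    library(keras)
    library(tfhub)
    # wrap a pre-trained text-embedding module as a reusable layer
    embedding <- layer_hub(
      handle = "https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"
    )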
Descriptive statistics for large data tend to be low resolution in the tails. Whisker odds generate a table of descriptive statistics for large data. These are the same as letter values, but with an alternative naming of depths which allows for depths beyond 26. For a reference on letter values, see Hofmann, Wickham, and Kafadar (2017) <doi:10.1080/10618600.2017.1305277>.
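A hypothetical sketch; the package name wodds and the function wodds() are assumptions based on the title, not confirmed API:

    library(wodds)   # package name assumed
    x <- rnorm(1e5)
    wodds(x)  # hypothetical: letter-value-style summary table with
              # depth names extending past the 26 letters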
This package provides the usual distribution functions, maximum likelihood inference, and model diagnostics for univariate stationary extreme value mixture models. Kernel density estimation is also provided, including various boundary-corrected kernel density estimation methods and a wide choice of kernels, with a cross-validation likelihood-based bandwidth estimator. Reasonable consistency with the base functions in the evd package is provided, so that users can safely interchange most code.
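A short sketch fitting one of the mixture models, a normal bulk with a GPD tail; fnormgpd() and the returned component names follow the package's naming scheme and are assumptions here:

    library(evmix)
    x <- rnorm(1000)
    fit <- fnormgpd(x)   # ML fit of normal bulk + GPD tail
    fit$u                # estimated threshold
    fit$xi               # estimated GPD shape parameter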
Zoltar is a website that provides a central repository of model forecast results in a standardized format. It supports storing, retrieving, comparing, and analyzing time series forecasts for prediction challenges of interest to the modeling community. This package provides functions for working with the Zoltar API, including connecting and authenticating, getting information about projects, models, and forecasts, deleting and uploading forecast data, and downloading scores.
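A brief sketch of connecting and listing projects; new_connection(), zoltar_authenticate(), and projects() are used as in the package's documented workflow, with credentials assumed to be in environment variables:

    library(zoltr)
    conn <- new_connection()   # connection to the default Zoltar host
    zoltar_authenticate(conn,
                        Sys.getenv("ZOLTAR_USERNAME"),
                        Sys.getenv("ZOLTAR_PASSWORD"))
    projects(conn)             # data frame describing available projects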
Roswell started out as a command-line tool with the aim of making installation and management of Common Lisp implementations really simple and easy. Roswell has now evolved into a full-stack environment for Common Lisp development, and has many features that make it easy to test, share, and distribute your Lisp applications.
Roswell is still in beta. Despite this, the basic interfaces are stable and not likely to change.
HERON is a software package for analyzing peptide binding array data. In addition to identifying significant binding probes, HERON also provides functions for finding epitopes (strings of consecutive peptides within a protein). HERON also calculates significance at the probe, epitope, and protein levels by employing meta p-value methods. HERON is designed for obtaining calls at the sample level and calculates fractions of hits for different conditions.
This package performs outlier detection of sequences in a multiple sequence alignment using a bootstrap of predefined distance metrics. Outlier sequences can make downstream analyses unreliable or make the alignments less accurate while they are being constructed. This package implements the OD-seq algorithm proposed by Jehl et al. (2015) <doi:10.1186/s12859-015-0702-1> for aligned sequences and a variant using string kernels for unaligned sequences.
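A small sketch of flagging outliers in an alignment built with the msa package; the odseq() arguments shown follow its documented defaults and should be treated as assumptions:

    library(msa)     # to build the alignment
    library(odseq)
    seqs <- Biostrings::AAStringSet(
      c("MKVLAAGICK", "MKVLAAGICK", "MKVIAAGICK", "TTTTTTTTTT")
    )
    aln <- msa(seqs)                                  # align the sequences
    odseq(aln, distance_metric = "affine", B = 100)   # TRUE marks outliers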
Stanford ATLAS (Advanced Temporal Search Engine) is a powerful tool for constructing cohorts of patients extremely quickly and efficiently. This package is designed to interface directly with an instance of the ATLAS search engine and facilitates API queries and data dumps. A prerequisite is a good knowledge of the temporal language, needed to construct queries efficiently. More information is available at <https://shahlab.stanford.edu/start>.
This package provides functions to combine data on voting bloc size, turnout, and vote choice to estimate each bloc's vote contributions to the Democratic and Republican parties. The package also includes functions for uncertainty estimation and plotting. Users may define voting blocs along a discrete or continuous variable. The package implements methods described in Grimmer, Marble, and Tanigawa-Lau (2023) <doi:10.31235/osf.io/c9fkg>.
CLUster Evaluation (CLUE) is a computational method for identifying the optimal number of clusters in a given time-course dataset clustered by the cmeans or kmeans algorithms, and for subsequently identifying key kinases or pathways from each cluster. Its implementation in R is called ClueR. See the README at <https://github.com/PYangLab/ClueR> for more details. P. Yang et al. (2015) <doi:10.1371/journal.pcbi.1004403>.
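A hypothetical sketch of the optimization run; runClue() is the package's entry point, but the example dataset and annotation names used here are assumptions:

    library(ClueR)
    data(hES)             # example time-course phosphoproteomics matrix (name assumed)
    data(PhosphoSite)     # kinase-substrate annotation list (name assumed)
    clueObj <- runClue(Tc = hES, annotation = PhosphoSite.human,
                       kRange = 2:10)   # evaluate k = 2..10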
This package provides a toolbox for developing applications, games, simulations, or agent-based models in the R terminal. Included functions allow users to move the cursor around the terminal screen, change text colors and attributes, clear the screen, hide and show the cursor, map key presses to functions, and draw shapes and curves, among other things. Most functionalities require users to be in a terminal (not the R GUI).
Compute distributional quantities for an Integrated Gamma (IG) or Integrated Gamma Limit (IGL) copula, such as the cdf and density. Compute corresponding conditional quantities such as the cdf and quantiles. Generate data from an IG or IGL copula. See the vignette for formulas, or, for a derivation, see Coia, V. (2017) 'Forecasting of Nonlinear Extreme Quantiles Using Copula Models', PhD Dissertation, The University of British Columbia.
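A hypothetical sketch; the p/d/r naming and the theta and alpha parameters follow the package's apparent convention and should be treated as assumptions:

    library(igcop)
    U <- rigcop(500, theta = 3, alpha = 2)   # sample (u, v) pairs
    pigcop(0.3, 0.6, theta = 3, alpha = 2)   # joint cdf at (0.3, 0.6)
    digcop(0.3, 0.6, theta = 3, alpha = 2)   # copula density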
Gaussian process regression with an emphasis on kernels. Quantitative and qualitative inputs are accepted. Some pre-defined kernels are available, such as radial or tensor-sum kernels for quantitative inputs, and compound symmetry, low-rank, and group kernels for qualitative inputs. The user can define new kernels and composite kernels through a formula mechanism. Useful methods include parameter estimation by maximum likelihood, simulation, prediction, and leave-one-out validation.
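A sketch of fitting a GP with a radial kernel on quantitative inputs; covRadial() and gp() are used as in the package's interface, with argument details assumed:

    library(kergp)
    d <- data.frame(x1 = runif(40), x2 = runif(40))
    d$y <- sin(2 * pi * d$x1) + d$x2 + rnorm(40, sd = 0.1)
    myCov <- covRadial(inputs = c("x1", "x2"))    # radial kernel object
    fit <- gp(y ~ 1, data = d, cov = myCov, estim = TRUE)
    predict(fit, newdata = data.frame(x1 = 0.5, x2 = 0.5))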
This package provides a new method to implement clustering from multiple-modality data of certain samples. The function M2SMF() jointly factorizes multiple similarity matrices into a shared sub-matrix and several modality-private sub-matrices, which are further used for clustering. Along with this method, we also provide a function to calculate the similarity matrix and a function to evaluate the best cluster number from the original data.
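A hypothetical sketch of the call shape; only the name M2SMF() comes from the description above, and its arguments here are assumptions:

    library(M2SMF)
    # two toy similarity matrices over the same 50 samples
    S1 <- crossprod(matrix(runif(50 * 50), 50, 50))
    S2 <- crossprod(matrix(runif(50 * 50), 50, 50))
    res <- M2SMF(list(S1, S2), k = 3)  # hypothetical: shared factor + clusters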