Time series methods for intermittent demand forecasting. Includes Croston's method and its variants (Moving Average, SBA), and the TSB method. Users can obtain parameters optimised under a variety of loss functions, or use fixed ones (Kourentzes (2014) <doi:10.1016/j.ijpe.2014.06.007>). Intermittent time series classification methods and iMAPA, which uses multiple temporal aggregation levels, are also provided (Petropoulos & Kourentzes (2015) <doi:10.1057/jors.2014.62>).
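Croston's method smooths the sizes of non-zero demands and the intervals between them separately and forecasts the demand rate as their ratio; the SBA variant applies a bias correction of (1 - alpha/2). A minimal base-R sketch of the textbook algorithm follows (an illustration only, not the package's own implementation; the initialisation choices are assumptions made for simplicity):

    croston_sketch <- function(y, alpha = 0.1, sba = FALSE) {
      stopifnot(sum(y > 0) >= 2)           # need at least two non-zero demands
      first <- which(y > 0)[1]
      z <- y[first]                        # smoothed size of non-zero demands
      p <- first                           # smoothed inter-demand interval
      q <- 0                               # periods since the last non-zero demand
      for (t in seq_along(y)[-seq_len(first)]) {
        q <- q + 1
        if (y[t] > 0) {
          z <- z + alpha * (y[t] - z)      # update demand-size estimate
          p <- p + alpha * (q - p)         # update interval estimate
          q <- 0
        }
      }
      f <- z / p                           # per-period demand rate forecast
      if (sba) f <- f * (1 - alpha / 2)    # Syntetos-Boylan approximation
      f
    }

    set.seed(1)
    y <- rpois(60, 0.3) * sample(1:5, 60, replace = TRUE)
    croston_sketch(y, alpha = 0.1, sba = TRUE)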
Takes the MinT implementation of the 'hts' <https://cran.r-project.org/package=hts> package and adapts it to allow degenerate hierarchical structures. Instead of the "nodes" argument, this function takes an S matrix, which is more versatile in the structures it allows. For a demo, see Steinmeister and Pauly (2024) <doi:10.15488/17729>. The MinT algorithm is based on Wickramasuriya et al. (2019) <doi:10.1080/01621459.2018.1448825>.
This package implements routines to compare survival curves with recurrent events, including estimation of the survival curves. The first model handles recurrent event data, whether or not the observations are correlated; it was proposed by Wang and Chang (1999) <doi:10.2307/2669690>. In the independent case, the survival function can be estimated by the generalization of the product-limit estimator of Peña (2001) <doi:10.1198/016214501753381922>.
Calculates a Mahalanobis distance for every row of a set of outcome variables (Mahalanobis, 1936 <doi:10.1007/s13171-019-00164-5>). The conditional Mahalanobis distance is calculated using a conditional covariance matrix (i.e., a covariance matrix of the outcome variables after controlling for a set of predictors). Plotting the output of the cond_maha() function can help identify which elements of a profile are unusual after controlling for the predictors.
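For intuition, a conditional Mahalanobis distance can be computed directly from the usual multivariate-normal partitioning of the covariance matrix: condition the outcomes on the predictors, then take the Mahalanobis distance of the residuals under the conditional covariance. A base-R sketch (illustrative only; cond_maha() in the package has its own interface and options):

    cond_mahalanobis <- function(Y, X) {
      Y <- as.matrix(Y); X <- as.matrix(X)
      S   <- cov(cbind(X, Y))
      ix  <- seq_len(ncol(X)); iy <- ncol(X) + seq_len(ncol(Y))
      Sxx <- S[ix, ix, drop = FALSE]; Syy <- S[iy, iy, drop = FALSE]
      Syx <- S[iy, ix, drop = FALSE]
      B       <- Syx %*% solve(Sxx)                    # regression coefficients
      mu_cond <- sweep(sweep(X, 2, colMeans(X)) %*% t(B), 2, colMeans(Y), "+")
      S_cond  <- Syy - B %*% t(Syx)                    # conditional covariance
      resid   <- Y - mu_cond
      sqrt(rowSums((resid %*% solve(S_cond)) * resid)) # one distance per row
    }

    set.seed(2)
    X <- matrix(rnorm(200), ncol = 2)
    Y <- X %*% matrix(rnorm(6), nrow = 2) + matrix(rnorm(300), ncol = 3)
    head(cond_mahalanobis(Y, X))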
This package provides maximum likelihood estimation methods for eight modified Weibull-type distributions. It returns parameter estimates, log-likelihood, AIC, and BIC, and also supports model fitting, validation, and comparison across different distributional forms. These methods can be applied to reliability, survival, and lifetime data analysis, making the package useful for researchers and practitioners in statistics, engineering, and medicine. The following distributions are included: Rangoli2023, Peng2014, Lai2003, Xie1996, Sarhan2009, Rangoli2025, Mustafa2012, and Alwasel2009.
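As a generic illustration of the fitting workflow (not the package's own routines, and using the plain two-parameter Weibull as a stand-in for the modified forms), a maximum-likelihood fit with AIC and BIC can be obtained numerically with optim():

    fit_weibull_ml <- function(x) {
      # negative log-likelihood, parameters kept positive via the log scale
      nll <- function(par) -sum(dweibull(x, shape = exp(par[1]),
                                         scale = exp(par[2]), log = TRUE))
      opt    <- optim(c(0, 0), nll)
      k      <- 2; n <- length(x)
      logLik <- -opt$value
      list(shape = exp(opt$par[1]), scale = exp(opt$par[2]),
           logLik = logLik, AIC = 2 * k - 2 * logLik,
           BIC = k * log(n) - 2 * logLik)
    }

    set.seed(5)
    fit_weibull_ml(rweibull(200, shape = 1.5, scale = 2))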
Redshift adjusts the color temperature according to the position of the sun. A different color temperature is set during night and daytime. During twilight and early morning, the color temperature transitions smoothly from night to daytime temperature to allow your eyes to slowly adapt. At night the color temperature should be set to match the lamps in your room.
This is a fork with added support for Wayland using the wlr-gamma-control protocol.
This package implements the estimation and inference methods for counterfactual analysis described in Chernozhukov, Fernandez-Val and Melly (2013) <DOI:10.3982/ECTA10582> "Inference on Counterfactual Distributions," Econometrica, 81(6). The counterfactual distributions considered are the result of changing either the marginal distribution of covariates related to the outcome variable of interest, or the conditional distribution of the outcome given the covariates. The methods can be applied to estimate quantile treatment effects and wage decompositions.
Instead of counting observations before and after a subset() call, the ExclusionTable() function reports the number before and after each subset() call together with the number of observations that have been excluded. This is especially useful in observational studies for keeping track of how many observations have been excluded by each inclusion or exclusion criterion. You just need to provide ExclusionTable() with a dataset and a list of logical filter statements.
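A minimal base-R illustration of what the function automates, assuming the criteria are supplied as character strings (the package's own interface may differ):

    count_exclusions <- function(data, criteria) {
      steps <- data.frame(criterion = criteria,
                          excluded  = NA_integer_,
                          remaining = NA_integer_)
      for (i in seq_along(criteria)) {
        keep   <- eval(parse(text = criteria[i]), envir = data)
        before <- nrow(data)
        data   <- data[keep & !is.na(keep), , drop = FALSE]
        steps$excluded[i]  <- before - nrow(data)   # lost at this step
        steps$remaining[i] <- nrow(data)            # left after this step
      }
      steps
    }

    count_exclusions(mtcars, c("cyl > 4", "mpg >= 15"))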
In the Cramér-Lundberg risk process perturbed by a Wiener process, this package provides approximations to the probability of ruin within a finite time horizon. Three methods are currently implemented: the first uses a saddlepoint approximation (two variants are provided), the second uses importance sampling, and the third is based on simulation of a dual process. The last method is not very accurate and is included only for completeness.
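As a rough point of reference for the implemented approximations, the finite-horizon ruin probability in this model can also be checked by naive Monte Carlo on a discretised time grid. A sketch with exponential claim sizes (an illustration of the model, not of the package's methods; the parameter names are assumptions):

    ruin_prob_mc <- function(u, premium, lambda, claim_mean, sigma,
                             horizon, dt = 0.01, nsim = 1000) {
      steps  <- ceiling(horizon / dt)
      ruined <- logical(nsim)
      for (s in seq_len(nsim)) {
        n_claims  <- rpois(steps, lambda * dt)            # claims per time step
        claim_amt <- vapply(n_claims, function(n)
          if (n > 0) sum(rexp(n, 1 / claim_mean)) else 0, numeric(1))
        increments <- premium * dt - claim_amt +
                      sigma * rnorm(steps, sd = sqrt(dt)) # Wiener perturbation
        ruined[s]  <- any(u + cumsum(increments) < 0)     # ruin before the horizon?
      }
      mean(ruined)
    }

    set.seed(3)
    ruin_prob_mc(u = 5, premium = 1.2, lambda = 1, claim_mean = 1,
                 sigma = 0.5, horizon = 10)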
An implementation that combines trait data and a phylogenetic tree (or trees) into a single object of class 'treedata.table'. The resulting object can be easily manipulated to simultaneously change the trait- and tree-level sampling. Currently implemented functions allow users to use data.table syntax when performing operations on the trait dataset within the 'treedata.table' object. For more details see Roman-Palacios et al. (2021) <doi:10.7717/peerj.12450>.
Calculate several understandability metrics of BPMN models. BPMN stands for Business Process Modelling Notation and is a language for expressing business processes as business process diagrams. Examples of these understandability metrics are: average connector degree, maximum connector degree, sequentiality, cyclicity, diameter, depth, token split, control flow complexity, connector mismatch, connector heterogeneity, separability, structuredness and cross connectivity. See the R documentation and the paper on metric implementation included in this package for more information concerning the metrics.
An interface to Azure Cognitive Services <https://learn.microsoft.com/en-us/azure/cognitive-services/>. Both an Azure Resource Manager interface, for deploying Cognitive Services resources, and a client framework are supplied. While AzureCognitive can be called by the end-user, it is meant to provide a foundation for other packages that will support specific services, like Computer Vision, Custom Vision, language translation, and so on. Part of the AzureR family of packages.
Annotates Finnish textual survey responses into CoNLL-U format <https://universaldependencies.org/format.html> using Finnish treebank models and UDPipe, as described in Straka and Straková (2017) <doi:10.18653/v1/K17-3009>. The formatted data are then analysed using single or comparison n-gram plots, wordclouds, summary tables and Concept Network plots. The Concept Network plots use the TextRank algorithm as outlined in Mihalcea and Tarau (2004) <https://aclanthology.org/W04-3252/>.
Provides a framework and a fast, simple way for researchers to evaluate data normalization methods (particularly data-driven methods or their own methods) and select the best one for gene expression analysis, based on the consistency of metrics and the consistency of datasets. Zhenfeng Wu, Weixiang Liu, Xiufeng Jin, Deshui Yu, Hua Wang, Gustavo Glusman, Max Robinson, Lin Liu, Jishou Ruan and Shan Gao (2018) <doi:10.1101/251140>.
This package provides functions for interacting directly with the Nasdaq Data Link API: retrieving data in a number of formats usable in R, downloading a zip with all data from a Nasdaq Data Link database, and searching. For more information on the API go to <https://docs.data.nasdaq.com/>. For more help on the package itself go to <https://data.nasdaq.com/tools/r>.
This package builds on sangerseqR to allow users to create contigs from collections of Sanger sequencing reads. It provides a wide range of options for a number of commonly-performed actions including read trimming, detecting secondary peaks, and detecting indels using a reference sequence. All parameters can be adjusted interactively either in R or in the associated Shiny applications. There is extensive online documentation, and the package can output detailed HTML reports, including chromatograms.
This package provides a framework for fitting adaptive forecasting models. It provides a way to use forecasts as input to models, e.g. weather forecasts for energy-related forecasting. The models can be fitted recursively and can easily be set up to update parameters when new data arrive. See the included vignettes, the website <https://onlineforecasting.org> and the paper "onlineforecast: An R package for adaptive and recursive forecasting" <https://journal.r-project.org/articles/RJ-2023-031/>.
A PROMETHEE (Preference Ranking Organisation METHod for Enrichment of Evaluations) based method assesses alternatives to obtain partial and complete rankings. The package also provides the GLNF (Global Local Net Flow) sorting algorithm to classify alternatives into ordered categories, as well as an index function to measure the classification quality. Barrera, F., Segura, M., & Maroto, C. (2023) <doi:10.1111/itor.13288>. Brans, J.P.; De Smet, Y. (2016) <doi:10.1007/978-1-4939-3094-4_6>.
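For orientation, PROMETHEE II net flows with the simple "usual" (strict) preference function reduce to a weighted pairwise comparison of alternatives; a self-contained sketch follows (the package supports richer preference functions and applies the GLNF sorting on top of flows like these):

    promethee_net_flow <- function(M, w) {
      # M: alternatives x criteria (larger is better); w: weights summing to 1
      n   <- nrow(M)
      phi <- numeric(n)
      for (a in seq_len(n)) for (b in seq_len(n)) {
        if (a == b) next
        pref_ab <- sum(w * (M[a, ] > M[b, ]))  # weighted preference of a over b
        pref_ba <- sum(w * (M[b, ] > M[a, ]))
        phi[a]  <- phi[a] + (pref_ab - pref_ba) / (n - 1)
      }
      setNames(phi, rownames(M))               # net flow per alternative
    }

    M <- rbind(A1 = c(8, 7, 3), A2 = c(6, 9, 5), A3 = c(9, 4, 7))
    promethee_net_flow(M, w = c(0.5, 0.3, 0.2))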
This package provides functions to calculate some point estimators and estimate their variance under unequal probability sampling without replacement. Single and two-stage sampling designs are considered. Some approximations for the second-order inclusion probabilities (joint inclusion probabilities) are available (sample and population based). A variety of jackknife variance estimators are implemented. Almost every function is written in compiled C code, and the functions incorporate performance improvements for faster results with large datasets.
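For reference, the basic Horvitz-Thompson total and its Sen-Yates-Grundy variance estimate under a fixed-size unequal-probability design look as follows (textbook formulas written in base R, not the package's interface):

    ht_total <- function(y, pik) sum(y / pik)   # pik: first-order inclusion probs

    ht_var_syg <- function(y, pik, pikl) {
      # pikl: matrix of second-order inclusion probabilities for the sampled units
      n <- length(y)
      v <- 0
      for (k in seq_len(n - 1)) for (l in (k + 1):n) {
        v <- v + (pik[k] * pik[l] - pikl[k, l]) / pikl[k, l] *
                 (y[k] / pik[k] - y[l] / pik[l])^2
      }
      v
    }

    # e.g. under simple random sampling without replacement of n from N:
    N <- 100; n <- 10; y <- rnorm(n, 50, 5)
    pik  <- rep(n / N, n)
    pikl <- matrix(n * (n - 1) / (N * (N - 1)), n, n)
    ht_total(y, pik); ht_var_syg(y, pik, pikl)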
The AlphaMissense publication <https://www.science.org/doi/epdf/10.1126/science.adg7492> outlines how a variant of DeepMind's AlphaFold was used to predict missense variant pathogenicity. Supporting data on Zenodo <https://zenodo.org/record/10813168> include, for instance, 71M variants across the hg19 and hg38 genome builds. The AlphaMissenseR package allows ready access to the data, downloading individual files to DuckDB databases for exploration and integration into R and Bioconductor workflows.
MetaboDynamics is an R package that provides a framework of probabilistic models for analysing longitudinal metabolomics data. It enables robust estimation of mean concentrations despite varying spread between timepoints, and reports differences between timepoints as well as metabolite-specific dynamics profiles that can be used to identify "dynamics clusters" of metabolites with similar dynamics. It also provides probabilistic over-representation analysis of KEGG functional modules and pathways, as well as comparison between clusters of different experimental conditions.
This package provides a suite of methods to fit and predict case count data using a compartmental SIRS (Susceptible-Infectious-Recovered-Susceptible) model, based on an assumed specification of the effective reproduction number. The significance of this approach is that it relates epidemic progression to the average number of contacts of infected individuals, which decays as a function of the total susceptible fraction remaining in the population. The main functions are pred.curve(), which computes the epidemic curve for a set of parameters, and estimate.mle(), which finds the best fitting curve to observed data. The easiest way to pass arguments to the functions is via a config file containing the input settings required for prediction; the package offers two helpers, navigate_to_config(), which points the user to the configuration file, and re_predict(), for starting the fit-predict process. The main model was published in Razvan G. Romanescu et al. <doi:10.1016/j.epidem.2023.100708>.
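A toy discrete-time SIRS iteration makes the key relationship concrete: the effective reproduction number is the basic reproduction number scaled by the remaining susceptible fraction. This is a conceptual sketch only, not the package's pred.curve(); the parameter names and the unit Euler time step are assumptions:

    sirs_curve <- function(R0, gamma = 0.1, omega = 0.01,
                           S0 = 0.999, I0 = 0.001, steps = 300) {
      S <- I <- R <- numeric(steps)
      S[1] <- S0; I[1] <- I0; R[1] <- 1 - S0 - I0
      beta <- R0 * gamma                              # transmission rate
      for (t in 2:steps) {
        new_inf <- beta * S[t - 1] * I[t - 1]         # new infections this step
        S[t] <- S[t - 1] - new_inf + omega * R[t - 1] # waning immunity returns to S
        I[t] <- I[t - 1] + new_inf - gamma * I[t - 1]
        R[t] <- R[t - 1] + gamma * I[t - 1] - omega * R[t - 1]
      }
      data.frame(time = seq_len(steps), S = S, I = I,
                 R_eff = R0 * S)                      # reproduction number decays with S
    }

    head(sirs_curve(R0 = 2.5))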
This package contains functions to help in selecting and exploring features (or variables) in binary classification problems. Provides functions to compute and display information value and weight of evidence (WoE) of the variables, and to convert numeric variables to categorical variables by binning. Functions are also provided to determine which levels (or categories) of a categorical variable can be collapsed (or combined) based on their response rates. The functions provided only work for binary classification problems.
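Under one common convention, the weight of evidence of a bin is the log ratio of its share of events to its share of non-events, and the information value sums the share differences weighted by WoE. A small base-R sketch (illustrative only, not the package's functions):

    woe_iv <- function(bin, y) {                 # y must be coded 0/1
      tab        <- table(bin, y)
      events     <- tab[, "1"] / sum(tab[, "1"]) # share of events per bin
      non_events <- tab[, "0"] / sum(tab[, "0"]) # share of non-events per bin
      woe <- log(events / non_events)
      iv  <- sum((events - non_events) * woe)
      list(WoE = woe, IV = iv)
    }

    set.seed(4)
    bin <- cut(rnorm(500), breaks = 4)
    y   <- rbinom(500, 1, 0.3)
    woe_iv(bin, y)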
This package implements finite mixtures of matrix-variate contaminated normal distributions via an expectation-conditional maximization algorithm for model-based clustering, as described in Tomarchio et al. (2020) <arXiv:2005.03861>. One key advantage of this model is the ability to automatically detect potential outlying matrices by computing their a posteriori probability of being typical or atypical points. Finite mixtures of matrix-variate t and matrix-variate normal distributions are also implemented, using expectation-maximization algorithms.