Solves project management problems using the CPM (Critical Path Method), PERT (Program Evaluation and Review Technique) and LESS (Least Cost Estimating and Scheduling) methods. The package determines the critical path, the schedule and a Gantt chart. In addition, it can draw the project network graph with the critical activities marked. For more information about project management see: Taha H. A. "Operations Research. An Introduction" (2017, ISBN:978-1-292-16554-7), Rama Murthy P. "Operations Research" (2007, ISBN:978-81-224-2944-2), Yuval Cohen & Arik Sadeh (2006) "A New Approach for Constructing and Generating AOA Networks", Journal of Engineering, Computing and Architecture 1, 1-13, Konarzewska I., Jewczak M., Kucharski A. (2020, ISBN:978-83-8220-112-3), Miszczyńska D., Miszczyński M. "Wybrane metody badań operacyjnych" [Selected Methods of Operations Research] (2000, ISBN:83-907712-0-9).
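As a minimal illustration of the CPM logic underlying such packages (a base-R sketch with invented activity data, not this package's API), the critical path falls out of a forward and a backward pass over the activity network:

```r
# Toy activity-on-node network: durations and predecessor lists (invented).
acts  <- c(A = 3, B = 2, C = 4, D = 2)
preds <- list(A = character(0), B = "A", C = "A", D = c("B", "C"))

# Forward pass: earliest start/finish (activities are in topological order).
es <- ef <- setNames(numeric(length(acts)), names(acts))
for (a in names(acts)) {
  es[a] <- if (length(preds[[a]])) max(ef[preds[[a]]]) else 0
  ef[a] <- es[a] + acts[a]
}

# Backward pass: latest finish/start ('ls_' avoids masking base::ls).
lf <- ls_ <- setNames(numeric(length(acts)), names(acts))
proj_end <- max(ef)
for (a in rev(names(acts))) {
  succs  <- names(preds)[vapply(preds, function(p) a %in% p, logical(1))]
  lf[a]  <- if (length(succs)) min(ls_[succs]) else proj_end
  ls_[a] <- lf[a] - acts[a]
}

critical <- names(acts)[ls_ - es == 0]  # zero total slack
critical                                # "A" "C" "D"
```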
Routines and tools for assessing the quality of content analysis on the basis of the Iota Reliability Concept. The concept is inspired by item response theory and can be applied to any kind of content analysis that uses a standardized coding scheme and discrete categories. It is also applicable to content analysis conducted by artificial intelligence. The package provides reliability measures for a complete scale as well as for every single category. Analyses of subgroup invariance and error corrections are implemented. This information can support the development of a coding scheme and allows a detailed inspection of the quality of the generated data. The equations and formulas implemented in this package are described in Berding et al. (2022) <doi:10.3389/feduc.2022.818365> and Berding and Pargmann (2022) <doi:10.30819/5581>.
The following methods are implemented to evaluate how sensitive the results of a meta-analysis are to potential bias, and to support Schwarzer et al. (2015) <DOI:10.1007/978-3-319-21416-0>, Chapter 5, 'Small-Study Effects in Meta-Analysis': - Copas selection model described in Copas & Shi (2001) <DOI:10.1177/096228020101000402>; - limit meta-analysis by Rücker et al. (2011) <DOI:10.1093/biostatistics/kxq046>; - upper bound for outcome reporting bias by Copas & Jackson (2004) <DOI:10.1111/j.0006-341X.2004.00161.x>; - imputation methods for missing binary data by Gamble & Hollis (2005) <DOI:10.1016/j.jclinepi.2004.09.013> and Higgins et al. (2008) <DOI:10.1177/1740774508091600>; - LFK index test and Doi plot by Furuya-Kanamori et al. (2018) <DOI:10.1097/XEB.0000000000000141>.
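A short sketch of how two of these methods are typically invoked, assuming the interfaces of the 'meta' and 'metasens' packages (metagen(), limitmeta(), copas(); exact signatures should be checked against the documentation), on simulated study-level effects:

```r
library(meta)
library(metasens)

set.seed(1)
TE   <- rnorm(10, 0.2, 0.2)       # simulated study effect estimates
seTE <- runif(10, 0.1, 0.4)       # their standard errors
m <- metagen(TE, seTE, sm = "MD") # generic inverse-variance meta-analysis

limitmeta(m)  # limit meta-analysis adjusting for small-study effects
copas(m)      # Copas selection model sensitivity analysis
```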
High-resolution movement-sensor tags typically include accelerometers to measure body posture and sudden movements or changes in speed, magnetometers to measure direction of travel, and pressure sensors to measure dive depth in aquatic or marine animals. The sensors in these tags usually sample many times per second. Some tags include sensors for speed, turning rate (gyroscopes), and sound. This package provides software tools to facilitate calibration, processing, and analysis of such data. Tools are provided for: data import/export; calibration (from raw data to calibrated data in scientific units); visualization (for example, multi-panel time-series plots); data processing (such as event detection, calculation of derived metrics like jerk and dynamic acceleration, dive detection, and dive parameter calculation); and statistical analysis (for example, track reconstruction, a rotation test, and Mahalanobis distance analysis).
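One of the derived metrics mentioned above, the norm-jerk, can be sketched in a few lines of base R on simulated data (not this package's API; the sampling rate is an assumption):

```r
fs <- 50                                   # sampling rate in Hz (assumed)
t  <- seq(0, 10, by = 1 / fs)
A  <- cbind(x = sin(t), y = cos(t),        # fake triaxial acceleration (m/s^2)
            z = 0.1 * rnorm(length(t)))
# |dA/dt| approximated by first differences scaled by the sampling rate:
jerk <- sqrt(rowSums(diff(A)^2)) * fs
plot(t[-1], jerk, type = "l",
     xlab = "time (s)", ylab = "norm jerk (m/s^3)")
```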
We present a novel statistical framework for identifying differential distributions in single-cell RNA-sequencing (scRNA-seq) data between treatment conditions by modeling gene expression read counts using generalized linear models (GLMs). We model each gene independently under each treatment condition using the error distributions Poisson (P), Negative Binomial (NB), Zero-inflated Poisson (ZIP) and Zero-inflated Negative Binomial (ZINB), with a log link function and model-based normalization for differences in sequencing depth. Since all four distributions considered in our framework belong to the same family of distributions, we first perform a Kolmogorov-Smirnov (KS) test to select genes belonging to the family of ZINB distributions. Genes passing the KS test are then modeled using GLMs, and model selection is done by calculating the Bayesian Information Criterion (BIC) and the likelihood ratio test (LRT) statistic.
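The model-selection step can be illustrated with base R and MASS (a sketch of the general idea, not this package's API): fit nested count models to one gene's counts with a normalization offset, then compare BIC and run an LRT.

```r
library(MASS)

set.seed(7)
y   <- rnbinom(200, mu = 5, size = 1.2)  # simulated counts for one gene
off <- log(runif(200, 0.8, 1.2))         # normalization offset (log scale)

fit_p  <- glm(y ~ 1 + offset(off), family = poisson)
fit_nb <- glm.nb(y ~ 1 + offset(off))

c(P = BIC(fit_p), NB = BIC(fit_nb))      # lower BIC wins
# LRT for overdispersion (Poisson nested in NB), one degree of freedom:
pchisq(2 * as.numeric(logLik(fit_nb) - logLik(fit_p)),
       df = 1, lower.tail = FALSE)
```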
The new yield tables developed by the Northwest German Forest Research Institute (NW-FVA) provide a forest management tool for the five main commercial tree species oak, beech, spruce, Douglas-fir and pine for northwestern Germany. The new method applied for deriving yield tables combines measurements of growth and yield trials with growth simulations using a state-of-the-art single-tree growth simulator. As a result, the new yield tables reflect the current increment level, with the recommended graduated thinning from above as the underlying management concept. The yield tables are provided along with methods for deriving the site index, for interpolating between age and site indices, and for extrapolating beyond the age and site index ranges. The inter-/extrapolations are performed either traditionally by the rule of proportion or with a functional approach.
Interpretation methods for analyzing the behavior and individual predictions of modern neural networks in a three-step procedure: converting the model, running the interpretation method, and visualizing the results. Implemented methods are, e.g., Connection Weights described by Olden et al. (2004) <doi:10.1016/j.ecolmodel.2004.03.013>, layer-wise relevance propagation ('LRP') described by Bach et al. (2015) <doi:10.1371/journal.pone.0130140>, deep learning important features ('DeepLIFT') described by Shrikumar et al. (2017) <doi:10.48550/arXiv.1704.02685>, and gradient-based methods like SmoothGrad described by Smilkov et al. (2017) <doi:10.48550/arXiv.1706.03825>, 'Gradient x Input' or 'Vanilla Gradient'. Details can be found in the accompanying scientific paper: Koenen & Wright (2024, Journal of Statistical Software, <doi:10.18637/jss.v111.i08>).
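The SmoothGrad idea can be conveyed in a few lines of base R (a toy numerical sketch, not innsight's API: a simple function stands in for a network, and numerical differentiation stands in for backpropagation): average the input gradient over Gaussian perturbations of the input.

```r
f <- function(x) sum(x^2)                # stand-in for a network's output

# Central-difference numerical gradient of f at x:
num_grad <- function(f, x, h = 1e-5) {
  vapply(seq_along(x), function(i) {
    e <- numeric(length(x)); e[i] <- h
    (f(x + e) - f(x - e)) / (2 * h)
  }, numeric(1))
}

# SmoothGrad: mean gradient over n noisy copies of the input.
smooth_grad <- function(f, x, n = 50, sigma = 0.1) {
  g <- replicate(n, num_grad(f, x + rnorm(length(x), 0, sigma)))
  rowMeans(g)
}

smooth_grad(f, c(1, -2))                 # close to the true gradient (2, -4)
```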
Classical tests of goodness-of-fit aim to validate the conformity of a postulated model to the data under study. In their standard formulation, however, they do not allow exploring how the hypothesized model deviates from the truth, nor do they provide any insight into how the rejected model could be improved to better fit the data. To overcome these shortcomings, we establish a comprehensive framework for goodness-of-fit which naturally integrates modeling, estimation, inference and graphics. This package implements deviance tests and comparison density plots to conduct LP smoothed inference, where the letter L denotes nonparametric methods based on quantiles and P stands for polynomials. Simulation methods are used to perform variance estimation, inference and post-selection adjustments. See Algeri S. and Zhang X. (2020) <arXiv:2005.13011> for details.
This package provides statistical methods for analytical method comparison and validation studies. Implements Bland-Altman analysis for assessing agreement between measurement methods (Bland & Altman (1986) <doi:10.1016/S0140-6736(86)90837-8>), Passing-Bablok regression for non-parametric method comparison (Passing & Bablok (1983) <doi:10.1515/cclm.1983.21.11.709>), and Deming regression accounting for measurement error in both variables (Linnet (1993) <doi:10.1093/clinchem/39.3.424>). Also includes tools for setting quality goals based on biological variation (Fraser & Petersen (1993) <doi:10.1093/clinchem/39.7.1447>), calculating Six Sigma metrics, running precision experiments with variance component analysis, and estimating precision profiles for functional sensitivity (Kroll & Emancipator (1993) <https://pubmed.ncbi.nlm.nih.gov/8448849/>). Commonly used in clinical laboratory method validation. Provides publication-ready plots and comprehensive statistical summaries.
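The core of a Bland-Altman analysis is compact enough to sketch in base R (an illustration of the method on simulated data, not this package's API):

```r
set.seed(42)
method_a <- rnorm(50, 100, 10)
method_b <- method_a + rnorm(50, 1, 4)   # method B reads ~1 unit higher

d    <- method_b - method_a
bias <- mean(d)
loa  <- bias + c(-1.96, 1.96) * sd(d)    # 95% limits of agreement

plot((method_a + method_b) / 2, d,
     xlab = "mean of methods", ylab = "difference (B - A)")
abline(h = c(bias, loa), lty = c(1, 2, 2))
```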
This package provides a universal, user friendly, single-cell and bulk RNA sequencing visualization toolkit that allows highly customizable creation of color blindness friendly, publication-quality figures. dittoSeq accepts both SingleCellExperiment (SCE) and Seurat objects, as well as the import and usage, via conversion to an SCE, of SummarizedExperiment or DGEList bulk data. Visualizations include dimensionality reduction plots, heatmaps, scatterplots, percent composition or expression across groups, and more. Customizations range from size and title adjustments to automatic generation of annotations for heatmaps, overlay of trajectory analysis onto any dimensionality reduction plot, hidden data overlay upon cursor hovering via ggplotly conversion, and many more. All with simple, discrete inputs. Color blindness friendliness is powered by legend adjustments (enlarged keys), and by allowing the use of shapes or letter-overlay in addition to the carefully selected dittoColors().
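A short sketch of typical usage on a minimal synthetic SCE (assuming the dittoSeq functions dittoDimPlot() and dittoBarPlot(); argument names should be checked against the package documentation):

```r
library(SingleCellExperiment)
library(dittoSeq)

# Tiny synthetic single-cell object with clusters, samples, and a "UMAP":
counts <- matrix(rpois(200, 5), nrow = 10,
                 dimnames = list(paste0("gene", 1:10), paste0("cell", 1:20)))
sce <- SingleCellExperiment(assays = list(counts = counts))
sce$cluster <- rep(c("1", "2"), each = 10)
sce$sample  <- rep(c("A", "B"), 10)
reducedDim(sce, "UMAP") <- matrix(rnorm(40), ncol = 2)

dittoBarPlot(sce, var = "cluster", group.by = "sample")  # percent composition
dittoDimPlot(sce, var = "cluster", reduction.use = "UMAP")
```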
DepInfeR integrates two experimentally accessible input data matrices: the drug sensitivity profiles of cancer cell lines or primary tumors ex-vivo (X), and the drug affinities of a set of proteins (Y), to infer a matrix of molecular protein dependencies of the cancers (β). DepInfeR deconvolutes the protein inhibition effect on the viability phenotype by using regularized multivariate linear regression. It assigns a "dependence coefficient" to each protein and each sample, and therefore could be used to gain a causal and accurate understanding of the functional consequences of genomic aberrations in a heterogeneous disease, as well as to guide the choice of pharmacological intervention for a specific cancer type, sub-type, or an individual patient. For more information, please read our preprint on bioRxiv: <https://doi.org/10.1101/2022.01.11.475864>.
This package provides functions to compute coefficients measuring the dependence of two or more variables. The functions can be deployed to gain information about functional dependencies of the variables, with emphasis on monotone functions. The statistics describe how well one response variable can be approximated by a monotone function of other variables. Since variable selection is an important issue in regression analysis, the functions can serve as useful tools in modeling the regression function. Detailed explanations on the subject can be found in Liebscher (2014) <doi:10.2478/demo-2014-0004>; Liebscher (2017) <doi:10.1515/demo-2017-0012>; Liebscher (2021) <https://arfjournals.com/image/catalog/Journals%20Papers/AJSS/No%202%20(2021)/4-AJSS_123-150.pdf>; and Liebscher (2021) "Kendall regression coefficient", Computational Statistics and Data Analysis, 157, 107140.
This package implements fast and exact computation of a Gaussian stochastic process with the Matérn kernel using a forward filtering and backward smoothing algorithm. It includes efficient implementations of the inverse Kalman filter, with applications such as estimating particle interaction functions. These tools support models with or without noise. Additionally, the package offers algorithms for fast parameter estimation in latent factor models, where the factor loading matrix is orthogonal and the latent processes are modeled by Gaussian processes. See the references: 1) Mengyang Gu and Yanxun Xu (2020), Journal of Computational and Graphical Statistics; 2) Xinyi Fang and Mengyang Gu (2024), <doi:10.48550/arXiv.2407.10089>; 3) Mengyang Gu and Weining Shen (2020), Journal of Machine Learning Research; 4) Yizi Lin, Xubo Liu, Paul Segall and Mengyang Gu (2025), <doi:10.48550/arXiv.2501.01324>.
Facilitates estimation of full univariate and bivariate probability density functions and cumulative distribution functions along with full quantile functions (univariate) and nonparametric correlation (bivariate) using Hermite series based estimators. These estimators are particularly useful in the sequential setting (both stationary and non-stationary) and one-pass batch estimation setting for large data sets. Based on: Stephanou, Michael, Varughese, Melvin and Macdonald, Iain. "Sequential quantiles via Hermite series density estimation." Electronic Journal of Statistics 11.1 (2017): 570-607 <doi:10.1214/17-EJS1245>, Stephanou, Michael and Varughese, Melvin. "On the properties of Hermite series based distribution function estimators." Metrika (2020) <doi:10.1007/s00184-020-00785-z> and Stephanou, Michael and Varughese, Melvin. "Sequential estimation of Spearman rank correlation using Hermite series estimators." Journal of Multivariate Analysis (2021) <doi:10.1016/j.jmva.2021.104783>.
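A sketch of the one-pass sequential setting (assuming the hermiter functions hermite_estimator(), update_sequential() and quant(); exact signatures per the package documentation):

```r
library(hermiter)

# Streaming quantile estimation: the estimator is updated one observation
# at a time and never stores the raw data.
h <- hermite_estimator(N = 20, standardize = TRUE)
for (x in rnorm(1e4)) h <- update_sequential(h, x)

quant(h, p = c(0.25, 0.5, 0.75))  # sequential quantile estimates
```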
Testing and documenting code that communicates with remote servers can be painful. Dealing with authentication, server state, and other complications can make testing seem too costly to bother with. But it doesn't need to be that hard. This package enables one to test all of the logic on the R side of the API in your package without requiring access to the remote service. Importantly, it provides three contexts that mock the network connection in different ways, as well as testing functions to assert that HTTP requests were---or were not---made. It also allows one to safely record real API responses to use as test fixtures. The ability to save responses and load them offline also enables one to write vignettes and other dynamic documents that can be distributed without access to a live server.
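A sketch of typical use inside a testthat suite, using two of the package's documented contexts (my_api_call() is a hypothetical function standing in for your package's API wrapper):

```r
library(httptest)

without_internet({
  # Assert the right request would be made, with no network access at all:
  expect_GET(my_api_call(), "https://example.org/api/endpoint")
})

with_mock_api({
  # Responses are read from previously recorded fixture files:
  test_that("response parsing works offline", {
    expect_s3_class(my_api_call(), "data.frame")
  })
})
```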
Age-Period-Cohort (APC) analyses are used to differentiate relevant drivers for long-term developments. The APCtools package offers visualization techniques and general routines to simplify the workflow of an APC analysis. Sophisticated functions are available for both descriptive and regression model-based analyses. For the former, we use density (or ridgeline) matrices and (hexagonally binned) heatmaps as innovative visualization techniques building on the concept of Lexis diagrams. Model-based analyses build on the separation of the temporal dimensions based on generalized additive models, where a tensor product interaction surface (usually between age and period) is utilized to represent the third dimension (usually cohort) on its diagonal. Such tensor product surfaces can also be estimated while accounting for further covariates in the regression model. See Weigert et al. (2021) <doi:10.1177/1354816620987198> for methodological details.
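The model-based idea can be sketched directly with mgcv (not APCtools' own interface): fit a tensor product surface over age and period, whose diagonal carries the cohort effect.

```r
library(mgcv)

set.seed(1)
n      <- 2000
age    <- sample(20:80, n, replace = TRUE)
period <- sample(1990:2020, n, replace = TRUE)
# Simulated outcome with age, period, and cohort (= period - age) signals:
y <- 0.05 * age + 0.02 * (period - 1990) + sin((period - age) / 10) + rnorm(n)

fit <- gam(y ~ te(age, period))  # tensor product interaction surface
plot(fit, scheme = 2)            # heatmap of the fitted age-period surface
```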
This package provides a collection of datasets and simplified functions for an introductory (geo)statistics module at University College London. Provides functionality for compositional, directional and spatial data, including ternary diagrams, Wulff and Schmidt stereonets, and ordinary kriging interpolation. Implements logistic and (additive and centred) logratio transformations. Computes vector averages and concentration parameters for the von Mises distribution. Includes a collection of natural and synthetic fractals, and a simulator for deterministic chaos using a magnetic pendulum example. The main purpose of these functions is pedagogical. Researchers can find more complete alternatives for these tools in other packages such as 'compositions', 'robCompositions', 'sp', 'gstat' and 'RFOC'. All the functions are written in plain R, with no compiled code and a minimal number of dependencies. Theoretical background and worked examples are available at <https://tinyurl.com/UCLgeostats/>.
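One of the transformations mentioned, the centred logratio (clr) for compositional data, fits in a few lines of base R (an illustration of the technique, not necessarily this package's function names):

```r
# clr: log of each part relative to the geometric mean of the composition.
clr <- function(x) log(x) - mean(log(x))   # x: one composition, parts > 0

comp <- c(sand = 60, silt = 30, clay = 10) / 100
clr(comp)
sum(clr(comp))   # ~0: clr coordinates sum to zero by construction
```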
Given two unbiased samples of patient-level data on cost and effectiveness for a pair of treatments, make head-to-head treatment comparisons by (i) generating the bivariate bootstrap resampling distribution of ICE uncertainty for a specified value of the shadow price of health, lambda; (ii) forming the wedge-shaped ICE confidence region with a specified confidence fraction within [0.50, 0.99] that is equivariant with respect to changes in lambda; (iii) coloring the bootstrap outcomes within the above confidence wedge with economic preferences from an ICE map with specified values of the lambda, beta and gamma parameters; (iv) displaying VAGR and ALICE acceptability curves; and (v) illustrating variation in ICE preferences by displaying potentially non-linear indifference (iso-preference) curves from an ICE map with specified values of lambda, beta and either gamma or eta parameters.
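Step (i), the bootstrap distribution of ICE uncertainty, can be sketched in base R on simulated arms (an illustration of the resampling idea only; the wedge, coloring, and acceptability curves are this package's own domain):

```r
set.seed(3)
n <- 100
cost_t <- rnorm(n, 12000, 3000); eff_t <- rnorm(n, 1.9, 0.5)  # treatment arm
cost_c <- rnorm(n, 10000, 3000); eff_c <- rnorm(n, 1.5, 0.5)  # control arm

# Resample each arm and record the incremental (effectiveness, cost) pair:
boot <- replicate(2000, {
  it <- sample(n, replace = TRUE); ic <- sample(n, replace = TRUE)
  c(d_eff  = mean(eff_t[it])  - mean(eff_c[ic]),
    d_cost = mean(cost_t[it]) - mean(cost_c[ic]))
})

plot(t(boot), xlab = "incremental effectiveness", ylab = "incremental cost")
abline(h = 0, v = 0, lty = 2)   # quadrants of the ICE plane
```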
Biodiversity is a multifaceted concept covering different levels of organization from genes to ecosystems. iNEXT.3D extends iNEXT to include three dimensions (3D) of biodiversity, i.e., taxonomic diversity (TD), phylogenetic diversity (PD) and functional diversity (FD). This package provides functions to compute standardized 3D diversity estimates with a common sample size or sample coverage. A unified framework based on Hill numbers and their generalizations (Hill-Chao numbers) is used to quantify 3D. All 3D estimates are in the same units of species/lineage equivalents and can be meaningfully compared. The package features size- and coverage-based rarefaction and extrapolation sampling curves to facilitate rigorous comparison of 3D diversity across individual assemblages. Asymptotic 3D diversity estimates are also provided. See Chao et al. (2021) <doi:10.1111/2041-210X.13682> for more details.
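A sketch of a taxonomic-diversity run (assuming the iNEXT.3D interface mirrors iNEXT, with iNEXT3D() and ggiNEXT3D(); argument names should be verified against the package documentation):

```r
library(iNEXT.3D)

# Invented abundance vectors for two assemblages:
abun <- list(site1 = c(120, 80, 40, 20, 10, 5, 2, 1),
             site2 = c(200, 15, 12, 8, 4, 2, 1, 1))

out <- iNEXT3D(abun, diversity = "TD", q = c(0, 1, 2),
               datatype = "abundance")
ggiNEXT3D(out, type = 1)  # size-based rarefaction/extrapolation curves
```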
This Haskell package is intended for those who are tired of keeping long lists of dependencies to the same essential libraries in each package as well as the endless imports of the same APIs all over again.
It also embraces modern tendencies in the language.
To solve those problems this package does the following:
- Reexport the original APIs under the Rebase namespace.
- Export all the possible non-conflicting symbols from the Rebase.Prelude module.
- Give priority to the modern practices in the conflicting cases.
The policy behind the package is only to reexport the non-ambiguous and non-controversial APIs which the community has clearly settled on. The package is intended to evolve rapidly with contributions from the community, with missing features added via pull requests.
Peak fitting of spectral data is performed using the framework of the EM algorithm. We adapted the EM algorithm to the peak fitting of spectral data sets by considering the weight of the intensity corresponding to the measurement energy steps (Matsumura, T., Nagamura, N., Akaho, S., Nagata, K., & Ando, Y. (2019, 2021 and 2023) <doi:10.1080/14686996.2019.1620123>, <doi:10.1080/27660400.2021.1899449>, <doi:10.1080/27660400.2022.2159753>). The package efficiently estimates the parameters of a Gaussian mixture model by iterating between the E-step and M-step, and the parameters converge to a locally optimal solution. This package can support the investigation of peak shift with two advantages: (1) a large amount of data can be processed at high speed; and (2) stable and automatic calculation can be easily performed.
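A compact base-R sketch of the underlying technique (not this package's API): EM for a two-peak Gaussian mixture in which each energy step is weighted by its measured intensity.

```r
em_peaks <- function(energy, intensity, mu, sigma = c(1, 1),
                     pi_k = c(0.5, 0.5), iter = 100) {
  w <- intensity / sum(intensity)           # intensity weights per energy step
  for (i in seq_len(iter)) {
    # E-step: responsibility of each peak for each energy step
    d <- cbind(pi_k[1] * dnorm(energy, mu[1], sigma[1]),
               pi_k[2] * dnorm(energy, mu[2], sigma[2]))
    r <- d / rowSums(d)
    # M-step: intensity-weighted parameter updates
    nk    <- colSums(w * r)
    mu    <- colSums(w * r * energy) / nk
    sigma <- sqrt(colSums(w * r * outer(energy, mu, "-")^2) / nk)
    pi_k  <- nk
  }
  list(mu = mu, sigma = sigma, weight = pi_k)
}

# Synthetic spectrum with peaks at 3 and 6 plus a flat background:
e    <- seq(0, 10, by = 0.05)
spec <- 3 * dnorm(e, 3, 0.5) + dnorm(e, 6, 0.8) + 0.01
em_peaks(e, spec, mu = c(2, 7))   # converges near mu = (3, 6)
```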
Energy-Vorticity theory (EVT) is the fundamental theory to describe processes in the atmosphere by combining conserved quantities from hydrodynamics and thermodynamics. The package meteoEVT provides functions to calculate many energetic and vortical quantities, like potential vorticity, the Bernoulli function and the dynamic state index (DSI) [e.g. Weber and Nevir, 2008, <doi:10.1111/j.1600-0870.2007.00272.x>], for given gridded data, like ERA5 reanalyses. These quantities can be studied directly or can be used for many applications in meteorology, e.g., the objective identification of atmospheric fronts. For this purpose, separate functions are provided that allow the detection of fronts based on the thermal front parameter [Hewson, 1998, <doi:10.1017/S1350482798000553>], the F diagnostic [Parfitt et al., 2017, <doi:10.1002/2017GL073662>] and the DSI [Mack et al., 2022, <arXiv:2208.11438>].
This is a C/C++ based package for advanced data transformation and statistical computing in R that is extremely fast, class-agnostic, robust and programmer friendly. Core functionality includes a rich set of S3 generic grouped and weighted statistical functions for vectors, matrices and data frames, which provide efficient low-level vectorizations, OpenMP multithreading, and skip missing values by default. These are integrated with fast grouping and ordering algorithms (also callable from C), and efficient data manipulation functions. The package also provides a flexible and rigorous approach to time series and panel data in R. It further includes fast functions for common statistical procedures, detailed (grouped, weighted) summary statistics, powerful tools to work with nested data, fast data object conversions, functions for memory efficient R programming, and helpers to effectively deal with variable labels, attributes, and missing data.
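A short sketch of typical grouped computation (function names from the collapse documentation: fmean(), fmedian(), fgroup_by(), fsummarise()):

```r
library(collapse)

# Grouped mean of a vector, without building a data frame:
fmean(mtcars$mpg, g = mtcars$cyl)

# Grouped summary in the piped, data-frame style:
mtcars |>
  fgroup_by(cyl, gear) |>
  fsummarise(mpg = fmean(mpg), hp = fmedian(hp))
```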
This package implements the recently proposed Bayesian BIN model, which disentangles the underlying processes that enable forecasters and forecasting methods to improve, decomposing forecasting accuracy into three components: bias, partial information, and noise. By describing the differences between two groups of forecasters, the model allows the user to carry out useful inference, such as calculating the posterior probabilities that a treatment reduces bias, diminishes noise, or increases information. It also provides insight into how much tamping down bias and noise in judgment, or enhancing the efficient extraction of valid information from the environment, improves forecasting accuracy. For further information refer to the paper: Ville A. Satopää, Marat Salikhov, Philip E. Tetlock, and Barbara Mellers (2021) "Bias, Information, Noise: The BIN Model of Forecasting" <doi:10.1287/mnsc.2020.3882>.