This package contains functions for non-parametric survival analysis of exact and interval-censored observations.
This package implements a nonparametric statistical test for rank or score data from partially balanced incomplete block design experiments.
Nonparametric smoothing methods for density and regression estimation involving circular data, including the estimation of the mean regression function and other conditional characteristics.
Dirichlet process mixture modeling of multivariate normal, skew-normal or skew-t distributions, oriented towards flow cytometry data preprocessing applications. The method is detailed in Hejblum, Alkhassim, Gottardo, Caron & Thiebaut (2019) <doi:10.1214/18-AOAS1209>.
Supports the book Wu CO and Tian X (2018), Nonparametric Models for Longitudinal Data, Chapman & Hall/CRC (to appear), and provides model fitting using global and local smoothing methods for conditional-mean and conditional-distribution based models with longitudinal data.
This package performs nonparametric estimation in mixture cure models, and significance tests for the cure probability. For details, see López-Cheda et al. (2017a) <doi:10.1016/j.csda.2016.08.002> and López-Cheda et al. (2017b) <doi:10.1007/s11749-016-0515-1>.
Perform a stratified weighted log-rank test in a randomized controlled trial. Tests can be visualized as a difference in average score on the two treatment arms. These methods are described in Magirr and Burman (2018) <doi:10.48550/arXiv.1807.11097>, Magirr (2020) <doi:10.48550/arXiv.2007.04767>, and Magirr and Jimenez (2022) <doi:10.48550/arXiv.2201.10445>.
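For orientation, a weighted log-rank test from the Fleming-Harrington G(rho) family can be run with the survival package; this is a generic sketch of stratified weighted log-rank testing, not necessarily the weighting scheme implemented by the package described above.

    # Generic sketch using the survival package (not this package's API):
    # a stratified G(rho)-family log-rank test; rho = 0 gives the standard
    # unweighted log-rank test, rho = 1 up-weights early events.
    library(survival)
    # lung is an example dataset bundled with survival
    survdiff(Surv(time, status) ~ sex + strata(ph.ecog), data = lung, rho = 1)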
Robust nonparametric bootstrap and permutation tests for location, correlation, and regression problems, as described in Helwig (2019a) <doi:10.1002/wics.1457> and Helwig (2019b) <doi:10.1016/j.neuroimage.2019.116030>. Univariate and multivariate tests are supported. For each problem, exact tests and Monte Carlo approximations are available. Five different nonparametric bootstrap confidence intervals are implemented. Parallel computing is implemented via the parallel package.
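As a rough illustration of the Monte Carlo approximation idea mentioned above, here is a minimal base-R permutation test for correlation; it is purely illustrative and does not reproduce this package's interface.

    # Minimal base-R sketch of a Monte Carlo permutation test for correlation
    # (illustrative only; not this package's API).
    set.seed(1)
    x <- rnorm(30); y <- 0.5 * x + rnorm(30)
    obs  <- cor(x, y)
    perm <- replicate(9999, cor(x, sample(y)))
    mean(c(abs(perm), abs(obs)) >= abs(obs))   # two-sided Monte Carlo p-value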
This package performs nonparametric analysis of longitudinal data in factorial experiments. Longitudinal data are collected from the same subjects over time and frequently arise in the biological sciences. Nonparametric methods do not require distributional assumptions and are applicable to a variety of data types (continuous, discrete, purely ordinal, and dichotomous). Such methods are also robust to outliers and suitable for small sample sizes.
Calculates phenological cycle and anomalies using a non-parametric approach applied to time series of vegetation indices derived from remote sensing data or field measurements. The package implements basic and high-level functions for manipulating vector data (numerical series) and raster data (satellite derived products). Processing of very large raster files is supported. For more information, please check the following paper: Chávez et al. (2023) <doi:10.3390/rs15010073>.
Simulate demand and attributes for ready-to-launch new products over their life cycle, or over their introduction and growth phases. You provide the number of products, attributes, time periods and/or other parameters, and npdsim can simulate the demand for each product during the considered time periods as well as the attributes of each product. The demand simulation is based on the idea that each product has a shape and a level, where the level is the cumulative demand over the considered time periods and the shape is the normalized demand across those periods.
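As a rough, hypothetical illustration of the shape/level decomposition described above (not the package's actual interface), a demand curve can be generated by scaling a normalized shape by a total level:

    # Hypothetical illustration of the shape/level idea: the shape sums to one,
    # the level is cumulative demand, and per-period demand is their product.
    periods <- 12
    shape   <- dnorm(1:periods, mean = 6, sd = 2)   # bell-shaped life cycle
    shape   <- shape / sum(shape)                   # normalize so it sums to 1
    level   <- 5000                                 # cumulative demand over all periods
    demand  <- level * shape
    round(demand)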
Analysis of multivariate data with two-way completely randomized factorial design. The analysis is based on fully nonparametric, rank-based methods and uses test statistics based on Dempster's ANOVA, Wilks' Lambda, Lawley-Hotelling and Bartlett-Nanda-Pillai criteria. The multivariate response is allowed to be ordinal, quantitative, binary or a mixture of the different variable types. The package offers two functions performing the analysis, one for small and the other for large sample sizes. The underlying methodology is largely described in Bathke and Harrar (2016) <doi:10.1007/978-3-319-39065-9_7> and in Munzel and Brunner (2000) <doi:10.1016/S0378-3758(99)00212-8>.
Nonparametric maximum likelihood estimation or Gaussian quadrature for overdispersed generalized linear models and variance component models.
This package provides routines for plotting linkage and association results along a chromosome, with marker names displayed along the top border. There are also routines for generating BED and BedGraph custom tracks for viewing in the UCSC Genome Browser. The data reformatting program Mega2 uses this package to plot output from a variety of programs.
Nonparametric tests for clustered data in pre-post intervention designs, documented in Cui and Harrar (2021) <doi:10.1002/bimj.201900310> and Harrar and Cui (2022) <doi:10.1016/j.jspi.2022.05.009>. In addition to the main test results described in the referenced papers, this package provides a function to calculate sample size allocations for a long-format input data set, a function for adjusted/unadjusted confidence interval calculations, and functions to visualize the distribution of the data across intervention groups over time along with the adjusted/unadjusted confidence intervals.
This package provides several novel exact hypothesis tests with minimal assumptions on the errors. The tests are exact, meaning that their p-values are correct for the given sample sizes (not derived from asymptotic analysis). The test for stochastic inequality is for ordinal comparisons based on two independent samples and requires no assumptions on the errors. The other tests include tests for the mean and variance of a single sample and for comparing means in independent samples. All these tests only require that the data have known bounds (such as percentages that lie in [0,100]); these bounds are part of the input.
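To see how known bounds alone can yield finite-sample p-values, here is a generic one-sided test of the mean based on Hoeffding's inequality for data bounded in [0,100]; it illustrates the general principle only and is not the algorithm used by the package described above.

    # Conservative exact p-value for H0: mu <= mu0, using only the bounds [a, b]
    # (Hoeffding's inequality); illustrative, not this package's method.
    hoeffding_pvalue <- function(x, mu0, a = 0, b = 100) {
      n <- length(x)
      delta <- mean(x) - mu0
      if (delta <= 0) return(1)
      exp(-2 * n * delta^2 / (b - a)^2)
    }
    hoeffding_pvalue(c(72, 88, 95, 64, 81, 90, 77, 85), mu0 = 50)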
Nonparametric tests for main effects, simple effects and interaction effects with censored data and two factorial influencing variables.
This package performs nonparametric estimation in mixture cure models when the cure status is partially known. For details, see Safari et al. (2021) <doi:10.1002/bimj.202100156>, Safari et al. (2022) <doi:10.1177/09622802221115880> and Safari et al. (2023) <doi:10.1007/s10985-023-09591-x>.
An implementation of the Nonparametric Predictive Inference approach in R. It provides tools for quantifying uncertainty via lower and upper probabilities. It includes useful functions for pairwise and multiple comparisons: comparing two groups with and without terminated tails, selecting the best group, selecting the subset of best groups, selecting the subset including the best group.
Fits sphere-sphere regression models by estimating locally weighted rotations. Simulation of sphere-sphere data according to non-rigid rotation models. Provides methods for bias reduction applying iterative procedures within a Newton-Raphson learning scheme. Cross-validation is exploited to select smoothing parameters. See Marco Di Marzio, Agnese Panzera & Charles C. Taylor (2018) <doi:10.1080/01621459.2017.1421542>.
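A simplified sketch of the kind of data described above: predictor points on the unit sphere and responses obtained by a fixed (rigid) rotation plus noise. This is base R only and not the package's API; the package's models allow the rotation to vary locally (non-rigid), which is not shown here.

    # Simulate predictor points uniformly on the unit sphere and rotated,
    # noisy responses projected back onto the sphere.
    set.seed(42)
    n <- 200
    X <- matrix(rnorm(3 * n), ncol = 3)
    X <- X / sqrt(rowSums(X^2))                     # points uniform on the sphere
    theta <- pi / 6                                 # rotate 30 degrees about the z-axis
    R <- rbind(c(cos(theta), -sin(theta), 0),
               c(sin(theta),  cos(theta), 0),
               c(0,           0,          1))
    Y <- X %*% t(R) + 0.05 * matrix(rnorm(3 * n), ncol = 3)
    Y <- Y / sqrt(rowSums(Y^2))                     # project responses back onto the sphere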
This package provides tools for data-driven statistical analysis using local polynomial regression and kernel density estimation methods as described in Calonico, Cattaneo and Farrell (2018, <doi:10.1080/01621459.2017.1285776>): lprobust() for local polynomial point estimation and robust bias-corrected inference, lpbwselect() for local polynomial bandwidth selection, kdrobust() for kernel density point estimation and robust bias-corrected inference, kdbwselect() for kernel density bandwidth selection, and nprobust.plot() for plotting results. The main methodological and numerical features of this package are described in Calonico, Cattaneo and Farrell (2019, <doi:10.18637/jss.v091.i08>).
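A minimal usage sketch with the functions named above; argument names and order are assumed from the cited JSS article and may differ from the installed version, so consult the package documentation.

    # Sketch using the functions listed above; exact signatures are assumptions.
    library(nprobust)
    set.seed(1)
    x <- runif(500); y <- sin(4 * x) + rnorm(500, sd = 0.3)
    bw  <- lpbwselect(y, x)          # data-driven bandwidth selection
    est <- lprobust(y, x)            # local polynomial estimate with robust CIs
    nprobust.plot(est)               # plot point estimates and confidence bands
    kd  <- kdrobust(x)               # kernel density estimate with robust CIs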
With this package, it is possible to compute nonparametric simultaneous confidence intervals for relative contrast effects in the unbalanced one-way layout. Moreover, it computes simultaneous p-values. The simultaneous confidence intervals can be computed using the multivariate normal distribution, the multivariate t-distribution with a Satterthwaite approximation of the degrees of freedom, or multivariate range-preserving transformations with logit or probit as the transformation function. Two-sample comparisons can be performed with the same methods described above. There is no assumption on the underlying distribution function, only that the data are at least ordinal. See Konietschke et al. (2015) <doi:10.18637/jss.v064.i09> for details.
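A minimal call sketch, assuming the formula interface described in the cited JSS article (Konietschke et al. 2015); the function name mctp() comes from that article, and defaults and argument names are assumptions.

    # Sketch assuming the formula interface from the cited JSS article; defaults
    # and argument names may differ in the installed version.
    library(nparcomp)
    dat <- data.frame(group = factor(rep(c("A", "B", "C"), each = 15)),
                      score = c(rnorm(15, 0), rnorm(15, 0.5), rnorm(15, 1)))
    mctp(score ~ group, data = dat)   # simultaneous CIs for relative contrast effects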
A number of statistical tests have been proposed to compare two survival curves, including the difference in (or ratio of) t-year survival, the difference in (or ratio of) p-th percentile survival, the difference in (or ratio of) restricted mean survival time, and the weighted log-rank test. Despite the multitude of options, the convention in survival studies is to assume proportional hazards and to use the unweighted log-rank test for design and analysis. This package provides sample size and power calculations for all of the above statistical tests, with allowance for flexible accrual, censoring, and survival distributions (e.g. Weibull, piecewise exponential, mixture cure). It is the companion R package to the paper by Yung and Liu (2020) <doi:10.1111/biom.13196>. Specific to the weighted log-rank test, users may specify which approximations they wish to use to estimate the large-sample mean and variance. The default option has been shown to provide a substantial improvement over the conventional sample size and power equations based on Schoenfeld (1981) <doi:10.1093/biomet/68.1.316>.
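For reference, the conventional Schoenfeld (1981) approximation mentioned above derives the required number of events from the target hazard ratio alone; a generic base-R sketch (not this package's interface) is:

    # Classic Schoenfeld (1981) events formula referenced above: required number
    # of events for a two-sided level-alpha unweighted log-rank test, where p is
    # the allocation proportion on one arm (0.5 for 1:1 allocation).
    schoenfeld_events <- function(hr, alpha = 0.05, power = 0.8, p = 0.5) {
      (qnorm(1 - alpha / 2) + qnorm(power))^2 / (p * (1 - p) * log(hr)^2)
    }
    schoenfeld_events(hr = 0.7)   # roughly 247 events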
This package performs combination tests and sample size calculation for fixed designs with survival endpoints, using combination tests under either proportional hazards or non-proportional hazards. The combination tests include the maximum weighted log-rank test and the projection test. The sample size calculation procedure is very flexible, allowing for a user-defined hazard ratio function and considering various trial conditions such as staggered entry and drop-out. The sample size calculation also applies to various cure models, such as the proportional hazards cure model and cure models with (random) delayed treatment effects. A trial simulation function is also provided to facilitate empirical power calculation. The references for the projection test and the maximum weighted log-rank test include Brendel et al. (2014) <doi:10.1111/sjos.12059> and Cheng and He (2021) <arXiv:2110.03833>. The references for sample size calculation under proportional hazards include Schoenfeld (1981) <doi:10.1093/biomet/68.1.316> and Freedman (1982) <doi:10.1002/sim.4780010204>. The references for calculation under non-proportional hazards include Lakatos (1988) <doi:10.2307/2531910> and Cheng and He (2023) <doi:10.1002/bimj.202100403>.