This package provides methods to evaluate the performance characteristics of various point and interval estimators for optimal adaptive two-stage designs, as described in Meis et al. (2024) <doi:10.1002/sim.10020>. Specifically, the package is written to work with trial designs created by the 'adoptr' package (Kunzmann et al. (2021) <doi:10.18637/jss.v098.i09>; Pilz et al. (2021) <doi:10.1002/sim.8953>). Apart from the a priori evaluation of performance characteristics, the package also allows for the evaluation of the implemented estimators on real datasets, and it implements methods to calculate p-values.
Set of functions to perform various bootstrap unit root tests for individual time series (including the augmented Dickey-Fuller test and union tests), multiple time series, and panel data; see Smeekes and Wilms (2023) <doi:10.18637/jss.v106.i12>, Palm, Smeekes and Urbain (2008) <doi:10.1111/j.1467-9892.2007.00565.x>, Palm, Smeekes and Urbain (2011) <doi:10.1016/j.jeconom.2010.11.010>, Moon and Perron (2012) <doi:10.1016/j.jeconom.2012.01.008>, Smeekes and Taylor (2012) <doi:10.1017/S0266466611000387> and Smeekes (2015) <doi:10.1111/jtsa.12110> for key references.
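A toy residual-bootstrap Dickey-Fuller test in base R, to illustrate the general idea only; 'bootUR' itself implements far more refined sieve and block bootstrap schemes, union tests, and panel extensions.

## simulate a random walk, so the unit-root null holds
set.seed(1)
y <- cumsum(rnorm(100))

df_stat <- function(y) {
  dy <- diff(y); ylag <- y[-length(y)]
  summary(lm(dy ~ ylag))$coefficients["ylag", "t value"]
}

t_obs <- df_stat(y)
res   <- diff(y) - mean(diff(y))            # residuals under the unit-root null
t_boot <- replicate(999, df_stat(cumsum(sample(res, replace = TRUE))))
mean(t_boot <= t_obs)                       # bootstrap p-value (left-tailed)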
Meta-analysis approaches for Phase I dose-finding clinical trials (early phases), designed to better meet requirements in terms of the maximum tolerated dose (MTD) and the maximal dose regimen (MDR). The package currently offers three different approaches: (a) an approach proposed by Zohar et al. (2011) <doi:10.1002/sim.4121> (denoted ZKO), (b) the Variance Weighted pooling analysis (called VarWT), and (c) the Random Effects Model Based (REMB) algorithm, where the user can supply their own model-based approach or use the existing random-effects logistic regression model (named 'glimem') through the 'dfmeta' package.
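A generic sketch of a random-effects logistic regression pooling dose-toxicity data across trials, in the spirit of the REMB approach; this uses 'lme4' directly rather than the package's 'glimem' model, and all data shown are made up.

library(lme4)
toxdata <- data.frame(
  trial = factor(rep(1:4, each = 3)),
  dose  = rep(c(1, 2, 3), times = 4),            # standardized dose levels
  n     = rep(6, 12),                            # patients per dose and trial
  tox   = c(0, 1, 2, 0, 2, 3, 1, 1, 4, 0, 1, 3)  # observed toxicities
)
fit <- glmer(cbind(tox, n - tox) ~ dose + (1 | trial),
             family = binomial, data = toxdata)
summary(fit)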
'EvoPER', Evolutionary Parameter Estimation for Individual-based Models, is an extensible package providing optimization-driven parameter estimation methods based on metaheuristics and evolutionary computation techniques (Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization for continuous domains, Tabu Search, Evolutionary Strategies, ...), which can be more efficient and in some cases require fewer model evaluations than alternatives relying on experimental design. Currently there is built-in support for models developed with the Repast Simphony agent-based framework (<https://repast.github.io/>) and with NetLogo (<https://ccl.northwestern.edu/netlogo/>), two of the most widely used frameworks for agent-based modeling.
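A tiny particle swarm optimization sketch in base R, illustrating the metaheuristic idea on a toy objective (this is not the 'EvoPER' API):

set.seed(1)
pso <- function(f, lower, upper, n = 20, iters = 100, w = 0.7, c1 = 1.5, c2 = 1.5) {
  d <- length(lower)
  x <- t(replicate(n, runif(d, lower, upper)))   # particle positions
  v <- matrix(0, n, d)                           # particle velocities
  pbest <- x                                     # personal best positions
  pbest_f <- apply(x, 1, f)
  gbest <- pbest[which.min(pbest_f), ]           # global best position
  for (i in seq_len(iters)) {
    r1 <- matrix(runif(n * d), n, d)
    r2 <- matrix(runif(n * d), n, d)
    v <- w * v + c1 * r1 * (pbest - x) + c2 * r2 * sweep(-x, 2, gbest, "+")
    x <- x + v
    x <- pmin(pmax(x, matrix(lower, n, d, byrow = TRUE)),   # keep particles
              matrix(upper, n, d, byrow = TRUE))            # inside the box
    fx <- apply(x, 1, f)
    better <- fx < pbest_f
    pbest[better, ] <- x[better, , drop = FALSE]
    pbest_f[better] <- fx[better]
    gbest <- pbest[which.min(pbest_f), ]
  }
  list(par = gbest, value = min(pbest_f))
}
pso(function(p) sum(p^2), lower = c(-5, -5), upper = c(5, 5))   # toy sphere function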
Offers a pipe-friendly alternative to the 'dplyr' functions case_when() and if_else(), as well as a number of user-friendly simplifications for common use cases. These functions accept a vector as an optional first argument, allowing conditional statements to be built using the 'magrittr' dot operator. The functions also coerce all outputs to the same type, meaning you no longer have to worry about using specific typed variants of NA or explicitly declaring integer outputs, and they evaluate outputs somewhat lazily, so you don't waste time on long operations whose results won't be used.
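A hypothetical illustration of the idea (the function name and arguments below are made up, not this package's actual API): a vector-first conditional in which '.' refers to the incoming vector and assignment coerces all outputs to a common type.

if_else_vec <- function(x, condition, true, false, missing = NA) {
  cond <- eval(substitute(condition), list(. = x), parent.frame())
  out <- rep(missing, length(x))     # plain NA works, no typed NA variants needed
  out[which(cond)]  <- true          # assignment coerces to a common type
  out[which(!cond)] <- false
  out
}

x <- c(1L, 5L, NA, 10L)
if_else_vec(x, . > 3, "big", "small")
# pipe-friendly: x |> if_else_vec(. > 3, "big", "small")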
Simulate demand and attributes for ready-to-launch new products during their life cycle, or during their introduction and growth phases. You provide the number of products, attributes, time periods and/or other parameters, and 'npdsim' can simulate the demand for each product during the considered time periods as well as the attributes of each product. The demand simulation is based on the idea that each product has a shape and a level, where the level is the cumulative demand over the considered time periods and the shape is the normalized demand across those time periods.
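A minimal base-R sketch of the shape/level idea (illustration only, not the 'npdsim' interface): the shape is a normalized curve over the periods, the level is the total demand, and the simulated demand is their product.

periods <- 1:12
shape <- dnorm(periods, mean = 6, sd = 2.5)   # bell-like life-cycle shape
shape <- shape / sum(shape)                   # normalize so the shape sums to one
level <- 5000                                 # total demand over all periods
demand <- round(level * shape)
rbind(period = periods, demand = demand)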
Fits Bayesian regularized varying coefficient models with the Nonparametric Varying Coefficient Spike-and-Slab Lasso (NVC-SSL) introduced by Bai et al. (2023) <arXiv:1907.06477>. Functions to fit frequentist penalized varying coefficient models are also provided, with the option of employing the group lasso penalty of Yuan and Lin (2006) <doi:10.1111/j.1467-9868.2005.00532.x>, the group minimax concave penalty (MCP) of Breheny and Huang (2015) <doi:10.1007/s11222-013-9424-2>, or the group smoothly clipped absolute deviation (SCAD) penalty of Breheny and Huang (2015) <doi:10.1007/s11222-013-9424-2>.
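A generic sketch of the frequentist, group-penalized idea using the 'grpreg' package: expand a covariate's varying coefficient in a B-spline basis and penalize the basis coefficients as a group (an illustration of the approach, not this package's own fitting functions).

library(splines)
library(grpreg)                # group lasso / group MCP / group SCAD penalties

set.seed(1)
n <- 200
t <- runif(n)                  # effect modifier (e.g., time)
x <- rnorm(n)                  # covariate with a varying coefficient
y <- (1 + sin(2 * pi * t)) * x + rnorm(n)

B <- bs(t, df = 6)             # spline basis for the varying coefficient
X <- x * B                     # design matrix: covariate times each basis function
fit <- grpreg(X, y, group = rep(1, ncol(X)), penalty = "grLasso")
head(coef(fit, lambda = fit$lambda[10]))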
Fast functions for dealing with prime numbers, such as testing whether a number is prime and generating a sequence of prime numbers. Additional functions include finding prime factors and Ruth-Aaron pairs, finding next and previous prime numbers in the series, finding or estimating the nth prime, estimating the number of primes less than or equal to an arbitrary number, computing primorials, prime k-tuples (e.g., twin primes), finding the greatest common divisor and smallest (least) common multiple, testing whether two numbers are coprime, and computing Euler's totient function. Most functions are vectorized for speed and convenience.
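Two of the operations described above, sketched in plain base R (illustration only, not this package's vectorized implementations):

is_prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n < 4) return(TRUE)
  all(n %% 2:floor(sqrt(n)) != 0)   # no divisor up to sqrt(n)
}

gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)

euler_totient <- function(n) {
  # count the integers in 1..n that are coprime with n
  sum(sapply(seq_len(n), function(k) gcd(k, n) == 1))
}

sapply(c(2, 9, 17, 25), is_prime)   # TRUE FALSE TRUE FALSE
euler_totient(12)                   # 4 (namely 1, 5, 7, 11)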
Several person-fit statistics (PFSs; Meijer and Sijtsma, 2001, <doi:10.1177/01466210122031957>) are offered. These statistics allow assessing whether individual response patterns to tests or questionnaires are (im)plausible given the other respondents in the sample or given a specified item response theory model. Some PFSs apply to dichotomous data, such as the likelihood-based PFSs (lz, lz*) and the group-based PFSs (personal biserial correlation, caution index, (normed) number of Guttman errors, agreement/disagreement/dependability statistics, U3, ZU3, NCI, Ht). PFSs suitable for polytomous data include extensions of lz, U3, and the (normed) number of Guttman errors.
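As an illustration of one of the group-based statistics mentioned above, a base-R sketch of the number of Guttman errors for a dichotomous response pattern (not the package's implementation):

guttman_errors <- function(responses, item_pvalues) {
  # order items from easiest to hardest (highest to lowest proportion correct)
  r <- responses[order(item_pvalues, decreasing = TRUE)]
  # a Guttman error: a harder item answered correctly while an easier one is not
  sum(outer(seq_along(r), seq_along(r), "<") & outer(r == 0, r == 1, "&"))
}

pvals <- c(0.9, 0.8, 0.6, 0.4, 0.2)      # item proportions correct in the sample
guttman_errors(c(1, 0, 1, 1, 0), pvals)  # 2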
This package performs a qualitative analysis of one- and two-dimensional autonomous ordinary differential equation systems, using phase plane methods. Programs are available to identify and classify equilibrium points, plot the direction field, and plot trajectories for multiple initial conditions. In the one-dimensional case, a program is also available to plot the phase portrait. In the two-dimensional case, programs are additionally available to plot nullclines and the stable/unstable manifolds of saddle points. Many example systems are provided for the user. Further details can be found in Grayling (2014) <doi:10.32614/RJ-2014-023>.
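A quick base-R sketch of a direction field for a two-dimensional autonomous system, dx/dt = y, dy/dt = -x (illustrating the idea; 'phaseR' provides its own plotting functions):

f <- function(x, y) c(dx = y, dy = -x)

grid <- expand.grid(x = seq(-2, 2, length.out = 15),
                    y = seq(-2, 2, length.out = 15))
d <- t(mapply(f, grid$x, grid$y))
len <- sqrt(rowSums(d^2)); len[len == 0] <- 1
d <- 0.1 * d / len                           # normalize arrow lengths

plot(grid$x, grid$y, type = "n", xlab = "x", ylab = "y")
arrows(grid$x, grid$y, grid$x + d[, 1], grid$y + d[, 2], length = 0.04)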
G-computation for a set of time-fixed exposures with quantile-based basis functions, possibly under linearity and homogeneity assumptions. This approach estimates a regression line corresponding to the expected change in the outcome (on the link scale) given a simultaneous increase in the quantile-based category for all exposures. Works with continuous, binary, and right-censored time-to-event outcomes. Reference: Alexander P. Keil, Jessie P. Buckley, Katie M. O'Brien, Kelly K. Ferguson, Shanshan Zhao, and Alexandra J. White (2019), A quantile-based g-computation approach to addressing the effects of exposure mixtures; <doi:10.1289/EHP5838>.
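A bare-bones sketch of the linear, homogeneous version of this idea in base R (not the package's estimator): quantize each exposure, fit a GLM, and read the mixture effect psi off as the sum of the exposure coefficients.

set.seed(1)
n  <- 500
x1 <- rlnorm(n); x2 <- rlnorm(n); x3 <- rlnorm(n)   # x3 is a null exposure
y  <- 0.2 * x1 + 0.1 * x2 + rnorm(n)

quantize <- function(x, q = 4)
  cut(x, quantile(x, 0:q / q), labels = FALSE, include.lowest = TRUE) - 1

d   <- data.frame(y, q1 = quantize(x1), q2 = quantize(x2), q3 = quantize(x3))
fit <- glm(y ~ q1 + q2 + q3, data = d)
psi <- sum(coef(fit)[c("q1", "q2", "q3")])   # joint effect of a one-quantile
psi                                          # increase in all exposures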
Toolbox containing a variety of spectral clustering functions. Among the tools available are the hierarchical spectral clustering algorithm, the Shi and Malik clustering algorithm, the Perona and Freeman algorithm, non-normalized clustering, the Von Luxburg algorithm, the Partition Around Medoids clustering algorithm, a multi-level clustering algorithm, recursive clustering, and the fast method for all clustering algorithms, as well as other tools needed to run these algorithms or useful for unsupervised spectral clustering. This toolbox aims to gather the main tools for unsupervised spectral classification. See <http://mawenzi.univ-littoral.fr/> for more information and documentation.
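A compact base-R sketch of non-normalized spectral clustering, illustrating the general technique rather than any of this toolbox's implementations:

set.seed(1)
x <- rbind(matrix(rnorm(100, 0), ncol = 2),   # two Gaussian clusters
           matrix(rnorm(100, 4), ncol = 2))

A <- exp(-as.matrix(dist(x))^2 / 2)           # Gaussian affinity matrix
L <- diag(rowSums(A)) - A                     # unnormalized graph Laplacian
# eigenvectors of the two smallest eigenvalues carry the cluster structure
U <- eigen(L, symmetric = TRUE)$vectors[, (nrow(L) - 1):nrow(L)]
clusters <- kmeans(U, centers = 2)$cluster
table(clusters, true = rep(1:2, each = 50))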
The Swiss Ephemeris (version 2.10.03) is a high precision ephemeris based upon the DE431 ephemerides from NASA's JPL. It covers the time range 13201 BCE to 17191 CE. This package uses the semi-analytic theory by Steve Moshier. For faster and more accurate calculations, the compressed Swiss Ephemeris data is available in the 'swephRdata' package. To access this data package, run install.packages("swephRdata", repos = "https://rstub.r-universe.dev", type = "source"). The size of the 'swephRdata' package is approximately 115 MB. The user can also use the original JPL DE431 data.
Easily calculate precession and obliquity from an orbital solution (defaults to ZB18a from Zeebe and Lourens (2019) <doi:10.1126/science.aax0612>) and assumed or reconstructed values for tidal dissipation (Td) and dynamical ellipticity (Ed). This is a translation and adaptation of the 'C' code in the supplementary material to Zeebe and Lourens (2022) <doi:10.1029/2021PA004349>, with further details on the methodology described in Zeebe (2022) <doi:10.3847/1538-3881/ac80f8>. The name of the 'C' routine is 'snvec', which refers to the key units of computation: spin vector s and orbit normal vector n.
This package contains an implementation of invariant causal prediction for sequential data. The main function in the package is 'seqICP', which performs linear sequential invariant causal prediction and has guaranteed type I error control. For non-linear dependencies the package also contains a non-linear method, 'seqICPnl', which allows the user to input any regression procedure and performs tests based on a permutation approach that is only approximately correct. In order to test whether an individual set S is invariant, the package contains the subroutines 'seqICP.s' and 'seqICPnl.s', corresponding to the respective main methods.
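A minimal usage sketch (hedged: it assumes 'seqICP' takes a predictor matrix and a response vector as its first two arguments; see the package manual for the available options and the structure of the output):

library(seqICP)

set.seed(1)
n  <- 300
X1 <- c(rnorm(n / 2), rnorm(n / 2, mean = 2))   # distribution shifts over time
X2 <- X1 + rnorm(n)                             # correlated with X1, not a parent of Y
Y  <- 0.8 * X1 + rnorm(n)

fit <- seqICP(cbind(X1, X2), Y)
summary(fit)                                    # inspect the estimated invariant sets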
'MetNet' contains functionality to infer metabolic network topologies from quantitative data and high-resolution mass/charge information. Using statistical models (including correlation, mutual information, regression and Bayes statistics) and quantitative data (intensity values of features), adjacency matrices are inferred that can be combined into a consensus matrix. Mass differences calculated between mass/charge values of features are matched against a data frame of supplied mass/charge differences referring to transformations of enzymatic activities. In a third step, the two levels of information are combined to form an adjacency matrix inferred from both quantitative and structural information.
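A generic sketch of the two levels of information described above, in plain R (not the 'MetNet' functions): a statistical adjacency matrix from feature intensities and a structural adjacency matrix from mass differences, combined into a consensus.

set.seed(1)
intensities <- matrix(rexp(5 * 20), nrow = 5,
                      dimnames = list(paste0("feat", 1:5), NULL))
mz <- c(100.00, 114.02, 128.03, 146.06, 162.05)      # toy m/z values
transformations <- c(CH2 = 14.0157, H2O = 18.0106)   # assumed mass differences

# statistical adjacency: absolute Pearson correlation above a threshold
stat_adj <- (abs(cor(t(intensities))) > 0.7) * 1
diag(stat_adj) <- 0

# structural adjacency: pairwise m/z differences matching a transformation
dmz <- abs(outer(mz, mz, "-"))
struct_adj <- apply(dmz, c(1, 2),
                    function(d) any(abs(d - transformations) < 0.01)) * 1

consensus <- stat_adj * struct_adj   # keep an edge only if both levels agree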
This package implements several basic algorithms for estimating regression parameters for the semiparametric accelerated failure time (AFT) model. The main methods are: Jin's rank-based method (Jin (2003) <doi:10.1093/biomet/90.2.341>), Heller's estimating method (Heller (2012) <doi:10.1198/016214506000001257>), the polynomial smoothed Gehan function method (Chung (2013) <doi:10.1007/s11222-012-9333-9>), the Buckley-James method (Buckley (1979) <doi:10.2307/2335161>), and Jin's improved least squares method (Jin (2006) <doi:10.1093/biomet/93.1.147>). This package can be used for modeling right-censored data and for comparing different estimation algorithms.
Real-time quantitative polymerase chain reaction (qPCR) data sets by Batsch et al. (2008) <doi:10.1186/1471-2105-9-95>. This package provides five data sets, one for each PCR target: (i) rat SLC6A14, (ii) human SLC22A13, (iii) pig EMT, (iv) chicken ETT, and (v) human GAPDH. Each data set comprises a five-point, four-fold dilution series. For each concentration there are three replicates. Each amplification curve is 45 cycles long. Original raw data file: <https://static-content.springer.com/esm/art%3A10.1186%2F1471-2105-9-95/MediaObjects/12859_2007_2080_MOESM5_ESM.xls>.
This package provides classes (S4) of commonly used elliptical, Archimedean, extreme-value and other copula families, as well as their rotations, mixtures and asymmetrizations. Nested Archimedean copulas, related tools and special functions. Methods for density, distribution, random number generation, bivariate dependence measures, Rosenblatt transform, Kendall distribution function, perspective and contour plots. Fitting of copula models with potentially partly fixed parameters, including standard errors. Serial independence tests, copula specification tests (independence, exchangeability, radial symmetry, extreme-value dependence, goodness-of-fit) and model selection based on cross-validation. Empirical copula, smoothed versions, and non-parametric estimators of the Pickands dependence function.
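A short usage sketch with standard 'copula' calls such as claytonCopula(), rCopula() and fitCopula() (see the package documentation for the full range of families and methods):

library(copula)

set.seed(1)
cc <- claytonCopula(param = 2, dim = 2)     # an Archimedean copula
u  <- rCopula(500, cc)                      # random sample on the unit square
dCopula(c(0.3, 0.7), cc)                    # copula density at a point
fitCopula(claytonCopula(dim = 2), u, method = "itau")  # fit via Kendall's tau inversion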
Random sampling from distributions with user-specified population covariance matrix. Marginal information may be fully specified, for which the package implements the VITA (VIne-To-Anything) algorithm of Grønneberg and Foldnes (2017) <doi:10.1007/s11336-017-9569-6>. See also Grønneberg, Foldnes and Marcoulides (2022) <doi:10.18637/jss.v102.i03>. Alternatively, marginal skewness and kurtosis may be specified, for which the package implements the IG (independent generator) and PLSIM (piecewise linear) algorithms, see Foldnes and Olsson (2016) <doi:10.1080/00273171.2015.1133274> and Foldnes and Grønneberg (2021) <doi:10.1080/10705511.2021.1949323>, respectively.
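A plain base-R sketch of the independent-generator (IG) idea mentioned above (not the package's implementation): transform independent non-normal variables by a matrix square root of the target covariance.

set.seed(1)
sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)    # target population covariance
A <- t(chol(sigma))                         # sigma = A %*% t(A)

# independent, skewed generators scaled to mean 0 and variance 1
z <- cbind(scale(rexp(10000)), scale(rexp(10000)))
x <- z %*% t(A)                             # sample with covariance close to sigma
round(cov(x), 2)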
This package implements a generalized linear model approach for detecting differentially expressed genes across treatment groups in count data. The package supports both quasi-Poisson and negative binomial models to handle over-dispersion, ensuring robust identification of differential expression. It allows for the inclusion of treatment effects and gene-wise covariates, as well as normalization factors for accurate scaling across samples. Additionally, it incorporates statistical significance testing with options for p-value adjustment and log2 fold-change thresholds, making it suitable for RNA-seq analysis as described in Xu et al. (2024) <doi:10.1371/journal.pone.0300565>.
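A rough sketch of the per-gene modelling idea in generic R (not this package's interface): a quasi-Poisson GLM per gene with a treatment effect and a normalization offset, followed by p-value adjustment.

set.seed(1)
n_genes <- 100; n_samples <- 6
group   <- factor(rep(c("control", "treated"), each = 3))
counts  <- matrix(rnbinom(n_genes * n_samples, mu = 50, size = 5),
                  nrow = n_genes)
norm_factors <- colSums(counts) / mean(colSums(counts))

pvals <- apply(counts, 1, function(y) {
  fit <- glm(y ~ group, family = quasipoisson(), offset = log(norm_factors))
  summary(fit)$coefficients["grouptreated", "Pr(>|t|)"]
})
padj <- p.adjust(pvals, method = "BH")      # adjust for multiple testing
sum(padj < 0.05)                            # genes flagged as differentially expressed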
The EXPOS model uses a digital elevation model (DEM) to estimate exposed and protected areas for a given hurricane wind direction and inflection angle. The resulting topographic exposure maps can be combined with output from the HURRECON model to estimate hurricane wind damage across a region. For details on the original version of the EXPOS model, written in 'Borland Pascal', see: Boose, Foster, and Fluet (1994) <doi:10.2307/2937142>, Boose, Chamberlin, and Foster (2001) <doi:10.1890/0012-9615(2001)071[0027:LARIOH]2.0.CO;2>, and Boose, Serrano, and Foster (2004) <doi:10.1890/02-4057>.
The 'geomod' package does spatial prediction of geotechnical soil properties. It predicts the spatial distribution of geotechnical properties of soil, e.g. shear strength, permeability, plasticity index, Standard Penetration Test (SPT) counts, etc. The output of the prediction takes the form of a map or a series of maps. It uses an interpolation technique in which a single or statistically 'best' estimate of the spatial occurrence of a soil property is determined. The interpolation is based on both the sampled data and a variogram model for the spatial correlation of the sampled data. The single estimate is produced by a Kriging technique.
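A generic ordinary kriging sketch with the 'gstat' and 'sp' packages, illustrating the variogram-plus-kriging workflow described above (this is not the 'geomod' interface, and it uses the example 'meuse' data rather than geotechnical measurements):

library(sp)
library(gstat)

data(meuse)                                  # sampled points with measurements
coordinates(meuse) <- ~ x + y
data(meuse.grid)                             # prediction locations
coordinates(meuse.grid) <- ~ x + y

v  <- variogram(log(zinc) ~ 1, meuse)        # empirical variogram
vm <- fit.variogram(v, vgm("Sph"))           # fitted spherical variogram model
kr <- krige(log(zinc) ~ 1, meuse, meuse.grid, model = vm)   # ordinary kriging
spplot(kr["var1.pred"])                      # map of the kriged estimate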
Base R's default setting for stringsAsFactors within data.frame() and as.data.frame() is supposedly the most often complained about piece of code in the R infrastructure. The 'hellno' package provides an explicit solution without changing R itself or having to mess around with options. It tries to solve this problem by providing alternative data.frame() and as.data.frame() functions that are in fact simple wrappers around base R's data.frame() and as.data.frame(), with the stringsAsFactors option set to HELLNO (which in turn equals FALSE) by default.
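A sketch of the wrapper idea described above, assuming the exported constant is simply HELLNO <- FALSE (the package's actual definitions may differ in detail):

HELLNO <- FALSE

data.frame <- function(..., stringsAsFactors = HELLNO) {
  base::data.frame(..., stringsAsFactors = stringsAsFactors)
}

as.data.frame <- function(x, ..., stringsAsFactors = HELLNO) {
  base::as.data.frame(x, ..., stringsAsFactors = stringsAsFactors)
}

d <- data.frame(txt = c("a", "b"))
class(d$txt)   # "character" rather than "factor", even on R versions before 4.0.0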