This package provides methods for quantifying the entropy-based local indicator of spatial association (ELSA), which can be used for both continuous and categorical data. In addition, it offers other methods to measure local indicators of spatial association (LISA). Furthermore, global spatial structure can be measured using a variogram-like diagram called an entrogram. For more information, see Naimi, B., Hamm, N. A., Groen, T. A., Skidmore, A. K., Toxopeus, A. G., & Alibakhshi, S. (2019) <doi:10.1016/j.spasta.2018.10.001>.
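A minimal sketch of a typical workflow, assuming the package's exported elsa() and entrogram() functions operate on raster layers; the simulated grid, distance, and cutoff values below are illustrative assumptions, not values from the package documentation.

    # Illustrative only: raster content and parameter values are assumptions.
    library(raster)
    library(elsa)

    r <- raster(nrows = 50, ncols = 50, xmn = 0, xmx = 5000, ymn = 0, ymx = 5000)
    values(r) <- rnorm(ncell(r))                    # a continuous variable

    e  <- elsa(r, d = 500)                          # local ELSA within distance d
    en <- entrogram(r, width = 500, cutoff = 2500)  # global spatial structure
    plot(en)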
An implementation of the sandwich smoother proposed in Fast Bivariate Penalized Splines by Xiao et al. (2012) <doi:10.1111/rssb.12007>. A hero is a specific type of sandwich. Dictionary.com (2018) <https://www.dictionary.com> defines a hero as "a large sandwich, usually consisting of a small loaf of bread or long roll cut in half lengthwise and containing a variety of ingredients, as meat, cheese, lettuce, and tomatoes". Also implements the spatio-temporal sandwich smoother of French and Kokoszka (2021) <doi:10.1016/j.spasta.2020.100413>.
The multivariable fractional polynomial (MFP) algorithm simultaneously selects variables and functional forms in both generalized linear models and Cox proportional hazards models. Key references are Royston and Altman (1994) <doi:10.2307/2986270> and Royston and Sauerbrei (2008, ISBN:978-0-470-02842-1). In addition, the package can model a sigmoid relationship between a variable x and an outcome y using the approximate cumulative distribution (ACD) transformation proposed by Royston (2014) <doi:10.1177/1536867X1401400206>, a feature that a standard fractional polynomial function cannot achieve.
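A minimal sketch, assuming the mfp2 package's formula interface with fp() terms; the simulated data and argument values are illustrative assumptions.

    library(mfp2)
    set.seed(1)
    d <- data.frame(y  = rbinom(200, 1, 0.5),
                    x1 = rexp(200),
                    x2 = rnorm(200))
    # fp(x1) asks the algorithm to select a fractional polynomial form for x1
    fit <- mfp2(y ~ fp(x1) + x2, data = d, family = "binomial")
    summary(fit)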
Bivariate additive categorical regression via penalized maximum likelihood. Under a multinomial framework, the method fits bivariate models where both responses are nominal, ordinal, or a mix of the two. Partial proportional odds models are supported, with flexible (non-)uniform association structures. Various logit types and parametrizations can be specified for both marginals and the association, including Dale's model. The association structure can be regularized using polynomial-type penalty terms. Additive effects are modeled using P-splines. Standard methods such as summary(), residuals(), and predict() are available.
This package implements a set of distribution modeling methods suited to species with small sample sizes (e.g., poorly sampled or rare species). While these methods can also be used on well-sampled taxa, they are united by their ability to work with relatively few data points. More details on the currently implemented methodologies can be found in Drake and Richards (2018) <doi:10.1002/ecs2.2373>, Drake (2015) <doi:10.1098/rsif.2015.0086>, and Drake (2014) <doi:10.1890/ES13-00202.1>.
Generalized additive models under shape constraints on the component functions of the linear predictor. Models can include multiple shape-constrained (univariate and bivariate) and unconstrained terms. Routines of the mgcv package are used to set up the model matrix and to print and plot the results. Multiple smoothing parameters are estimated by Generalized Cross Validation or similar criteria. See Pya and Wood (2015) <doi:10.1007/s11222-013-9448-7> for an overview. The package offers a broad selection of shape-constrained smoothers, linear functionals of smooths with shape constraints, and Gaussian models with AR1 residuals.
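For illustration, a monotone-increasing smooth can be requested through the "mpi" basis in scam(); a minimal sketch with simulated data:

    library(scam)
    set.seed(1)
    n <- 200
    x <- runif(n)
    y <- exp(2 * x) + rnorm(n)                # increasing signal plus noise

    # bs = "mpi" constrains the P-spline smooth to be monotone increasing
    b <- scam(y ~ s(x, k = 15, bs = "mpi"), family = gaussian)
    plot(b)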
Support functions and datasets to facilitate the analysis of linguistic data. The current focus is on the calculation of corpus-linguistic dispersion measures as described in Gries (2021) <doi:10.1007/978-3-030-46216-1_5> and Soenning (2025) <doi:10.3366/cor.2025.0326>. The most commonly used parts-based indices are implemented, including different formulas and modifications that are found in the literature, with the additional option to obtain frequency-adjusted scores. Dispersion scores can be computed based on individual count variables or a term-document matrix.
Prediction intervals for ARIMA and structural time series models using an importance sampling approach with uninformative priors for the model parameters, leading to more accurate coverage probabilities in the frequentist sense. Instead of sampling the future observations and the hidden states of the state space representation of the model, only the model parameters are sampled, and the method is based on solving the equations corresponding to the conditional coverage probability of the prediction intervals. This makes the method relatively fast compared with, for example, MCMC methods, and standard errors of the prediction limits can also be computed straightforwardly.
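A minimal sketch, assuming the tsPI implementation of this approach and its arima_pi() interface; the model order and forecast horizon are illustrative.

    library(tsPI)
    # Importance-sampling prediction intervals for an ARIMA(0,1,1)
    # fitted to the Nile data shipped with base R
    pred <- arima_pi(Nile, order = c(0, 1, 1), n_ahead = 10)
    pred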
Gives a number of functions to aid common data analysis processes and the reporting of statistical results in an RMarkdown file. The data analysis functions combine multiple base R functions used to describe simple bivariate relationships into single, easy-to-use functions. The reporting functions return character strings for p-values, confidence intervals, and hypothesis test and regression results. Strings are LaTeX-formatted as necessary and knit cleanly in an RMarkdown document. The package also provides wrapper functions for the tableone package to make its results knit-able.
Deconvolving thermoluminescence glow curves according to various kinetic models (first-order, second-order, general-order, and mixed-order) using a modified Levenberg-Marquardt algorithm (Moré, 1978) <DOI:10.1007/BFb0067700>. It provides the possibility of setting constraints or fixing any of the parameters. It offers an interactive way to initialize parameters by clicking with a mouse on a plot at the positions where peak maxima should be located. The optimal estimate is obtained by "trial-and-error". It also provides routines for simulating first-order, second-order, and general-order glow peaks.
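An illustrative sketch of the deconvolution call, assuming tgcd()'s documented Sigdata/npeak interface; the input file name is hypothetical and the number of peaks is a user-supplied assumption.

    library(tgcd)
    # Two-column matrix: temperature (K) and measured TL intensity
    curve <- as.matrix(read.table("glow_curve.txt"))  # hypothetical file
    # Click on the plotted curve to seed the peak maxima interactively
    fit <- tgcd(curve, npeak = 3)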
REDUCE is a portable general-purpose computer algebra system. It is a system for doing scalar, vector and matrix algebra by computer, which also supports arbitrary precision numerical approximation and interfaces to gnuplot to provide graphics. It can be used interactively for simple calculations but also provides a full programming language, with a syntax similar to other modern programming languages. REDUCE supports alternative user interfaces including Run-REDUCE, TeXmacs and GNU Emacs. This package provides the Codemist Standard Lisp (CSL) version of REDUCE. It uses the gnuplot program, if installed, to draw figures.
Evaluation of the density and distribution functions of convolutions of gamma distributions in R. Two related exact methods and one approximate method are implemented with efficient algorithms and C++ code. A quick guide to choosing the correct method and to the usage of this package is given in the package vignette. For details of the methods used in this package, we refer the user to Mathai (1982) <doi:10.1007/BF02481056>, Moschopoulos (1984) <doi:10.1007/BF02481123>, Barnabani (2017) <doi:10.1080/03610918.2014.963612>, and Hu et al. (2020) <doi:10.1007/s00180-019-00924-9>.
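A minimal sketch, assuming the dcoga()/pcoga() naming used by the coga package; the shape and rate vectors below are illustrative.

    library(coga)
    # Density and CDF of the sum of three independent gamma variables
    # with differing shapes and rates
    x <- seq(0, 10, length.out = 101)
    d <- dcoga(x, shape = c(1, 2, 3), rate = c(1, 0.5, 2))
    p <- pcoga(x, shape = c(1, 2, 3), rate = c(1, 0.5, 2))
    plot(x, d, type = "l")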
This package implements entropy balancing, a data preprocessing procedure described in Hainmueller (2012, <doi:10.1093/pan/mpr025>) that allows users to reweight a dataset such that the covariate distributions in the reweighted data satisfy a set of user-specified moment conditions. This can be useful for creating balanced samples in observational studies with a binary treatment, where the control group data can be reweighted to match the covariate moments of the treatment group. Entropy balancing can also be used to reweight a survey sample to known characteristics of a target population.
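A minimal sketch, assuming ebal's ebalance(Treatment, X) interface; the data are simulated.

    library(ebal)
    set.seed(1)
    n <- 500
    X <- cbind(x1 = rnorm(n), x2 = rbinom(n, 1, 0.4))
    D <- rbinom(n, 1, plogis(0.5 * X[, 1]))

    eb <- ebalance(Treatment = D, X = X)   # weights for the control units
    # Weighted control means should now match the treatment-group means
    colSums(X[D == 0, ] * eb$w) / sum(eb$w)
    colMeans(X[D == 1, ])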
This package provides tools to perform fuzzy formal concept analysis, as presented in Wille (1982) <doi:10.1007/978-3-642-01815-2_23> and in Ganter and Obiedkov (2016) <doi:10.1007/978-3-662-49291-8>. It provides functions to load and save a formal context and to extract its concept lattice and implications. In addition, one can use the implications to compute semantic closures of fuzzy sets and thus build recommendation systems. Matrix factorization is provided by the GreConD+ algorithm (Belohlavek and Trneckova, 2024, <doi:10.1109/TFUZZ.2023.3330760>).
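A minimal sketch, assuming fcaR's R6 FormalContext interface and its bundled 'planets' context.

    library(fcaR)
    fc <- FormalContext$new(planets)   # binary context shipped with fcaR
    fc$find_concepts()                 # compute the concept lattice
    fc$find_implications()             # Duquenne-Guigues basis
    fc$concepts
    fc$implications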
Generate decision tables and simulate operating characteristics for phase I dose-finding designs to enable objective comparison across methods. Supported designs include the traditional 3+3, Bayesian Optimal Interval (BOIN) (Liu and Yuan (2015) <doi:10.1158/1078-0432.CCR-14-1526>), modified Toxicity Probability Interval-2 (mTPI-2) (Guo et al. (2017) <doi:10.1002/sim.7185>), interval 3+3 (i3+3) (Liu et al. (2020) <doi:10.1177/0962280220939123>), and Generalized 3+3 (G3). Provides visualization tools for comparing decision rules and operating characteristics across multiple designs simultaneously.
Automatically displays the order and spatial weighting matrix of the distances between locations. This concept was derived from the research of Mubarak, Aslanargun, and Siklar (2021) <doi:10.52403/ijrr.20211150> and Mubarak, Aslanargun, and Siklar (2022) <doi:10.17654/0972361722052>. Distance data between locations can be imported from 'Ms. Excel' or the maps package, or created directly in R. This package also provides five simulations of distances between locations derived from fictitious data, the maps package, and the research of Mubarak, Aslanargun, and Siklar (2022) <doi:10.29244/ijsa.v6i1p90-100>.
Algorithms for distance-based k-medoids clustering: simple and fast k-medoids, ranked k-medoids, and k-medoids with an increasing number of clusters. Calculates distances for mixed-variable data such as Gower, Podani, Wishart, Huang, Harikumar-PV, and Ahmad-Dey. Cluster validation applies internal and relative criteria: the internal criteria include the silhouette index and shadow values, while the relative criterion applies a bootstrap procedure producing a heatmap with a flexible matrix-reordering algorithm such as complete, Ward, or average linkage. The cluster result can be plotted as a marked barplot or a PCA biplot.
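A minimal sketch of the simple-and-fast k-medoids algorithm on a precomputed distance matrix; for mixed-type data the matrix would instead come from distmix() with, e.g., the Gower method.

    library(kmed)
    dmat <- as.matrix(dist(iris[, 1:4]))          # numeric example data
    res  <- fastkmed(dmat, ncluster = 3, iterate = 50)
    table(res$cluster, iris$Species)
    # Mixed variables: build dmat with distmix(data, method = "gower", ...)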
This package provides a weighting approach that employs kernels to make one group have a similar distribution to another group on covariates. This method matches not only means or marginal distributions but also higher-order transformations implied by the choice of kernel. kbal is applicable to both treatment effect estimation and survey reweighting problems. Based on Hazlett, C. (2020) "Kernel Balancing: A flexible non-parametric weighting procedure for estimating causal effects." Statistica Sinica. <https://www.researchgate.net/publication/299013953_Kernel_Balancing_A_flexible_non-parametric_weighting_procedure_for_estimating_causal_effects>.
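A minimal sketch, assuming kbal's kbal(allx, treatment) entry point; the data are simulated and the returned weight component is assumed to be $w.

    library(kbal)
    set.seed(2)
    n <- 300
    X <- cbind(rnorm(n), rnorm(n))
    D <- rbinom(n, 1, plogis(X[, 1] + 0.5 * X[, 2]^2))

    kb <- kbal(allx = X, treatment = D)
    w  <- kb$w   # weights under which controls resemble the treated (assumed slot)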
In the context of multistate models, which are popular in sociology, demography, and epidemiology, Markov chain with rewards calculations can help to refine transition timings and so obtain more accurate estimates. The package code accommodates up to nine transient states and irregular age (time) intervals. Traditional demographic life tables result as a special case. The formulas and methods involved are explained in detail in the accompanying article: Schneider, Myrskylä, and van Raalte (2021): Flexible Transition Timing in Discrete-Time Multistate Life Tables Using Markov Chains with Rewards, MPIDR Working Paper WP-2021-002.
This package performs mutational signature analysis for targeted sequenced tumors. Unlike the canonical analysis of mutational signatures, SATS factorizes the mutation counts matrix into a panel context matrix (measuring the size of the targeted sequenced genome of each tumor in units of million base pairs (Mb)), a signature profile matrix, and a signature activity matrix. SATS also calculates the expected number of mutations attributed to each signature, namely the signature burden, for each targeted sequenced tumor. For more details see Lee et al. (2024) <doi:10.1101/2023.05.18.23290188>.
Animal movement models including the Moving-Resting Process with Embedded Brownian Motion (Yan et al., 2014, <doi:10.1007/s10144-013-0428-8>; Pozdnyakov et al., 2017, <doi:10.1007/s11009-017-9547-6>), Brownian Motion with Measurement Error (Pozdnyakov et al., 2014, <doi:10.1890/13-0532.1>), Moving-Resting-Handling Process with Embedded Brownian Motion (Pozdnyakov et al., 2020, <doi:10.1007/s11009-020-09774-1>), Moving-Resting Process with Measurement Error (Hu et al., 2021, <doi:10.1111/2041-210X.13694>), and the Moving-Moving Process with two Embedded Brownian Motions.
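An illustrative sketch, assuming smam's fitMR() interface for the moving-resting model; 'tracks' stands in for a user-supplied matrix with time in the first column and coordinates in the remaining columns, and the starting values are placeholders.

    library(smam)
    # tracks: matrix with columns (time, x, y), supplied by the user
    fit <- fitMR(tracks, start = c(1, 1, 1))  # moving/resting rates, sigma
    fit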
Some accelerated three-term conjugate gradient algorithms implemented purely in R with the same user interface as optim(). The search directions and acceleration scheme are described in Andrei, N. (2013) <doi:10.1016/j.amc.2012.11.097>, Andrei, N. (2013) <doi:10.1016/j.cam.2012.10.002>, and Andrei, N. (2015) <doi:10.1007/s11075-014-9845-9>. Line search is done by a hybrid algorithm incorporating the ideas in Oliveira and Takahashi (2020) <doi:10.1145/3423597> and Moré and Thuente (1994) <doi:10.1145/192115.192132>.
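Since the package advertises an optim()-style interface, a call is assumed to look like the following sketch; the exported function name ttcg() is an assumption, not confirmed by the description.

    # Hypothetical call assuming an optim()-style entry point named ttcg()
    library(ttcg)
    rosenbrock <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
    res <- ttcg(par = c(-1.2, 1), fn = rosenbrock)
    res$par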
Users can estimate treatment effects for multiple-subgroup basket trials based on the Bayesian Cluster Hierarchical Model (BCHM). In this model, a Bayesian non-parametric method is applied to dynamically calculate the number of clusters by conducting multiple cluster classification based on subgroup outcomes. A hierarchical model is used to compute the posterior probability of the treatment effect, with the borrowing strength determined by the Bayesian non-parametric clustering and the similarities between subgroups. To use this package, the JAGS software and the rjags package are required and must be pre-installed.
When samples contain missing data, are small, or are suspected of bias, estimates of scale reliability may not be trustworthy. A recommended solution for this common problem is Bayesian model estimation: Bayesian methods rely on user-specified information, from historical data or researcher intuition, to estimate the parameters more accurately. This package provides a user-friendly interface for estimating test reliability. Here, reliability is modeled as a beta-distributed random variable with shape parameters alpha = true score variance and beta = error variance (Tanzer & Harlow, 2020) <doi:10.1080/00273171.2020.1854082>.