Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
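For example, the same query can be issued from R. This is a minimal sketch: the host is a placeholder for wherever this site is served, and the httr package is an assumed dependency.

    library(httr)
    # Placeholder host -- substitute the address of this site.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information is in the response headers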
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains the methods proposed by Geyer and Meeden (2005) <doi:10.1214/088342305000000340> and Trigo et al. (2025) <doi:10.47749/T/UNICAMP.2025.1500297> to construct fuzzy confidence intervals. It computes and plots the fuzzy membership functions of the methods, and the expected length compared with the infimum.
Implementation of the Future API <doi:10.32614/RJ-2021-048> on top of the batchtools package. This allows you to process futures, as defined by the future package, in parallel out of the box, not only on your local machine or ad-hoc cluster of machines, but also via high-performance compute ('HPC') job schedulers such as 'LSF', 'OpenLava', 'Slurm', 'SGE', and 'TORQUE'/'PBS', e.g. y <- future.apply::future_lapply(files, FUN = process).
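A minimal sketch of the intended workflow, assuming a Slurm scheduler is available (the file names and the process function are placeholders):

    library(future)
    # Send futures to the Slurm scheduler instead of running locally.
    plan(future.batchtools::batchtools_slurm)
    files <- c("a.dat", "b.dat")      # placeholder input files
    process <- function(f) nchar(f)   # placeholder per-file computation
    y <- future.apply::future_lapply(files, FUN = process)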
Estimates fuzzy measures of poverty and deprivation. It also estimates the sampling variance of these measures using bootstrap or jackknife repeated replications.
Screens daily streamflow time series for temporal trends and change-points. This package has been primarily developed for assessing the quality of daily streamflow time series. It also contains tools for plotting and calculating many different streamflow metrics. The package can be used to produce summary screening plots showing change-points and significant temporal trends for high flow, low flow, and/or baseflow statistics, or it can be used to perform more detailed hydrological time series analyses. The package was designed for screening daily streamflow time series from the Water Survey of Canada and the United States Geological Survey but will also work with streamflow time series from many other agencies. Version 2.0 updated the read.flows function to allow loading of GRDC and ROBIN streamflow record formats. This package uses the changepoint package for change-point detection; for more information on change-point methods, see <https://cran.r-project.org/package=changepoint>.
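A sketch of a typical screening run. Only read.flows is named above; the remaining calls follow my reading of the package's documented workflow and should be checked against its manual, and the file path is a placeholder.

    library(FlowScreen)
    flows <- read.flows("05AA008_daily_flows.csv")  # placeholder path
    ts    <- create.ts(flows)         # build the daily time series object
    res   <- metrics.all(ts)          # compute trend/change-point metrics
    screen.summary(res, type = "l")   # summary screening plot, low flows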
Convenient functions for ensemble forecasts in R combining approaches from the forecast package. Forecasts generated from auto.arima(), ets(), thetaf(), nnetar(), stlm(), tbats(), snaive() and arfima() can be combined with equal weights, weights based on in-sample errors (introduced by Bates & Granger (1969) <doi:10.1057/jors.1969.103>), or cross-validated weights. Cross validation for time series data with user-supplied models and forecasting functions is also supported to evaluate model accuracy.
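A minimal sketch of equal-weight combination, assuming hybridModel() as the package's main entry point (as in its documentation):

    library(forecastHybrid)
    # Combine the component models with equal weights,
    # then forecast 12 steps ahead.
    fit <- hybridModel(AirPassengers, weights = "equal")
    fc  <- forecast(fit, h = 12)
    plot(fc)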
Perform maximum likelihood factor analysis on a covariance matrix or data matrix.
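This description mirrors base R's factanal(); a minimal sketch of a fit from a covariance matrix alone:

    # Two-factor maximum-likelihood fit from a covariance matrix;
    # n.obs enables the test for the number of factors.
    cv <- cov(mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")])
    fa <- factanal(factors = 2, covmat = cv, n.obs = nrow(mtcars))
    fa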
Reads cell contents plus formatting from a spreadsheet file and creates an editable gt object with the same data and formatting. Supports the most commonly used cell and text styles, including colors, fills, font weights and decorations, and borders.
Designed to streamline the process of analyzing genotyping data from Fluidigm machines, this package offers a suite of tools for data handling and analysis. It includes functions for converting Fluidigm data to the format used by PLINK, estimating errors, calculating pairwise similarities, determining pairwise similarity loci, and generating a similarity matrix.
Construction and smart selection of Gaussian process models for analysis of computer experiments with emphasis on treatment of functional inputs that are regularly sampled. This package offers: (i) flexible modeling of functional-input regression problems through the fairly general Gaussian process model; (ii) built-in dimension reduction for functional inputs; (iii) heuristic optimization of the structural parameters of the model (e.g., active inputs, kernel function, type of distance). An in-depth tutorial on the use of funGp is provided in Betancourt et al. (2024) <doi:10.18637/jss.v109.i05>, and metamodeling background is provided in Betancourt et al. (2020) <doi:10.1016/j.ress.2020.106870>. The algorithm for structural parameter optimization is described in <https://hal.science/hal-02532713>.
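A small sketch with simulated data; fgpm() is the model-fitting function described in the JSS tutorial, and the inputs here are placeholders:

    library(funGp)
    set.seed(100)
    n <- 25
    sIn  <- matrix(runif(n * 2), ncol = 2)           # two scalar inputs
    fIn  <- list(matrix(runif(n * 10), ncol = 10))   # one functional input
    sOut <- rowSums(sIn) + rowMeans(fIn[[1]])        # placeholder response
    m <- fgpm(sIn = sIn, fIn = fIn, sOut = sOut)     # fit the GP model
    m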
Implement and fit a variety of short-memory (SM) and long-memory (LM) models from a very broad family of exponential generalized autoregressive conditional heteroskedasticity (EGARCH) models, such as MEGARCH (modified EGARCH), FIEGARCH (fractionally integrated EGARCH), FIMLog-GARCH (fractionally integrated modulus Log-GARCH), and more. The FIMLog-GARCH, as part of the EGARCH family, is discussed in Feng et al. (2023) <https://econpapers.repec.org/paper/pdnciepap/156.htm>. For convenience and for the purpose of comparison, a variety of other popular SM and LM GARCH-type models are implemented as well, such as APARCH and fractionally integrated APARCH (FIAPARCH) models, standard GARCH and fractionally integrated GARCH (FIGARCH) models, GJR-GARCH and FIGJR-GARCH models, and TGARCH and FITGARCH models. Dual models with simultaneous modelling of the mean are also supported, including dual long-memory models with a fractionally integrated autoregressive moving average (FARIMA) model in the mean and a long-memory model in the variance, as well as semiparametric volatility model extensions. Parametric models and parametric model parts are fitted through quasi-maximum-likelihood estimation. Furthermore, common forecasting and backtesting functions for value-at-risk (VaR) and expected shortfall (ES) based on the package's models are provided.
An implementation of the fractional weighted bootstrap to be used as a drop-in for functions in the boot package. The fractional weighted bootstrap (also known as the Bayesian bootstrap) involves drawing weights randomly that are applied to the data rather than resampling units from the data. See Xu et al. (2020) <doi:10.1080/00031305.2020.1731599> for details.
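A minimal sketch of the drop-in usage. Note that, unlike boot(), the statistic here receives a vector of fractional weights rather than resampled indices; the statistic itself is a placeholder.

    library(fwb)
    set.seed(123)
    # Weighted mean of mpg, using the randomly drawn weights.
    wmean <- function(data, w) weighted.mean(data$mpg, w)
    b <- fwb(mtcars, statistic = wmean, R = 999)
    b
    fwb.ci(b, type = "perc")   # percentile confidence interval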
Fuel economy data from the EPA, 1985-2015, conveniently packaged for consumption by R users.
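The data are exposed as regular data frames; vehicles is the main table, as in the package documentation. A quick sketch:

    library(fueleconomy)
    head(vehicles)   # one row per vehicle: make, model, year, cty, hwy, ...
    # City vs. highway mileage for one manufacturer and year:
    subset(vehicles, make == "Honda" & year == 2015,
           select = c(model, cty, hwy))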
Estimating the number of factors in Exploratory Factor Analysis (EFA) with out-of-sample prediction errors using a cross-validation scheme. See Haslbeck & van Bork (preprint) <https://psyarxiv.com/qktsd>.
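A sketch of the intended call; the fspe() entry point and its argument names follow the package README as I recall it and may need checking against the manual, and mtcars is only a placeholder dataset:

    library(fspe)
    # Estimate the number of factors via out-of-sample prediction error.
    out <- fspe(mtcars, maxK = 5, nfold = 10, rep = 1, method = "PE")
    out$nfactor   # assumed name of the returned estimate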
This package provides a set of analytical tools useful in analysing ecological and geographical data sets, both ancient and modern. The package includes functions for estimating species richness (Chao 1 and 2, ACE, ICE, jackknife), shared species/beta diversity, species-area curves, and geographic distances and areas.
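For instance (function names per the package manual; the counts and coordinates are placeholders):

    library(fossil)
    # Chao1 richness from abundance counts; singletons and
    # doubletons drive the correction term.
    abund <- c(10, 4, 2, 1, 1, 1, 2, 5)
    chao1(abund)
    # Great-circle distances from a two-column (long, lat) table:
    pts <- data.frame(long = c(-105.3, -73.6), lat = c(40.0, 45.5))
    earth.dist(pts)   # kilometres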
Developed by the CDC/ATSDR (Centers for Disease Control and Prevention/Agency for Toxic Substances and Disease Registry), the Social Vulnerability Index (SVI) serves as a tool to assess the resilience of communities by taking into account socioeconomic and demographic factors. Provided with year(s), region(s) and a geographic level of interest, findSVI retrieves the required variables from US census data and calculates SVI for communities in the specified area based on the CDC/ATSDR SVI documentation. Reference for the calculation methods: Flanagan BE, Gregory EW, Hallisey EJ, Heitgerd JL, Lewis B (2011) <doi:10.2202/1547-7355.1792>.
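A minimal sketch, assuming find_svi() as the package's main function per its documentation; retrieval requires a Census API key (e.g. via tidycensus::census_api_key()), and the year, state and geography are placeholder choices:

    library(findSVI)
    # County-level SVI for Pennsylvania, 2020.
    result <- find_svi(year = 2020, state = "PA", geography = "county")
    head(result)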
Forest data quality is a package containing nine methods of analysis for forest databases, including databases of inventory data and growth models. The analyses focus on the quality of the data present in the database, specifically their consistency, punctuality, and completeness.
This package provides a collection of features, decomposition methods, statistical summaries and graphics functions for analysing tidy time series data. The package name feasts is an acronym for its key features: Feature Extraction And Statistics for Time Series.
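A short sketch of the feature-extraction workflow on a built-in series; the tsibble and fabletools packages are assumed alongside feasts:

    library(feasts)
    library(tsibble)
    ts <- as_tsibble(USAccDeaths)              # monthly series, column "value"
    fabletools::features(ts, value, feat_stl)  # STL trend/seasonal strength
    gg_season(ts, value)                       # seasonal plot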
Statistical hypothesis testing methods for inferring model-free functional dependency using asymptotic chi-squared or exact distributions. Functional test statistics are asymmetric and functionally optimal, distinct from other related statistics. Tests in this package reveal evidence for causality based on the causality-by-functionality principle. They include asymptotic functional chi-squared tests (Zhang & Song 2013) <doi:10.48550/arXiv.1311.2707>, an adapted functional chi-squared test (Kumar & Song 2022) <doi:10.1093/bioinformatics/btac206>, and an exact functional test (Zhong & Song 2019) <doi:10.1109/TCBB.2018.2809743> (Nguyen et al. 2020) <doi:10.24963/ijcai.2020/372>. The normalized functional chi-squared test was used by Best Performer NMSUSongLab in the HPN-DREAM (DREAM8) Breast Cancer Network Inference Challenges (Hill et al. 2016) <doi:10.1038/nmeth.3773>. A function index (Zhong & Song 2019) <doi:10.1186/s12920-019-0565-9> (Kumar et al. 2018) <doi:10.1109/BIBM.2018.8621502> derived from the functional test statistic offers a new effect size measure for the strength of functional dependency, a better alternative to conditional entropy in many aspects. For continuous data, these tests offer an advantage over regression analysis when a parametric functional form cannot be assumed; for categorical data, they provide a novel means to assess directional dependency not possible with symmetrical Pearson's chi-squared or Fisher's exact tests.
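For example, with fun.chisq.test() as the package's main test; the contingency table below is a toy example in which the column variable is close to a function of the row variable:

    library(FunChisq)
    x <- matrix(c(20,  0,  1,
                   0, 18,  2,
                   1,  1, 19), nrow = 3, byrow = TRUE)
    fun.chisq.test(x)                    # asymptotic functional chi-squared test
    fun.chisq.test(x, method = "exact")  # exact functional test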
This package provides a computationally efficient and statistically rigorous fast kernel machine method for multi-kernel analysis. The approach is based on a low-rank approximation to the nuisance effect kernel matrices. The algorithm is applicable to continuous, binary, and survival traits and is implemented using the existing single-kernel analysis software SKAT and coxKM. coxKM can be obtained from <https://github.com/lin-lab/coxKM>.
Routines for estimating tree fiber (tracheid) length distributions in the standing tree based on increment core samples. Two types of data can be used with the package: increment core data measured by an optical fiber analyzer (OFA), such as the Kajaani Fiber Lab, or data measured by microscopy. Increment core data analyzed by OFAs consist of the cell lengths of both cut and uncut fibres (tracheids) and fines (such as ray parenchyma cells), without it being possible to identify which cells are cut, or whether they are fines or fibres. The microscopy-measured data consist of the observed lengths of the uncut fibres in the increment core. A censored version of a mixture of the fine and fiber length distributions is proposed to fit the OFA data under distributional assumptions (Svensson et al., 2006) <doi:10.1111/j.1467-9469.2006.00501.x>. The package offers two choices for the assumed underlying density functions of the true fiber (fine) lengths of those fibers (fines) that at least partially appear in the increment core: the generalized gamma and the log-normal densities.
Modelling and prediction functions for Functional AutoRegressive processes using nonparametric methods: functional kernel, estimation of the covariance operator in a subspace, and more.
This package provides core functions and utilities for packages and other code developed by Jordan Mark Barbone.
Regression models for functional data, i.e., scalar-on-function, function-on-scalar and function-on-function regression models, are fitted by a component-wise gradient boosting algorithm. For a manual on how to use FDboost, see Brockhaus, Ruegamer, Greven (2017) <doi:10.18637/jss.v094.i10>.
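A function-on-scalar sketch with simulated data; bolsc() and bbs() are base-learners of the kind used in the package's examples, and all data here are placeholders:

    library(FDboost)
    set.seed(1)
    n <- 40; G <- 30
    t <- seq(0, 1, length.out = G)
    z <- rnorm(n)                                   # scalar covariate
    Y <- outer(z, sin(2 * pi * t)) +                # functional response
         matrix(rnorm(n * G, sd = 0.1), n, G)
    dat <- list(Y = Y, z = z, t = t)
    m <- FDboost(Y ~ 1 + bolsc(z, df = 2), timeformula = ~ bbs(t), data = dat)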
Fit (generalized) linear regression models in each leaf node of a tree. The tree is constructed using clinical variables only. The linear regression models are constructed using (high-dimensional) omics variables only. The leaf-node-specific regression models are estimated using the penalized likelihood, including a standard ridge (L2) penalty and a fusion penalty that links the leaf-node-specific regression models to one another. The intercepts of the leaf nodes reflect the effects of the clinical variables and are left unpenalized. The tree, fitted with the clinical variables only, should be constructed beforehand with the rpart R package. See Goedhart and others (2024) <doi:10.48550/arXiv.2411.02396> for details on the method.
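The clinical tree itself is built with rpart, as noted above; a minimal sketch of that first step, with simulated placeholder data (the subsequent ridge-plus-fusion fit on omics data should follow this package's manual):

    library(rpart)
    set.seed(42)
    d <- data.frame(y = rnorm(100),
                    age = rnorm(100),
                    stage = factor(sample(1:3, 100, replace = TRUE)))
    tree <- rpart(y ~ age + stage, data = d)  # clinical-variables-only tree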