Enter a query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
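For example, the endpoint can be queried from Guile Scheme with the (web client) module; the host name below is a placeholder for this site's actual address, and the response body is assumed to be textual:

(use-modules (web client)     ; http-get
             (web response)   ; response-headers
             (web uri))       ; string->uri

;; Placeholder host; replace with this site's actual address.
(define query-uri
  (string->uri "https://example.org/api/packages?search=hello&page=1&limit=20"))

(call-with-values
    (lambda () (http-get query-uri))
  (lambda (response body)
    (display (response-headers response))  ; pagination information
    (newline)
    (display body)))                       ; the matching packages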
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
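An entry usually takes the standard Guix channel form sketched below; the name, URL, and branch are placeholders, and the exact fields expected by channels.scm may differ, so check the existing entries in the repository first:

;; Hypothetical entry; adjust the name, URL, and branch to your channel.
(channel
  (name 'my-channel)
  (url "https://git.example.org/my-channel.git")
  (branch "main"))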
Tidy tools to apply filter-based supervised feature selection methods. These methods score and rank feature relevance using metrics such as p-values, correlation, and importance scores (Kuhn and Johnson (2019) <doi:10.1201/9781315108230>).
This package provides tools, helpers and data structures for developing models and time series functions for fable and extension packages. These tools support a consistent and tidy interface for time series modelling and analysis.
Statistical methods and simulation tools for the interpretation of forensic DNA mixtures. The methods implemented are described in Haned et al. (2011) <doi:10.1111/j.1556-4029.2010.01550.x>, Haned et al. (2012) <doi:10.1016/j.fsigen.2012.11.002> and Gill & Haned (2013) <doi:10.1016/j.fsigen.2012.08.008>.
This package provides color palettes designed to be reminiscent of text on paper. The color schemes were taken from <https://stephango.com/flexoki>. Includes discrete, continuous, and binned scales that are not necessarily color-blind friendly. Simple scale and theme functions are available for use with ggplot2.
An implementation of the methodologies described in Xi Liu, Afshin A. Divani, and Alexander Petersen (2022) <doi:10.1016/j.csda.2022.107421>, including truncated functional linear and truncated functional logistic regression models.
This package provides a suite of methods for detecting influential subjects in longitudinal datasets, particularly when observations occur at irregular time points. The methods identify individuals whose response trajectories deviate significantly from the population pattern, enabling detection of anomalies or subjects exerting undue influence on model outcomes.
Estimates a FAVAR model by a Bayesian method, based on Bernanke et al. (2005) <doi:10.1162/0033553053327452>.
This package provides generic data structures and algorithms for use with forest mensuration data in a consistent framework. The functions and objects included are a collection of broadly applicable tools. More specialized applications should be implemented in separate packages that build on this foundation. Documentation about ForestElementsR is provided by three vignettes included in this package. For an introduction to the field of forest mensuration, refer to the textbooks by Kershaw et al. (2017) <doi:10.1002/9781118902028>, and van Laar and Akca (2007) <doi:10.1007/978-1-4020-5991-9>.
Penalized and non-penalized Minorize-Maximization (MM) methods for fitting frailty models to clustered data, multi-event data, and recurrent data. Least absolute shrinkage and selection operator (LASSO), minimax concave penalty (MCP), and smoothly clipped absolute deviation (SCAD) penalty functions are implemented. All the methods are computationally efficient. These general methods are based on the following papers: Huang, Xu and Zhou (2022) <doi:10.3390/math10040538> and Huang, Xu and Zhou (2023) <doi:10.1177/09622802221133554>.
This package provides functional control charts for statistical process monitoring of functional data, using the methods of Capezza et al. (2020) <doi:10.1002/asmb.2507>, Centofanti et al. (2021) <doi:10.1080/00401706.2020.1753581>, Capezza et al. (2024) <doi:10.1080/00224065.2024.2383674>, Capezza et al. (2024) <doi:10.1080/00401706.2024.2327346>, Centofanti et al. (2025) <doi:10.1080/00224065.2024.2430978>, and Capezza et al. (2025) <doi:10.48550/arXiv.2410.20138>. The package is thoroughly illustrated in the paper by Capezza et al. (2023) <doi:10.1080/00224065.2023.2219012>.
Approximate false positive rate control in selection frequency for random forests, using the methods described by Ender Konukoglu and Melanie Ganz (2014) <arXiv:1410.2838>. Provides methods for calculating the selection frequency threshold at a given false positive rate and for feature selection based on selection frequency at a controlled false positive rate.
Estimation, model selection and goodness-of-fit of (1) factor copula models for mixed continuous and discrete data in Kadhem and Nikoloulopoulos (2021) <doi:10.1111/bmsp.12231>; (2) bi-factor and second-order copula models for item response data in Kadhem and Nikoloulopoulos (2023) <doi:10.1007/s11336-022-09894-2>; (3) factor tree copula models for item response data in Kadhem and Nikoloulopoulos (2022) <arXiv:2201.00339>.
An implementation of various learning algorithms based on fuzzy rule-based systems (FRBSs) for dealing with classification and regression tasks. Moreover, it allows the construction of an FRBS model defined by human experts. FRBSs are based on the concept of fuzzy sets, proposed by Zadeh in 1965, which aims at representing the reasoning of human experts in a set of IF-THEN rules, to handle real-life problems in, e.g., control, prediction and inference, data mining, bioinformatics data processing, and robotics. FRBSs are also known as fuzzy inference systems and fuzzy models. During the modeling of an FRBS, two important steps need to be conducted: structure identification and parameter estimation. Nowadays, there exists a wide variety of algorithms to generate fuzzy IF-THEN rules automatically from numerical data, covering both steps. Approaches that have been used in the past are, e.g., heuristic procedures, neuro-fuzzy techniques, clustering methods, genetic algorithms, squares methods, etc. Furthermore, this version provides a universal framework named frbsPMML, adopted from the Predictive Model Markup Language (PMML), for representing FRBS models. PMML is an XML-based language that provides a standard for describing models produced by data mining and machine learning algorithms. Therefore, an FRBS model can be exported to and imported from frbsPMML. Finally, this package aims to implement the most widely used standard procedures, thus offering a standard package for FRBS modeling to the R community.
This package provides a model-independent factor importance ranking and selection procedure based on total Sobol indices. Please see Huang and Joseph (2025) <doi:10.1080/00401706.2025.2483531>. This research is supported by U.S. National Science Foundation grants DMS-2310637 and DMREF-1921873.
Allows the user to obtain subsets of columns of data or vectors within a list. These subsets will match the original data in terms of average and variation, but have a consistent length of data per column. It is intended for use with automated data generation, which may not always output the same N per replicate or sample.
Supports teaching methods of estimating and testing time series factor models for use in robust portfolio construction and analysis. Unique in providing not only classical least squares, but also modern robust model fitting methods that are not much influenced by outliers. Includes returns and risk decompositions, with user choice of standard deviation, value-at-risk, and expected shortfall risk measures. See "Robust Statistics: Theory and Methods (with R)", R. A. Maronna, R. D. Martin, V. J. Yohai, M. Salibian-Barrera (2019) <doi:10.1002/9781119214656>.
Small set of functions designed to speed up the computation of certain matrix operations that are commonly used in statistics and econometrics. It provides efficient implementations for the computation of several structured matrices, matrix decompositions, and statistical procedures, many of which have minimal memory overhead. Furthermore, the package provides interfaces to its C code so that it can be called from C code in other R packages.
Binding to the C++ implementation of the flexible polyline encoding by HERE <https://github.com/heremaps/flexible-polyline>. The flexible polyline encoding is a lossy compressed representation of a list of coordinate pairs or coordinate triples. The encoding is achieved by: (1) Reducing the decimal digits of each value; (2) encoding only the offset from the previous point; (3) using variable length for each coordinate delta; and (4) using 64 URL-safe characters to display the result.
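As a rough illustration of steps (1)-(4) only (not the actual HERE wire format, which also carries a header with precision and dimension information), a delta-plus-variable-length encoder for latitude/longitude pairs might be sketched in Guile Scheme as follows; the 5-digit precision and the exact alphabet are assumptions:

(use-modules (srfi srfi-1))   ; first, second

(define url-safe-chars
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_")

;; Steps (3) and (4): zigzag-encode the sign, then emit 5-bit chunks from
;; low to high, setting a sixth "continuation" bit on all but the last chunk.
(define (encode-delta n)
  (let loop ((v (if (negative? n) (lognot (* 2 n)) (* 2 n)))
             (chars '()))
    (if (>= v 32)
        (loop (quotient v 32)
              (cons (string-ref url-safe-chars (logior 32 (remainder v 32)))
                    chars))
        (reverse (cons (string-ref url-safe-chars v) chars)))))

;; Steps (1) and (2): keep five decimal digits, then encode only the offset
;; from the previous point.
(define (encode-points points)          ; points: list of (lat lon) pairs
  (let loop ((prev-lat 0) (prev-lon 0) (points points) (acc '()))
    (if (null? points)
        (string-concatenate (reverse acc))
        (let* ((lat (inexact->exact (round (* 100000 (first (car points))))))
               (lon (inexact->exact (round (* 100000 (second (car points))))))
               (piece (list->string
                       (append (encode-delta (- lat prev-lat))
                               (encode-delta (- lon prev-lon))))))
          (loop lat lon (cdr points) (cons piece acc))))))

;; Example: (encode-points '((52.52 13.405) (52.53 13.415)))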
This package provides a collection of methods for modeling time-to-event data using both functional and scalar predictors. It implements functional data analysis techniques for estimation and inference, allowing researchers to assess the impact of functional covariates on survival outcomes, including time-to-single event and recurrent event outcomes.
Generating fractional binomial random variables and computing the density, cumulative distribution, and quantiles of fractional binomial distributions (Lee, J. (2023) <arXiv:2209.01516>).
The FLEX method, developed by Yoon and Choi (2013) <doi:10.1007/978-3-642-33042-1_21>, performs least squares estimation for fuzzy predictors and outcomes, generating crisp regression coefficients by minimizing the distance between observed and predicted outcomes. It also provides functions for fuzzifying data and inference tasks, including significance testing, fit indices, and confidence interval estimation.
Downloads all the datasets (you can exclude the daily ones or specify a list of those you are targeting specifically) from Kenneth French's website at <https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html>, processes them, and converts them to a list of xts (time series) objects.
Classical (bottom-up and top-down), optimal combination and heuristic point (Di Fonzo and Girolimetto, 2023 <doi:10.1016/j.ijforecast.2021.08.004>) and probabilistic (Girolimetto et al. 2024 <doi:10.1016/j.ijforecast.2023.10.003>) forecast reconciliation procedures for linearly constrained time series (e.g., hierarchical or grouped time series) in cross-sectional, temporal, or cross-temporal frameworks.
The funFEM algorithm (Bouveyron et al., 2014) allows the clustering of functional data by modeling the curves within a common and discriminative functional subspace.