Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
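For example, the endpoint can be queried from a script. The snippet below is a minimal sketch using Python's requests library; the base URL is a placeholder for this site's host, and the JSON response body is an assumption, since only the path, the query parameters, and the fact that pagination lives in the headers are documented above.

    import requests

    BASE_URL = "https://example.org"  # placeholder: replace with this site's host

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    print(resp.json())     # matching packages (body assumed to be JSON)
    print(resp.headers)    # pagination information is returned in the response headers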
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Multifactor nonparametric analysis of variance based on ranks. Builds on the Kruskal-Wallis H test and its 2x2 Scheirer-Ray-Hare extension to handle arbitrary factorial designs. Provides effect sizes, Dunn-Bonferroni pairwise-comparison matrices, and simple-effects analyses. Tailored for psychology and the social sciences, with beginner-friendly R syntax and outputs that can be dropped into journal reports. Includes helpers to export tab-separated results and compact tables of descriptive statistics for APA-style reports.
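For reference, the Kruskal-Wallis H statistic that these procedures build on is the standard rank-based statistic (a textbook formula, not specific to this package): with k groups, rank sums R_j, group sizes n_j, and total sample size N,

    H = \frac{12}{N(N+1)} \sum_{j=1}^{k} \frac{R_j^2}{n_j} - 3(N+1),

which is referred to a chi-squared distribution with k - 1 degrees of freedom; the Scheirer-Ray-Hare extension applies the same ranking idea to the sums of squares of a factorial ANOVA.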
The main function of this package allows numerical vector objects to be displayed with their values in vulgar fractional form. This is convenient when it makes patterns easier to detect. In some cases, replacing the components of a numeric vector by a rational approximation can also be expected to remove some component of round-off error. The main functions are a re-implementation of the fractions and rational functions of the MASS package, but using a radically improved programming strategy.
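To illustrate the underlying idea only (using Python's standard fractions module, not this R package's own functions), a rational approximation both exposes the pattern behind a decimal expansion and absorbs floating-point round-off error:

    from fractions import Fraction

    x = 0.1 + 0.2
    print(x)                                    # 0.30000000000000004 (round-off error)
    print(Fraction(x).limit_denominator(1000))  # 3/10 -- the rational approximation recovers the intended value
    print(Fraction(0.75))                       # 3/4  -- exactly representable values convert exactly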
Estimates heterogeneous effects in factorial (and conjoint) models. The methodology employs a Bayesian finite mixture of regularized logistic regressions, where moderators can affect each observation's probability of group membership and a sparsity-inducing prior fuses together levels of each factor while respecting ANOVA-style sum-to-zero constraints. Goplerud, Imai, and Pashley (2024) <doi:10.48550/ARXIV.2201.01357> provide further details.
Classical (bottom-up and top-down), optimal combination and heuristic point (Di Fonzo and Girolimetto, 2023 <doi:10.1016/j.ijforecast.2021.08.004>) and probabilistic (Girolimetto et al. 2024 <doi:10.1016/j.ijforecast.2023.10.003>) forecast reconciliation procedures for linearly constrained time series (e.g., hierarchical or grouped time series) in cross-sectional, temporal, or cross-temporal frameworks.
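All of these approaches can be summarized in the standard linear (point) reconciliation form, stated here in textbook notation rather than this package's own: base forecasts \hat{y} are mapped to coherent forecasts via

    \tilde{y} = S G \hat{y},

where S encodes the linear constraints (e.g., the hierarchical aggregation structure) and G depends on the method, for instance a selection of the bottom-level forecasts for bottom-up reconciliation, or G = (S' W^{-1} S)^{-1} S' W^{-1} for optimal combination with a suitable error covariance estimate W.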
Annotates Finnish textual survey responses into CoNLL-U format (<https://universaldependencies.org/format.html>) using Finnish treebanks and UDPipe, as described in Straka and Straková (2017) <doi:10.18653/v1/K17-3009>. The formatted data is then analysed using single or comparison n-gram plots, word clouds, summary tables, and Concept Network plots. The Concept Network plots use the TextRank algorithm as outlined in Mihalcea and Tarau (2004) <https://aclanthology.org/W04-3252/>.
Computes the power and sample size (PASS) required to test for the difference in the mean function between two groups under a repeatedly measured longitudinal or sparse functional design. See the manuscript by Koner and Luo (2023) <https://salilkoner.github.io/assets/PASS_manuscript.pdf> for the PASS formula and computational details. The testing procedures for univariate and multivariate responses are presented in Wang (2021) <doi:10.1214/21-EJS1802> and Koner and Luo (2023) <arXiv:2302.05612>, respectively.
Generates fractional binomial random variables and computes the density, cumulative distribution function, and quantiles of fractional binomial distributions (Lee, J. (2023) <arXiv:2209.01516>).
Full Consistency Method (FUCOM) for multi-criteria decision-making (MCDM), developed by Dragan Pamucar in 2018 (<doi:10.3390/sym10090393>). The goal of the method is to determine the weights of criteria such that the deviation from full consistency is minimized. Users provide a character vector specifying the ranking of each criterion according to its significance, starting from the criterion expected to have the highest weight to the least significant one. Additionally, users provide a numeric vector specifying the priority values for each criterion. The comparison is made with respect to the first-ranked (most significant) criterion. The function returns the optimized weights for each criterion (summing to 1), the comparative priority (Phi) values, the mathematical transitivity condition (w) value, and the minimum deviation from full consistency (DFC).
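In outline, and following the referenced paper, FUCOM solves a small optimization problem: for criteria ranked C_{j(1)}, ..., C_{j(n)} with comparative priorities \varphi_{k/(k+1)} derived from the user-supplied priority values, the weights minimize the deviation from full consistency \chi,

    \min \chi \quad \text{subject to} \quad
    \left| \frac{w_{j(k)}}{w_{j(k+1)}} - \varphi_{k/(k+1)} \right| \le \chi, \qquad
    \left| \frac{w_{j(k)}}{w_{j(k+2)}} - \varphi_{k/(k+1)} \, \varphi_{(k+1)/(k+2)} \right| \le \chi,
    \qquad \sum_j w_j = 1, \quad w_j \ge 0,

and the minimized \chi is the DFC value reported by the function.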
Finds features through a detailed analysis of model residuals using rpart classification and regression trees. Scans the residuals of a model across subsets of the data to identify areas where the model differs from the actual data.
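A minimal sketch of the general technique (in Python with scikit-learn, not this package's interface): fit a model, then grow a shallow regression tree on its residuals; the resulting splits point to subsets of the data where the model and the data disagree.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 3))
    # The true response contains an interaction a plain linear model will miss.
    y = X[:, 0] + 2.0 * (X[:, 1] > 0.5) * X[:, 2] + rng.normal(0, 0.1, size=500)

    fit = LinearRegression().fit(X, y)
    residuals = y - fit.predict(X)

    # A shallow tree on the residuals highlights where the model departs from the data.
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    print(export_text(tree, feature_names=["x1", "x2", "x3"]))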
This package provides a comprehensive framework in R for modeling and forecasting economic scenarios based on a multi-level dynamic factor model. The package enables users to: (i) extract global and group-specific factors using a flexible multi-level factor structure; (ii) compute asymptotically valid confidence regions for the estimated factors, accounting for uncertainty in the factor loadings; (iii) obtain estimates of the parameters of the factor-augmented quantile regressions together with their standard deviations; (iv) recover full predictive conditional densities from estimated quantiles; (v) obtain risk measures based on extreme quantiles of the conditional densities; and (vi) estimate the conditional density and the corresponding extreme quantiles when the factors are stressed.
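As a rough sketch of the model class in generic notation (not necessarily this package's exact specification), each series loads on global and group-specific factors, and the estimated factors then enter quantile regressions:

    x_{i,t} = \lambda_i^{G\,\prime} F_t^{G} + \lambda_i^{g\,\prime} F_t^{g(i)} + e_{i,t},
    \qquad
    Q_{y_{t+h}}(\tau \mid \hat{F}_t) = \alpha(\tau) + \beta(\tau)' \hat{F}_t,

with the conditional density recovered from the estimated quantiles across \tau and risk measures taken from its extreme quantiles.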
Utilities to read and write files in the FITS (Flexible Image Transport System) format, a standard format in astronomy (see e.g. <https://en.wikipedia.org/wiki/FITS> for more information). The available low-level routines allow: reading, parsing, and modifying FITS headers; reading FITS images (multi-dimensional arrays); reading FITS binary and ASCII tables; and writing FITS images (multi-dimensional arrays). Higher-level functions allow: reading files composed of one or more headers and a single (perhaps multidimensional) image or single table; reading tables into data frames; generating vectors for image array axes; and scaling and writing images as 16-bit integers. Known limitations are the lack of support for reading random group extensions, as well as for complex and array descriptor data types in binary tables.
Generate decision tables and simulate operating characteristics for phase I dose-finding designs to enable objective comparison across methods. Supported designs include the traditional 3+3, Bayesian Optimal Interval (BOIN) (Liu and Yuan (2015) <doi:10.1158/1078-0432.CCR-14-1526>), modified Toxicity Probability Interval-2 (mTPI-2) (Guo et al. (2017) <doi:10.1002/sim.7185>), interval 3+3 (i3+3) (Liu et al. (2020) <doi:10.1177/0962280220939123>), and Generalized 3+3 (G3). Provides visualization tools for comparing decision rules and operating characteristics across multiple designs simultaneously.
This package provides fast moving-window ("focal") and buffer-based extraction for raster data using the terra package. Automatically selects between a C++ backend (via terra) and a Fast Fourier Transform (FFT) backend depending on problem size. The FFT backend supports sum and mean, while other statistics (e.g., median, min, max, standard deviation) are handled by the terra backend. Supports multiple kernel types (e.g., circle, rectangle, gaussian), with NA handling consistent with terra via na.rm and na.policy. Operates on SpatRaster objects and returns results with the same geometry.
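For intuition only (a NumPy/SciPy sketch of the FFT idea, not this package's API), a moving-window sum is a convolution with the kernel, which an FFT computes efficiently for large windows; dividing by the kernel size gives the mean when no NA cells are involved:

    import numpy as np
    from scipy.signal import fftconvolve

    raster = np.random.default_rng(1).random((500, 500))   # toy raster
    r = 5
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (yy**2 + xx**2 <= r**2).astype(float)          # circular kernel, radius 5 cells

    focal_sum = fftconvolve(raster, kernel, mode="same")    # moving-window sum
    focal_mean = focal_sum / kernel.sum()                   # moving-window mean (no NA handling here)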
Fuzzy set ordination is a multivariate analysis used in ecology to relate the composition of samples to possible explanatory variables. While differing in theory and method, in practice its use is similar to constrained ordination. The package contains plotting and summary functions as well as the analyses.
Accompanying package for the book Financial Risk Modelling and Portfolio Optimisation with R, second edition. The data sets used in the book are contained in this package.
Fits models to catch and effort data. Single-species models are 1) delta log-normal, 2) Tweedie, or 3) Poisson-gamma (G)LMs.
This package provides templates for reports in rmarkdown and functions to create tables and summaries of data.
Three methods are implemented in R to facilitate the aggregation of flags in official statistics. From the underlying flags, either the one highest in the hierarchy, the most frequent one, or the one with the highest total weight is propagated to the flag(s) for EU or other aggregates. Some reference documents on the topic: <https://sdmx.org/wp-content/uploads/CL_OBS_STATUS_v2_1.docx>, <https://sdmx.org/wp-content/uploads/CL_CONF_STATUS_1_2_2018.docx>, <http://ec.europa.eu/eurostat/data/database/information>, <http://www.oecd.org/sdd/33869551.pdf>, <https://sdmx.org/wp-content/uploads/CL_OBS_STATUS_implementation_20-10-2014.pdf>.
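A small sketch of the three rules (plain Python with hypothetical flag data and an assumed hierarchy order; the package's own interface is not shown):

    from collections import Counter

    flags = ["p", "e", "e", "b"]       # hypothetical observation-level flags
    weights = [10.0, 5.0, 2.0, 1.0]    # hypothetical weights of the observations
    hierarchy = ["b", "p", "e"]        # assumed hierarchy, most important first

    by_hierarchy = min(flags, key=hierarchy.index)       # highest in the hierarchy -> "b"
    by_frequency = Counter(flags).most_common(1)[0][0]   # most frequent flag -> "e"

    totals = {}
    for flag, weight in zip(flags, weights):
        totals[flag] = totals.get(flag, 0.0) + weight
    by_weight = max(totals, key=totals.get)              # highest total weight -> "p"

Whichever rule is chosen, the selected flag is then attached to the EU or other aggregate.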
This package provides a structured profile likelihood algorithm for the logistic fixed effects model and an approximate expectation maximization (EM) algorithm for the logistic mixed effects model. Based on He, K., Kalbfleisch, J.D., Li, Y. and Li, Y. (2013) <doi:10.1007/s10985-013-9264-6>.
This package provides high-level access to neuroimaging data from standard software packages like FreeSurfer <http://freesurfer.net/> on the level of subjects and groups. Load morphometry data, surfaces, and brain parcellations based on atlases. Mask data using labels, load data for specific atlas regions only, and visualize data and statistical results directly in R.
Fast and numerically stable estimation of a covariance matrix by banding the Cholesky factor using a modified Gram-Schmidt algorithm implemented in RcppArmadillo. See <http://stat.umn.edu/~molst029> for details on the algorithm.
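For context, the construction follows the standard modified-Cholesky idea (generic notation, not package-specific): the covariance matrix is decomposed as

    T \Sigma T' = D,

where T is unit lower triangular, row i holding the negated coefficients from regressing the i-th variable on its predecessors, and D is diagonal. Banding T to bandwidth k corresponds to regressing each variable on only its k closest predecessors, which yields the estimates \hat{\Sigma}^{-1} = \hat{T}' \hat{D}^{-1} \hat{T} and \hat{\Sigma} = \hat{T}^{-1} \hat{D} \hat{T}^{-T}.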
Standard generalized additive models assume a response function, which induces an assumption on the shape of the distribution of the response. However, misspecifying the response function results in biased estimates. Therefore, in Spiegel et al. (2017) <doi:10.1007/s11222-017-9799-6> we propose to estimate the response function jointly with the covariate effects. This package provides the underlying functions to estimate these generalized additive models with flexible response functions. The estimation is based on an iterative algorithm: in the outer loop the response function is estimated, while in the inner loop the covariate effects are determined. For the response function a strictly monotone P-spline is used, while the covariate effects are estimated using a modified Fisher scoring algorithm. Overall, the estimation relies on the mgcv package.
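A structural sketch of that alternating scheme (pseudocode-style Python; estimate_monotone_response and fisher_scoring_step are hypothetical placeholders for the monotone P-spline fit and the modified Fisher scoring update, not functions exported by this package):

    def fit_flexible_gam(X, y, beta, n_outer=20, n_inner=10):
        for _ in range(n_outer):
            # outer loop: estimate the monotone response function for fixed covariate effects
            response_fn = estimate_monotone_response(X @ beta, y)
            for _ in range(n_inner):
                # inner loop: update the covariate effects given the response function
                beta = fisher_scoring_step(X, y, response_fn, beta)
        return response_fn, beta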
This package provides functions for plotting probability density, distribution, survival, and hazard functions, and for computing distribution moments. The implementation is inspired by Delignette-Muller and Dutang (2015) <doi:10.18637/jss.v064.i04>.
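The plotted quantities are tied together by standard identities (not specific to this implementation): for a density f with distribution function F,

    S(t) = 1 - F(t), \qquad h(t) = \frac{f(t)}{S(t)}, \qquad
    \mathrm{E}[X^k] = \int x^k f(x)\, dx,

so the density, distribution, survival, and hazard functions each determine the others.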
This package performs alignment, PCA, and modeling of multidimensional and unidimensional functions using the square-root velocity framework (Srivastava et al., 2011 <doi:10.48550/arXiv.1103.3817> and Tucker et al., 2014 <doi:10.1016/j.csda.2012.12.001>). This framework allows for elastic analysis of functional data through phase and amplitude separation.
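The square-root velocity framework represents a function f by its square-root velocity function (standard definition from the cited papers),

    q(t) = \frac{\dot f(t)}{\sqrt{|\dot f(t)|}},

under which a warping \gamma acts by the isometry (q, \gamma) = (q \circ \gamma)\sqrt{\dot \gamma}; minimizing the L^2 distance between warped SRVFs over \gamma aligns the functions, separating phase from amplitude.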