Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
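For example, a minimal sketch of calling the API from Python with requests (the base URL is a placeholder, and the exact pagination header names are not documented here, so inspect the response headers for the names actually used):

    import requests

    BASE_URL = "https://example.org"  # placeholder; use the actual host

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    packages = resp.json()

    # Pagination metadata (e.g. the number of pages) comes back in the
    # response headers; print them to discover the exact header names.
    print(resp.headers)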
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements a fast change point detection algorithm based on the paper "Sequential Gradient Descent and Quasi-Newton's Method for Change-Point Analysis" by Xianyang Zhang and Trisha Dawn <https://proceedings.mlr.press/v206/zhang23b.html>. The algorithm is based on dynamic programming with pruning and sequential gradient descent. It can detect change points an order of magnitude faster than vanilla Pruned Exact Linear Time (PELT). The package includes examples of linear regression, logistic regression, Poisson regression, penalized linear regression, and many more examples with a custom cost function for users who want to supply their own.
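For intuition, here is a minimal Python sketch of the vanilla PELT dynamic program that the package improves upon (change in mean, Gaussian cost); this is illustrative only, not the package's own interface:

    import numpy as np

    def pelt_mean(y, penalty):
        """Estimate change-point locations for a shift-in-mean model."""
        n = len(y)
        s1 = np.concatenate(([0.0], np.cumsum(y)))             # prefix sums
        s2 = np.concatenate(([0.0], np.cumsum(np.square(y))))

        def cost(a, b):  # Gaussian cost of the segment y[a:b]
            return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)

        F = np.full(n + 1, np.inf)
        F[0] = -penalty
        prev = np.zeros(n + 1, dtype=int)
        cand = [0]                                             # pruned candidate set
        for t in range(1, n + 1):
            vals = [F[tau] + cost(tau, t) + penalty for tau in cand]
            best = int(np.argmin(vals))
            F[t], prev[t] = vals[best], cand[best]
            # pruning: drop candidates that can never become optimal again
            cand = [tau for tau, v in zip(cand, vals) if v - penalty <= F[t]]
            cand.append(t)

        cps, t = [], n                                         # backtrack
        while prev[t] > 0:
            cps.append(prev[t])
            t = prev[t]
        return sorted(cps)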
This package provides a tool for spatial/spatio-temporal modelling and prediction with large datasets. The approach models the field, and hence the covariance function, using a set of basis functions. This fixed-rank basis-function representation facilitates the modelling of big data, and the method naturally allows for non-stationary, anisotropic covariance functions. Discretisation of the spatial domain into so-called basic areal units (BAUs) facilitates the use of observations with varying support (i.e., both point-referenced and areal supports, potentially simultaneously), and prediction over arbitrary user-specified regions. `FRK` also supports inference over various manifolds, including the 2D plane and 3D sphere, and it provides helper functions to model, fit, predict, and plot with relative ease. Version 2.0.0 and above also supports the modelling of non-Gaussian data (e.g., Poisson, binomial, negative-binomial, gamma, and inverse-Gaussian) by employing a generalised linear mixed model (GLMM) framework. Zammit-Mangion and Cressie <doi:10.18637/jss.v098.i04> describe `FRK` in a Gaussian setting, and detail its use of basis functions and BAUs, while Sainsbury-Dale, Zammit-Mangion, and Cressie <doi:10.18637/jss.v108.i10> describe `FRK` in a non-Gaussian setting; two vignettes are available that summarise these papers and provide additional examples.
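In generic notation (mine, not the package's), the fixed-rank basis-function representation models the latent field as

    Y(s) = x(s)^\top \beta + \sum_{l=1}^{r} \phi_l(s)\, \eta_l + \xi(s),
    \qquad \eta \sim \mathrm{N}(0, K),

so the implied covariance \phi(s)^\top K\, \phi(s') (plus fine-scale variation) has fixed rank r regardless of the number of observations, which is what makes big datasets tractable.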
This package provides robust score tests for GLMs, obtained by sign-flipping score contributions. The tests are robust against overdispersion, heteroscedasticity and, in some cases, ignored nuisance variables. See Hemerik, Goeman and Finos (2020) <doi:10.1111/rssb.12369>.
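A rough Python sketch of the sign-flipping idea in the simplest case (testing a single covariate against an intercept-only Gaussian null; illustrative only, not the package's interface):

    import numpy as np

    def sign_flip_score_test(x, y, n_flips=5000, seed=0):
        rng = np.random.default_rng(seed)
        resid = y - y.mean()              # residuals under the null model
        contrib = x * resid               # per-observation score contributions
        observed = contrib.sum()
        flips = rng.choice([-1.0, 1.0], size=(n_flips, len(y)))
        null_stats = flips @ contrib      # sign-flipped score statistics
        # two-sided p-value, counting the observed statistic itself
        return (1 + np.sum(np.abs(null_stats) >= abs(observed))) / (n_flips + 1)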
This package provides methods for matrix factorization based on Wang and Stephens (2021) <https://jmlr.org/papers/v22/20-589.html>.
Performs fuzzy tests and computes fuzzy confidence intervals (following Geyer and Meeden, Statistical Science, 2005, <doi:10.1214/088342305000000340>) for the sign test and the Wilcoxon signed rank and rank sum tests.
This package provides a Bayesian nonparametric model for the study of time-evolving frequencies, which is well known in population genetics. The model is a Hidden Markov Model (HMM) in which the latent signal is a distribution-valued stochastic process that takes the form of a finite mixture of Dirichlet Processes, indexed by vectors that count how many times each value is observed in the population. The package implements the methodologies presented in Ascolani, Lijoi and Ruggiero (2021) <doi:10.1214/20-BA1206> and Ascolani, Lijoi and Ruggiero (2023) <doi:10.3150/22-BEJ1504>, which make it possible to study the process at the time of data collection or to predict its evolution into the future or the past.
This package provides tools for training and analysing fairness-aware gated neural networks for subgroup-aware prediction and interpretation in clinical datasets. Methods draw on prior work in mixture-of-experts neural networks by Jordan and Jacobs (1994) <doi:10.1007/978-1-4471-2097-1_113>, fairness-aware learning by Hardt, Price, and Srebro (2016) <doi:10.48550/arXiv.1610.02413>, and personalised treatment prediction for depression by Iniesta, Stahl, and McGuffin (2016) <doi:10.1016/j.jpsychires.2016.03.016>.
Read and write Frictionless Data Packages. A Data Package (<https://specs.frictionlessdata.io/data-package/>) is a simple container format and standard to describe and package a collection of (tabular) data. It is typically used to publish FAIR (<https://www.go-fair.org/fair-principles/>) and open datasets.
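For illustration, a minimal datapackage.json descriptor sketched as a Python dict (field names follow the Data Package spec linked above; the values are made up):

    import json

    descriptor = {
        "name": "example-package",
        "resources": [
            {"name": "observations", "path": "observations.csv", "format": "csv"}
        ],
    }
    with open("datapackage.json", "w") as f:
        json.dump(descriptor, f, indent=2)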
Functions, S4 classes/methods and a graphical user interface (GUI) to design surveys to substantiate freedom from disease using a modified hypergeometric function (see Cameron and Baldock, 1997, <doi:10.1016/s0167-5877(97)00081-0>). Herd sensitivities are computed according to sampling strategies "individual sampling" or "limited sampling" (see M. Ziller, T. Selhorst, J. Teuffert, M. Kramer and H. Schlueter, 2002, <doi:10.1016/S0167-5877(01)00245-8>). Methods to compute the a-posteriori alpha-error are implemented. Risk-based targeted sampling is supported.
New and faster implementations for quantile-quantile plots. The package also includes a function to prune data for quantile-quantile plots, which can drastically reduce the running time for large samples; with 100 million samples, you can expect roughly an 80x speedup.
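The pruning idea, sketched in Python for intuition (not the package's interface): evaluate both samples at a small grid of probabilities instead of plotting every point, so a plot of 100 million observations reduces to, say, 1000 quantile pairs.

    import numpy as np

    def pruned_qq(x, y, n_points=1000):
        """Quantile pairs for a QQ plot, pruned to n_points."""
        probs = np.linspace(0, 1, n_points)
        return np.quantile(x, probs), np.quantile(y, probs)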
Generates predictive distributions based on calibrating priors for various commonly used statistical models, including models with predictors. Routines for densities, probabilities, quantiles, random deviates and the parameter posterior are provided. The predictions are generated from the Bayesian prediction integral, with priors chosen to give good reliability (also known as calibration). For homogeneous models, the prior is set to the right Haar prior, giving predictions which are exactly reliable. As a result, in repeated testing, the frequencies of out-of-sample outcomes and the probabilities from the predictions agree. For other models, the prior is chosen to give good reliability. Where possible, the Bayesian prediction integral is solved exactly. Where exact solutions are not possible, the Bayesian prediction integral is solved using the Datta-Mukerjee-Ghosh-Sweeting (DMGS) asymptotic expansion. Optionally, the prediction integral can also be solved using posterior samples generated using Paul Northrop's ratio of uniforms sampling package ('rust'). Results are also generated based on maximum likelihood, for comparison purposes. Various model selection diagnostics and testing routines are included. Based on "Reducing reliability bias in assessments of extreme weather risk using calibrating priors", Jewson, S., Sweeting, T. and Jewson, L. (2024); <doi:10.5194/ascmo-11-1-2025>.
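The Bayesian prediction integral referenced above takes the standard form (generic notation):

    p(y^\ast \mid y) = \int f(y^\ast \mid \theta)\, \pi(\theta \mid y)\, d\theta,

with the prior behind the posterior \pi(\theta \mid y) chosen for reliability (the right Haar prior in the homogeneous case), and the integral evaluated exactly where possible, otherwise via the DMGS expansion or posterior sampling.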
Developed for the following tasks: 1) computing the probability density function, cumulative distribution function, and random generation, and estimating the parameters of the eleven mixture models; 2) point estimation of the parameters of the two-parameter Weibull distribution using twelve methods and the three-parameter Weibull distribution using nine methods; 3) Bayesian inference for the three-parameter Weibull distribution; 4) estimating parameters of the three-parameter Birnbaum-Saunders, generalized exponential, and Weibull distributions fitted to grouped data using three methods: approximated maximum likelihood, expectation maximization, and maximum likelihood; 5) estimating the parameters of the gamma, log-normal, and Weibull mixture models fitted to grouped data through the EM algorithm; 6) estimating parameters of the nonlinear height curve fitted to height-diameter observations; 7) estimating parameters, computing the probability density function and cumulative distribution function, and generating realizations from the gamma shape mixture model introduced by Venturini et al. (2008) <doi:10.1214/07-AOAS156>; 8) Bayesian inference, computing the probability density function and cumulative distribution function, and generating realizations from the univariate and bivariate Johnson SB distributions; 9) robust multiple linear regression analysis when the error term follows a skewed t distribution; 10) estimating parameters of a given distribution fitted to grouped data using the method of maximum likelihood; and 11) estimating parameters of the Johnson SB distribution through Bayesian, method of moments, conditional maximum likelihood, and two-percentile methods.
Generate search filters to query scientific bibliographic sources, such as PubMed and Web of Science, for non-human primate related publications.
Accompanies a paper (Barunik and Krehlik (2018) <doi:10.1093/jjfinec/nby001>) dedicated to the spectral decomposition of connectedness measures and their interpretation. We implement all the developed estimators as well as their historical counterparts. For more information, see the help or the GitHub page <https://github.com/tomaskrehlik/frequencyConnectedness>.
"This package quantifies the provenance of sediments in a catchment or study area. Based on a characterization of the sediment sources and the end sediment mixtures, a mixing model algorithm is applied to the sediment mixtures to estimate the relative contribution of each potential source. The package includes several graphs to help users in their data understanding, such as box plots, correlation, PCA, and LDA graphs. In addition, new developments such as the Consensus Ranking (CR), Consistent Tracer Selection (CTS), and Linear Variability Propagation (LVP) methods are included to correctly apply the fingerprinting technique and increase dataset and model understanding. A new method based on Conservative Balance (CB) method has also been included to enable the use of isotopic tracers.".
Fuel economy data from the EPA, 1985-2015, conveniently packaged for consumption by R users.
Supports the most frequently used methods of combining forecasts, among others: simple average, ordinary least squares, least absolute deviation, constrained least squares, variance-based, best individual model, complete subset regressions, and information-theoretic (information-criteria-based) methods.
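As one concrete example of a scheme named above, a Python sketch of ordinary-least-squares combination (regress the realised values on the individual forecasts and use the fitted coefficients as weights; illustrative only, not the package's interface):

    import numpy as np

    def ols_combination(forecasts, actual):
        """forecasts: (T, k) matrix of k individual forecasts; actual: (T,)."""
        X = np.column_stack([np.ones(len(actual)), forecasts])
        w, *_ = np.linalg.lstsq(X, actual, rcond=None)
        return w            # intercept followed by one weight per forecaster

    # combined forecast for new data: yhat = w[0] + new_forecasts @ w[1:]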
This package implements fractional differencing with autoregressive moving average models to analyse long-memory time series data. Traditional ARIMA models use integer orders of differencing, which are suitable for time series with short memory or anti-persistent behaviour. In contrast, the fractional ARIMA model allows fractional differencing, enabling it to effectively capture long-memory characteristics in time series data. The 'fracARMA' package is user-friendly and allows users to manually input the fractional differencing parameter, which can be obtained using various estimators such as the GPH estimator, the Sperio method, or the wavelet method, among others. Additionally, the package enables users to directly feed in the time series data, AR order, MA order, fractional differencing parameter, and the proportion of training data as a split ratio, all in a single command. The package is based on the paper of Irshad and others (2024) <doi:10.22271/maths.2024.v9.i6b.1906>.
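For intuition, fractional differencing expands (1 - B)^d into its binomial series; a minimal Python sketch of the truncated filter (illustrative only, not the package's interface):

    import numpy as np

    def fracdiff(y, d, n_weights=100):
        """Apply a truncated fractional difference operator (1 - B)^d to y."""
        w = np.empty(n_weights)
        w[0] = 1.0
        for k in range(1, n_weights):     # w_k = -w_{k-1} * (d - k + 1) / k
            w[k] = -w[k - 1] * (d - k + 1) / k
        out = np.empty(len(y))
        for t in range(len(y)):
            k = min(t + 1, n_weights)
            out[t] = w[:k] @ y[t - k + 1 : t + 1][::-1]   # most recent first
        return out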
Easy installation, loading, and management of high-performance packages for statistical computing and data manipulation in R. The core fastverse consists of 4 packages: 'data.table', 'collapse', 'kit' and 'magrittr', which jointly depend only on 'Rcpp'. The fastverse can be freely and permanently extended with additional packages, either globally or for individual projects. Separate package verses can also be created. Fast packages for many common tasks such as time series, dates and times, strings, spatial data, statistics, data serialization, larger-than-memory processing, and compilation of R code are listed in the README file: <https://github.com/fastverse/fastverse#suggested-extensions>.
This package performs family-based association tests with a polytomous outcome under 2-locus and 1-locus models defined by a design matrix.
Clustering for categorical and mixed-type data that prevents classification biases due to race, gender, or other sensitive attributes. The algorithm is an extension of the methodology proposed by Santos & Heras (2020) <doi:10.28945/4643>.
This package implements an improved and computationally faster version of the original Stepwise Gaussian Graphical Algorithm for estimating the Omega precision matrix from high-dimensional data. See Zamar, R., Ruiz, M., Lafit, G. and Nogales, J. (2021) <doi:10.52933/jdssv.v1i2.11>.
This package implements numerical entropy pooling for portfolio construction and scenario analysis, as described in Meucci (2008) and Meucci (2010) <doi:10.2139/ssrn.1696802>.
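A minimal sketch of the entropy-pooling idea in Python: choose posterior scenario probabilities q as close as possible, in relative entropy, to a prior p while satisfying linear view constraints F q = f, solved through the unconstrained dual (illustrative only, not the package's interface):

    import numpy as np
    from scipy.optimize import minimize

    def entropy_pooling(p, F, f):
        """p: prior probabilities (n,); F: (m, n) constraint matrix whose
        first row should be ones (normalisation); f: (m,) targets."""
        q_of = lambda lam: p * np.exp(F.T @ lam - 1.0)   # stationarity condition
        dual = lambda lam: q_of(lam).sum() - lam @ f     # negative dual, convex
        grad = lambda lam: F @ q_of(lam) - f             # residual of F q = f
        lam = minimize(dual, np.zeros(len(f)), jac=grad, method="BFGS").x
        return q_of(lam)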
Curry, Compose, and other higher-order functions.
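The same idioms sketched in Python for illustration (currying via functools.partial and a hand-rolled compose; not this package's API):

    from functools import partial, reduce

    def compose(*fns):
        """compose(f, g, h)(x) == f(g(h(x)))"""
        return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

    add = lambda a, b: a + b
    add3 = partial(add, 3)                          # a "curried" add
    double_then_add3 = compose(add3, lambda x: 2 * x)
    assert double_then_add3(5) == 13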