Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
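For example, a minimal Python sketch of calling this endpoint (the base URL, the JSON response body, and the pagination header name are assumptions, not confirmed details of this API):

import requests

# Hypothetical base URL; replace with the host serving this page.
url = "https://example.org/api/packages"

# Search for "hello", first page, 20 items per page.
response = requests.get(url, params={"search": "hello", "page": 1, "limit": 20})
response.raise_for_status()

packages = response.json()  # assumes a JSON body of matching packages

# Pagination metadata lives in the response headers; the header name
# below is illustrative, not confirmed.
print(response.headers.get("X-Total-Pages"))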
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides statistical procedures for linear regression in the general context where the errors are assumed to be correlated. Different ways to estimate the asymptotic covariance matrix of the least squares estimators are available. Starting from this estimation of the covariance matrix, the confidence intervals and the usual tests on the parameters are modified. The functions of this package are very similar to those of 'lm': it contains methods such as summary(), plot(), confint() and predict(). The slm package is described in the paper by E. Caron, J. Dedecker and B. Michel (2019), "Linear regression with stationary errors: the R package slm", arXiv preprint <arXiv:1906.06583>.
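As a rough Python analogue of the same idea (statsmodels' HAC covariance standing in for slm's R interface), OLS point estimates are paired with an autocorrelation-robust covariance, which in turn changes the confidence intervals:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate a regression with AR(1), i.e. correlated, errors.
n = 200
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e

# OLS point estimates with a heteroskedasticity-and-autocorrelation
# consistent (HAC) covariance; intervals widen relative to plain OLS.
fit = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 10})
print(fit.conf_int())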
Add a searchbar widget to your Shiny application. The widget quickly integrates with any existing element containing text to highlight matches. Highlighting is done with the JavaScript library 'mark.js'. The widget includes buttons to cycle through multiple instances of the match and automatically scroll to the matches in an overflow element (or window). The widget also displays the total number of matches and which match is currently being cycled through. The widget is structured as a Bootstrap 3 input group.
This package provides functions and data sets for data sharpening. Nonparametric regressions are computed subject to smoothness and other kinds of penalties.
Computes the trimmed-k mean by removing the k smallest and k largest values from a numeric vector. Created for STAT 5400 at the University of Iowa.
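The computation is simple enough to sketch directly; a minimal Python version (the package itself is R code for the course):

def trimmed_k_mean(values, k):
    """Mean after removing the k smallest and k largest values."""
    if 2 * k >= len(values):
        raise ValueError("k is too large for the sample size")
    trimmed = sorted(values)[k:len(values) - k]
    return sum(trimmed) / len(trimmed)

# Example: drop the single smallest and largest value, then average.
print(trimmed_k_mean([1, 2, 3, 4, 100], k=1))  # (2 + 3 + 4) / 3 = 3.0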
Data sets for Kumar and Petersen (2012), Statistical Methods in Customer Relationship Management, Wiley: New York.
An interface to explore trends in Twitter data using the Storywrangler Application Programming Interface (API), which can be found here: <https://github.com/janeadams/storywrangler>.
Implementation of the SSR-Algorithm. The Sign-Simplicity-Regression model is a nonparametric statistical model which is based on residual signs and simplicity assumptions on the regression function. The goal is to calculate the most parsimonious regression function satisfying the statistical adequacy requirements. Theory and functions are specified in Metzner (2020, ISBN: 979-8-68239-420-3, "Trendbasierte Prognostik") and Metzner (2021, ISBN: 979-8-59347-027-0, "Adäquates Maschinelles Lernen").
Can be used to model the fate of soil organic carbon and soil organic nitrogen and to calculate N mineralisation rates. Provides a framework that numerically solves differential equations of soil organic carbon models based on first-order kinetics and extends these models to include the nitrogen component. The name sorcering is an acronym for 'Soil ORganic Carbon & CN Ratio drIven Nitrogen modellinG framework'.
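Per pool, first-order kinetics reduce to dC/dt = inputs - k*C; a toy Python integration with made-up rate and input values (not this package's solver):

# First-order decay of a single soil carbon pool: dC/dt = inputs - k * C.
k = 0.02          # decay rate per year (assumed value)
inputs = 1.5      # carbon inputs per year (assumed value)
dt = 0.1
C = 100.0

# Simple explicit Euler integration over 50 years.
for _ in range(int(50 / dt)):
    C += dt * (inputs - k * C)

print(C)  # decays toward the steady state inputs / k = 75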
This package provides a collection of self-labeled techniques for semi-supervised classification. In semi-supervised classification, both labeled and unlabeled data are used to train a classifier. This learning paradigm has obtained promising results, specifically in the presence of a reduced set of labeled examples. The implemented techniques construct a classification model by enlarging the original labeled set with the most confident predictions on unlabeled data. They can be applied to classification problems in several domains by specifying a supervised base classifier. At low ratios of labeled data, these techniques can be shown to perform better than classical supervised classifiers.
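To illustrate the general self-labeling loop, here is a Python sketch using scikit-learn's generic self-training wrapper (not this R package's interface):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Mark most labels as missing (-1) to mimic a small labeled set.
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.8] = -1

# Self-training wraps a supervised base classifier and iteratively adds
# its most confident predictions on unlabeled points to the labeled set.
base = SVC(probability=True, gamma="auto")
model = SelfTrainingClassifier(base, threshold=0.9).fit(X, y_partial)
print(model.score(X, y))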
Geostatistical modeling and kriging with gridded data using spatially separable covariance functions (Kronecker covariances). Kronecker products in these models provide shortcuts for solving large matrix problems in likelihood and conditional mean, making snapKrig computationally efficient with large grids. The package supplies its own S3 grid object class, and a host of methods including plot, print, Ops, square bracket replace/assign, and more. Our computational methods are described in Koch, Lele, Lewis (2020) <doi:10.7939/r3-g6qb-bq70>.
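The shortcut rests on standard Kronecker identities such as (A ⊗ B)^-1 = A^-1 ⊗ B^-1, so one large covariance solve factors into two small ones; a numpy illustration (not snapKrig's code):

import numpy as np

rng = np.random.default_rng(1)

# Two small covariance factors; their Kronecker product covers a 2-D grid.
A = np.cov(rng.standard_normal((5, 20)))   # 5 x 5, rows of the grid
B = np.cov(rng.standard_normal((4, 20)))   # 4 x 4, columns of the grid

# Solving with the full 20 x 20 Kronecker covariance...
full = np.kron(A, B)
x = rng.standard_normal(20)
direct = np.linalg.solve(full, x)

# ...matches solving with the small factors, since
# (A ⊗ B)^-1 = A^-1 ⊗ B^-1.
shortcut = np.kron(np.linalg.inv(A), np.linalg.inv(B)) @ x
print(np.allclose(direct, shortcut))  # True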
Starting from a given object representing a fitted model (within a certain set of model classes) whose (non-)linear predictor includes one or more ordered factors among the explanatory variables, a new model is constructed and fitted in which each named factor is replaced by a single numeric score, suitably chosen so that the new variable produces a fit comparable with the standard methodology based on a set of polynomial contrasts. Two variants of the present approach have been developed, one in each of the following references: Azzalini (2023) <doi:10.1002/sta4.624>, (2024) <doi:10.48550/arXiv.2406.15933>.
Computes the maximum likelihood estimator of the generalised additive and index regression with shape constraints. Each additive component function is assumed to obey one of the nine possible shape restrictions: linear, increasing, decreasing, convex, convex increasing, convex decreasing, concave, concave increasing, or concave decreasing. For details, see Chen and Samworth (2016) <doi:10.1111/rssb.12137>.
This package provides tools for simulating spatially dependent predictors (continuous or binary), which are used to generate scalar outcomes in a (generalized) linear model framework. Continuous predictors are generated using traditional multivariate normal distributions or Gauss Markov random fields with several correlation function approaches (e.g., see Rue (2001) <doi:10.1111/1467-9868.00288> and Furrer and Sain (2010) <doi:10.18637/jss.v036.i10>), while binary predictors are generated using a Boolean model (see Cressie and Wikle (2011, ISBN: 978-0-471-69274-4)). Parameter vectors exhibiting spatial clustering can also be easily specified by the user.
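A bare-bones Python sketch of the multivariate-normal route, with an assumed exponential correlation function and invented parameters (not this package's API):

import numpy as np

rng = np.random.default_rng(0)

# Locations on a 10 x 10 grid.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
coords = np.column_stack([xs.ravel(), ys.ravel()])

# Exponential correlation: corr(i, j) = exp(-d_ij / range).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr = np.exp(-d / 3.0)

# One spatially dependent predictor value per location.
x = rng.multivariate_normal(np.zeros(100), corr)

# Spatially clustered coefficients: nonzero only in one corner of the grid.
beta = np.where((coords[:, 0] < 3) & (coords[:, 1] < 3), 0.5, 0.0)

# Scalar outcome from a linear model.
y = x @ beta + rng.standard_normal()
print(y)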
This package provides a collection of tools and functions to adjust a variety of stochastic blockmodels (SBM). At the moment it supports Simple, Bipartite, Multipartite and Multiplex SBM (undirected or directed, with Bernoulli, Poisson or Gaussian emission laws on the edges, and possibly covariates for Simple and Bipartite SBM). See Léger (2016) <doi:10.48550/arXiv.1602.07587>, Barbillon et al. (2020) <doi:10.1111/rssa.12193> and Bar-Hen et al. (2020) <doi:10.48550/arXiv.1807.10138>.
This package provides a tool to plot data with a large sample size using 'shiny' and 'plotly'. Relatively small samples are obtained from the original data using a specific algorithm. The samples are updated according to a user-defined x range. See Jonas Van Der Donckt, Jeroen Van Der Donckt, Emiel Deprost (2022) <https://github.com/predict-idlab/plotly-resampler>.
Computes the probability of a set of species abundances in a single sample or in multiple samples of individuals with one or more guilds under a mainland-island model. One must specify the mainland (metacommunity) model and the island (local) community model. It assumes that species fluctuate independently. The package also contains functions to simulate under this model. See Haegeman, B. & R.S. Etienne (2017). A general sampling formula for community structure data. Methods in Ecology & Evolution 8: 1506-1519 <doi:10.1111/2041-210X.12807>.
Mixed DNA profiles can be sampled according to models for probabilistic genotyping. Peak height variability is modelled using a log normal distribution or a gamma distribution. Sample contributors may be related according to a pedigree.
This package provides functions for conducting jackknife Euclidean / empirical likelihood inference for Spearman's rho (de Carvalho and Marques (2012) <doi:10.1080/10920277.2012.10597644>).
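The jackknife ingredient can be sketched via leave-one-out pseudo-values of Spearman's rho; the empirical-likelihood step built on top of them is omitted here (Python sketch, not the package's R code):

import numpy as np
from scipy.stats import spearmanr

def spearman_pseudo_values(x, y):
    """Leave-one-out jackknife pseudo-values of Spearman's rho."""
    n = len(x)
    rho_full, _ = spearmanr(x, y)
    pseudo = []
    for i in range(n):
        rho_i, _ = spearmanr(np.delete(x, i), np.delete(y, i))
        pseudo.append(n * rho_full - (n - 1) * rho_i)
    return np.array(pseudo)

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
y = x + rng.standard_normal(50)

pv = spearman_pseudo_values(x, y)
# Jackknife point estimate and standard error for rho.
print(pv.mean(), pv.std(ddof=1) / np.sqrt(len(pv)))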
This package provides a novel meta-learning framework for forecast model selection using time series features. Many applications require a large number of time series to be forecast. Providing better forecasts for these time series is important in decision and policy making. We propose a classification framework which selects forecast models based on features calculated from the time series. We call this framework FFORMS (Feature-based FORecast Model Selection). FFORMS builds a mapping that relates the features of time series to the best forecast model using a random forest. The seer package is the implementation of the FFORMS algorithm. For more details see our paper at <https://www.monash.edu/business/econometrics-and-business-statistics/research/publications/ebs/wp06-2018.pdf>.
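Schematically, the meta-learning step looks like the following Python sketch; the features, labels, and data are invented for illustration, and seer itself is an R package:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: one row of summary features per series
# (e.g. trend strength, seasonality, autocorrelation), and a label naming
# the forecast model that performed best on that series.
features = rng.random((500, 3))
best_model = rng.choice(["ets", "arima", "rw_drift"], size=500)

# FFORMS-style meta-learner: map features to the best model class.
meta = RandomForestClassifier(n_estimators=200, random_state=0)
meta.fit(features, best_model)

# For a new series, compute its features and ask the forest which model to fit.
print(meta.predict(rng.random((1, 3))))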
This package provides functions for stabilometric signal quantification. The input is a data frame containing the x, y coordinates of the center-of-pressure displacement. See Jose Magalhaes de Oliveira (2017) <doi:10.3758/s13428-016-0706-4> "Statokinesigram normalization method"; T E Prieto, J B Myklebust, R G Hoffmann, E G Lovett, B M Myklebust (1996) <doi:10.1109/10.532130> "Measures of postural steadiness: Differences between healthy young and elderly adults"; L F Oliveira et al (1996) <doi:10.1088/0967-3334/17/4/008> "Calculation of area of stabilometric signals using principal component analysis".
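For instance, the PCA-based sway area from the last reference amounts to a 95% ellipse over the principal axes of the center-of-pressure cloud; a small numpy sketch with simulated data (not this package's code):

import numpy as np

rng = np.random.default_rng(0)

# Fake center-of-pressure trajectory (x, y in cm).
cop = rng.multivariate_normal([0, 0], [[0.4, 0.1], [0.1, 0.2]], size=1000)

# PCA-based sway area: eigenvalues of the covariance give the variance
# along the principal axes; a 95% ellipse scales their product by the
# chi-square quantile with 2 degrees of freedom (~5.991).
eigvals = np.linalg.eigvalsh(np.cov(cop.T))
area = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])
print(area)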
Identifying outcome-relevant subgroups has now become as simple as possible! The formerly lengthy and tedious search for the needle in a haystack is replaced by a single, comprehensive and coherent presentation. The central result of a subgroup screening is a diagram in which each single dot stands for a subgroup. The diagram may show thousands of them. The position of a dot is determined by the sample size of the subgroup and the statistical measure of the treatment effect in that subgroup. The sample size is shown on the horizontal axis while the treatment effect is displayed on the vertical axis. Furthermore, the diagram shows the line of no effect and the overall study results. For small subgroups, which are found on the left side of the plot, larger random deviations from the mean study effect are expected, while for larger subgroups only small deviations from the study mean can be expected to be chance findings. So for a study with no conspicuous subgroup effects, the dots in the figure are expected to form a kind of funnel. Any deviations from this funnel shape hint at conspicuous subgroups.
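As an illustration of the expected funnel under no subgroup effects (simulated data, not the package's plotting code):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Each dot is one subgroup: estimation noise shrinks as 1/sqrt(size),
# so under a common true effect of 0.3 the dots form a funnel.
sizes = rng.integers(20, 2000, size=3000)
effects = 0.3 + rng.standard_normal(3000) / np.sqrt(sizes)

plt.scatter(sizes, effects, s=5, alpha=0.3)
plt.axhline(0.0, linestyle="--", label="line of no effect")
plt.axhline(0.3, label="overall study effect")
plt.xlabel("subgroup sample size")
plt.ylabel("estimated treatment effect")
plt.legend()
plt.show()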
This package provides a collection of classes and methods for working with times and dates. The code was originally available in 'S-PLUS'.
Fits Bayesian hierarchical spatial and spatial-temporal process models for point-referenced Gaussian, Poisson, binomial, and binary data using stacking of predictive densities. It involves sampling from analytically available posterior distributions conditional upon candidate values of the spatial process parameters and subsequently assimilating inference from these individual posterior distributions using Bayesian predictive stacking. Our algorithm is highly parallelizable and hence much faster than traditional Markov chain Monte Carlo algorithms while delivering competitive predictive performance. See Zhang, Tang, and Banerjee (2025) <doi:10.1080/01621459.2025.2566449>, and Pan, Zhang, Bradley, and Banerjee (2025) <doi:10.48550/arXiv.2406.04655> for details.
This package provides functions for the evaluation of surrogate endpoints when both the surrogate and the true endpoint are failure time variables. The approaches implemented are: (1) the two-step approach (Burzykowski et al, 2001) <DOI:10.1111/1467-9876.00244> with a copula model (Clayton, Plackett, Hougaard) at the first step and a linear regression of log-hazard ratios at the second step (either adjusted or not for measurement error); (2) mixed proportional hazard models estimated via mixed Poisson GLM (Rotolo et al, 2017 <DOI:10.1177/0962280217718582>).