Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
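For example, a minimal sketch of a request in Python (the base URL below is a placeholder for this site's own address, not part of the API description):

    import requests

    BASE_URL = "https://example.org"  # placeholder: substitute this site's address

    # First page of results matching "hello", 20 items per page.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    packages = resp.json()   # matching packages, in whatever format the server returns
    print(resp.headers)      # pagination details (e.g. the number of pages) are here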
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Fits a geographically weighted regression model with different scales for each covariate. Uses the negative binomial distribution as default, but also accepts the normal, Poisson, or logistic distributions. Can fit the global versions of each regression and also the geographically weighted alternatives with only one scale, since they are all particular cases of the multiscale approach. Hanchen Yu (2024). "Exploring Multiscale Geographically Weighted Negative Binomial Regression", Annals of the American Association of Geographers <doi:10.1080/24694452.2023.2289986>. Fotheringham AS, Yang W, Kang W (2017). "Multiscale Geographically Weighted Regression (MGWR)", Annals of the American Association of Geographers <doi:10.1080/24694452.2017.1352480>. Da Silva AR, Rodrigues TCV (2014). "Geographically Weighted Negative Binomial Regression - incorporating overdispersion", Statistics and Computing <doi:10.1007/s11222-013-9401-9>.
Framework for the Item Response Theory analysis of dichotomous and ordinal polytomous outcomes under the assumption of within-item multidimensionality and discreteness of the latent traits. The fitting algorithms allow for missing responses and for different item parametrizations and are based on the Expectation-Maximization paradigm. Individual covariates affecting the class weights may be included in the new version, together with the possibility of constraints on all model parameters.
This package provides methods for calculating and testing the significance of pairwise monotonic association, based on the work of Pimentel (2009) <doi:10.4135/9781412985291.n2>. Computation of association of vectors from one or multiple sets can be performed in parallel thanks to the packages foreach and doMC.
This package provides functions for measuring population divergence from genotypic data.
The maybe type represents the possibility of some value or nothing. It is often used instead of throwing an error or returning `NULL`. The advantage of using a maybe type over `NULL` is that it is composable and requires the developer to explicitly acknowledge the potential absence of a value, helping to avoid unexpected behaviour.
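As a concept-only sketch (not this package's own interface), the idea behind a maybe type can be illustrated with a hypothetical Just/Nothing pair:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Just:
        value: object  # a value is present

    @dataclass
    class Nothing:
        pass           # the value is absent

    def fmap(m, f: Callable):
        # Composable: apply f only when a value is present; Nothing passes through.
        return Just(f(m.value)) if isinstance(m, Just) else m

    def safe_sqrt(x: float):
        # Instead of raising an error or returning None, absence is made explicit.
        return Just(x ** 0.5) if x >= 0 else Nothing()

    print(fmap(safe_sqrt(9.0), lambda v: v + 1))   # Just(value=4.0)
    print(fmap(safe_sqrt(-1.0), lambda v: v + 1))  # Nothing()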
Routines to perform estimation and inference under the multivariate t-distribution <doi:10.1007/s10182-022-00468-2>. Currently, the following methodologies are implemented: multivariate mean and covariance estimation, hypothesis testing about equicorrelation and homogeneity of variances, the Wilson-Hilferty transformation, QQ-plots with envelopes and random variate generation.
Simple helpers for matrix multiplication on data.frames. These allow for more concise code during low level mathematical operations, and help ensure code is more easily read, understood, and serviced.
Defines classes and methods to learn models and use them to predict binary outcomes. These are generic tools, but we also include specific examples for many common classifiers.
Uses recursive partitioning to create homogeneous subgroups based on structural equation models fit in Mplus, a stand-alone program developed by Muthen and Muthen.
This package provides a framework based on S3 dispatch for constructing models of mosquito-borne pathogen transmission which are constructed from submodels of various components (i.e. immature and adult mosquitoes, human populations). A consistent mathematical expression for the distribution of bites on hosts means that different models (stochastic, deterministic, etc.) can be coherently incorporated and updated over a discrete time step.
This package provides a method for the multiresolution analysis of spatial fields and images to capture scale-dependent features. mrbsizeR is based on scale space smoothing and uses differences of smooths at neighbouring scales for finding features on different scales. To infer which of the captured features are credible, Bayesian analysis is used. The scale space multiresolution analysis has three steps: (1) Bayesian signal reconstruction. (2) Using differences of smooths, scale-dependent features of the reconstructed signal can be found. (3) Posterior credibility analysis of the differences of smooths created. The method has first been proposed by Holmstrom, Pasanen, Furrer, Sain (2011) <DOI:10.1016/j.csda.2011.04.011> and extended in Flury, Gerber, Schmid and Furrer (2021) <DOI:10.1016/j.spasta.2020.100483>.
This standalone package combines several stop-word lists for 65 languages, with a median of 329 stop words per language and over 1,000 entries for English, Breton, Latin, Slovenian, and Ancient Greek! The user automatically gets access to all the unique stop words contained in: the StopwordISO repository; the Natural Language Toolkit for Python; the Snowball stop-word list; the R package quanteda; the marimo repository; the Perseus project; and A. Berra's list of stop words for Ancient Greek and Latin.
Regularly spaced grids containing continuous data are transformed to contour polygons. A grid can be defined by a data.frame (x, y, value), an sf object or a raster from terra.
This package contains the Markov cluster algorithm (MCL) for identifying clusters in networks and graphs. The algorithm simulates random walks on a graph using its (n x n) adjacency matrix. It alternates an expansion step and an inflation step until an equilibrium state is reached.
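A minimal sketch of that expansion/inflation loop (an illustration of the algorithm as described above, not this package's own code; adding self-loops is a common convention assumed here):

    import numpy as np

    def mcl_sketch(adjacency, expansion=2, inflation=2, max_iter=100, tol=1e-6):
        # Column-normalize the adjacency matrix, with self-loops added (assumption).
        m = adjacency.astype(float) + np.eye(adjacency.shape[0])
        m /= m.sum(axis=0)
        for _ in range(max_iter):
            prev = m.copy()
            m = np.linalg.matrix_power(m, expansion)  # expansion: longer random walks
            m = m ** inflation                        # inflation: strengthen strong flows
            m /= m.sum(axis=0)                        # renormalize columns
            if np.abs(m - prev).max() < tol:          # equilibrium state reached
                break
        return m  # clusters are read off the rows that act as attractors

    # Tiny example: two triangles joined by a single edge.
    a = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]])
    print(np.round(mcl_sketch(a), 2))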
Constructing matrices for quick prototyping can be a nuisance, requiring the user to think about how to fill the matrix with values using the matrix() function. The %<-% operator solves that issue by allowing the user to construct matrices using code that shows the actual matrices.
Fit generalized linear models with binomial responses using a median modified score approach (Kenne Pagui et al., 2016, <https://arxiv.org/abs/1604.04768>) to median bias reduction. This method respects equivariance under reparameterizations for each parameter component and also solves the infinite estimates problem (data separation).
Easy implementation of the MABAC multi-criteria decision method, introduced by Pamučar and Ćirović in the work entitled "The selection of transport and handling resources in logistics centers using Multi-Attributive Border Approximation area Comparison (MABAC)" <doi:10.1016/j.eswa.2014.11.057>, which aimed to choose implements for logistics centers. This package receives data, preferably in a spreadsheet, reads it, and applies the mathematical algorithms inherent to the MABAC method to generate a ranking with the optimal solution according to the established criteria, weights and criterion types. The data are normalized, weighted by the weights, the border area is determined, the distances to this border area are calculated, and finally a ranking with the optimal option is generated.
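A sketch of those steps, using the MABAC formulas as usually presented (weighted values w*(r+1), geometric-mean border area); treat the details as an illustration rather than this package's exact implementation:

    import numpy as np

    def mabac_sketch(x, weights, benefit):
        # x: alternatives x criteria; weights sum to 1; benefit[j] marks criteria to maximize.
        x = np.asarray(x, dtype=float)
        lo, hi = x.min(axis=0), x.max(axis=0)
        # Normalization: benefit criteria scale toward their maximum, cost criteria toward their minimum.
        r = np.where(benefit, (x - lo) / (hi - lo), (x - hi) / (lo - hi))
        v = weights * (r + 1)                    # weighting
        g = v.prod(axis=0) ** (1 / x.shape[0])   # border approximation area (geometric mean)
        q = v - g                                # distances to the border area
        return q.sum(axis=1)                     # higher score = better-ranked alternative

    scores = mabac_sketch(
        x=[[22, 5, 0.7], [30, 4, 0.9], [25, 6, 0.6]],
        weights=np.array([0.5, 0.3, 0.2]),
        benefit=np.array([True, True, False]),   # third criterion is a cost
    )
    print(scores.argsort()[::-1])                # alternatives from best to worst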
Create tile grid maps, which are like choropleth maps except each region is represented with equal visual space.
This package provides the core functions needed by the tsmp package. The low-level, carefully checked mathematical functions are here. These are implementations of the Matrix Profile concept that was created by CS-UCR <http://www.cs.ucr.edu/~eamonn/MatrixProfile.html>.
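For orientation, a brute-force sketch of the Matrix Profile concept itself (quadratic time, an illustration rather than the optimized routines this package supplies): for every subsequence, the z-normalized distance to its nearest non-trivial neighbour.

    import numpy as np

    def matrix_profile_naive(ts, window):
        # Brute-force Matrix Profile: nearest-neighbour distance for each subsequence.
        ts = np.asarray(ts, dtype=float)
        n = len(ts) - window + 1
        subs = np.stack([ts[i:i + window] for i in range(n)])
        subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
        profile = np.full(n, np.inf)
        excl = window // 2  # exclusion zone: skip trivial matches near each index
        for i in range(n):
            d = np.linalg.norm(subs - subs[i], axis=1)
            d[max(0, i - excl):i + excl + 1] = np.inf
            profile[i] = d.min()
        return profile

    ts = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * np.random.randn(200)
    print(matrix_profile_naive(ts, window=20)[:5])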
Predictive multivariate modelling for metabolomics. Types: Classification and regression. Methods: Partial Least Squares, Random Forest and Elastic Net. Data structures: Paired and unpaired. Validation: repeated double cross-validation (Westerhuis et al. (2008) <doi:10.1007/s11306-007-0099-6>, Filzmoser et al. (2009) <doi:10.1002/cem.1225>). Variable selection: Performed internally, through tuning in the inner cross-validation loop.
Calibrate and apply multivariate bias correction algorithms for climate model simulations of multiple climate variables. Three methods described by Cannon (2016) <doi:10.1175/JCLI-D-15-0679.1> and Cannon (2018) <doi:10.1007/s00382-017-3580-6> are implemented: (i) MBC Pearson correlation (MBCp), (ii) MBC rank correlation (MBCr), and (iii) MBC N-dimensional PDF transform (MBCn), as is the Rank Resampling for Distributions and Dependences (R2D2) method.
Computation of standardized interquartile range (IQR), Huber-type skipped mean (Hampel (1985), <doi:10.2307/1268758>), robust coefficient of variation (CV) (Arachchige et al. (2019), <doi:10.48550/arXiv.1907.01110>), robust signal to noise ratio (SNR), z-score, standardized mean difference (SMD), as well as functions that support graphical visualization such as boxplots based on quartiles (not hinges), negative logarithms and generalized logarithms for ggplot2 (Wickham (2016), ISBN:978-3-319-24277-4).
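As one concrete case, the standardized IQR rescales the raw interquartile range so that it estimates the standard deviation under normality (a standard result, independent of this package's other details):

    \[ \mathrm{sIQR} \;=\; \frac{\mathrm{IQR}}{2\,\Phi^{-1}(0.75)} \;\approx\; \frac{Q_3 - Q_1}{1.349} \]

where \Phi^{-1} is the standard normal quantile function.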
Mine metrics on common places on the web through the power of their APIs (application programming interfaces). It also helps put the data in a format that is easily used for a dashboard or other purposes. There is an associated dashboard template, and tutorials that are under development, to help you fully utilize metricminer.
This is a lightweight package designed to facilitate statistical simulations through functional programming. It centralizes the simulation process into a single higher-order function, enhancing manageability and usability without adding overhead from external dependencies. The package includes ready-to-use functions for common simulation targets. A detailed example can be found at <https://github.com/ielbadisy/mcstatsim>.
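A minimal sketch of that pattern (a higher-order driver that takes the data-generating and estimating steps as functions; illustrative only, not mcstatsim's actual interface):

    import random
    import statistics

    def run_simulation(generate, estimate, n_reps=1000):
        # Higher-order simulation driver: both steps are passed in as functions.
        return [estimate(generate()) for _ in range(n_reps)]

    results = run_simulation(
        generate=lambda: [random.gauss(0, 1) for _ in range(30)],  # simulate one sample
        estimate=statistics.mean,                                  # compute the target statistic
        n_reps=500,
    )
    print(statistics.mean(results), statistics.stdev(results))     # bias / spread of the estimator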