Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
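For example, the endpoint can be queried from R with the httr package. This is a minimal sketch: the base URL below is a placeholder for wherever the service is hosted, and the exact pagination header names should be checked in an actual response.

    # Minimal sketch, assuming the httr package is installed; the host is a placeholder.
    library(httr)

    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    packages <- content(resp, as = "parsed")  # matching packages for this page
    headers(resp)                             # pagination details are among these headers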
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Implementation of an alternating direction method of multipliers (ADMM) algorithm for fitting a linear model with tree-based lasso regularization, as proposed in Algorithm 1 of Yan and Bien (2020) <doi:10.1080/01621459.2020.1796677>. The package allows efficient model fitting on the entire 2-dimensional regularization path for large datasets. The complete set of functions also makes the entire process of tuning regularization parameters and visualizing results hassle-free.
Implementation of a model-based clustering algorithm for ranking data (C. Biernacki, J. Jacques (2013) <doi:10.1016/j.csda.2012.08.008>). Multivariate rankings as well as partial rankings are taken into account. The algorithm is based on an extension of the Insertion Sorting Rank (ISR) model for ranking data, a meaningful and effective model parametrized by a position parameter (the modal ranking, denoted by mu) and a dispersion parameter (denoted by pi). The heterogeneity of the rank population is modelled by a mixture of ISR models, while a conditional independence assumption is made for multivariate rankings.
Includes algorithms to facilitate the assessment of species' extinction risk according to the IUCN (International Union for Conservation of Nature, see <https://iucn.org/> for more information) Red List criteria.
We utilize approximate Bayesian machinery to fit two-level conjugate hierarchical models to overdispersed Gaussian, Poisson, and Binomial data, and evaluate whether the resulting approximate Bayesian interval estimates for random effects meet the nominal confidence levels via frequency coverage evaluation. The data that Rgbp assumes comprise an observed sufficient statistic for each random effect, such as an average or a proportion for each group, without population-level data. The approximate Bayesian tool, equipped with the adjustment for density maximization, produces approximate point and interval estimates for the model parameters, including the second-level variance component, regression coefficients, and random effects. For Binomial data, the package provides an option to produce posterior samples of all the model parameters via the acceptance-rejection method. The package also provides a quick way to evaluate coverage rates of the resulting Bayesian interval estimates for random effects via parametric bootstrapping, which we call frequency method checking.
Aims to create a single isolated Miniconda and Python environment for reproducible pipeline scripts. The package provides utilities to run system commands within the conda environment, making it easy to install, launch, manage, and stop JupyterLab.
Automatically creates separate regression models for different spatial regions. The prediction surface is smoothed using a regional border smoothing method. If regional models are continuous, the resulting prediction surface is continuous across the spatial dimensions, even at region borders. Methodology is described in Wagstaff and Bean (2023) <doi:10.32614/RJ-2023-004>.
Ranked set sampling (RSS) was introduced by McIntyre (1952) (reprinted in 2005) <doi:10.1198/000313005X54180> as an advanced data collection method that is substantial for statistical and methodological analysis in scientific studies. This is the first package that implements RSS and its modified versions for sampling. With RSSampling, researchers can sample with basic RSS and the modified versions, namely Median RSS, Extreme RSS, Percentile RSS, Balanced groups RSS, Double RSS, L-RSS, Truncation-based RSS, and Robust extreme RSS. RSSampling also allows imperfect ranking using an auxiliary (concomitant) variable, which is widely used in real-life applications. Users can also use this package for parametric and nonparametric inference, such as mean, median and variance estimation, regression analysis, and some distribution-free tests, where the samples are obtained via basic RSS.
Ensmallen is a templated C++ mathematical optimization library (by the MLPACK team) that provides a simple set of abstractions for writing an objective function to optimize. Provided within are various standard and cutting-edge optimizers, including full-batch gradient descent techniques, small-batch techniques, gradient-free optimizers, and constrained optimization. The RcppEnsmallen package includes the header files from the Ensmallen library and pairs them with the appropriate header files from Armadillo through the RcppArmadillo package. Therefore, users do not need to install Ensmallen or Armadillo to use RcppEnsmallen. Note that Ensmallen is licensed under the 3-Clause BSD license, Armadillo (starting from 7.800.0) is licensed under the Apache License 2.0, and RcppArmadillo (the Rcpp bindings/bridge to Armadillo) is licensed under the GNU GPL version 2 or later; RcppEnsmallen is therefore licensed under similar terms. Note that Ensmallen requires a compiler that supports C++14 and Armadillo 10.8.2 or later.
Implementation of Nelson rules for control charts in R. The Rspc package implements some Statistical Process Control methods, namely the Levey-Jennings type of I (individuals) chart, the Shewhart C (count) chart, and the Nelson rules (as described in Montgomery, D. C. (2013) Introduction to Statistical Quality Control. Hoboken, NJ: Wiley). The typical workflow is to take the time series, specify the control limits, and list the Nelson rules you want to evaluate. There are several options for modifying the rules (one-sided limits, numerical parameters of the rules, etc.). The package can also calculate the control limits from the data (so far this is implemented only for the i-chart and c-chart).
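A rough sketch of that workflow, assuming the package's EvaluateRules() entry point and its numbered rule set (check the package documentation for the exact signature and options):

    # Hedged usage sketch: evaluate Nelson rules on an individuals (i) chart.
    library(Rspc)

    x <- c(10.1, 9.8, 10.3, 10.0, 9.7, 10.4, 10.2, 9.9)  # example time series
    # When lcl/ucl are not supplied, control limits are calculated from the data.
    result <- EvaluateRules(x = x, type = "i", whichRules = 1:8)
    head(result)  # flags which rules, if any, are violated at each point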
Use rprofile::load() inside a project .Rprofile file to ensure that the user-global .Rprofile is loaded correctly regardless of its location, and that other common resources (in particular renv) are also set up correctly.
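A minimal project .Rprofile following that advice might look like this (the requireNamespace() guard simply avoids an error when the package is not installed):

    # Project-level .Rprofile: delegate to rprofile::load() as described above.
    if (requireNamespace("rprofile", quietly = TRUE)) {
      rprofile::load()
    }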
Three robust marginal integration procedures for additive models based on local polynomial kernel smoothers. As a preliminary estimator of the multivariate function for the marginal integration procedure, a first approach uses local constant M-estimators, a second one uses local polynomials of order 1 over all the components of covariates, and the third one uses M-estimators based on local polynomials but only in the direction of interest. For this last approach, estimators of the derivatives of the additive functions can be obtained. All three procedures can compute predictions for points outside the training set if desired. See Boente and Martinez (2017) <doi:10.1007/s11749-016-0508-0> for details.
When creating a package, authors may struggle to come up with easy and straightforward function names while hoping that other packages do not already use the same names. In trying to meet this goal, function names sometimes end up not descriptive enough and may confuse potential users. The purpose of this package is to generate short forms of package function names and to provide shorthand names for other functions. Having this package should encourage authors to create long function names without fearing that users will avoid their packages because of the long names. In a way, everyone wins: authors can use long, descriptive function names, and users can use this package to make short function names while still using the package in question.
It computes the Schmidt decomposition of bipartite quantum systems, discrete or continuous, and their respective entanglement metrics. See Artur Ekert, Peter L. Knight (1995) <doi:10.1119/1.17904> for more details.
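For reference, the decomposition being computed is the standard one: a bipartite pure state is expressed in orthonormal bases of the two subsystems as

    |\psi\rangle = \sum_i \lambda_i \, |u_i\rangle_A \otimes |v_i\rangle_B,
    \qquad \lambda_i \ge 0, \qquad \sum_i \lambda_i^2 = 1,

and entanglement metrics such as the entropy of entanglement follow from the Schmidt coefficients, e.g. S = -\sum_i \lambda_i^2 \log \lambda_i^2.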
Visualize your favorite XKCD comic strip directly from R. XKCD <https://xkcd.com> web comic content is provided under the Creative Commons Attribution-NonCommercial 2.5 License.
This package contains functions for analysing relative survival data, including nonparametric estimators of net (marginal relative) survival, the relative survival ratio, and crude mortality, methods for fitting and checking additive and multiplicative regression models, a transformation approach, and methods for dealing with population mortality tables. The methodology is described in Pohar Perme and Pavlic (2018) <doi:10.18637/jss.v087.i08>.
Convert text into target classifications (e.g., ISO 3166-1) using a JSON mapping with regular expressions. Provides helpers to return the full mapping and associated metadata.
OpenWeatherMap (OWM) <http://openweathermap.org/api> is a service providing weather-related data. This package can be used to access current weather data for one or several locations. It can also be used to obtain a 5-day weather forecast with data for every 3 hours.
This package creates reports from Trello (<https://trello.com/>), a collaborative project-organization and list-making application. Reports are created by comparing individual Trello board cards from two different points in time and documenting any changes made to the cards.
Client for ChromaDB, a vector database for storing and querying embeddings. This package provides a convenient interface to interact with the REST API of ChromaDB <https://docs.trychroma.com>.
This package provides functions to fit Gaussian linear models by maximising the residual log-likelihood, where the covariance structure can be written as a linear combination of known matrices. It can be used for multivariate models and random effects models. Random effects models, including random interactions, can be specified in a straightforward manner. The code is optimised to use Sherman-Morrison-Woodbury identities for matrix inversion in random effects models. Models can also be fitted using any kernel, and a function is provided to return the mean and covariance of random effects conditional on the data (best linear unbiased predictors, BLUPs). See Clifford and McCullagh (2006) <https://www.r-project.org/doc/Rnews/Rnews_2006-2.pdf>.
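A hypothetical sketch of such a fit, assuming a regress()-style interface in which a second, one-sided formula lists the known covariance matrices (the function name, data, and variance-component names below are assumptions for illustration):

    # Hedged sketch: REML fit of y ~ x with covariance sigma^2*I + tau^2*Vgroup.
    library(regress)

    n <- 60
    group <- factor(rep(1:6, each = 10))
    x <- rnorm(n)
    y <- 2 * x + rnorm(6)[group] + rnorm(n)   # simulated data with a group effect
    Z <- model.matrix(~ group - 1)
    Vgroup <- Z %*% t(Z)                      # known covariance component
    fit <- regress(y ~ x, ~Vgroup)
    summary(fit)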
This package provides methods for multiway data analysis by means of Parafac and Tucker 3 models. Robust versions (Engelen and Hubert (2011) <doi:10.1016/j.aca.2011.04.043>) and versions for compositional data are also provided (Gallo (2015) <doi:10.1080/03610926.2013.798664>, Di Palma et al. (2018) <doi:10.1080/02664763.2017.1381669>). Several optimization methods alternative to ALS are available (Simonacci and Gallo (2019) <doi:10.1016/j.chemolab.2019.103822>, Simonacci and Gallo (2020) <doi:10.1007/s00500-019-04320-9>).
This package provides methods for resampling-based False Discovery Proportion control. A function is provided that offers simultaneous, multi-resolution False Discovery Exceedance (FDX) control as described in Hemerik (2025) <doi:10.48550/arXiv.2509.02376>.
An interface to iDigBio's search API that allows downloading specimen records. Searches are returned as a data.frame. Other functions, such as the metadata endpoints, return lists of information. iDigBio is a US project focused on digitizing and serving museum specimen collections on the web. See <https://www.idigbio.org> for information on iDigBio.
Finite mixture models are a popular technique for modelling unobserved heterogeneity or for approximating general distribution functions in a semi-parametric way. They are used in many different areas such as astronomy, biology, economics, marketing, and medicine. This package implements popular robust mixture regression methods based on different algorithms, including: flexmix, finite mixture models and latent class regression; CTLERob, component-wise adaptive trimming likelihood estimation; mixbi, bi-square estimation; mixL, Laplacian distribution; mixt, t-distribution; and TLE, trimmed likelihood estimation. The implemented algorithms include: CTLERob, which stands for Component-wise adaptive Trimming Likelihood Estimation based mixture regression; mixbi, which stands for mixture regression based on bi-square estimation; mixL, which stands for mixture regression based on the Laplacian distribution; and TLE, which stands for Trimmed Likelihood Estimation based mixture regression. For more detail on the algorithms, please refer to the references below. References: Chun Yu, Weixin Yao, Kun Chen (2017) <doi:10.1002/cjs.11310>. Neykov N, Filzmoser P, Dimova R et al. (2007) <doi:10.1016/j.csda.2006.12.024>. Bai X, Yao W, Boyer JE (2012) <doi:10.1016/j.csda.2012.01.016>. Wennan Chang, Xinyu Zhou, Yong Zang, Chi Zhang, Sha Cao (2020) <arXiv:2005.11599>.