Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
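For example, the request above can be issued from R; the host name below is a placeholder for this site's address:

    library(httr)  # for GET(), content() and headers()

    resp <- GET("https://example.org/api/packages",  # placeholder host: use this site's address
                query = list(search = "hello", page = 1, limit = 20))
    results <- content(resp, as = "parsed")  # decoded JSON body
    headers(resp)                            # pagination details are in here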
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a bridge between the loon and ggplot2 packages. It extends the grammar of ggplot2 with clauses for creating interactive loon plots. Existing ggplots can be turned into interactive loon plots, and loon plots into static ggplots; the function loon.ggplot() is the bridge between the two plot structures.
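A minimal sketch of that round trip, assuming only the loon.ggplot() interface named in the description:

    library(ggplot2)
    library(loon.ggplot)

    p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()  # static ggplot
    l <- loon.ggplot(p)   # turn it into an interactive loon plot
    g <- loon.ggplot(l)   # and back into a static ggplot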
The Bayesian estimation of mixture models (and more general hidden Markov models) suffers from the label-switching phenomenon, which makes the MCMC output non-identifiable. This package can be used to deal with this problem via various relabelling algorithms.
This package provides a collection of helper functions and illustrative datasets to support learning and teaching of data science with R. The package is designed as a companion to the book <https://book-data-science-r.netlify.app>, making key data science techniques accessible to individuals with minimal coding experience. Functions include tools for data partitioning, performance evaluation, and data transformations (e.g., z-score and min-max scaling). The included datasets are curated to highlight practical applications in data exploration, modeling, and multivariate analysis. An early inspiration for the package came from an ancient Persian idiom about "eating the liveR," symbolizing deep and immersive engagement with knowledge.
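The two transformations mentioned can be illustrated in base R (a sketch of the technique, not the package's own helper functions, whose names are not listed here):

    x <- c(2, 5, 9, 14)
    z_score <- (x - mean(x)) / sd(x)               # z-score scaling
    min_max <- (x - min(x)) / (max(x) - min(x))    # min-max scaling to [0, 1]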
The goal of this package is to cover the most common steps in Loss Given Default (LGD) rating model development. The main procedures available are those for bivariate and multivariate analysis. In particular, two statistical methods for multivariate analysis are currently implemented: OLS regression and fractional logistic regression. Both methods are also available within different blockwise model designs, and both have customized stepwise algorithms. Descriptions of these customized designs are available in Siddiqi (2016) <doi:10.1002/9781119282396.ch10> and Anderson, R.A. (2021) <doi:10.1093/oso/9780192844194.001.0001>. Although they are explained for PD models, the same designs are applicable to LGD models with different underlying regression methods (OLS and fractional logistic regression). To cover other important steps of LGD model development, it is recommended to use the LGDtoolkit package along with the PDtoolkit and monobin (or monobinShiny) packages. Additionally, LGDtoolkit provides a set of procedures handy for initial and periodic model validation.
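For orientation, fractional logistic regression on an LGD-like response in (0, 1) can be fitted in base R with a quasi-binomial GLM; this is a generic sketch of the technique, not LGDtoolkit's own interface:

    set.seed(1)
    d <- data.frame(ltv = runif(100, 0.3, 1.2))             # simulated loan-to-value
    d$lgd <- plogis(-1 + 2 * d$ltv + rnorm(100, sd = 0.5))  # simulated LGD in (0, 1)

    ols  <- lm(lgd ~ ltv, data = d)                                    # OLS regression
    frac <- glm(lgd ~ ltv, family = quasibinomial("logit"), data = d)  # fractional logistic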
Navigating the shift of clinical laboratory data from primary everyday clinical use to secondary research purposes presents a significant challenge. Given the substantial time and expertise required for lab data pre-processing and cleaning, and the lack of all-in-one tools tailored for this need, we developed our algorithm lab2clean as an open-source R package. The lab2clean package automates and standardizes the intricate process of cleaning clinical laboratory results. With a keen focus on improving the data quality of laboratory result values and units, our goal is to equip researchers with a straightforward, plug-and-play tool, making it smoother for them to unlock the true potential of clinical laboratory data in clinical research and clinical machine learning (ML) model development. Functions to clean and validate result values (version 1.0) are described in detail in Zayed et al. (2024) <doi:10.1186/s12911-024-02652-7>. Functions to standardize and harmonize result units (added in version 2.0) are described in detail in Zayed et al. (2025) <doi:10.1016/j.ijmedinf.2025.106131>.
Estimation of a lognormal-generalized Pareto mixture via the expectation-maximization (EM) algorithm. Computation of bootstrap standard errors is supported and performed via parallel computing. Functions for random number simulation and density evaluation are also available. For more details see Bee and Santi (2025) <doi:10.48550/arXiv.2505.22507>.
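The target density has the general form p * f_LN(x) + (1 - p) * f_GPD(x); a hand-rolled version in base R, illustrating the shape of such a mixture rather than the package's own density function or exact parameterization:

    # density of a two-component lognormal / generalized Pareto mixture
    dlngpd <- function(x, p, meanlog, sdlog, mu, sigma, xi) {
      dgpd <- function(x) {                 # GPD density above threshold mu (assumes xi != 0)
        z <- (x - mu) / sigma
        t <- 1 + xi * z
        ifelse(x < mu | t <= 0, 0, t^(-1 / xi - 1) / sigma)
      }
      p * dlnorm(x, meanlog, sdlog) + (1 - p) * dgpd(x)
    }
    curve(dlngpd(x, p = 0.7, meanlog = 0, sdlog = 1, mu = 2, sigma = 1, xi = 0.3),
          from = 0.01, to = 10)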
The Leaflet JavaScript library provides many plugins, some of which are available in the core leaflet package, but there are many more, and it is not possible to support them all there. This package serves as an add-on to the leaflet package, providing extra functionality via Leaflet plugins.
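Typical usage follows the leaflet pipe style; addFullscreenControl() below is assumed to be one of the plugin wrappers (check the package index for the full list):

    library(leaflet)
    library(leaflet.extras)

    leaflet() %>%
      addTiles() %>%            # base map from the core leaflet package
      addFullscreenControl()    # extra plugin functionality from this add-on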
Change-point detection algorithm with label constraints and a penalty for each change outside of labels. See Hocking, T.D. and Srivastava, A. (2023) <doi:10.1007/s00180-022-01238-z> for details.
This package provides a collection of tools for interactive manipulation of (spatial) data layers on leaflet web maps. Tools include editing of existing layers, creation of new layers through drawing of shapes (points, lines, polygons), deletion of shapes, and cutting holes into existing shapes. It provides control over options to, for example, prevent self-intersection of polygons and lines, or to enable/disable snapping to align shapes.
This package provides a collection of tools for the calculation of free-water metabolism from in situ time series of dissolved oxygen, water temperature, and, optionally, additional environmental variables. LakeMetabolizer implements 5 different metabolism models with diverse statistical underpinnings: bookkeeping, ordinary least squares, maximum likelihood, Kalman filter, and Bayesian. Each of these 5 metabolism models can be combined with 1 of 7 models for computing the coefficient of gas exchange across the air-water interface (k). LakeMetabolizer also features a variety of supporting functions that compute conversions and implement calculations commonly applied to raw data prior to estimating metabolism (e.g., oxygen saturation and optical conversion models).
Latent budget analysis is a method for the analysis of a two-way contingency table with an explanatory variable and a response variable. It is especially suited to compositional data.
This package provides functions for forest object detection, structure metrics computation, model calibration, and mapping with airborne laser scanning: co-registration of field plots (Monnet and Mermin (2014) <doi:10.3390/f5092307>); tree detection (method 1 in Eysn et al. (2015) <doi:10.3390/f6051721>) and segmentation; forest parameter estimation with the area-based approach: model calibration with ground reference, and maps export (Aussenac et al. (2023) <doi:10.12688/openreseurope.15373.2>); extraction of both physical (gaps, edges, trees) and statistical features useful for e.g. habitat suitability modeling (Glad et al. (2020) <doi:10.1002/rse2.117>) and forest maturity mapping (Fuhr et al. (2022) <doi:10.1002/rse2.274>).
Solves quadratic programming problems where the Hessian is represented as the product of two matrices. Thanks to Greg Hunt for helping get this version back on CRAN. The methods in this package are described in Ormerod, Wand and Koch (2008), "Penalised spline support vector classifiers: computational issues" <doi:10.1007/s00180-007-0102-8>.
This package implements a local likelihood estimator for the dependence parameter in bivariate conditional copula models. The copula family and local likelihood bandwidth parameters are selected by leave-one-out cross-validation. The models are implemented in TMB, meaning that the local score function is efficiently calculated via automatic differentiation (AD), so that quasi-Newton algorithms may be used for parameter estimation.
Quickly generate lorem ipsum placeholder text. Easy to integrate into R Markdown documents. Includes an RStudio addin to insert lorem ipsum text into the current document.
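For example, assuming ipsum() as the generator function behind the addin:

    library(lorem)
    ipsum(paragraphs = 2)  # two paragraphs of placeholder text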
This package provides methods for estimation and statistical inference on directional and fluctuating selection in age-structured populations.
When the values of the outcome variable Y are either 0 or 1, the function lsm() calculates the estimate of the log likelihood in the saturated model. This model is characterized by Llinas (2006, ISSN:2389-8976) in Section 2.3 through Assumptions 1 and 2. The function LogLik() works (almost perfectly) when the number of independent variables K is high, but for small K it calculates wrong values in some cases. For this reason, when Y is dichotomous and the data are grouped into J populations, it is recommended to use the function lsm(), as it works well for all K.
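A hypothetical call shape on simulated dichotomous data; the formula interface below is assumed, so check the package reference for the exact signature:

    library(lsm)
    set.seed(1)
    d <- data.frame(x1 = rnorm(50), x2 = rbinom(50, 1, 0.4))
    d$y <- rbinom(50, 1, plogis(d$x1))   # dichotomous outcome
    fit <- lsm(y ~ x1 + x2, data = d)    # saturated-model log likelihood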
Create custom labels, badges, certificates and other documents. Automate the production of potentially large numbers of herbarium and collection labels, accreditation badges, attendance and participation certificates, etc., and deliver them automatically. Documents are generated in PDF format, which requires a working installation of LaTeX, such as TinyTeX.
This package provides a collection of parametric and nonparametric methods for the analysis of survival data. Parametric families implemented include Gompertz-Makeham, exponential and generalized Pareto models and extended models. The package includes an implementation of the nonparametric maximum likelihood estimator for arbitrary truncation and censoring pattern based on Turnbull (1976) <doi:10.1111/j.2517-6161.1976.tb01597.x>, along with graphical goodness-of-fit diagnostics. Parametric models for positive random variables and peaks over threshold models based on extreme value theory are described in Rootzén and Zholud (2017) <doi:10.1007/s10687-017-0305-5>; Belzile et al. (2021) <doi:10.1098/rsos.202097> and Belzile et al. (2022) <doi:10.1146/annurev-statistics-040120-025426>.
We present a method based on filtering algorithms to estimate the parameters of linear regression models, i.e., the coefficients and the variance of the error term. The proposed algorithms make use of particle filters with the resampling methods of Ristic, B., Arulampalam, S., and Gordon, N. (2004, ISBN: 158053631X). Parameters of logistic regression models are also estimated using an evolutionary particle filter method.
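As a generic illustration of the idea, here is a plain sequential importance resampling filter for the static parameters of a linear regression, not the package's exact algorithm:

    set.seed(1)
    n <- 200
    x <- rnorm(n)
    y <- 1 + 2 * x + rnorm(n, sd = 0.5)       # linear model with unknown (b0, b1, sigma)

    N <- 5000                                 # number of particles
    theta <- cbind(b0 = rnorm(N, 0, 5),       # particles drawn from a diffuse prior
                   b1 = rnorm(N, 0, 5),
                   s  = runif(N, 0.1, 5))
    w <- rep(1 / N, N)

    for (t in seq_len(n)) {
      mu <- theta[, "b0"] + theta[, "b1"] * x[t]
      w  <- w * dnorm(y[t], mu, theta[, "s"]) # reweight by the likelihood of y[t]
      w  <- w / sum(w)
      if (1 / sum(w^2) < N / 2) {             # resample when the effective size drops
        idx   <- sample.int(N, N, replace = TRUE, prob = w)
        theta <- theta[idx, ] + matrix(rnorm(3 * N, 0, 0.01), N, 3)  # jitter particles
        theta[, "s"] <- pmax(theta[, "s"], 1e-3)
        w <- rep(1 / N, N)
      }
    }
    colSums(theta * w)                        # weighted estimates of b0, b1, sigma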
This package provides a largish collection of example datasets, including several classics. Many of these datasets are well suited for regression, classification, and visualization.
This package provides functions for validating and normalizing bibliographic codes such as ISBN, ISSN, and LCCN. It also includes functions to communicate with the WorldCat API, translate call numbers (Library of Congress and Dewey Decimal) to their subject classifications or subclassifications, and provides various loadable data files such as call number / subject crosswalks and code tables.
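To illustrate the kind of validation involved, here is a hand-rolled ISBN-10 check-digit validator in base R (not the package's own function names):

    is_valid_isbn10 <- function(isbn) {
      chars <- strsplit(gsub("-", "", isbn), "")[[1]]
      if (length(chars) != 10) return(FALSE)
      if ("X" %in% chars[-10]) return(FALSE)  # X is only valid as the check digit
      digits <- ifelse(chars == "X", 10, suppressWarnings(as.numeric(chars)))
      if (anyNA(digits)) return(FALSE)
      sum(digits * 10:1) %% 11 == 0           # weighted checksum must be 0 mod 11
    }
    is_valid_isbn10("0-306-40615-2")  # TRUE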
This package provides an extension to factors, called lfactor, that is similar to a factor but allows users to refer to lfactor levels by either the level or the label.
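A sketch of the described behaviour; the package name and the lfactor() constructor arguments below are assumed from the description:

    library(lfactors)  # package name assumed
    x <- lfactor(c(1, 2, 2, 3), levels = 1:3, labels = c("low", "mid", "high"))
    x == 2       # refer to a level by its numeric level ...
    x == "mid"   # ... or by its label; both select the same elements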
Estimation of life expectancy and Life Years Lost (LYL, or lillies for short) for a given population, for example those with a given disease or condition. In addition, the package can be used to compare estimates from different populations, or to estimate confidence intervals. Technical details of the method are available in Plana-Ripoll et al. (2020) <doi:10.1371/journal.pone.0228073> and Andersen (2017) <doi:10.1002/sim.7357>.