Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
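For example, a minimal sketch of calling this endpoint from R; the example.org base URL below is a placeholder (substitute this site's actual host), and the curl package is used so the pagination headers are reachable:

    library(curl)
    url <- "https://example.org/api/packages?search=hello&page=1&limit=20"
    res <- curl_fetch_memory(url)   # perform the GET request
    parse_headers(res$headers)      # pagination details live in the headers
    rawToChar(res$content)          # the matching packages, as JSON text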
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analysis of dichotomous, ordinal, and continuous response data using latent space item response models (LSIRMs). Provides 1PL and 2PL LSIRMs for binary response data as described in Jeon et al. (2021) <doi:10.1007/s11336-021-09762-5>, extensions for continuous response data, and graded response models (GRM) for Likert-scale ordinal data as described in De Carolis et al. (2025) <doi:10.1080/00273171.2025.2605678>. Supports Bayesian model selection with spike-and-slab priors, adaptive MCMC algorithms, and methods for handling missing data under missing at random (MAR) and missing completely at random (MCAR) assumptions. Provides various diagnostic plots to inspect the latent space and summaries of estimated parameters.
In the fashion of node.js <https://nodejs.org/>, requires a file, sourcing into the current environment only the variables explicitly specified in the module.exports or exports list variable. If the file was already sourced, the result of the earlier sourcing is returned to the caller.
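As a minimal sketch of the idea (the consumer-side entry point lrequire() below is an assumption; check the package documentation for the real name):

    # hello.R -- the module; only names listed in module.exports are exposed.
    greet <- function(name) paste("hello,", name)
    helper <- function() 42                # not exported, stays private
    module.exports <- list(greet = greet)

    # Consumer side, assuming the entry point is lrequire():
    hello <- lrequire(hello)
    hello$greet("world")                   # "hello, world"
    hello <- lrequire(hello)               # already sourced: cached result returned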
Set up, run and explore the outputs of the Length-based Multi-species model (LeMans; Hall et al. 2006 <doi:10.1139/f06-039>), focused on the marine environment.
Fitting multivariate data patterns with local principal curves, including tools for data compression (projection) and measuring goodness-of-fit; with some additional functions for mean shift clustering. See Einbeck, Tutz and Evers (2005) <doi:10.1007/s11222-005-4073-8> and Ameijeiras-Alonso and Einbeck (2023) <doi:10.1007/s11634-023-00575-1>.
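A minimal sketch, assuming the fitting function lpc() and the mean shift routine ms() exported by the LPCM package on CRAN:

    library(LPCM)
    set.seed(1)
    x <- cbind(seq(-1, 1, length.out = 100), rnorm(100, sd = 0.1))
    fit <- lpc(x)   # fit a local principal curve through the data
    plot(fit)       # inspect the fitted curve
    ms(x)           # mean shift clustering of the same data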
Lipid Maps REST service. Researchers can access the Lipid Maps REST service programmatically and conveniently integrate it into their current workflows or packages.
This package provides functions to sample from the double log normal distribution and calculate the density, distribution and quantile functions.
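Usage presumably follows R's d/p/q/r naming convention, as sketched below; the function and parameter names here (rdln and friends, mu, sigma) are assumptions, so check the package index for the real ones:

    x <- rdln(1000, mu = 0, sigma = 1)   # draw a sample
    ddln(1, mu = 0, sigma = 1)           # density at 1
    pdln(1, mu = 0, sigma = 1)           # distribution function at 1
    qdln(0.5, mu = 0, sigma = 1)         # median, via the quantile function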
This package provides a unified interface to large language models across multiple providers. Supports text generation, structured output with optional JSON Schema validation, and embeddings. Includes tidyverse-friendly helpers, chat sessions, consistent error handling, and parallel batch tools.
R6 classes to model traditional life insurance contracts like annuities, whole life insurances or endowments. Such life insurance contracts provide a guaranteed interest and are not directly linked to the performance of a particular investment vehicle, but they typically provide (discretionary) profit participation. This package provides a framework to model such contracts in a very generic (cash-flow-based) way and includes modelling profit participation schemes, dynamic increases or more general contract layers, as well as contract changes (like sum increases or premium waivers). All relevant quantities like premium decomposition, reserves and benefits over the whole contract period are calculated and potentially exported to Excel. Mortality rates are given using the MortalityTables package.
Fits a linear excess relative risk model by maximum likelihood, possibly including several variables and allowing for lagged exposures.
This package provides the tables from the Sean Lahman Baseball Database as a set of R data.frames. It includes data on pitching, hitting, and fielding performance, along with other tables, covering 1871 through 2024 as recorded in the 2025 version of the database. Documentation examples show how many baseball questions can be investigated.
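For instance, one such question answered with base R against the Batting table (column names as in recent releases of the database):

    library(Lahman)
    # All player seasons with 60 or more home runs.
    subset(Batting, HR >= 60, select = c(playerID, yearID, HR))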
The Bayesian estimation of mixture models (and more general hidden Markov models) suffers from the label switching phenomenon, making the MCMC output non-identifiable. This package can be used to deal with this problem using various relabelling algorithms.
Fit and simulate latent position and cluster models for statistical networks. See Krivitsky and Handcock (2008) <doi:10.18637/jss.v024.i05> and Krivitsky, Handcock, Raftery, and Hoff (2009) <doi:10.1016/j.socnet.2009.04.001>.
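A minimal sketch using the package's ergmm() interface and the sampson monastery data shipped with its dependencies:

    library(latentnet)
    data(sampson)                              # loads the samplike network
    fit <- ergmm(samplike ~ euclidean(d = 2))  # two-dimensional latent space
    summary(fit)
    plot(fit)                                  # estimated latent positions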
Highly optimized toolkit for approximately solving L0-regularized learning problems (a.k.a. best subset selection). The algorithms are based on coordinate descent and local combinatorial search. For more details, check the paper by Hazimeh and Mazumder (2020) <doi:10.1287/opre.2019.1919>.
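A minimal sketch, assuming the package's documented L0Learn.fit() interface:

    library(L0Learn)
    set.seed(1)
    X <- matrix(rnorm(100 * 20), 100, 20)
    y <- as.vector(X[, 1:3] %*% c(2, -2, 1)) + rnorm(100)
    fit <- L0Learn.fit(X, y, penalty = "L0", maxSuppSize = 10)
    print(fit)   # the fitted regularization path and support sizes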
LP nonparametric high-dimensional K-sample comparison method that includes (i) confirmatory test, (ii) exploratory analysis, and (iii) options to output a data-driven LP-transformed matrix for classification. The primary reference is Mukhopadhyay, S. and Wang, K. (2020, Biometrika) <arXiv:1810.01724>.
Determine a prototype from a number of runs of Latent Dirichlet Allocation (LDA): the LDA run with the highest mean pairwise similarity to all other runs is selected, with similarity measured by S-CLOP (Similarity of multiple sets by Clustering with Local Pruning). LDA runs are specified by their topic assignments, which lead to estimators for the distribution parameters. Repeated runs yield different results; this is countered by choosing the most representative LDA run as the prototype.
Access to the Greek New Testament (27 books) and the Old Testament (39 books), allowing users to do textual analysis on the data. The New and Old Testaments are provided in their original languages, Greek and Hebrew, respectively. Additionally, the Revised American Standard Bible is also provided for users who'd rather use a word-for-word modern English translation.
Instrumental variables (IVs) are a popular and powerful tool for estimating causal effects in the presence of unobserved confounding. However, classical methods rely on strong assumptions such as the exclusion criterion, which states that instrumental effects must be entirely mediated by treatments. In the so-called "leaky" IV setting, candidate instruments are allowed to have some direct influence on outcomes, rendering the average treatment effect (ATE) unidentifiable. But with limits on the amount of information leakage, we may still recover sharp bounds on the ATE, providing partial identification. This package implements methods for ATE bounding in the leaky IV setting with linear structural equations. For details, see Watson et al. (2024) <doi:10.48550/arXiv.2404.04446>.
Implementation of the algorithm introduced in Shah, R. D. (2016) <https://www.jmlr.org/papers/volume17/13-515/13-515.pdf>. Data with thousands of predictors can be handled. The algorithm performs sequential Lasso fits on design matrices containing increasing sets of candidate interactions. Previous fits are used to greatly speed up subsequent fits, so the algorithm is very efficient.
Split your rmarkdown or quarto files by sections into a tibble: titles, text, chunks. Rebuild the file from the tibble.
This package provides functions for simulating missing morphometric data randomly, with taxonomic bias and with anatomical bias. LOST also includes functions for estimating linear and geometric morphometric data.
Lake temperature records, metadata, and climate drivers for 291 global lakes during the time period 1985-2009. Temperature observations were collected using satellite and in situ methods. Climatic drivers and geomorphometric characteristics were also compiled and are included for each lake. Data are part of the associated publication from the Global Lake Temperature Collaboration project (http://www.laketemperature.org). See citation('laketemps') for dataset attribution.
This package provides a LaTeX letter class for rmarkdown, using the pandoc-letter template adapted for use with markdown.
Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE).
This package provides easy access to sentiment lexicons for those who want to do text analysis of Portuguese texts. As of now, two Portuguese lexicons are available: SentiLex-PT02 and OpLexicon (v2.1 and v3.0).