Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in response headers.
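For example, a minimal Python sketch of calling this endpoint (the base URL is a placeholder for this site's address, and the pagination header names and JSON response format are assumptions, since they are not documented above):

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's address

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
resp.raise_for_status()

# Pagination information arrives in the response headers; the exact header
# names are not documented here, so print them all.
for name, value in resp.headers.items():
    print(f"{name}: {value}")

# Assuming the body is JSON; adjust if the endpoint returns another format.
print(resp.json())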
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Interfaces with the Hugging Face tokenizers library to provide implementations of today's most used tokenizers such as the Byte-Pair Encoding algorithm <https://huggingface.co/docs/tokenizers/index>. It's extremely fast for both training new vocabularies and tokenizing texts.
The main objective of cooperative Transferable-Utility games (TU-games) is to allocate a good among the agents involved. The package implements major solution concepts including the Shapley value, Banzhaf value, and egalitarian rules, alongside their extensions for structured games: the Owen value and Banzhaf-Owen value for games with a priori unions, and the Myerson value for communication games on networks. To address the inherent exponential computational complexity of exact evaluation, the package offers both exact algorithms and linear approximation methods based on sampling, enabling the analysis of large-scale games. Additionally, it supports core set-based solutions, allowing computation of the vertices and the centroid of the core.
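As an illustration of the sampling idea behind such approximations, here is a generic Python sketch (not this package's interface) that estimates the Shapley value of a player by averaging marginal contributions over random orderings:

import random

def shapley_sample(players, value, i, n_samples=10_000):
    """Monte Carlo estimate of the Shapley value of player i.
    `value` maps a frozenset of players to the worth of that coalition."""
    others = [p for p in players if p != i]
    total = 0.0
    for _ in range(n_samples):
        random.shuffle(others)
        k = random.randint(0, len(others))   # number of predecessors of i in a random order
        coalition = frozenset(others[:k])
        total += value(coalition | {i}) - value(coalition)
    return total / n_samples

# Toy 3-player game: a coalition is worth the square of its size.
v = lambda s: len(s) ** 2
print(shapley_sample([1, 2, 3], v, i=1))     # exact Shapley value is 3 here, by symmetry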
Component analysis for three-way data arrays by means of Candecomp/Parafac, Tucker3, Tucker2 and Tucker1 models.
Fit Thurstonian Item Response Theory (IRT) models in R. This package supports fitting Thurstonian IRT models and their extensions using Stan, lavaan, or Mplus for the model estimation. Functionality for extracting results, making predictions, and simulating data is provided as well. References: Brown & Maydeu-Olivares (2011) <doi:10.1177/0013164410375112>; Bürkner et al. (2019) <doi:10.1177/0013164419832063>.
Two stage curvature identification with machine learning for causal inference in settings where instrumental variable regression is not suitable because of potentially invalid instrumental variables. Based on Guo and Buehlmann (2022) "Two Stage Curvature Identification with Machine Learning: Causal Inference with Possibly Invalid Instrumental Variables" <doi:10.48550/arXiv.2203.12808>. The vignette is available in Carl, Emmenegger, Bühlmann and Guo (2025) "TSCI: Two Stage Curvature Identification for Causal Inference with Invalid Instruments in R" <doi:10.18637/jss.v114.i07>.
This package implements methods for selecting the number of factors in Poisson factor models, with a primary focus on Thinning Cross-Validation (TCV). The TCV method is based on the data thinning technique, which probabilistically partitions each count observation into training and test sets while preserving the underlying factor structure. The Poisson factor model is then fit on the training set, and model selection is performed by comparing predictive performance on the test set. This toolkit is designed for researchers working with high-dimensional count data in fields such as genomics, text mining, and social sciences. The data thinning methodology is detailed in Dharamshi et al. (2025) <doi:10.1080/01621459.2024.2353948> and Wang et al. (2025) <doi:10.1080/01621459.2025.2546577>.
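As a rough illustration of the thinning step, here is a generic Python sketch (not this package's interface): each Poisson count X is split into a Binomial(X, eps) training part and the remainder, which are independent Poisson counts with means eps*lambda and (1-eps)*lambda:

import numpy as np

rng = np.random.default_rng(0)

def thin_counts(X, eps=0.5):
    """Split a count matrix X into independent training and test matrices."""
    X_train = rng.binomial(X, eps)
    return X_train, X - X_train

# Toy example: a 100 x 20 Poisson count matrix with a rank-2 mean structure.
U = rng.gamma(2.0, 1.0, size=(100, 2))
V = rng.gamma(2.0, 1.0, size=(2, 20))
X = rng.poisson(U @ V)

X_train, X_test = thin_counts(X, eps=0.5)
# Fit the factor model on X_train, then compare predictive performance on
# X_test across candidate numbers of factors to choose the rank.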
Tide analysis and prediction of predominantly semi-diurnal tides with two high waters and two low waters during one lunar day (~24.842 hours, ~1.035 days). The analysis should preferably cover an observation period of at least 19 years. For shorter periods, the nodal cycle, for example, cannot be taken into account, which particularly affects the height calculation. The main objective of this package is to produce tide tables.
Better looking call stacks after an error.
Differential analysis of tumor tissue immune cell type abundance based on RNA-seq gene-level expression from The Cancer Genome Atlas (TCGA; <https://pancanatlas.xenahubs.net>) database.
Generate LaTeX tables directly from R. It builds LaTeX tables in blocks in the spirit of ggplot2 using the + and / operators for concatenation in the vertical and horizontal dimensions, respectively. It exports tables in the LaTeX tabular environment using .tex code. It can compile .tex code to PDF automatically.
This package implements sentiment analysis using Hugging Face <https://huggingface.co> transformer zero-shot classification model pipelines for text and image data. The default text pipeline is Cross-Encoder's DistilRoBERTa <https://huggingface.co/cross-encoder/nli-distilroberta-base> and the default image/video pipeline is OpenAI's CLIP <https://huggingface.co/openai/clip-vit-base-patch32>. All other zero-shot classification model pipelines can be implemented using their model name from <https://huggingface.co/models?pipeline_tag=zero-shot-classification>.
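For reference, the underlying Hugging Face pipeline that such defaults correspond to looks roughly like this in Python (a sketch of the upstream transformers library, not this package's R interface; the candidate labels are an assumed example):

from transformers import pipeline

# Zero-shot classification with the default text model mentioned above.
classifier = pipeline(
    "zero-shot-classification",
    model="cross-encoder/nli-distilroberta-base",
)

result = classifier(
    "The service was slow but the food was wonderful.",
    candidate_labels=["positive", "negative", "neutral"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score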
Reconstructs animal tracks from magnetometer, accelerometer, depth and optional speed data. Designed primarily using data from Wildlife Computers Daily Diary tags deployed on northern fur seals.
Matching and merging of taxonomic lists, casting and melting of scientific names, managing taxonomic lists from the Global Biodiversity Information Facility (GBIF) <https://www.gbif.org/> or the Integrated Taxonomic Information System (ITIS) <https://itis.gov/>, harvesting names from Wikipedia, and fuzzy matching.
This package implements models of leaf temperature using energy balance. It uses units to ensure that parameters are properly specified and transformed before calculations. It allows separate lower and upper surface conductances to heat and water vapour, so sensible and latent heat loss are calculated for each surface separately as in Foster and Smith (1986) <doi:10.1111/j.1365-3040.1986.tb02108.x>. It's straightforward to model leaf temperature over environmental gradients such as light, air temperature, humidity, and wind. It can also model leaf temperature over trait gradients such as leaf size or stomatal conductance. Other references are Monteith and Unsworth (2013, ISBN:9780123869104), Nobel (2009, ISBN:9780123741431), and Okajima et al. (2012) <doi:10.1007/s11284-011-0905-5>.
Various methods for targeted and semiparametric inference including augmented inverse probability weighted (AIPW) estimators for missing data and causal inference (Bang and Robins (2005) <doi:10.1111/j.1541-0420.2005.00377.x>), variable importance and conditional average treatment effects (CATE) (van der Laan (2006) <doi:10.2202/1557-4679.1008>), estimators for risk differences and relative risks (Richardson et al. (2017) <doi:10.1080/01621459.2016.1192546>), assumption lean inference for generalized linear model parameters (Vansteelandt et al. (2022) <doi:10.1111/rssb.12504>).
Time series toolkit with identical behavior for all time series classes: ts, xts, data.frame, data.table, tibble, zoo, timeSeries, tsibble, tis, or irts. Also converts reliably between these classes.
Characterisation of the extremal dependence structure of time series, avoiding pre-processing and filtering as done typically with peaks-over-threshold methods. It uses the conditional approach of Heffernan and Tawn (2004) <DOI:10.1111/j.1467-9868.2004.02050.x> which is very flexible in terms of extremal and asymptotic dependence structures, and Bayesian methods improve efficiency and allow for deriving measures of uncertainty. For example, the extremal index, related to the size of clusters in time, can be estimated and samples from its posterior distribution obtained.
Satisfaction analysis using the tetraclasse model of Sylvie Llosa; see Llosa (1997) <http://www.jstor.org/stable/40592578>.
It includes functions for tropical addition and tropical multiplication of vectors and matrices. In tropical algebra, the tropical sum of two numbers is their minimum and the tropical product of two numbers is their ordinary sum. For more information see also I. Simon (1988), "Recognizable sets with multiplicities in the tropical semiring", Lecture Notes in Computer Science, Volume 324, pages 107-120 <doi:10.1007/BFb0017135>.
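For instance, in this min-plus convention the tropical sum of 3 and 5 is 3 and their tropical product is 8; a generic Python sketch of the operations (not this package's interface):

def trop_add(a, b):
    return min(a, b)      # tropical sum = minimum

def trop_mul(a, b):
    return a + b          # tropical product = ordinary sum

print(trop_add(3, 5))     # 3
print(trop_mul(3, 5))     # 8

# Tropical matrix product: ordinary + becomes min, ordinary * becomes +.
def trop_matmul(A, B):
    return [[min(trop_mul(A[i][t], B[t][j]) for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(trop_matmul([[0, 2], [1, 0]], [[0, 3], [4, 0]]))  # [[0, 2], [1, 0]]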
This package provides a standardized workflow to reconstruct spatial configurations of altitude-bounded biogeographic systems over time. For example, tabs can model how island archipelagos expand or contract with changing sea levels or how alpine biomes shift in response to tree line movements. It provides functionality to account for various geophysical processes such as crustal deformation and other tectonic changes, allowing for a more accurate representation of biogeographic system dynamics. For more information see De Groeve et al. (2025) <doi:10.3897/arphapreprints.e151900>.
Fit a trio model via penalized maximum likelihood. The model is fit for a path of values of the penalty parameter. This package is based on Noah Simon et al. (2011) <doi:10.1080/10618600.2012.681250>.
This package provides users a quick exploratory dive into common visualizations without writing a single line of code, given the user's data follows the Analysis Data Model (ADaM) standards put forth by the Clinical Data Interchange Standards Consortium (CDISC) <https://www.cdisc.org>. Prominent modules/features of the application are the Table Generator, Population Explorer, and the Individual Explorer. The Table Generator allows users to drag and drop variables and desired statistics (frequencies, means, ANOVA, t-test, and other summary statistics) into bins that automagically create stunning tables with validated information. The Population Explorer offers various plots to visualize general trends in the population from various vantage points. Plot modules currently include scatter plot, spaghetti plot, box plot, histogram, means plot, and bar plot. Each plot type allows the user to plot uploaded variables against one another, and dissect the population by filtering out certain subjects. Lastly, the Individual Explorer establishes a cohesive patient narrative, allowing the user to interact with patient metrics (params) by visit or plotting important patient events on a timeline. All modules allow for concise filtering and downloading of bulk outputs in HTML or PDF format to save for later.
Fast, reproducible detection and quantitative analysis of tertiary lymphoid structures (TLS) in multiplexed tissue imaging. Implements the Independent Component Analysis Trace (ICAT) index, local Ripley's K scanning, automated K Nearest Neighbor (KNN)-based TLS detection, and T-cell cluster identification as described in Amiryousefi et al. (2025) <doi:10.1101/2025.09.21.677465>.
Changepoint detection algorithms for R are widespread but have different interfaces and reporting conventions. This makes the comparative analysis of results difficult. We solve this problem by providing a tidy, unified interface for several different changepoint detection algorithms. We also provide consistent numerical and graphical reporting leveraging the broom and ggplot2 packages.