Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
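For example, a minimal Python sketch of a query against this endpoint might look like the following (the base URL is a placeholder for this site's address, and the exact header names and JSON layout of the response are assumptions):

import requests

BASE_URL = "https://example.org"  # placeholder: replace with this site's base URL

def search_packages(query, page=1, limit=20):
    # Query /api/packages with the documented search, page and limit parameters.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": query, "page": page, "limit": limit},
    )
    resp.raise_for_status()
    # Pagination information (e.g. the number of pages) is reported in the headers.
    print(dict(resp.headers))
    return resp.json()

packages = search_packages("gcc@10")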
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a suite of functions for visualising ternary probabilistic forecasts, as discussed in the paper by Jupp (2012) <doi:10.1098/rsta.2011.0350>.
Converting text to numerical features requires specially designed procedures, which are implemented as steps following the recipes package. These steps allow for tokenization, filtering, counting (tf and tfidf) and feature hashing.
Perform two types of analysis: 1) checking the goodness-of-fit of tree models to your single-cell gene expression data; and 2) deciding which tree best fits your data.
Use the <https://toggl.com> time tracker API through R.
Density, distribution function, quantile function and random generation for the Truncated Generalised Gamma Distribution (also in log10(x) and ln(x) space).
You only need to search Google for why pie charts are bad to find thousands of articles full of (valid) reasons why other types of charts should be preferred over this one. Because pie charts see so little use, for the reasons just mentioned, making them (and related charts) in R is not straightforward, and extra functions are needed to simplify things. This R package provides useful functions to make tasty pie charts immediately by exploiting the many cool templates provided.
This package provides a standardized workflow to reconstruct spatial configurations of altitude-bounded biogeographic systems over time. For example, tabs can model how island archipelagos expand or contract with changing sea levels or how alpine biomes shift in response to tree line movements. It provides functionality to account for various geophysical processes such as crustal deformation and other tectonic changes, allowing for a more accurate representation of biogeographic system dynamics. For more information see De Groeve et al. (2025) <doi:10.3897/arphapreprints.e151900>.
Test your data! An extension of the testthat unit testing framework with a family of functions and reporting tools for checking and validating data frames.
The general principle relies on calculating the cumulative signal of nascent RNA sequencing over the gene body of any given gene or transcription unit. tepr can identify transcription attenuation sites by comparing the profile to a null model which assumes uniform read density over the entirety of the transcription unit. It can also identify increased or diminished transcription attenuation by comparing two conditions. Besides rigorous statistical testing and high sensitivity, a major feature of tepr is its ability to provide the elongation pattern of each individual gene, including the position of the main attenuation point when such a phenomenon occurs. Using tepr, users can visualize and refine genome-wide aggregated analyses of elongation patterns to robustly identify effects specific to subsets of genes. These metrics are suitable for internal comparisons (between genes in each condition) and for studying elongation of the same gene in different conditions or comparing it to a perfect theoretical uniform elongation.
This package implements the approach described in Fong and Grimmer (2016) <https://aclweb.org/anthology/P/P16/P16-1151.pdf> for automatically discovering latent treatments from a corpus and estimating the average marginal component effect (AMCE) of each treatment. The data is divided into a training and test set. The supervised Indian Buffet Process (sibp) is used to discover latent treatments in the training set. The fitted model is then applied to the test set to infer the values of the latent treatments in the test set. Finally, Y is regressed on the latent treatments in the test set to estimate the causal effect of each treatment.
This package provides a collection of interactive shiny applications for performing comprehensive analyses in the field of tree breeding and genetics. The package is designed to assist users in visualizing and interpreting experimental data through a user-friendly interface. Each application is launched via a simple function, and users can upload data in Excel format for analysis. For more information, refer to Singh, R.K. and Chaudhary, B.D. (1977, ISBN:9788176633079).
This package provides a robust computational framework for analyzing complex multimodal data. Extends existing state-dependent models to account for diverse data streams, addressing challenges such as varying temporal scales and learner characteristics to improve the robustness and interpretability of findings. For methodological details, see Shaffer, Wang, and Ruis (2025) "Transmodal Analysis" <doi:10.18608/jla.2025.8423>.
Token-Oriented Object Notation (TOON) is a compact, human-readable serialization format designed for passing structured data to Large Language Models with significantly reduced token usage. It's intended for LLM input as a lossless, drop-in representation of JSON data.
Trusted Timestamps (tts) are created by incorporating a hash of a file or dataset into a transaction on a decentralized blockchain (the Stellar network). The package makes use of a free service provided by <https://stellarapi.io>.
The maximum likelihood classifier (MLC) is one of the most common classifiers used for remote sensing imagery. This package uses RcppArmadillo to provide a fast implementation of the MLC to train and predict over tabular data (data.frame). The algorithm is based on the method of Mather (1985) <doi:10.1080/01431168508948456>.
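As a rough illustration of the general technique (not this package's interface), a maximum likelihood classifier fits one multivariate Gaussian per class and assigns each sample to the class with the highest log-likelihood; a minimal NumPy sketch, assuming numeric features X and a vector of class labels y:

import numpy as np

def fit_mlc(X, y):
    # Estimate a mean vector and covariance matrix for each class.
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def predict_mlc(params, X):
    # Score each row under every class-conditional Gaussian and pick the best class.
    classes = list(params)
    scores = np.empty((X.shape[0], len(classes)))
    for j, c in enumerate(classes):
        mu, cov = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        scores[:, j] = -0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff) + logdet)
    return np.array(classes)[scores.argmax(axis=1)]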
This package provides tools for building decision and cost-effectiveness analysis models. It enables users to write these models concisely, simulate outcomes (including probabilistic analyses) efficiently using optimized vectorized processes and parallel computing, and produce results. The package employs a Grammar of Modeling approach, inspired by the Grammar of Graphics, to streamline model construction. For an interactive graphical user interface, see DecisionTwig at <https://www.dashlab.ca/projects/decision_twig/>. Comprehensive tutorials and vignettes are available at <https://hjalal.github.io/twig/>.
Implementation of two transportation problem algorithms: 1. North West Corner Method and 2. Minimum Cost (Least Cost) Method. For more technical details about the algorithms, please refer to the URLs below. <http://www.universalteacherpublications.com/univ/ebooks/or/Ch5/nw.htm>. <http://personal.maths.surrey.ac.uk/st/J.F/chapter7.pdf>.
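For illustration only (again, not this package's interface), a minimal Python sketch of the North West Corner Method, which fills the transportation table from the top-left cell and exhausts either the row's supply or the column's demand at each step:

def north_west_corner(supply, demand):
    # Build an initial basic feasible solution for a balanced transportation problem.
    supply, demand = list(supply), list(demand)
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])   # allocate as much as possible to cell (i, j)
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                # row's supply exhausted: move down
            i += 1
        else:                             # column's demand exhausted: move right
            j += 1
    return allocation

# Example with 3 sources and 4 destinations (total supply = total demand = 75).
print(north_west_corner([20, 30, 25], [10, 25, 15, 25]))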
This package provides functions for assigning taxonomy to NCBI accession numbers and taxon IDs based on NCBI's accession2taxid and taxdump files. This package allows the user to download NCBI data dumps and create a local database for fast and local taxonomic assignment.
This package provides an R-interface to the TMDb API (see TMDb API on <https://developers.themoviedb.org/3/getting-started/introduction>). The Movie Database (TMDb) is a popular user editable database for movies and TV shows (see <https://www.themoviedb.org>).
Routines for nonlinear time series analysis based on Threshold Autoregressive Moving Average (TARMA) models. It provides functions and methods for: TARMA model fitting and forecasting, including robust estimators, see Goracci et al. JBES (2025) <doi:10.1080/07350015.2024.2412011>; tests for threshold effects, see Giannerini et al. JoE (2024) <doi:10.1016/j.jeconom.2023.01.004>, Goracci et al. Statistica Sinica (2023) <doi:10.5705/ss.202021.0120>, Angelini et al. (2024) <doi:10.48550/arXiv.2308.00444>; unit-root tests based on TARMA models, see Chan et al. Statistica Sinica (2024) <doi:10.5705/ss.202022.0125>.
Facilities to work with vector and raster data in an efficient, repeatable and systematic workflow. Missing functionality in existing packages is included here to allow extraction from raster data with simple features and Spatial types, and to make extraction consistent and straightforward. Extract cell numbers from raster data and return the cells as a data frame rather than as lists of matrices or vectors. The functions here allow spatial data to be used without special handling for the format currently in use.
This package provides statistical tests of whether a dataset comes from a symmetric distribution when the center of symmetry is unknown, including a Wilcoxon test and a sign test procedure. In addition, sample size determination for both tests is provided. The Wilcoxon test procedure is described in Vexler et al. (2023) <https://www.sciencedirect.com/science/article/abs/pii/S0167947323000579>, and the sign test is outlined in Gastwirth (1971) <https://www.jstor.org/stable/2284233>.
This package provides the means to convert multiqc_data.json files, produced by the wonderful MultiQC tool, into tidy data frames for downstream analysis in R. This analysis might involve cohort analysis, quality control visualisation, change-point detection, statistical process control, clustering, or any other type of quality analysis.
R spatial objects for Tilegrams. Tilegrams are tiled maps where region size is proportional to a certain characteristic of the dataset.