Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
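A minimal sketch of calling this endpoint from Guile Scheme with the (web client) module is shown below; the example.org host is a placeholder rather than the site's actual address, and the exact pagination header names depend on the deployment.

(use-modules (ice-9 receive)
             (web client)
             (web response)
             (web uri))

;; Placeholder host; substitute the address of the running instance.
(define url
  "https://example.org/api/packages?search=hello&page=1&limit=20")

(receive (response body)
    (http-get (string->uri url))
  ;; Pagination information is carried in the response headers.
  (display (response-headers response))
  (newline)
  ;; The body contains the matching packages.
  (display body)
  (newline))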
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm; a sketch of such an entry follows.
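For reference, here is a hypothetical entry of the kind such a patch might add, assuming channels.scm holds standard Guix channel records; the name, URL and branch below are placeholders.

(channel
 (name 'my-channel)                              ;placeholder name
 (url "https://example.org/my-channel.git")      ;placeholder repository
 (branch "main"))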
This package provides a Tcl/Tk Graphical User Interface (GUI) to display images that can be zoomed and panned using the mouse and keyboard shortcuts. tkImgR reads and writes different image formats (PPM/PGM, PNG and GIF) using the standard Tcl/Tk distribution (>=8.6), but other formats (JPEG, TIFF, CR2) can be handled using the tkImg package for Tcl/Tk.
This package provides a collection of functions and routines for importing thermal image and video files, plotting them, and converting binary raw data into estimates of temperature. First published 2015-03-26 and written primarily for research purposes in biological applications of thermal imaging. v1 included the base calculations for converting thermal image binary values to temperatures. v2 added equations for heat transfer calculations and an import function for thermal image files (v2.2.3 fixed an error importing thermal images on Windows). v3 added numerous functions for converting, rewriting and exporting thermal images and videos. v3.1 added new functions to convert files. v3.2 fixed the various functions related to finding frame times. v4.0 fixed an error in the atmospheric attenuation constants affecting the raw2temp and temp2raw functions; updating is recommended for long-distance calculations. v4.1.3 changed frameLocates to reflect the change from as.character() to format().
This package implements the Temporal Forest algorithm for feature selection in high-dimensional longitudinal data. The method combines time-aware network construction via weighted gene co-expression network analysis (WGCNA), module-based feature screening, and stability selection using tree-based models. This package provides tools for reproducible longitudinal analysis, closely following the methodology described in Shao, Moore, and Ramirez (2025) <https://github.com/SisiShao/TemporalForest>.
Handling and manipulation of polygons, coordinates, and other geographical objects. The tools include: polygon areas, barycentric and trilinear coordinates (Hormann and Floater, 2006, <doi:10.1145/1183287.1183295>), convex hull for polygons (Graham and Yao, 1983, <doi:10.1016/0196-6774(83)90013-5>), polygon triangulation (Toussaint, 1991, <doi:10.1007/BF01905693>), great circle and geodesic distances, Hausdorff distance, and reduced major axis.
Data filtering module for teal applications. Allows for interactive filtering of data stored in data.frame and MultiAssayExperiment objects. Also displays filtered and unfiltered observation counts.
Convert T Cell Receptor (TCR) gene names between the 10X Genomics, Adaptive Biotechnologies, and ImMunoGeneTics (IMGT) nomenclatures.
Estimates the weights and measure of robustness to treatment effect heterogeneity attached to two-way fixed effects regressions. Clément de Chaisemartin, Xavier D'Haultfœuille (2020) <DOI: 10.1257/aer.20181169>.
This package provides a step-up test for genetic rare variants in a gene or in a pathway. The method determines an optimal grouping of rare variants analytically. The method has been described in Hoffmann TJ, Marini NJ, and Witte JS (2010) <doi:10.1371/journal.pone.0013584>.
Some tools for cleaning up messy Excel files so they are suitable for use in R. People who have worked with Excel for years have built more or less complicated sheets with names, characters and formats that are not homogeneous. To make these files usable in R, this package provides a set of functions that avoid the majority of import problems and keep as much of the data as possible.
This package provides a wrapper for the TexTra API <https://mt-auto-minhon-mlt.ucri.jgn-x.jp/>, a web service for translating texts between different languages. A TexTra API account is required to use the service.
Implementation of Testlet Item Response Theory (tirt): a lightweight yet comprehensive and streamlined framework for psychometric analysis using unidimensional Item Response Theory (IRT; Baker & Kim (2004) <doi:10.1201/9781482276725>) and Testlet Response Theory (TRT; Wainer et al. (2007) <doi:10.1017/CBO9780511618765>). Designed for researchers, this package supports the estimation of item and person parameters for a wide variety of models, including binary (i.e., Rasch, 2-Parameter Logistic, 3-Parameter Logistic) and polytomous (Partial Credit Model, Generalized Partial Credit Model, Graded Response Model) formats. It also supports the estimation of Testlet models (Rasch Testlet, 2-Parameter Logistic Testlet, 3-Parameter Logistic Testlet, Bifactor, Partial Credit Model Testlet, Graded Response), allowing users to account for local item dependence in bundled items. A key feature is specialized support for combining and jointly estimating an item response model and a testlet response model in a single calibration. Beyond standard estimation via Marginal Maximum Likelihood with Expectation-Maximization (EM) or Joint Maximum Likelihood, the package offers robust tools for scale linking and equating (Mean-Mean, Mean-Sigma, Stocking-Lord) to ensure comparability across mixed-format test forms. It also facilitates fixed-parameter calibration, enabling users to estimate person abilities with known item parameters or vice versa, which is essential for pre-equating studies and item bank maintenance. Comprehensive data simulation functions are included to generate synthetic datasets with complex structures, including mixed-model blocks and specific testlet effects, aiding methodological research and study design validation across multiple simulation scenarios.
Calculate Expert Team on Climate Change Detection and Indices (ETCCDI) climate indices from daily or hourly temperature and precipitation data. Provides flexible data handling.
A tri-hierarchical incomplete block design is defined as an arrangement of v treatments, each replicated r times, in three systems of blocks such that each block of the first system contains m_1 blocks of the second system and each block of the second system contains m_2 blocks of the third system. Ignoring the first and second systems of blocks leaves an incomplete block design with b_3 blocks each of size k_3i units; ignoring the first and third systems of blocks leaves an incomplete block design with b_2 blocks each of size k_2i units; and ignoring the second and third systems of blocks leaves an incomplete block design with b_1 blocks each of size k_1 units. A tri-hierarchical incomplete block design can be adopted for experimental circumstances where there are three nested sources of variation. Such designs have potential applications in obtaining mating-environmental designs for breeding trials. To learn more about nested block designs, one can refer to Preece (1967) <doi:10.1093/biomet/54.3-4.479>. This package includes the series1(), series2(), series3() and series4() functions and generates tri-hierarchical designs with six component designs under certain parameter restrictions.
Unleash the power of time-series data visualization with ease using our package. Designed with simplicity in mind, it offers three key features through output built with the shiny package. The first tab shows time-series charts with forecasts, allowing users to visualize trends and changes effortlessly. The second tab displays averages per country, presented in tables with accompanying sparklines, providing a quick and attractive overview of the data. The last tab presents a customizable world map colored based on user-defined variables for any chosen number of countries, offering an advanced visual approach to understanding geographical data distributions. The package operates with just a few simple arguments, enabling users to conduct sophisticated analyses without the need for complex programming skills. Transform your time-series data analysis experience with our user-friendly tool.
Implementation of the unconditional Bernoulli Scan Statistic developed by Kulldorff et al. (2003) <doi:10.1111/1541-0420.00039> for hierarchical tree structures. Tree-based Scan Statistics are an exploratory method to identify event clusters across the space of a hierarchical tree.
This package provides a text-mining toolkit for Chinese, which includes facilities for Chinese string processing, Chinese NLP support, and encoding detection and conversion. Moreover, it provides some functions to support the tm package in Chinese.
This package provides functions for the analysis of time series using copula models. The package is based on methodology described in the following references. McNeil, A.J. (2021) <doi:10.3390/risks9010014>, Bladt, M., & McNeil, A.J. (2021) <doi:10.1016/j.ecosta.2021.07.004>, Bladt, M., & McNeil, A.J. (2022) <doi:10.1515/demo-2022-0105>.
Handling taxonomic lists through objects of class taxlist. This package provides functions to import species lists from Turboveg (<https://www.synbiosys.alterra.nl/turboveg/>) and the possibility to create backups of the resulting R objects. Quick displays are also implemented as summary methods.
The aim of the R package treebalance is to provide functions for the computation of a large variety of (im)balance indices for rooted trees. The package accompanies the book Tree balance indices: a comprehensive survey by M. Fischer, L. Herbst, S. Kersting, L. Kuehn and K. Wicke (2023) <ISBN: 978-3-031-39799-8>, <doi:10.1007/978-3-031-39800-1>, which gives a precise definition for the terms balance index and imbalance index (Chapter 4) and provides an overview of the terminology in this manual (Chapter 2). For further information on (im)balance indices, see also Fischer et al. (2021) <https://treebalance.wordpress.com>. Considering both established and new (im)balance indices, treebalance provides (among others) functions for calculating the following 18 established indices and index families: the average leaf depth, the B1 and B2 index, the Colijn-Plazzotta rank, the normal, corrected, quadratic and equal weights Colless index, the family of Colless-like indices, the family of I-based indices, the Rogers J index, the Furnas rank, the rooted quartet index, the s-shape statistic, the Sackin index, the symmetry nodes index, the total cophenetic index and the variance of leaf depths. Additionally, we include 9 tree shape statistics that satisfy the definition of an (im)balance index but have not been thoroughly analyzed in terms of tree balance in the literature yet. These are: the total internal path length, the total path length, the average vertex depth, the maximum width, the modified maximum difference in widths, the maximum depth, the maximum width over maximum depth, the stairs1 and the stairs2 index. As input, most functions of treebalance require a rooted (phylogenetic) tree in phylo format (as introduced in ape 1.9 in November 2006). phylo is used to store (phylogenetic) trees with no vertices of out-degree one. For further information on the format we kindly refer the reader to E. Paradis (2012) <http://ape-package.ird.fr/misc/FormatTreeR_24Oct2012.pdf>.
Feature selection algorithm that extracts features in highly correlated spaces. The extracted features are meant to be fed into simple explainable models such as linear or logistic regressions. The package is useful in the field of explainable modelling as a way to understand variable behavior.
Visualizes the relationship between allele frequency and effect size in genetic association studies. The input is a data frame containing association results. The output is a plot with the effect size of risk variants on the Y axis and the allele frequency spectrum on the X axis. Corte et al. (2023) <doi:10.1101/2023.04.21.23288923>.
This is an interactive statistical tool that provides multivariate statistical tests that are more powerful than the traditional Hotelling T2 test and the likelihood ratio test (LRT) for the mean vector of normal populations, with and without contamination, and for non-normal populations (Henrique J. P. Alves & Daniel F. Ferreira (2019) <DOI: 10.1080/03610918.2019.1693596>).
The goal of tidyheatmaps is to simplify the generation of publication-ready heatmaps from tidy data. By offering an interface to the powerful pheatmap package, it allows for the effortless creation of intricate heatmaps with minimal code.
The typicality and eccentricity data analysis (TEDA) framework was put forward by Angelov (2013) <DOI:10.14313/JAMRIS_2-2014/16>. It has been further developed into multiple different techniques since, and provides a non-parametric way of determining how similar an observation, from a process that is not purely random, is to other observations generated by the process. This package provides code to use the batch and recursive TEDA methods that have been published.