Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
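For example, a minimal sketch of calling this endpoint from Python with the requests library; the base URL and the exact pagination header names are assumptions, so inspect the headers your deployment actually returns:

    import requests

    # Hypothetical base URL; replace with the host actually serving the API.
    BASE_URL = "https://example.org"

    # Equivalent to the form query gcc@10: look for a specific package version.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "gcc@10", "page": 1, "limit": 20},
        timeout=30,
    )
    resp.raise_for_status()

    # Assuming the body is a JSON list of matching packages.
    packages = resp.json()
    print(f"{len(packages)} results on this page")

    # Pagination details live in the response headers; the header names vary,
    # so print them all rather than guessing a specific key.
    print(dict(resp.headers))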
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The function TailClassifier() suggests one of the following types of tail for your discrete data: 1) power-decaying tail; 2) sub-exponential decaying tail; and 3) near-exponential decaying tail. The function also provides an estimate of the parameter of the classified distribution as a reference.
Generate a palette of tints, shades or both from a single colour.
Computation of t-year survival probabilities and t-year risks with right censored survival data. The Kaplan-Meier estimator is used to provide estimates for data without competing risks and the Aalen-Johansen estimator is used when there are competing risks. Confidence intervals and p-values are obtained using either usual Wald-type inference or empirical likelihood inference, as described in Thomas and Grunkemeier (1975) <doi:10.1080/01621459.1975.10480315> and Blanche (2020) <doi:10.1007/s10985-018-09458-6>. Functions for both one-sample and two-sample inference are provided. Unlike Wald-type inference, empirical likelihood inference always leads to consistent conclusions, in terms of statistical significance, when comparing two risks (or survival probabilities) via either a ratio or a difference.
Access Google Trends information. This package provides a tidy wrapper to the gtrendsR package.
Produce an HTML page containing horizontal strips that symbolize events in a person's life. Since this is entirely a visualization, the example at <https://barryzee.github.io/henry-timeline/henry.html> shows the basic use: a timeline of events. The example at <https://barryzee.github.io/vermeer/cssOverlay.html> shows how to correlate two timelines of events. A brief description is available at <https://barryzee.github.io/timeLineGraphics_manuscript/golden_age.html>.
This package provides a robust implementation of the Topolow algorithm. It embeds objects into a low-dimensional Euclidean space from a matrix of pairwise dissimilarities, even when the data do not satisfy metric or Euclidean axioms. The package is particularly well-suited for sparse, incomplete, and censored (thresholded) datasets such as antigenic relationships. The core is a physics-inspired, gradient-free optimization framework that models objects as particles in a physical system, where observed dissimilarities define spring rest lengths and unobserved pairs exert repulsive forces. The package also provides functions specific to antigenic mapping to transform cross-reactivity and binding affinity measurements into accurate spatial representations in a phenotype space. Key features include:
* Robust Embedding from Sparse Data: effectively creates complete and consistent maps (in optimal dimensions) even with high proportions of missing data (e.g., >95%).
* Physics-Inspired Optimization: models objects (e.g., antigens, landmarks) as particles connected by springs (for measured dissimilarities) and subject to repulsive forces (for missing dissimilarities), and simulates the physical system using laws of mechanics, reducing the need for complex gradient computations.
* Automatic Dimensionality Detection: employs a likelihood-based approach to determine the optimal number of dimensions for the embedding/map, avoiding distortions common in methods with fixed low dimensions.
* Noise and Bias Reduction: naturally mitigates experimental noise and bias through its network-based, error-dampening mechanism.
* Antigenic Velocity Calculation (for antigenic data): introduces and quantifies "antigenic velocity", a vector that describes the rate and direction of antigenic drift for each pathogen isolate. This can help identify cluster transitions and potential lineage replacements.
* Broad Applicability: analyzes data from any objects whose pairwise dissimilarities are of interest, ranging from complex biological measurements such as continuous and relational phenotypes, antibody-antigen interactions, and protein folding to abstract concepts such as customer perception of different brands.
Methods are described in the context of bioinformatics applications in Arhami and Rohani (2025a) <doi:10.1093/bioinformatics/btaf372>, and mathematical proofs and Euclidean embedding details are in Arhami and Rohani (2025b) <doi:10.48550/arXiv.2508.01733>.
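As a rough illustration of the spring-and-repulsion idea described above (a minimal sketch, not the package's actual implementation; the function name, constants, and update rule are assumptions made for the example), one force-based update of a coordinate matrix might look like:

    import numpy as np

    def force_step(coords, dissim, k_spring=1.0, k_repel=0.1, step=0.01):
        # coords: (n, d) current positions of the objects.
        # dissim: (n, n) observed dissimilarities; NaN marks unobserved pairs.
        n = coords.shape[0]
        forces = np.zeros_like(coords)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = coords[j] - coords[i]
                dist = np.linalg.norm(diff) + 1e-12
                unit = diff / dist
                if np.isnan(dissim[i, j]):
                    # Unobserved pair: weak repulsion pushing i away from j.
                    forces[i] -= k_repel * unit / dist**2
                else:
                    # Observed pair: spring whose rest length is the observed
                    # dissimilarity, pulling the distance toward that value.
                    forces[i] += k_spring * (dist - dissim[i, j]) * unit
        return coords + step * forces

Repeating such steps until positions stabilize yields an embedding whose pairwise distances approximate the observed dissimilarities; the package's actual optimization, dimensionality selection, and antigenic-velocity computations are considerably more involved.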
This package implements a semiparametric estimator for the odds ratio model with a censored, time-lagged, ordered categorical outcome in a randomized clinical trial that incorporates baseline and time-dependent information. See Tsiatis AA, Davidian M, Holloway ST (2023) <doi:10.1111/biom.13603>.
Table 1 is the classical way to describe the patients in a clinical study. The number of splits in the data in such a table is limited. Table1Heatmap draws a heatmap of all crosstables that can be generated with the data. Users can choose between showing the actual crosstables or the direction of effect of associations, and can highlight associations by number of patients or by p-values.
Trusted Timestamps (tts) are created by incorporating a hash of a file or dataset into a transaction on the decentralized blockchain (Stellar network). The package makes use of a free service provided by <https://stellarapi.io>.
This package provides a tool to help create shiny apps for selecting and annotating elements of images. Users must supply images, questions, and answer choices. The user interface is a dynamic shiny app that displays the images, questions, and answer choices. The data generated can be saved to a file for subsequent analysis. The original purpose was to annotate still images from tennis videos for face recognition and emotion detection.
Density, distribution function, quantile function, random generation, and maximum likelihood estimation.
For when your colors absolutely should not be excluded from the narrative.
Forecasting of long memory time series in the presence of structural breaks using the TSF algorithm of Papailias and Dias (2015) <doi:10.1016/j.ijforecast.2015.01.006>.
The total deviation index (TDI) is an unscaled statistical measure used to evaluate the deviation between paired quantitative measurements when assessing the extent of agreement between different raters. It describes a boundary such that a large specified proportion of the differences in paired measurements lie within it (Lin, 2000) <https://pubmed.ncbi.nlm.nih.gov/10641028/>. This R package implements several methodologies from the literature for TDI estimation and inference in the case of two raters.
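As a simple illustration of the quantity itself (a minimal sketch using a plain empirical quantile, not one of the estimators implemented by the package), the TDI at proportion p can be read as the p-quantile of the absolute paired differences:

    import numpy as np

    def empirical_tdi(x, y, p=0.9):
        # p-quantile of |x - y|: the boundary containing a proportion p
        # of the paired differences.
        return np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p)

    # Hypothetical data: two raters measuring the same five subjects.
    rater_a = [10.1, 12.3, 9.8, 11.0, 13.2]
    rater_b = [10.4, 11.9, 10.1, 11.5, 12.8]
    print(empirical_tdi(rater_a, rater_b, p=0.9))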
This package implements methods and functions to calibrate time-specific niche models (multi-temporal calibration), letting users execute a strict calibration and selection process of niche models based on ellipsoids, as well as functions to project the potential distribution in the present and in global change scenarios. The tenm package has functions to recover information that may be lost or overlooked while applying a data curation protocol. This curation involves preserving occurrences that may appear spatially redundant (occurring in the same pixel) but originate from different time periods. A novel aspect of this package is that it might reconstruct the fundamental niche more accurately than mono-calibrated approaches. The theoretical background of the package can be found in Peterson et al. (2011) <doi:10.5860/CHOICE.49-6266>.
Interface to TensorFlow Probability, a Python library built on TensorFlow that makes it easy to combine probabilistic models and deep learning on modern hardware (TPU, GPU). TensorFlow Probability includes a wide selection of probability distributions and bijectors, probabilistic layers, variational inference, Markov chain Monte Carlo, and optimizers such as Nelder-Mead, BFGS, and SGLD.
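For orientation, the underlying Python library exposes distributions and bijectors like this (a minimal sketch of TensorFlow Probability itself, not of the R wrapper's syntax; it assumes tensorflow and tensorflow_probability are installed):

    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # A standard normal distribution: draw samples and evaluate log-densities.
    dist = tfd.Normal(loc=0.0, scale=1.0)
    samples = dist.sample(1000)
    log_probs = dist.log_prob(samples)

    # Bijectors transform distributions, e.g. exp() of a normal gives a log-normal.
    log_normal = tfd.TransformedDistribution(dist, bijector=tfp.bijectors.Exp())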
Create a time-varying dataset using features, exposure, and look back specifications.
We propose an optimality criterion, the r-score, to determine the required training set; it is derived directly from Pearson's correlation between the genomic estimated breeding values and the phenotypic values of the test set <doi:10.1007/s00122-019-03387-0>. This package provides two main functions to determine a good training set and its size.
Tensor-train is a compact representation for higher-order tensors. Several algorithms for performing tensor-train decomposition are available, such as TT-SVD, TT-WOPT, and TT-Cross. For details of the algorithms, see I. V. Oseledets (2011) <doi:10.1137/090752286>, Yuan et al. (2017) <doi:10.48550/arXiv.1709.02641>, and I. V. Oseledets (2010) <doi:10.1016/j.laa.2009.07.024>.
This package provides a tm Source to create corpora from articles exported from the LexisNexis content provider as HTML files. It is able to read both text content and meta-data information (including source, date, title, author and pages). Note that the file format is highly unstable: there is no warranty that this package will work for your corpus, and you may have to adjust the code to adapt it to your particular format.
The main function of the package aims to update lmer() / glmer() models depending on their warnings, trying to avoid convergence and singularity problems.
Interactive shiny application for working with text mining and text analytics. Various visualizations are provided.
The R implementation of TIGER. TIGER integrates the random forest algorithm into an innovative ensemble learning architecture. Benefiting from this advanced architecture, TIGER is resilient to outliers, free from model tuning, and less likely to be affected by specific hyperparameters. TIGER supports targeted and untargeted metabolomics data and can perform both intra- and inter-batch technical variation removal. TIGER can also be used for cross-kit adjustment to ensure data obtained from different analytical assays can be effectively combined and compared. Reference: Han S. et al. (2022) <doi:10.1093/bib/bbab535>.
An R re-implementation of the treeinterpreter package on PyPI <https://pypi.org/project/treeinterpreter/>. Each prediction can be decomposed as prediction = bias + feature_1_contribution + ... + feature_n_contribution. This decomposition is then used to calculate the Mean Decrease Impurity (MDI) and Mean Decrease Impurity using out-of-bag samples (MDI-oob) feature importance measures, based on the work of Li et al. (2019) <doi:10.48550/arXiv.1906.10845>.
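For orientation, the original Python package produces this decomposition as follows (a minimal sketch against the PyPI treeinterpreter and scikit-learn, assuming both are installed; it says nothing about the R re-implementation's own function names, and exact array shapes depend on the model type):

    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from treeinterpreter import treeinterpreter as ti

    X, y = load_diabetes(return_X_y=True)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Decompose predictions: prediction = bias + sum of per-feature contributions.
    prediction, bias, contributions = ti.predict(rf, X[:5])

    # Check the additivity identity for the first five rows.
    reconstructed = bias.ravel() + contributions.sum(axis=1).ravel()
    assert np.allclose(prediction.ravel(), reconstructed)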