Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
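For example, querying the endpoint from R (a minimal sketch using the httr package; the base URL below is a placeholder, substitute the host this service actually runs on):

    library(httr)
    # NOTE: hypothetical base URL; replace with the real host of this service.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information is returned here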
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides Apache Spark style window aggregation for R data frames and remote dbplyr tables via mutate() in the dplyr flavour.
Access open data from <https://www.threesixtygiving.org>, a database of charitable grant giving in the UK operated by 360Giving. The package provides functions to search and retrieve data on charitable grant giving, and process that data into tidy formats. It relies on the 360Giving data standard, described at <https://standard.threesixtygiving.org/>.
Plots and analyzes time-intensity curve data, such as data from (contrast-enhanced) ultrasound. Values such as peak intensity, time to peak, area under the curve, wash in rate and wash out rate are calculated.
This package creates a local database of many commonly used taxonomic authorities and provides functions that can quickly query this data.
Tensor-train is a compact representation for higher-order tensors. Several algorithms for performing tensor-train decomposition are available, such as TT-SVD, TT-WOPT, and TT-Cross. For details of the algorithms, see I. V. Oseledets (2011) <doi:10.1137/090752286>, Yuan Longao et al. (2017) <doi:10.48550/arXiv.1709.02641>, and I. V. Oseledets (2010) <doi:10.1016/j.laa.2009.07.024>.
We propose an optimality criterion, the r-score, to determine the required training set; it is derived directly from Pearson's correlation between the genomic estimated breeding values and the phenotypic values of the test set <doi:10.1007/s00122-019-03387-0>. This package provides two main functions to determine a good training set and its size.
This package provides functions for the analysis of time series using copula models. The package is based on methodology described in the following references. McNeil, A.J. (2021) <doi:10.3390/risks9010014>, Bladt, M., & McNeil, A.J. (2021) <doi:10.1016/j.ecosta.2021.07.004>, Bladt, M., & McNeil, A.J. (2022) <doi:10.1515/demo-2022-0105>.
Bayesian trophic position models for stable isotope data, fitted with Stan by leveraging brms. Trophic position models are derived by using equations from Post (2002) <doi:10.1890/0012-9658(2002)083[0703:USITET]2.0.CO;2>, Vander Zanden and Vadeboncoeur (2002) <doi:10.1890/0012-9658(2002)083[2152:FAIOBA]2.0.CO;2>, and Heuvel et al. (2024) <doi:10.1139/cjfas-2024-0028>.
This package provides a diverse collection of time series datasets spanning various fields such as economics, finance, energy, healthcare, and more. Designed to support time series analysis in R by offering datasets from multiple disciplines, making it a valuable resource for researchers and analysts.
This package contains functions to estimate the time-dependent precision-recall curve (PRC) and the corresponding area under the PRC for right-censored survival data. It also computes the time-dependent ROC curve and its corresponding area under the ROC curve (AUC). See Beyene, Chen and Kifle (2024) <doi:10.1002/bimj.202300135>.
Key-value store, implemented as a wrapper around LMDB, the "lightning memory-mapped database" <https://www.symas.com/mdb>. LMDB is a transactional key-value store that uses a memory map for efficient access. This package wraps the entire LMDB interface (except duplicated keys), and provides objects for transactions and cursors.
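A minimal usage sketch, assuming this description corresponds to the thor package with mdb_env() as its entry point (an assumption; check the package documentation):

    library(thor)                  # assumed package name
    env <- mdb_env(tempfile())     # open (create) an LMDB environment
    env$put("greeting", "hello")   # write a key-value pair transactionally
    env$get("greeting")            # => "hello"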
This package provides a tidy workflow for generating, estimating, reporting, and plotting structural equation models using lavaan, OpenMx, or Mplus. Throughout this workflow, elements of syntax, results, and graphs are represented as tidy data, making them easy to customize. Includes functionality to estimate latent class analyses, and to plot dagitty and igraph objects.
The function TailClassifier() suggests one of the following tail types for your discrete data: 1) power decaying tail; 2) sub-exponential decaying tail; or 3) near-exponential decaying tail. The function also provides an estimate of the parameter of the classified distribution as a reference.
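A minimal sketch of calling the classifier on a discrete sample; only the function name TailClassifier() comes from the description above, while the package name and exact signature are assumptions:

    library(TailClassifier)           # assumed package name
    counts <- rpois(500, lambda = 3)  # toy discrete data
    TailClassifier(counts)            # suggests one of the three tail types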
An easy tool for plotting annotated timelines, grouped timelines, and exploratory graphics (boxplot/histogram/density plot/scatter plot/line plot). It can also filter and summarize date data by duration and convert it to calendar units.
It analyzes text to create a count of top n-grams, including tokens (one-word), bigrams (two-word), and trigrams (three-word), while removing all stopwords. It also plots the n-grams and their counts as a bar chart.
This is a simple RStudio addin that finds all TODO, FIXME, CHANGED, etc. comments in your project and shows them as a markers list.
Implement text and sentiment analysis with texter. Generate sentiment scores on text data and also visualize sentiments. texter allows you to quickly generate insights from your data. It includes support for lexicons such as NRC and Bing.
This package provides a toolkit for calculating topographic distances and identifying and plotting topographic paths. Topographic distances can be calculated along shortest topographic paths (Wang (2009) <doi:10.1111/j.1365-294X.2009.04338.x>), weighted topographic paths (Zhan et al. (1993) <doi:10.1007/3-540-57207-4_29>), and topographic least cost paths (Wang and Summers (2010) <doi:10.1111/j.1365-294X.2009.04465.x>). Functions can map topographic paths on colored or hill shade maps and plot topographic cross sections (elevation profiles) for the paths.
How can we measure how the usage or frequency of some feature, such as words, differs across some group or set, such as documents? One option is to use the log odds ratio, but the log odds ratio alone does not account for sampling variability; we haven't counted every feature the same number of times so how do we know which differences are meaningful? Enter the weighted log odds, which tidylo provides an implementation for, using tidy data principles. In particular, here we use the method outlined in Monroe, Colaresi, and Quinn (2008) <doi:10.1093/pan/mpn018> to weight the log odds ratio by a prior. By default, the prior is estimated from the data itself, an empirical Bayes approach, but an uninformative prior is also available.
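A minimal sketch of the workflow, assuming tidylo's bind_log_odds(tbl, set, feature, n) interface and a toy count table:

    library(dplyr)
    library(tidylo)
    # Toy counts: how often each word occurs in each document
    word_counts <- tribble(
      ~document, ~word,    ~n,
      "a",       "apple",  10,
      "a",       "banana",  2,
      "b",       "apple",   1,
      "b",       "banana", 12
    )
    # Weighted log odds of each word within each document,
    # with the empirical Bayes prior estimated from the data itself
    word_counts %>% bind_log_odds(document, word, n)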
Tabu search algorithm for binary configurations. This package implements a basic version of the algorithm described by Fouskakis and Draper (2007) <doi:10.1111/j.1751-5823.2002.tb00174.x>.
In Cox's proportional hazards model, covariates are modeled as linear functions, which may not be flexible enough. This package implements the additive trend filtering Cox proportional hazards model proposed in Jiacheng Wu & Daniela Witten (2019), "Flexible and Interpretable Models for Survival Data", Journal of Computational and Graphical Statistics, <doi:10.1080/10618600.2019.1592758>. The fitted functions are piecewise polynomials with adaptively chosen knots.
This package provides a collection of commonly used tools for animal movement and other tracking data. Measures such as distance, angle, bearing, distance-to, bearing-to, and speed are provided for geographic data and can be used directly or within tidyverse syntax. Distances and bearings are calculated using modern geodesic methods as provided by Charles F. F. Karney (2013) <doi:10.1007/s00190-012-0578-z> via the geodist and geosphere packages.
This package performs Thresholded Ordered Sparse Canonical Correlation Analysis (CCA). For more details see Senar, N. (2024) <doi:10.1093/bioadv/vbae021> and Senar, N. et al. (2025) <doi:10.48550/arXiv.2503.15140>.
The goal of tidyplate is to help researchers convert different types of microplates into tibbles which can be used in data analysis. It accepts xlsx and csv files formatted in a specific way as input. It supports all standard microplate formats, such as 6-well, 12-well, 24-well, 48-well, 96-well, 384-well, and 1536-well plates.
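A minimal usage sketch, assuming tidy_plate() is the package's main entry point (an assumption; the file name is a placeholder):

    library(tidyplate)
    # Convert a 96-well plate stored in an xlsx file into a tidy tibble
    df <- tidy_plate("example_96_well_plate.xlsx")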