Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
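For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL below is a placeholder for this site's host):

library(httr)

resp <- GET(
  "https://example.org/api/packages",   # placeholder base URL
  query = list(search = "hello", page = 1, limit = 20)
)

headers(resp)                             # pagination info is in the response headers
results <- content(resp, as = "parsed")  # the matching packages themselves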
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Compute age-adjusted rates by direct and indirect methods, as well as other epidemiological indicators, in a tidy way by wrapping functions from the epitools package.
Package for the analysis of experiments with one or two factors plus additional controls, conducted in a completely randomized design or in a randomized block design.
This package contains some auxiliary functions.
Create browsers for reading full texts from a token list format. Information obtained from text analyses (e.g., topic modeling, word scaling) can be used to annotate the texts.
Density, distribution function, quantile function and random generation for the Truncated Generalised Gamma Distribution (also in log10(x) and ln(x) space).
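The package follows R's standard d/p/q/r naming convention for distributions. As a generic illustration of that convention using base R's (untruncated) gamma distribution, not this package's own functions:

x <- rgamma(5, shape = 2, rate = 1)   # random generation
dgamma(x, shape = 2, rate = 1)        # density
pgamma(x, shape = 2, rate = 1)        # distribution function
qgamma(0.5, shape = 2, rate = 1)      # quantile function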
This package provides an R interface to the TMDb API (see the getting-started guide at <https://developers.themoviedb.org/3/getting-started/introduction>). The Movie Database (TMDb) is a popular, user-editable database for movies and TV shows (see <https://www.themoviedb.org>).
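A hedged sketch, assuming this is the CRAN TMDb package and that it exposes a search function named search_movie() taking a TMDb API key; verify the exact name and signature in the reference manual:

library(TMDb)

api_key <- "YOUR_TMDB_API_KEY"   # obtain a key from themoviedb.org
res <- search_movie(api_key = api_key, query = "Inception")  # assumed function name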
How can we measure how the usage or frequency of some feature, such as words, differs across some group or set, such as documents? One option is to use the log odds ratio, but the log odds ratio alone does not account for sampling variability; we haven't counted every feature the same number of times, so how do we know which differences are meaningful? Enter the weighted log odds, which tidylo implements using tidy data principles. In particular, here we use the method outlined in Monroe, Colaresi, and Quinn (2008) <doi:10.1093/pan/mpn018> to weight the log odds ratio by a prior. By default, the prior is estimated from the data itself, an empirical Bayes approach, but an uninformative prior is also available.
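A minimal sketch of the tidylo workflow; bind_log_odds() is the package's documented entry point and adds a log_odds_weighted column:

library(dplyr)
library(tidylo)

word_counts <- tribble(
  ~document, ~word,    ~n,
  "doc1",    "apple",  10,
  "doc1",    "banana",  2,
  "doc2",    "apple",   3,
  "doc2",    "banana",  8
)

word_counts %>%
  bind_log_odds(set = document, feature = word, n = n) %>%
  arrange(desc(log_odds_weighted))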
Schedule R scripts/processes with the Windows Task Scheduler. This allows R users to automate R processes to run at specific times, directly from R.
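A minimal sketch using taskscheduler_create(), the package's documented entry point (Windows only; the script path is a placeholder):

library(taskscheduleR)

taskscheduler_create(
  taskname  = "daily_report",        # name shown in the Windows Task Scheduler
  rscript   = "C:/scripts/report.R", # placeholder path to your R script
  schedule  = "DAILY",
  starttime = "09:00"
)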
Computes the t* statistic corresponding to the tau* population coefficient introduced by Bergsma and Dassios (2014) <DOI:10.3150/13-BEJ514>, and does so in O(n^2) time following the algorithm of Heller and Heller (2016) <DOI:10.48550/arXiv.1605.08732>, which builds on the work of Weihs, Drton, and Leung (2016) <DOI:10.1007/s00180-015-0639-x>. Also allows for independence testing using the asymptotic distribution of t*, as described by Nandy, Weihs, and Drton (2016) <DOI:10.1214/16-EJS1166>.
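A minimal sketch with the package's documented functions tStar() and tauStarTest():

library(TauStar)

set.seed(1)
x <- rnorm(100)
y <- x^2 + rnorm(100)   # dependent on x, though nearly uncorrelated

tStar(x, y)             # the t* statistic, computed in O(n^2) time
tauStarTest(x, y)       # independence test via the asymptotic distribution of t*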
Compose data for and extract, manipulate, and visualize posterior draws from Bayesian models ('JAGS', 'Stan', 'rstanarm', 'brms', 'MCMCglmm', 'coda', ...) in a tidy data format. Functions are provided to help extract tidy data frames of draws from Bayesian models and that generate point summaries and intervals in a tidy format. In addition, ggplot2 geoms and stats are provided for common visualization primitives like points with multiple uncertainty intervals, eye plots (intervals plus densities), and fit curves with multiple, arbitrary uncertainty bands.
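A minimal sketch of the tidy summary and plotting primitives; median_qi() and stat_halfeye() are documented exports (the latter via ggdist). The draws here are simulated rather than taken from a fitted model:

library(dplyr)
library(ggplot2)
library(tidybayes)

draws <- tibble(
  parameter = rep(c("alpha", "beta"), each = 1000),
  value     = c(rnorm(1000, 1, 0.5), rnorm(1000, -0.5, 0.2))
)

draws %>%
  group_by(parameter) %>%
  median_qi(value, .width = c(0.66, 0.95))  # point summaries + intervals

ggplot(draws, aes(x = value, y = parameter)) +
  stat_halfeye()                            # eye plot: density plus intervals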
This package handles higher-order tensor data. See Kolda and Bader (2009) <doi:10.1137/07070111X> for details on tensors. While existing tensor packages extend the base array class into new data classes, this package is an alternative that handles tensors directly as the base array class. Some functionality related to missing values is also supported.
This package provides a step-up test for genetic rare variants in a gene or in a pathway. The method determines an optimal grouping of rare variants analytically. The method has been described in Hoffmann TJ, Marini NJ, and Witte JS (2010) <doi:10.1371/journal.pone.0013584>.
Defines the classes used to identify outliers (threshing) and compute the number of significant principal components and the number of clusters (reaping) in a joint application of PCA and hierarchical clustering. See Wang et al. (2018) <doi:10.1186/s12859-017-1998-9>.
Set of tools to estimate the probability in the upper tail of the aggregate loss distribution using different methods: Panjer recursion, Monte Carlo simulations, Markov bound, Cantelli bound, Moment bound, and Chernoff bound.
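As a generic illustration of the Monte Carlo method from the list above (not the package's own API), estimating P(S > s) for a compound Poisson aggregate loss S:

set.seed(42)
n_sims      <- 1e5
s_threshold <- 50

agg_loss <- replicate(n_sims, {
  n_claims <- rpois(1, lambda = 5)                 # claim frequency
  sum(rlnorm(n_claims, meanlog = 1, sdlog = 1))    # claim severities
})

mean(agg_loss > s_threshold)   # Monte Carlo estimate of the tail probability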
This package provides tools for computing various vector summaries of persistence diagrams studied in Topological Data Analysis. For improved computational efficiency, all code for the vector summaries is written in C++ using the Rcpp and RcppArmadillo packages.
Table 1 is the classical way to describe the patients in a clinical study. The number of splits of the data in such a table is limited. Table1Heatmap draws a heatmap of all crosstables that can be generated from the data. Users can choose between showing the actual crosstables or the direction of effect of associations, and can highlight associations by number of patients or by p-value. Version 1.2 fixes a 'no visible global function definition' check note.
This package provides functions to design phase 1 trials using an isotonic regression based design that incorporates time-to-event information. Simulation and design functions are available; they incorporate information about follow-up and DLTs, and apply isotonic regression to produce estimates of DLT probability.
The Time-Delay Correlation algorithm (TDCor) reconstructs the topology of a gene regulatory network (GRN) from time-series transcriptomic data. The algorithm is described in detail in Lavenus et al., Plant Cell, 2015. It was initially developed to infer the topology of the GRN controlling lateral root formation in Arabidopsis thaliana. The time-series transcriptomic dataset used in that study is included in the package to illustrate how to use it.
This package performs Three-Mode Principal Components Analysis by fitting the Tucker family of models.
Automates documentation of test_that() calls within R test files. The package scans test sources, extracts human-readable test titles (even when they are composed with functions such as paste() or glue::glue()), and generates reproducible roxygen2-style listings that can be inserted both globally and per section. It ensures idempotent updates and supports customizable numbering templates with hierarchical indices. Designed for developers, QA teams, and package maintainers seeking consistent, self-documenting test inventories.
This package creates a local Lightning Memory-Mapped Database ('LMDB') of many commonly used taxonomic authorities and provides functions that can quickly query this data. Supported taxonomic authorities include the Integrated Taxonomic Information System ('ITIS'), National Center for Biotechnology Information ('NCBI'), Global Biodiversity Information Facility ('GBIF'), Catalogue of Life ('COL'), and Open Tree Taxonomy ('OTT'). Name and identifier resolution using LMDB can be hundreds of times faster than either relational databases or internet-based queries. Precise data provenance information for data derived from naming providers is also included.
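A hedged sketch, assuming this is the taxadb package, whose documented workflow builds the local database with td_create() and queries it with filter_name() and get_ids():

library(taxadb)

td_create("itis")   # one-time download of the ITIS authority into the local LMDB

filter_name("Homo sapiens", provider = "itis")  # full records matching a name
get_ids("Homo sapiens", provider = "itis")      # just the resolved identifier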
This package creates interpretable decision tree visualizations with the data represented as a heatmap at the tree's leaf nodes. treeheatr utilizes the customizable ggparty package for drawing decision trees.
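A minimal sketch with heat_tree(), the package's main documented function; the built-in iris data stands in for real data:

library(treeheatr)

heat_tree(iris, target_lab = "Species")   # decision tree with leaf-node heatmap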
The companion package that provides all the datasets used in the book "Data Integration, Manipulation and Visualization of Phylogenetic Trees" by Guangchuang Yu (2022, ISBN:9781032233574).
This package provides tools for reading, parsing, indexing, and exporting LAS (Log ASCII Standard) well log files into tidy, analysis-ready tabular formats. The package separates LAS header information and log data into structured components, builds a searchable index across collections of LAS files, and enables reproducible subsetting of wells based on metadata or curve availability. Output tables can be written to CSV or Parquet formats to support large-scale statistical, machine learning, and earth science workflows. The tidy data structure follows Wickham (2014) <doi:10.18637/jss.v059.i10>. The LAS file structure follows the Canadian Well Logging Society LAS standard <https://www.cwls.org/wp-content/uploads/2017/02/Las2_Update_Jan2017.pdf>.
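As a generic sketch of the export step described above (not this package's own API), writing a tidy log table to CSV and Parquet via the arrow package:

library(arrow)

logs <- data.frame(depth_m = c(1000.0, 1000.5), gamma_api = c(75.2, 80.1))
write.csv(logs, "logs.csv", row.names = FALSE)   # plain CSV output
write_parquet(logs, "logs.parquet")              # columnar Parquet output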