Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
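For example, a minimal sketch in R using the httr package; the base URL is a placeholder for this site's own host, and the exact pagination header names may differ:

library(httr)
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)   # the matching packages
headers(resp)   # pagination metadata, e.g. the total number of pages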
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Total variation denoising can be used to approximate a given sequence of noisy observations by a piecewise constant sequence, with adaptively chosen break points. An efficient linear-time algorithm for total variation denoising is provided here, based on Johnson (2013) <doi:10.1080/10618600.2012.681238>.
The tdROC package facilitates the estimation of time-dependent ROC (Receiver Operating Characteristic) curves and the Area Under the time-dependent ROC Curve (AUC) in the context of survival data, accommodating scenarios with right-censored data and the option to account for competing risks. In addition to the ROC/AUC estimation, the package also estimates the time-dependent Brier score and survival difference. Confidence intervals for various estimated quantities can be obtained by bootstrap. The package also offers plotting functions for visualizing time-dependent ROC curves.
Tidy tools for NetCDF data sources. Explore the contents of a NetCDF source (file or URL) presented as variables organized by grid, with a database-like interface. The interactive hyper_filter() function translates filter value or index expressions into array-slicing form. No data is read until explicitly requested, as a data frame or list of arrays via hyper_tibble() or hyper_array().
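A minimal sketch of this workflow, assuming a local NetCDF file ocean.nc with a time dimension (the file and dimension names are illustrative):

library(tidync)
src <- tidync("ocean.nc")                      # explore grids and variables
slice <- hyper_filter(src, time = index < 10)  # lazy array slicing, no read yet
df <- hyper_tibble(slice)                      # data is read only now, as a tibble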
This is an interactive statistical tool that provides multivariate statistical tests that are more powerful than the traditional Hotelling T2 test and the likelihood ratio test (LRT) for the mean vector of normal populations, with and without contamination, and for non-normal populations (Henrique J. P. Alves & Daniel F. Ferreira (2019) <DOI: 10.1080/03610918.2019.1693596>).
Implementation of two transportation problem algorithms: 1. the North West Corner Method and 2. the Minimum Cost (Least Cost) Method. For more technical details about the algorithms, please refer to the URLs below. <http://www.universalteacherpublications.com/univ/ebooks/or/Ch5/nw.htm>. <http://personal.maths.surrey.ac.uk/st/J.F/chapter7.pdf>.
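A minimal sketch of the North West Corner Method in base R (illustrative, not this package's API): starting from the top-left cell, allocate as much as possible, then move right when a supply is exhausted and down when a demand is met.

north_west_corner <- function(supply, demand) {
  alloc <- matrix(0, length(supply), length(demand))
  i <- 1; j <- 1
  while (i <= length(supply) && j <= length(demand)) {
    x <- min(supply[i], demand[j])   # allocate the most this cell allows
    alloc[i, j] <- x
    supply[i] <- supply[i] - x
    demand[j] <- demand[j] - x
    if (supply[i] == 0) i <- i + 1 else j <- j + 1
  }
  alloc
}
north_west_corner(c(20, 30, 25), c(10, 25, 40))  # balanced example: totals 75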
Translate R control flow expressions into TensorFlow graphs.
Using Gaussian graphical models, we propose a novel approach to pathway analysis of gene expression data. Given the structure of a graph (a pathway), we introduce two statistical tests to compare the mean and the concentration matrices between two groups. Specifically, these tests can be performed on the graph and on its connected components (cliques). The package is based on the method described in Massa M.S., Chiogna M., Romualdi C. (2010) <doi:10.1186/1752-0509-4-121>.
This package provides functions to scale, log-transform and fit linear models within a tidyverse-style R code framework. It is intended to smooth over inconsistencies in the output of base R statistical functions, allowing ease of teaching, learning and daily use. Inspired by the tidy principles used in broom (Robinson, 2017) <doi:10.21105/joss.00341>.
This is a tidy implementation of heatmaps. At the moment it is based on the (great) package ComplexHeatmap. The goal of this package is to interface a tidy data frame with this powerful tool. Some of the advantages are: row and/or column colour annotations are easy to integrate by specifying a single parameter (column names); custom grouping of rows is easy to specify by providing a grouped tbl, for example df %>% group_by(...); label sizes are adjusted by the total number of rows and columns; Brewer and Viridis palettes are used by default.
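A minimal sketch, assuming the tidyHeatmap package and a long-format tibble df with columns pathway, gene, sample and expression (all names are illustrative):

library(dplyr)
library(tidyHeatmap)
df %>%
  group_by(pathway) %>%                # custom row grouping
  heatmap(gene, sample, expression)    # rows, columns, values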
For when your colors absolutely should not be excluded from the narrative.
This package performs various statistical transformations: Box-Cox and Log (Box and Cox, 1964) <doi:10.1111/j.2517-6161.1964.tb00553.x>, Glog (Durbin et al., 2002) <doi:10.1093/bioinformatics/18.suppl_1.S105>, Neglog (Whittaker et al., 2005) <doi:10.1111/j.1467-9876.2005.00520.x>, Reciprocal (Tukey, 1957), Log Shift (Feng et al., 2016) <doi:10.1002/sta4.104>, Bickel-Doksum (Bickel and Doksum, 1981) <doi:10.1080/01621459.1981.10477649>, Yeo-Johnson (Yeo and Johnson, 2000) <doi:10.1093/biomet/87.4.954>, Square Root (Medina et al., 2019), Manly (Manly, 1976) <doi:10.2307/2988129>, Modulus (John and Draper, 1980) <doi:10.2307/2986305>, Dual (Yang, 2006) <doi:10.1016/j.econlet.2006.01.011>, and Gpower (Kelmansky et al., 2013) <doi:10.1515/sagmb-2012-0030>. It also offers graphical approaches and assesses the success of the transformation via tests and plots.
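As a concrete illustration of the first of these (base R only, not this package's own API), the Box-Cox transformation is (y^lambda - 1)/lambda for lambda != 0 and log(y) for lambda = 0:

box_cox <- function(y, lambda) {
  if (lambda == 0) log(y) else (y^lambda - 1) / lambda
}
box_cox(c(1, 2, 5, 10), lambda = 0.5)  # transform a small positive sample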
This package provides functions to get personal Google Scholar profile data from the web API and show it in table or figure format.
This package provides a convenient way to log scalars, images, audio, and histograms in the tfevent record file format. Logged data can be visualized on the fly using TensorBoard, a web-based tool that focuses on visualizing the training progress of machine learning models.
Automates documentation of test_that() calls within R test files. The package scans test sources, extracts human-readable test titles (even when composed with functions such as paste() or glue::glue()), and generates reproducible roxygen2-style listings that can be inserted both globally and per section. It ensures idempotent updates and supports customizable numbering templates with hierarchical indices. Designed for developers, QA teams, and package maintainers seeking consistent, self-documenting test inventories.
R implementation of TFactS to predict which transcription factors (TFs) are regulated in a biological condition, based on lists of differentially expressed genes (DEGs) obtained from transcriptome experiments. This package is based on the TFactS concept by Essaghir et al. (2010) <doi:10.1093/nar/gkq149> and expands it. It allows users to perform a TFactS-like enrichment approach. The package can import and use the original catalogue file from TFactS as well as user-defined catalogues of interest that are not supported by TFactS (e.g., for Arabidopsis).
Package designed for working with vectors and lists of vectors, mainly for turning them into other indexed data structures.
Interactive laboratory for time series analysis based on the Box-Jenkins methodology.
For writing tables with custom formats to an Excel file ready to be distributed.
This package provides model specifications and tuning parameters for models in the dann package. Models are based on Hastie (1996) <https://web.stanford.edu/~hastie/Papers/dann_IEEE.pdf>.
Parse XML documents from the Open Access subset of Europe PubMed Central <https://europepmc.org> including section paragraphs, tables, captions and references.
This package provides functions that compute predictions from Generalised Additive Models (GAMs) fitted with mgcv and return them as a tibble. These can be plotted with a generic plot() method that uses ggplot2, or plotted like any other data frame. The main function is predict_gam().
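A minimal sketch, assuming this package is loaded alongside mgcv and that predict_gam() takes the fitted model; the data and variable names are illustrative:

library(mgcv)
m <- gam(y ~ s(x), data = my_data)   # my_data holds numeric columns x and y
preds <- predict_gam(m)              # predictions returned as a tibble
plot(preds, series = "x")            # ggplot2-based plot() method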
Fits Bayesian finite mixtures with an unknown number of components using the telescoping sampler and different component distributions. For more details see Frühwirth-Schnatter et al. (2021) <doi:10.1214/21-BA1294>.
In this package, a latent, quasi-independent truncation time is assumed to be linked with the observed dependent truncation time, the event time, and an unknown transformation parameter via a structural transformation model. The transformation parameter is chosen to minimize the conditional Kendall's tau (Martin and Betensky, 2005) <doi:10.1198/016214504000001538> or the regression coefficient estimates (Jones and Crowley, 1992) <doi:10.2307/2336782>. The marginal distributions of the truncation time and the event time are left completely unspecified. The methodology is applied to survival curve estimation and regression analysis.
An implementation of hypothesis testing in an extended Rasch modeling framework, including sample size planning procedures and power computations. Provides four statistical tests, i.e., the gradient test (GR), likelihood ratio test (LR), Rao score or Lagrange multiplier test (RS), and Wald test, for testing a number of hypotheses referring to the Rasch model (RM), linear logistic test model (LLTM), rating scale model (RSM), and partial credit model (PCM). Three types of functions for power and sample size computations are provided. Firstly, functions to compute the sample size given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha, and the power of the test. Secondly, functions to evaluate the power of the tests given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha of the test, and the sample size. Thirdly, functions to evaluate the so-called post hoc power of the tests, i.e., the power of the tests given the observed deviation of the data from the hypothesis to be tested and a user-specified level alpha of the test. Power and sample size computations are based on a Monte Carlo simulation approach, which is computationally very efficient. The variance of the random error in computing power and sample size arising from the simulation approach is analytically derived using the delta method. Additionally, functions to compute the power of the tests as a function of an effect measure interpreted as explained variance are provided. Draxler, C., & Alexandrowicz, R. W. (2015), <doi:10.1007/s11336-015-9472-y>.
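As a generic illustration of the Monte Carlo approach (a one-sample t test stands in for the package's Rasch-based tests): simulate data under the chosen deviation from the hypothesis, apply the level-alpha test, and take the rejection rate as the power estimate.

set.seed(1)
alpha <- 0.05; n <- 50; effect <- 0.4    # illustrative deviation from H0: mean = 0
p_values <- replicate(1e4, t.test(rnorm(n, mean = effect))$p.value)
power_hat <- mean(p_values < alpha)      # Monte Carlo power estimate
sqrt(power_hat * (1 - power_hat) / 1e4)  # its simulation standard error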