Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in response headers.
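For example, a minimal Python sketch of calling this endpoint (the base URL here is a placeholder, and the exact pagination header names depend on the server, so inspect the response headers):

    import requests

    # Placeholder base URL; substitute the address of the instance you are querying.
    BASE_URL = "https://example.org"

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    print(resp.headers)   # pagination information is returned in these headers
    print(resp.json())    # matching packages for this page, assuming a JSON body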
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for extracting tidy data from Bayesian treatment effect models, in particular BART models, though extensions are possible. Functionality includes extracting tidy posterior summaries as in tidybayes <https://github.com/mjskay/tidybayes>, estimating (average) treatment effects, calculating common support, and plotting useful summaries of these.
Generates a game of 2048 that can be played in the console. Supports grids of arbitrary sizes, undoing the last move, and resuming a game that was exited during the current session.
This package implements the tail-rank statistic for selecting biomarkers from a microarray data set, an efficient nonparametric test focused on the distributional tails. See <https://gitlab.com/krcoombes/coombeslab/-/blob/master/doc/papers/tolstoy-new.pdf>.
Computes diversity measures on tripartite graphs. This package first implements a parametrized family of such diversity measures defined on probability distributions. Sometimes called "True Diversity", this family contains well-known measures such as richness, Shannon entropy, the Herfindahl-Hirschman index, and the Berger-Parker index. Second, the package applies these measures to probability distributions resulting from random walks between the levels of tripartite graphs. By defining an initial distribution at a given level of the graph and a path to follow between the three levels, the probability of the walker's position within the final level is computed, providing a particular instance of diversity to measure.
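As a generic illustration of the "True Diversity" family (not this package's own interface), the diversity of order q of a probability distribution p is the Hill number (sum_i p_i^q)^(1/(1-q)); richness, the exponential of Shannon entropy, the inverse Herfindahl-Hirschman index, and the inverse Berger-Parker index arise at q = 0, 1, 2, and infinity. A minimal Python sketch:

    import numpy as np

    def true_diversity(p, q):
        # Hill number (True Diversity) of order q for a probability vector p.
        # Generic illustration; the package provides its own functions for this.
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                      # zero-probability categories contribute nothing
        if np.isclose(q, 1.0):
            # limit q -> 1: exponential of the Shannon entropy
            return np.exp(-np.sum(p * np.log(p)))
        return np.sum(p ** q) ** (1.0 / (1.0 - q))

    p = [0.5, 0.3, 0.2]
    print(true_diversity(p, 0))   # richness: number of categories
    print(true_diversity(p, 1))   # exp(Shannon entropy)
    print(true_diversity(p, 2))   # inverse Herfindahl-Hirschman (inverse Simpson) index
    # as q -> infinity this tends to 1 / max(p), the inverse Berger-Parker index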
Cluster analysis is one of the most fundamental problems in data science. This package provides a variety of algorithms, from clustering to learning on the space of partitions. See Hennig, Meila, and Rocci (2016, ISBN:9781466551886) for a general exposition of cluster analysis.
Tensor Composition Analysis (TCA) allows the deconvolution of two-dimensional data (features by observations) coming from a mixture of heterogeneous sources into a three-dimensional tensor of signals (features by observations by sources). The TCA framework further makes it possible to test features in the data for different statistical relations with an outcome of interest while modeling source-specific effects; in particular, it allows searching for statistical relations between source-specific signals and an outcome. For example, TCA can deconvolve bulk tissue-level DNA methylation data (methylation sites by individuals) into a three-dimensional tensor of cell-type-specific methylation levels for each individual (i.e. methylation sites by individuals by cell types), and it can detect cell-type-specific statistical relations (associations) with phenotypes. For more details see Rahmani et al. (2019) <DOI:10.1038/s41467-019-11052-9>.
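Schematically, and simplifying the notation of Rahmani et al. (2019), the observed bulk signal for feature i in observation j is modeled as a mixture of source-specific signals weighted by the source proportions:

    X_{ij} = \sum_{h=1}^{k} W_{jh} Z_{hij} + \varepsilon_{ij}

where W_{jh} is the proportion of source h in observation j and Z_{hij} is the source-specific signal that TCA estimates and tests against the outcome.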
Generate a palette of tints, shades or both from a single colour.
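As a generic sketch of how such palettes are often built (not this package's functions, and the package may work in a different colour space), tints mix the base colour toward white and shades toward black:

    def mix(rgb, target, t):
        # Linearly interpolate each RGB channel toward a target colour;
        # t = 0 returns the original colour, t = 1 returns the target.
        return tuple(round(c + (g - c) * t) for c, g in zip(rgb, target))

    base = (31, 119, 180)                       # an arbitrary example colour
    tints = [mix(base, (255, 255, 255), t) for t in (0.25, 0.5, 0.75)]
    shades = [mix(base, (0, 0, 0), t) for t in (0.25, 0.5, 0.75)]
    print(tints)
    print(shades)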
This package provides feedback about dplyr and tidyr operations.
User-friendly analysis of hierarchical multinomial processing tree (MPT) models that are often used in cognitive psychology. Implements the latent-trait MPT approach (Klauer, 2010) <DOI:10.1007/s11336-009-9141-0> and the beta-MPT approach (Smith & Batchelder, 2010) <DOI:10.1016/j.jmp.2009.06.007> to model heterogeneity of participants. MPT models are conveniently specified by an .eqn-file as used by other MPT software, and data are provided by a .csv-file or directly in R. Models are fitted either by calling JAGS or by an MPT-tailored Gibbs sampler in C++ (only for nonhierarchical and beta-MPT models). Provides tests of heterogeneity as well as MPT-tailored summaries and plotting functions. Detailed documentation is available in Heck, Arnold, & Arnold (2018) <DOI:10.3758/s13428-017-0869-7>, and a tutorial on MPT modeling can be found in Schmidt, Erdfelder, & Heck (2023) <DOI:10.1037/met0000561>.
This package provides a dataset of predefined color palettes based on the Star Trek science fiction series, associated color palette functions, and additional functions for generating customized palettes that are on theme. The package also offers functions for applying the palettes to plots made using the ggplot2 package.
Converting text to numerical features requires specially created procedures, which are implemented as steps according to the recipes package. These steps allow for tokenization, filtering, counting (term frequency and tf-idf), and feature hashing.
This package provides functions for the computationally efficient simulation of dynamic networks estimated with the statistical framework of temporal exponential random graph models, implemented in the tergm package.
Builds tables with customizable rows. Users can specify the type of data to use for each row, as well as how to handle missing data and the types of comparison tests to run on the table columns.
Goodness-of-fit and forecast evaluation tests for time series models. Includes, among others, the Generalized Method of Moments (GMM) orthogonality test of Hansen (1982), the Nyblom (1989) parameter constancy test, the sign-bias test of Engle and Ng (1993), and a range of tests for value-at-risk and expected shortfall evaluation.
This package provides a text mining toolkit for Chinese, including facilities for Chinese string processing, Chinese NLP support, and encoding detection and conversion. It also provides functions to support the tm package for Chinese text.
Plots ternary diagrams (simplex plots / Gibbs triangles) and Holdridge life zone plots <doi:10.1126/science.105.2727.367> using the standard graphics functions. Allows custom annotation, interpolation, contouring, and scaling of the plotting region. Includes a Shiny user interface for point-and-click ternary plotting. An alternative to ggtern, which uses the ggplot2 family of plotting functions.
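For context, a ternary diagram places each three-part composition (a, b, c) inside an equilateral triangle. A generic Python sketch of the usual barycentric-to-Cartesian mapping (orientation conventions vary, and this is not the package's own interface):

    import math

    def ternary_to_cartesian(a, b, c):
        # Map a composition onto an equilateral triangle with vertices
        # A = (0, 0), B = (1, 0) and C = (0.5, sqrt(3)/2).
        total = a + b + c
        a, b, c = a / total, b / total, c / total
        x = b + 0.5 * c
        y = (math.sqrt(3) / 2) * c
        return x, y

    print(ternary_to_cartesian(1, 0, 0))   # vertex A
    print(ternary_to_cartesian(0, 1, 0))   # vertex B
    print(ternary_to_cartesian(1, 1, 1))   # centroid of the triangle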
This package provides functions for tabulating and summarising categorical variables. Most functions are designed to work with dataframes, and use the tidyverse idiom of taking the dataframe as the first argument so they work within pipelines. Equivalent functions that operate directly on vectors are also provided where it makes sense. This package aims to make exploratory data analysis involving categorical variables quicker, simpler and more robust.
Method to estimate the effect of the trend in predictor variables on the observed trend of the response variable using mixed models with temporal autocorrelation. See Fernández-Martínez et al. (2017 and 2019) <doi:10.1038/s41598-017-08755-8> <doi:10.1038/s41558-018-0367-7>.
Trauma Mortality prediction for ICD-9, ICD-10, and AIS lexicons in long or wide format based on Dr. Alan Cook's tmpm mortality model.
Optimizers for the torch deep learning library. These functions include recent results published in the literature and are not part of the optimizers offered in torch. Prospective users should test these optimizers with their data, since performance depends on the specific problem being solved. The package includes the following optimizers: (a) adabelief by Zhuang et al. (2020) <arXiv:2010.07468>; (b) adabound by Luo et al. (2019) <arXiv:1902.09843>; (c) adahessian by Yao et al. (2021) <arXiv:2006.00719>; (d) adamw by Loshchilov & Hutter (2019) <arXiv:1711.05101>; (e) madgrad by Defazio and Jelassi (2021) <arXiv:2101.11075>; (f) nadam by Dozat (2019) <https://openreview.net/pdf/OM0jvwB8jIp57ZJjtNEZ.pdf>; (g) qhadam by Ma and Yarats (2019) <arXiv:1810.06801>; (h) radam by Liu et al. (2019) <arXiv:1908.03265>; (i) swats by Keskar and Socher (2018) <arXiv:1712.07628>; (j) yogi by Zaheer et al. (2019) <https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization>.
Recursive partytioning of transformation models with corresponding random forest for conditional transformation models as described in Transformation Forests (Hothorn and Zeileis, 2021, <doi:10.1080/10618600.2021.1872581>) and Top-Down Transformation Choice (Hothorn, 2018, <DOI:10.1177/1471082X17748081>).
Collection of shiny widgets to support teal applications. Enables the manipulation of application layout and plot or table settings.
Palettes generated from Tintin covers. There is one palette per cover, with a total of 24 palettes of 5 colours each. Includes functions to interpolate colors in order to create more colors based on the provided palettes. The data is based on Cyr et al. (2004) <doi:10.1503/cmaj.1041405> and Wikipedia <https://en.wikipedia.org/wiki/The_Adventures_of_Tintin>.
This package implements the approach described in Fong and Grimmer (2016) <https://aclweb.org/anthology/P/P16/P16-1151.pdf> for automatically discovering latent treatments from a corpus and estimating the average marginal component effect (AMCE) of each treatment. The data is divided into a training and test set. The supervised Indian Buffet Process (sibp) is used to discover latent treatments in the training set. The fitted model is then applied to the test set to infer the values of the latent treatments in the test set. Finally, Y is regressed on the latent treatments in the test set to estimate the causal effect of each treatment.