Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
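For example, the endpoint can be queried from R. This is only a minimal sketch: the base URL below is a placeholder for wherever this instance is hosted, and the exact pagination header names may differ, so inspect headers(resp) in the response.

    library(httr)

    # Placeholder base URL -- substitute the address of this instance.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    content(resp, as = "parsed")   # the matching packages
    headers(resp)                  # pagination details are reported here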
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Generates a game of 2048 that can be played in the console. Supports grids of arbitrary sizes, undoing the last move, and resuming a game that was exited during the current session.
Evaluate inline R code or chunks of R code in template files and replace them with their output, modifying the resulting template.
Combine a list of taxa with a phylogeny to generate a starting tree for use in total evidence dating analyses.
Facilitate the management of data from knowledge resources that are frequently used, alone or together, in research environments. In 'TKCat', knowledge resources are manipulated as modeled database (MDB) objects. These objects provide access to the data tables along with a general description of the resource and a detailed data model documenting the tables, their fields, and their relationships. These MDBs are then gathered in catalogs that can be easily explored and shared. Finally, 'TKCat' provides tools to easily subset, filter, and combine MDBs and create new catalogs suited to specific needs.
This package provides a type system for R. It supports setting variable types in a script or the body of a function, so variables can't be assigned illegal values. Moreover it supports setting argument and return types for functions.
Write modelling results into a database for 'tigreBrowser', a web-based tool for browsing figures and summary data of independent model fits, such as Gaussian process models fitted for each gene or other genomic element. The browser is available at <https://github.com/PROBIC/tigreBrowser>.
Interface to 'TensorFlow Probability', a Python library built on TensorFlow that makes it easy to combine probabilistic models and deep learning on modern hardware ('TPU', 'GPU'). TensorFlow Probability includes a wide selection of probability distributions and bijectors, probabilistic layers, variational inference, Markov chain Monte Carlo, and optimizers such as Nelder-Mead, BFGS, and SGLD.
Different multiple testing procedures for correlation tests are implemented. These procedures were shown to asymptotically control the Family-Wise Error Rate (Roux (2018) <https://tel.archives-ouvertes.fr/tel-01971574v1>) or the False Discovery Rate (Cai & Liu (2016) <doi:10.1080/01621459.2014.999157>). The package gathers four test statistics used in correlation testing, four FWER procedures with either single-step or step-down versions, and four FDR procedures.
Density, distribution function, quantile function, random generation, and maximum likelihood estimation.
Presentation-quality tables are displayed as plots on an R graphics device. Although there are other packages that format tables for display, this package is unique in combining two features: (a) It is aware of the logical structure of the table being presented, and makes use of that for automatic layout and styling of the table. This avoids the need for most manual adjustments to achieve an attractive result. (b) It displays tables using ggplot2 graphics. Therefore a table can be presented anywhere a graph could be, with no more effort. External software such as LaTeX or HTML or their viewers is not required. The package provides a full set of tools to control the style and appearance of tables, including titles, footnotes and reference marks, horizontal and vertical rules, and spacing of rows and columns. Methods are included to display matrices; data frames; tables created by R's ftable(), table(), and xtabs() functions; and tables created by the tables and xtable packages. Methods can be added to display other table-like objects. A vignette is included that illustrates usage and options available in the package.
This package provides a toolkit for calculating topographic distances and identifying and plotting topographic paths. Topographic distances can be calculated along shortest topographic paths (Wang (2009) <doi:10.1111/j.1365-294X.2009.04338.x>), weighted topographic paths (Zhan et al. (1993) <doi:10.1007/3-540-57207-4_29>), and topographic least cost paths (Wang and Summers (2010) <doi:10.1111/j.1365-294X.2009.04465.x>). Functions can map topographic paths on colored or hill shade maps and plot topographic cross sections (elevation profiles) for the paths.
When plotting treated-minus-control differences, after-minus-before changes, or difference-in-differences, the ttrans() function symmetrically transforms the positive and negative tails to aid plotting. The package includes an observational study with three control groups and an unaffected outcome; see Rosenbaum (2022) <doi:10.1080/00031305.2022.2063944>.
Helper functions for the MASCOTNUM / RT-UQ <https://uq.math.cnrs.fr/> algorithm template, for the practice of design of numerical experiments: an algorithm template parser supporting the MASCOTNUM specification <https://github.com/MASCOTNUM/algorithms>; ask-and-tell decoupling injection (inspired by <https://search.r-project.org/CRAN/refmans/sensitivity/html/decoupling.html>) to drive "crimped" algorithms (like uniroot(), optim(), ...) from outside R; and basic template examples: the Brent algorithm for one-dimensional root finding and L-BFGS-B from base optim().
This package provides functions for statistical analysis, prediction and control of time series based mainly on Akaike and Nakagawa (1988) <ISBN 978-90-277-2786-2>.
This package provides tools for exploring the effects of correlated features in predictive models. The predict_triplot() function delivers instance-level explanations that calculate the importance of groups of explanatory variables. The model_triplot() function delivers data-level explanations. The generic plot function concisely visualises the importance of hierarchical groups of predictors. All of the tools are model agnostic and therefore work for any predictive machine learning model. Find more details in Biecek (2018) <arXiv:1806.08915>.
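For illustration, a minimal sketch of a typical call sequence, assuming (as the model-agnostic design suggests) that both functions accept a 'DALEX'-style explainer; the lm() model and the apartments data from 'DALEX' are only stand-ins, so check the package documentation for the exact signatures.

    library(DALEX)
    library(triplot)

    # Any fitted model can be wrapped in an explainer; lm() is just an example.
    model <- lm(m2.price ~ ., data = apartments)
    explainer <- explain(model, data = apartments[, -1], y = apartments$m2.price)

    tri_global <- model_triplot(explainer)                       # data-level explanation
    tri_local  <- predict_triplot(explainer, apartments[1, -1])  # instance-level explanation

    plot(tri_global)
    plot(tri_local)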
Interface to the API for TreeBASE <http://treebase.org> from R. TreeBASE is a repository of user-submitted phylogenetic trees (of species, populations, or genes) and the data used to create them.
This package provides a Tcl/Tk Graphical User Interface (GUI) to display images that can be zoomed and panned using the mouse and keyboard shortcuts. 'tkImgR' reads and writes different image formats (PPM/PGM, PNG, and GIF) using the standard Tcl/Tk distribution (>=8.6), but other formats (JPEG, TIFF, CR2) can be handled using the 'tkImg' package for Tcl/Tk.
Bayesian trophic position models for stable isotope data, using 'Stan' by leveraging 'brms'. Trophic position models are derived using equations from Post (2002) <doi:10.1890/0012-9658(2002)083[0703:USITET]2.0.CO;2>, Vander Zanden and Vadeboncoeur (2002) <doi:10.1890/0012-9658(2002)083[2152:FAIOBA]2.0.CO;2>, and Heuvel et al. (2024) <doi:10.1139/cjfas-2024-0028>.
Function for sparse regression on raw text, regressing a labeling vector onto a feature space consisting of all possible phrases.
This package provides test statistics, p-values, and confidence intervals based on 9 hypothesis tests for dependence.
An implementation of fitting generalized linear models on second-order tensor type data. The functions within this package mainly focus on parameter estimation, including parameter coefficients and standard deviation.
This package provides a model for the growth of self-limiting populations using three-, four-, or five-parameter functions, which have wide applications in a variety of fields. The dependent variable in a dynamical model could be the population size at time x, where x is the independent variable. In the analysis of quantitative polymerase chain reaction (qPCR), the dependent variable would be the fluorescence intensity and the independent variable the cycle number. The package then calculates the TWW cycle threshold.
This package implements the approach described in Fong and Grimmer (2016) <https://aclweb.org/anthology/P/P16/P16-1151.pdf> for automatically discovering latent treatments from a corpus and estimating the average marginal component effect (AMCE) of each treatment. The data is divided into a training and test set. The supervised Indian Buffet Process (sibp) is used to discover latent treatments in the training set. The fitted model is then applied to the test set to infer the values of the latent treatments in the test set. Finally, Y is regressed on the latent treatments in the test set to estimate the causal effect of each treatment.
Detection of outliers in time series following the Chen and Liu (1993) <DOI:10.2307/2290724> procedure. Innovational outliers, additive outliers, level shifts, temporary changes and seasonal level shifts are considered.
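As a quick illustration, and assuming the package's main entry point is tso() (as in current releases; check the documentation), outlier detection on a built-in series might look like this:

    library(tsoutliers)

    # Detect outliers in the built-in Nile annual flow series.
    fit <- tso(y = Nile)
    fit          # prints the detected outliers (type, time, coefficient) and the adjusted fit
    plot(fit)    # original vs. outlier-adjusted series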