Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
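For example, the endpoint can be queried from R with the httr package; the path and parameters come from the description above, while the host name below is only a placeholder for this site's address:
library(httr)
# Search for "hello", requesting the first page of 20 results.
resp <- GET("https://toys.example.org/api/packages",   # placeholder host
            query = list(search = "hello", page = 1, limit = 20))
headers(resp)                  # pagination information is in the headers
content(resp, as = "parsed")   # the matching packages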
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The r-ggformula package introduces a family of graphics functions, gf_point(), gf_density(), and so on, that bring the formula interface to ggplot(). This captures and extends the excellent simplicity of the lattice-graphics formula interface, while providing the intuitive capabilities of r-ggplot2.
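A minimal sketch of the formula interface, using the built-in mtcars data set:
library(ggformula)
# response ~ predictor, analogous to the lattice formula interface
gf_point(mpg ~ hp, data = mtcars)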
This package provides R bindings for Google's s2 library for geometric calculations on the sphere. High-performance constructors and exporters provide high compatibility with existing spatial packages, transformers construct new geometries from existing geometries, predicates provide a means to select geometries based on spatial relationships, and accessors extract information about geometries.
Ggplot2 is an implementation of the grammar of graphics in R. It combines the advantages of both base and lattice graphics: conditioning and shared axes are handled automatically, and you can still build up a plot step by step from multiple data sources. It also implements a sophisticated multidimensional conditioning system and a consistent interface to map data to aesthetic attributes.
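For example, a plot can be built up layer by layer from a data frame:
library(ggplot2)
# Map data columns to aesthetics, then add layers step by step.
ggplot(mtcars, aes(x = hp, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm")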
This package implements the RUV (Remove Unwanted Variation) algorithms. These algorithms attempt to adjust for systematic errors of unknown origin in high-dimensional data. The algorithms were originally developed for use with genomic data, especially microarray data, but may be useful with other types of high-dimensional data as well. The algorithms require the user to specify a set of negative control variables, as described in the references. The algorithms included in this package are RUV-2, RUV-4, RUV-inv, RUV-rinv, RUV-I, and RUV-III, along with various supporting algorithms.
This package provides an easy-to-use library to set up, apply, and make inference with discrete-time and discrete-space hidden Markov models.
This package provides a set of predicates and assertions for checking the properties of numbers. This is mainly for use by other package developers who want to include run-time testing features in their own packages.
This package provides primitives for visualizing distributions using ggplot2 that are particularly tuned for visualizing uncertainty in either a frequentist or Bayesian mode. Both analytical distributions (such as frequentist confidence distributions or Bayesian priors) and distributions represented as samples (such as bootstrap distributions or Bayesian posterior samples) are easily visualized.
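This description matches the ggdist package; assuming so, a minimal sketch of one such visualization, using simulated draws rather than a real posterior:
library(ggplot2)
library(ggdist)
# A toy sample-based distribution shown as a half-eye (density + interval) plot.
draws <- data.frame(value = rnorm(1000, mean = 1))
ggplot(draws, aes(x = value)) +
  stat_halfeye()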
This is a package for text mining for word processing and sentiment analysis using dplyr, ggplot2, and other Tidy tools.
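This reads like the tidytext package; assuming so, a small sketch of tokenizing text and counting words:
library(dplyr)
library(tidytext)
docs <- data.frame(line = 1:2,
                   text = c("tidy text mining", "tidy tools for text"))
docs %>%
  unnest_tokens(word, text) %>%   # one row per word
  count(word, sort = TRUE)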
This package lets you manage configuration values across multiple environments (e.g. development, test, production). It reads values using a function that determines the current environment and returns the appropriate value.
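This matches the config package; a minimal sketch, assuming a config.yml file with one section per environment (the field names are made up):
# config.yml (assumed contents):
#   default:
#     db_host: "localhost"
#   production:
#     db_host: "db.internal"
Sys.setenv(R_CONFIG_ACTIVE = "production")  # select the environment
config::get("db_host")                      # returns "db.internal"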
This package provides estimation and inference methods for models of conditional quantiles: linear and nonlinear parametric and non-parametric models for conditional quantiles of a univariate response, and several methods for handling censored survival data. Portfolio selection methods based on expected shortfall risk are also included.
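This description matches the quantreg package; assuming so, a median regression (tau = 0.5) looks like:
library(quantreg)
# Conditional median of mpg given hp.
fit <- rq(mpg ~ hp, tau = 0.5, data = mtcars)
summary(fit)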
This package contains utility functions for the spatstat package which may also be useful for other purposes.
This package lets you use syntax inspired by the package glue to extract matched substrings in a more intuitive and compact way than by using standard regular expressions.
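This reads like the unglue package; if so, extraction with a glue-style pattern looks roughly like this (the strings and pattern are made up):
library(unglue)
x <- c("gcc version 10", "emacs version 29")
# Each {name} placeholder captures the corresponding substring into a column.
unglue_data(x, "{package} version {version}")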
This package provides a toolkit for working with Biological Observation Matrix (BIOM) files. Features include reading/writing all BIOM formats, rarefaction, alpha diversity, beta diversity (including UniFrac), summarizing counts by taxonomic level, and sample subsetting. Standalone functions for reading, writing, and subsetting phylogenetic trees are also provided.
This package computes the areas under the precision-recall (PR) and ROC curve for weighted (e.g. soft-labeled) and unweighted data. In contrast to other implementations, the interpolation between points of the PR curve is done by a non-linear piecewise function. In addition to the areas under the curves, the curves themselves can also be computed and plotted by a specific S3-method.
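This matches the PRROC package; assuming so, a small sketch with toy scores:
library(PRROC)
fg <- rnorm(100, mean = 1)   # scores of the positive class
bg <- rnorm(100, mean = 0)   # scores of the negative class
pr <- pr.curve(scores.class0 = fg, scores.class1 = bg, curve = TRUE)
plot(pr)   # plots the precision-recall curve and reports the area under it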
This package provides a friendly interface for the construction of regular expressions. Regular expressions are a very powerful feature; however, they are often difficult to interpret. Rex allows you to build complex regular expressions from human-readable expressions.
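A minimal sketch, assuming the rex() constructor and shorthands such as start, digit, and one_or_more():
library(rex)
# "one or more digits, a dash, one or more digits", anchored at both ends
re <- rex(start, one_or_more(digit), "-", one_or_more(digit), end)
grepl(re, c("2024-01", "abc"), perl = TRUE)   # TRUE FALSE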
This package provides a set of restricted permutation designs for freely exchangeable data, line transects (time series), spatial grid designs, and permutation of blocks (groups of samples). permute also allows split-plot designs, in which the whole-plots or split-plots or both can be freely exchangeable.
This package provides Gaussian mixture models, k-means, mini-batch-kmeans, k-medoids and affinity propagation clustering with the option to plot, validate, predict (new data) and estimate the optimal number of clusters. The package takes advantage of RcppArmadillo to speed up the computationally intensive parts of the functions. For more information, see
"Clustering in an Object-Oriented Environment" by Anja Struyf, Mia Hubert, Peter Rousseeuw (1997), Journal of Statistical Software, https://doi.org/10.18637/jss.v001.i04;
"Web-scale k-means clustering" by D. Sculley (2010), ACM Digital Library, https://doi.org/10.1145/1772690.1772862;
"Armadillo: a template-based C++ library for linear algebra" by Sanderson et al (2016), The Journal of Open Source Software, https://doi.org/10.21105/joss.00026;
"Clustering by Passing Messages Between Data Points" by Brendan J. Frey and Delbert Dueck, Science 16 Feb 2007: Vol. 315, Issue 5814, pp. 972-976, https://doi.org/10.1126/science.1136800.
Parametric time warping aligns patterns. It aims to put corresponding features at the same locations. The algorithm searches for an optimal polynomial describing the warping. It is possible to align one sample to a reference, several samples to the same reference, or several samples to several references. One can choose between calculating individual warpings, or one global warping for a set of samples and one reference. Two optimization criteria are implemented: RMS error and WCC. Both warping of peak profiles and of peak lists are supported.
This package provides tools for visualizing, smoothing, and comparing receiver operating characteristic (ROC) curves. The area under the curve (AUC) can be compared with statistical tests based on U-statistics or the bootstrap. Confidence intervals can be computed for (p)AUC or ROC curves.
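This description matches the pROC package; assuming so, a small sketch with toy classifier scores:
library(pROC)
outcome <- c(0, 0, 1, 1, 1, 0, 1, 0)                    # true classes
score   <- c(0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3)   # classifier scores
r <- roc(outcome, score)
auc(r)    # area under the ROC curve
ci(r)     # confidence interval for the AUC
plot(r)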
This is a complete suite to estimate models based on moment conditions. It includes the two-step Generalized method of moments (Hansen 1982; <doi:10.2307/1912775>), the iterated GMM and continuously updated estimator (Hansen, Heaton and Yaron 1996; <doi:10.2307/1392442>) and several methods that belong to the Generalized Empirical Likelihood family of estimators (Smith 1997; <doi:10.1111/j.0013-0133.1997.174.x>, Kitamura 1997; <doi:10.1214/aos/1069362388>, Newey and Smith 2004; <doi:10.1111/j.1468-0262.2004.00482.x>, and Anatolyev 2005 <doi:10.1111/j.1468-0262.2005.00601.x>).
Learn vector representations of sentences, paragraphs or documents by using the Paragraph Vector algorithms, namely the distributed bag of words (PV-DBOW) and the distributed memory (PV-DM) model. Top2vec finds clusters in text documents by combining techniques to embed documents and words with density-based clustering. It does this by embedding documents in the semantic space as defined by the doc2vec algorithm. Next it maps these document embeddings to a lower-dimensional space using the Uniform Manifold Approximation and Projection (UMAP) dimensionality-reduction algorithm and finds dense areas in that space using the Hierarchical Density-Based Clustering technique (HDBSCAN). These dense areas are the topic clusters; each can be represented by a topic vector, which is an aggregate of the embeddings of the documents belonging to that cluster. Similar words found in the same semantic space are representative of the topic.
This package provides a collection of tools to streamline the process of fitting elastic net models with glmnet. In addition to providing a formula interface, it also features a function cva.glmnet to do cross-validation for both α and λ, as well as some utility functions.
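For example (assuming this describes the glmnetUtils package, which provides cva.glmnet):
library(glmnetUtils)
# Cross-validate the elastic net over a grid of alpha values and, for each, lambda.
cvfit <- cva.glmnet(mpg ~ ., data = mtcars)
plot(cvfit)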
Ggdag is built on top of dagitty, an R package that uses the DAGitty web tool for creating and analyzing DAGs. ggdag makes it easy to tidy and plot dagitty objects using ggplot2 and ggraph, and provides common analytic and graphical functions, such as determining adjustment sets and node relationships.
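A minimal sketch, assuming ggdag's dagify(), ggdag(), and ggdag_adjustment_set() helpers; the DAG itself is a made-up toy:
library(ggdag)
# A toy DAG: x affects y, and z confounds both.
dag <- dagify(y ~ x + z, x ~ z)
ggdag(dag)
ggdag_adjustment_set(dag, exposure = "x", outcome = "y")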
Easily and flexibly insert Font Awesome icons into R Markdown documents and Shiny apps. These icons can be inserted into HTML content through inline SVG tags or i tags. There is also a utility function for exporting Font Awesome icons as PNG images for those situations where raster graphics are needed.
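This matches the fontawesome package; assuming so, a minimal sketch of inline and raster use:
library(fontawesome)
fa("r-project", fill = "steelblue")       # inline SVG for HTML output
fa_png("r-project", file = "r-logo.png")  # raster export when SVG will not do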