Enter a query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
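For example, the endpoint can be queried from R with the httr package (a minimal sketch; the base URL below is a placeholder for this site's address, and a JSON body is assumed):

  library(httr)
  resp <- GET("https://example.org/api/packages",
              query = list(search = "hello", page = 1, limit = 20))
  content(resp, as = "parsed")   # response body (assuming JSON): the matching packages
  headers(resp)                  # pagination information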
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for the regularisation, decomposition and analysis of space-time series. The pastecs library is a PNEC-Art4 and IFREMER initiative to bring PASSTEC 2000 functionality to R.
This package adds distinctive yet unobtrusive geometric patterns where solid color fills are normally used. Patterned figures look just as professional when viewed by colorblind readers or when printed in black and white. The dozen included patterns can be customized in terms of scale, rotation, color, fill, line type, and line width. It is compatible with the ggplot2 package as well as grid graphics.
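A minimal sketch of requesting such a patterned fill (assuming the geom_col_pattern() geometry and pattern_* aesthetics; the data frame is made up):

  library(ggplot2)
  library(ggpattern)
  df <- data.frame(group = c("A", "B", "C"), value = c(3, 5, 2))
  ggplot(df, aes(group, value)) +
    geom_col_pattern(aes(pattern = group),   # a different pattern per bar
                     fill = "white", colour = "black",
                     pattern_density = 0.3,  # scale of the pattern
                     pattern_angle = 30)     # rotation of the pattern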
This package provides easy-to-use and versatile functions to output R objects in HTML format.
This is a package for graphical and statistical analyses of environmental data, with a focus on analyzing chemical concentrations and physical parameters, usually in the context of mandated environmental monitoring. It provides major environmental statistical methods found in the literature and regulatory guidance documents, with extensive help that explains what these methods do, how to use them, and where to find them in the literature. It comes with numerous built-in data sets from regulatory guidance documents and environmental statistics literature. It includes scripts reproducing analyses presented in the book "EnvStats: An R Package for Environmental Statistics" (Millard, 2013, Springer, ISBN 978-1-4614-8455-4, https://link.springer.com/book/10.1007/978-1-4614-8456-1).
The basic idea of latent semantic analysis (LSA) is that texts have a higher-order (latent semantic) structure which, however, is obscured by word usage (e.g., through the use of synonyms or polysemy). By using conceptual indices that are derived statistically via a truncated singular value decomposition (a two-mode factor analysis) over a given document-term matrix, this variability problem can be overcome.
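The core of the method is a rank-k reconstruction of the document-term matrix; a bare base-R illustration (using svd() directly rather than the package's own helpers, with a made-up matrix):

  dtm <- matrix(c(2, 0, 1,
                  0, 3, 1,
                  1, 1, 0,
                  0, 2, 2), nrow = 4, byrow = TRUE,
                dimnames = list(paste0("term", 1:4), paste0("doc", 1:3)))
  k <- 2                                              # latent dimensions to keep
  s <- svd(dtm)                                       # two-mode factor analysis
  s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])     # truncated (rank-k) approximation of dtm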
This package provides meta-analysis methods that correct for publication bias and outcome reporting bias. Four methods and a visual tool are currently included in the package.
The p-uniform method as described in van Assen, van Aert, and Wicherts (2015) doi:10.1037/met0000025 can be used for estimating the average effect size, testing the null hypothesis of no effect, and testing for publication bias using only the statistically significant effect sizes of primary studies.
The p-uniform* method is described in van Aert and van Assen (2019) doi:10.31222/osf.io/zqjr9. It is an extension of the p-uniform method that allows for estimation of the average effect size and the between-study variance in a meta-analysis, and uses both the statistically significant and nonsignificant effect sizes.
The hybrid method is described in van Aert and van Assen (2017) doi:10.3758/s13428-017-0967-6. It is a meta-analysis method for combining an original study and its replication while taking into account the statistical significance of the original study. The p-uniform and hybrid methods are based on the statistical theory that the distribution of p-values is uniform conditional on the population effect size.
The fourth method in the package is the Snapshot Bayesian Hybrid Meta-Analysis Method as described in van Aert and van Assen (2018) doi:10.1371/journal.pone.0175302. This method computes posterior probabilities for four true effect sizes (no, small, medium, and large) based on an original study and replication while taking into account publication bias in the original study. The method can also be used for computing the required sample size of the replication akin to power analysis in null hypothesis significance testing.
The meta-plot is a visual tool for meta-analysis that provides information on the primary studies in the meta-analysis, the results of the meta-analysis, and characteristics of the research on the effect under study (van Assen and others, 2020).
Helper functions to apply the Correcting for Outcome Reporting Bias (CORB) method to correct for outcome reporting bias in a meta-analysis (van Aert & Wicherts, 2020).
This package provides functions to manipulate binary fingerprints of arbitrary length. A fingerprint is represented by an object of S4 class fingerprint. The bitwise logical functions in R are overridden so that they can be used directly with fingerprint objects. A number of distance metrics are also available. Fingerprints can be converted to Euclidean vectors (i.e., points on the unit hypersphere) and can also be folded. Arbitrary fingerprint formats can be handled via line handlers. Currently handlers are provided for CDK, MOE and BCI fingerprint data.
The tidyverse is a set of packages that work in harmony because they share common data representations and API design. This package is designed to make it easy to install and load multiple tidyverse packages in a single step.
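A minimal sketch of that single step on a standard R installation:

  install.packages("tidyverse")   # installs the whole collection at once
  library(tidyverse)              # attaches the core packages: ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, forcats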
This package contains the data set for the crowd-sourced benchmarks from running the benchmarkme package.
This package performs augmented backward elimination and checks the stability of the obtained model. Augmented backward elimination combines significance or information based criteria with the change in estimate to either select the optimal model for prediction purposes or to serve as a tool to obtain a practically sound, highly interpretable model.
This package provides a parallel estimation of the mutual information based on entropy estimates from k-nearest neighbors distances and algorithms for the reconstruction of gene regulatory networks.
This package checks adherence to a given style, syntax errors, and possible semantic issues. It supports on-the-fly checking of R code edited with the RStudio IDE, Emacs, and Vim.
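For ad-hoc use outside an editor, the checks can also be run directly (a minimal sketch; the file name is hypothetical):

  library(lintr)
  lint("analysis.R")   # lint a single file with the default linters
  lint_package()       # lint every R file in the current package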
This is a framework for construction and analysis of 2D Monte-Carlo simulations. In addition, this package includes various distributions.
The vegan package provides tools for descriptive community ecology. It contains the most basic functions for diversity analysis, community ordination, and dissimilarity analysis. Most of its multivariate tools can be used for other data types as well.
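A minimal sketch using the BCI data set shipped with vegan:

  library(vegan)
  data(BCI)                           # site-by-species tree counts
  diversity(BCI, index = "shannon")   # Shannon diversity per site
  vegdist(BCI, method = "bray")       # Bray-Curtis dissimilarity matrix
  metaMDS(BCI)                        # non-metric multidimensional scaling ordination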
Maximum likelihood computations for Tweedie families, including the series expansion (Dunn and Smyth, 2005; <doi:10.1007/s11222-005-4070-y>) and the Fourier inversion (Dunn and Smyth, 2008; <doi:10.1007/s11222-007-9039-6>), and related methods.
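A minimal sketch (the parameter values are arbitrary, and arguments are passed by name rather than relying on their order):

  library(tweedie)
  y <- rtweedie(100, mu = 2, phi = 1, power = 1.5)   # simulate compound Poisson-gamma data
  dtweedie(y, mu = 2, phi = 1, power = 1.5)          # density evaluated by series expansion or inversion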
This package provides an R interface to the jExcel library to create web-based interactive tables and spreadsheets compatible with spreadsheet software.
This package provides functions for drawing and calibrating samples.
This package models ratios of count data, such as those obtained from RNA-seq, using Bayesian statistics to derive posteriors for effect sizes. This approach is described in Erhard & Zimmer (2015) <doi:10.1093/nar/gkv696> and Erhard (2018) <doi:10.1093/bioinformatics/bty471>.
This package provides new statistics, new geometries and new positions for ggplot2 and a suite of functions to facilitate the creation of statistical plots.
Functions to help implement the extraction / subsetting / indexing function [ and replacement function [<- of custom matrix-like types (based on S3, S4, etc.), modeled as closely as possible on the base matrix class (with tests to prove it).
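Not the package's own helpers, but a bare base-R illustration of the kind of methods such a custom matrix-like type needs (the class name mymat is made up):

  m <- structure(list(data = matrix(1:12, nrow = 3)), class = "mymat")
  `[.mymat`   <- function(x, i, j, ...) x$data[i, j, ...]
  `[<-.mymat` <- function(x, i, j, ..., value) { x$data[i, j, ...] <- value; x }
  m[2, 3]         # extraction dispatches to `[.mymat`
  m[2, 3] <- 99   # replacement dispatches to `[<-.mymat`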
This package provides tools to read, write, create, and manipulate DESCRIPTION files. It is intended for packages that create or manipulate other packages.
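A minimal sketch of reading and editing a DESCRIPTION file (the field values are only illustrative):

  library(desc)
  d <- desc(file = "DESCRIPTION")   # parse an existing DESCRIPTION
  d$get("Package")                  # read a field
  d$set("Version", "1.0.1")         # change it in memory
  d$write()                         # write the file back out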
This package provides a set of fonts. This is useful when you want to avoid system fonts to make sure your outputs are reproducible.
This package provides Cramer-Von Mises and Anderson-Darling tests of goodness-of-fit for continuous univariate distributions, using efficient algorithms.
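A minimal sketch, testing a simulated sample against its known null distribution:

  library(goftest)
  x <- rnorm(100, mean = 2, sd = 1)
  cvm.test(x, "pnorm", mean = 2, sd = 1)   # Cramer-von Mises test
  ad.test(x, "pnorm", mean = 2, sd = 1)    # Anderson-Darling test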
This package provides various functions for classification, including k-nearest neighbour, Learning Vector Quantization and Self-Organizing Maps.
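A minimal sketch of k-nearest-neighbour classification on the built-in iris data:

  library(class)
  train_idx <- c(1:30, 51:80, 101:130)
  knn(train = iris[train_idx, 1:4],
      test  = iris[-train_idx, 1:4],
      cl    = iris$Species[train_idx],
      k     = 3)                          # predicted species for the held-out rows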