Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
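For example, the same request could be issued from R with the httr package; this is only a sketch, and the host name below is a placeholder for the service's actual address:

library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

content(resp, as = "parsed")  # the matching packages
headers(resp)                 # pagination information lives here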
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides an R Markdown format for converting an R Markdown document to a grid-oriented dashboard. The dashboard flexibly adapts the size of its components to the containing web page.
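As a sketch (assuming this is the flexdashboard package), a document can be rendered with this format directly from R; normally you would instead set output: flexdashboard::flex_dashboard in the document's YAML header.

library(rmarkdown)
library(flexdashboard)

# Render an existing R Markdown file ("dashboard.Rmd" is a placeholder)
# using the grid-oriented dashboard format.
render("dashboard.Rmd", output_format = flex_dashboard())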
For distributions whose probability density functions are log-concave, the adaptive rejection sampling algorithm can be used to build envelope functions for sampling. For others, the modified adaptive rejection sampling algorithm, the concave-convex adaptive rejection sampling algorithm, and the adaptive slice sampling algorithm can be used. This R package provides four main functions, rARS(), rMARS(), rCCARS(), and rASS(), which implement sampling based on the algorithms above.
This package provides an interface to Amazon Web Services developer tools services, including version control, continuous integration and deployment, and more.
This package lets you flexibly restructure and aggregate data using just two functions: melt and cast.
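A minimal sketch of the melt/cast workflow, assuming the two functions come from the reshape package and using the built-in airquality data:

library(reshape)

# Melt the wide data frame into long form, then cast it back,
# aggregating each variable by month with mean().
aqm <- melt(airquality, id = c("Month", "Day"), na.rm = TRUE)
cast(aqm, Month ~ variable, mean)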
This package provides different approaches for selecting the threshold in generalized Pareto distributions. Most of them are based on minimizing the AMSE criterion or at least on reducing the bias of the assumed GPD model. Others are heuristically motivated by searching for stable sample paths, i.e. a nearly constant region of the tail index estimator with respect to k, the number of data points in the tail. A third class is motivated by graphical inspection. In addition, a sequential testing procedure for GPD goodness-of-fit tests is also implemented.
This is a package for variable elimination (Gaussian elimination, Fourier-Motzkin elimination), computing the Moore-Penrose pseudoinverse, reduction to reduced row echelon form, value substitution, projecting a vector onto the convex polytope described by a system of (in)equations, simplifying systems by removing spurious columns and rows and collapsing implied equalities, testing whether a matrix is totally unimodular, and computing variable ranges implied by linear (in)equalities.
This package allows estimation and modelling of flight costs in animal (vertebrate) flight, implementing the aerodynamic power model. Flight performance is estimated based on basic morphological measurements such as body mass, wingspan and wing area. Afpt can be used to make predictions on how animals should adjust their flight behaviour and wingbeat kinematics to varying flight conditions.
Ridgeline plots provide a convenient way of visualizing changes in distributions over time or space. This package enables the creation of such plots in ggplot2.
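A minimal sketch, assuming this is the ggridges package and its geom_density_ridges() layer:

library(ggplot2)
library(ggridges)

# One temperature density ridge per month of the built-in airquality data.
ggplot(airquality, aes(x = Temp, y = factor(Month))) +
  geom_density_ridges()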
This package lets you build complex plots, heatmaps in particular, using natural semantics. Bigger plots can be assembled using directives such as LeftOf, RightOf, TopOf, and Beneath. Other features include clustering, dendrograms, and integration with ggplot2-generated grid objects. The package is designed particularly for bioinformaticians assembling complex plots for publication.
This package provides an interface to the https://orcid.org API. Functions include searching for people, searching by DOI, and searching by ORCID iD.
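As a sketch, assuming this is the rorcid package and its orcid(), orcid_doi(), and orcid_id() helpers (an ORCID API token is typically required, and the DOI and iD below are placeholders):

library(rorcid)

orcid(query = "josiah carberry")          # search for people
orcid_doi(dois = "10.1087/20120404")      # search by DOI (placeholder DOI)
orcid_id(orcid = "0000-0002-1825-0097")   # fetch a record by ORCID iD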
Hapassoc performs likelihood inference of trait associations with haplotypes and other covariates in generalized linear models (GLMs). The functions are developed primarily for data collected in cohort or cross-sectional studies. They can accommodate uncertain haplotype phase and handle missing genotypes at some SNPs.
This package provides utilities for processing the parameters of various statistical models. Beyond computing p values, CIs, and other indices for a wide variety of models, this package implements features like standardization or bootstrapping of parameters and models, feature reduction (feature extraction and variable selection) as well as conversion between indices of effect size.
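A minimal sketch, assuming this is the parameters package and its model_parameters() function:

library(parameters)

# Coefficients, confidence intervals and p values for a linear model.
m <- lm(mpg ~ wt + cyl, data = mtcars)
model_parameters(m)

# The same table with standardized coefficients.
model_parameters(m, standardize = "refit")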
This package provides a simple and light-weight API for memory profiling of R expressions. The profiling is built on top of R's built-in memory profiler, utils::Rprofmem(), which records every memory allocation done by R, including allocations made by native code.
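A minimal sketch, assuming this is the profmem package (R itself must have been built with memory profiling enabled for this to work):

library(profmem)

# Record every allocation made while the expression is evaluated.
p <- profmem({
  x <- raw(1000)
  A <- matrix(rnorm(100), ncol = 10)
})
p
total(p)   # total number of bytes allocated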
This package provides utilities based on libpoppler for extracting text, fonts, attachments, and metadata from a PDF file. It also supports high-quality rendering of PDF documents into PNG, JPEG, or TIFF format, or into raw bitmap vectors for further processing in R.
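A minimal sketch, assuming this is the pdftools package ("document.pdf" is a placeholder file name):

library(pdftools)

txt  <- pdf_text("document.pdf")    # one character string per page
info <- pdf_info("document.pdf")    # metadata such as author and page count
bitmap <- pdf_render_page("document.pdf", page = 1, dpi = 150)
png::writePNG(bitmap, "page1.png")  # save the rendered page as PNG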
This package can compute multivariate normal and t-probabilities, quantiles, random deviates and densities.
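A minimal sketch, assuming this is the mvtnorm package:

library(mvtnorm)

sigma <- matrix(c(1.0, 0.5,
                  0.5, 1.0), nrow = 2)

# P(X1 <= 1, X2 <= 1) for a bivariate normal with correlation 0.5.
pmvnorm(upper = c(1, 1), mean = c(0, 0), sigma = sigma)

# Random deviates and their density values.
x <- rmvnorm(5, mean = c(0, 0), sigma = sigma)
dmvnorm(x, mean = c(0, 0), sigma = sigma)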
This package provides a toolkit with infrastructure for representing, summarizing, and visualizing tree-structured regression and classification models.
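A minimal sketch, assuming this is the partykit package and its ctree() function:

library(partykit)

# Fit and inspect a conditional inference classification tree.
ct <- ctree(Species ~ ., data = iris)
print(ct)
plot(ct)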
This package provides tools to perform analyses and combine results from multiple-imputation datasets.
This package provides a framework for creating HTML widgets that render in various contexts, including the R console, R Markdown documents, and Shiny web applications.
The true random number service at random.org, created by Mads Haahr, samples atmospheric noise via radio tuned to an unused broadcasting frequency and applies a skew-correction algorithm due to John von Neumann. More background is available in the included vignette, based on an essay by Mads Haahr. In its current form, the package offers functions to retrieve random integers, randomized sequences, and random strings.
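A sketch of the retrieval functions, assuming this is the random package with randomNumbers(), randomSequence(), and randomStrings(); each call contacts random.org over the network:

library(random)

randomNumbers(n = 10, min = 1, max = 100, col = 2)   # random integers
randomSequence(min = 1, max = 10)                    # a randomized sequence
randomStrings(n = 5, len = 8)                        # random strings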
In order to smoothly animate the transformation of polygons and paths, many aspects need to be taken into account, such as differing numbers of control points, changing centers of rotation, etc. The transformr package provides an extensive framework for manipulating the shapes of polygons and paths and can be seen as the spatial brother to the tweenr package.
This package enables you to track and report code coverage for your package and (optionally) upload the results to a coverage service. Code coverage is a measure of the amount of code being exercised by a set of tests. It is an indirect measure of test quality and completeness. This package is compatible with any testing methodology or framework and tracks coverage of both R code and compiled C/C++/FORTRAN code.
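A minimal sketch, assuming this is the covr package (the paths below are placeholders):

library(covr)

# Coverage of a package's own tests, run from its source directory.
cov <- package_coverage("path/to/package")
cov

# Coverage of specific source files against specific test files.
file_coverage("R/functions.R", "tests/testthat/test-functions.R")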
This package covers many important models used in marketing and micro-econometrics applications, including Bayes Regression (univariate or multivariate dependent variable), Bayes Seemingly Unrelated Regression (SUR), Binary and Ordinal Probit, Multinomial Logit (MNL) and Multinomial Probit (MNP), Multivariate Probit, Negative Binomial (Poisson) Regression, Multivariate Mixtures of Normals (including clustering), Dirichlet Process Prior Density Estimation with normal base, Hierarchical Linear Models with normal prior and covariates, Hierarchical Linear Models with a mixture of normals prior and covariates, Hierarchical Multinomial Logits with a mixture of normals prior and covariates, Hierarchical Multinomial Logits with a Dirichlet Process prior and covariates, Hierarchical Negative Binomial Regression Models, Bayesian analysis of choice-based conjoint data, Bayesian treatment of linear instrumental variables models, Analysis of Multivariate Ordinal survey data with scale usage heterogeneity, and Bayesian Analysis of Aggregate Random Coefficient Logit Models.
This package provides a complete environment for Bayesian inference using a variety of different samplers.
GLDEX offers fitting algorithms corresponding to two major objectives. One is to provide a smoothing device to fit distributions to data, using the weighted and unweighted discretised approach based on the bin width of the histogram. The other is to provide a definitive fit to the data set using maximum likelihood and quantile matching estimation. Other methods such as moment matching, the starship method, and L-moment matching are also provided. Diagnostics on goodness of fit can be done via QQ plots, KS-resample tests, and comparing the mean, variance, skewness, and kurtosis of the data with those of the fitted distribution.