Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
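For example, the API can be queried from R. This is a minimal sketch: the host below is a placeholder, and a JSON response body is an assumption, since only the query parameters and the header-based pagination are described above.

library(httr)

## Placeholder host; substitute the actual address serving this API.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)    # the matching packages (assumed to be JSON)
headers(resp)    # pagination information (number of pages, etc.)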
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides basic classes and methods for Natural Language Processing.
Biterm Topic Models find topics in collections of short texts. The biterm topic model is a word co-occurrence based topic model that learns topics by modeling word-word co-occurrence patterns, called biterms. This is in contrast to traditional topic models such as Latent Dirichlet Allocation and Probabilistic Latent Semantic Analysis, which are word-document co-occurrence topic models. A biterm consists of two words co-occurring in the same short text window. This context window can, for example, be a Twitter message, a short survey answer, a sentence of a text, or a document identifier. The techniques are explained in detail in the paper 'A Biterm Topic Model For Short Text' by Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng (2013): https://github.com/xiaohuiyan/xiaohuiyan.github.io/blob/master/paper/BTM-WWW13.pdf.
This package provides tools to compute ordinal statistics and effect sizes as an alternative to mean comparison: Cliff's delta or success rate difference (SRD), Vargha and Delaney's A or the Area Under a Receiver Operating Characteristic Curve (AUC), the discrete type of McGraw & Wong's Common Language Effect Size (CLES) or Grissom & Kim's Probability of Superiority (PS), and the Number Needed to Treat (NNT) effect size. Moreover, comparisons to Cohen's d are offered based on Huberty & Lowman's Percentage of Group (Non-)Overlap considerations.
This package provides functionality to assert conditions that have to be met so that errors in data used in analysis pipelines can fail quickly. It is similar to stopifnot() but more powerful, friendly, and easier to use in pipelines.
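As an illustration, a minimal sketch assuming this describes the assertr package (the data set and checks are arbitrary):

library(assertr)

mtcars |>
  verify(hp > 0) |>                        # logical check over a column; errors fast if violated
  assert(within_bounds(0, Inf), mpg, wt)   # per-column predicate assertions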
This package provides a collection of functions dealing with labelled data, like reading and writing data between R and other statistical software packages. This includes easy ways to get, set or change value and variable label attributes, to convert labelled vectors into factors or numeric (and vice versa), or to deal with multiple declared missing values.
This package provides density, distribution, quantile and hazard functions of a stable variate, as well as generalized regression models for the parameters of a stable distribution.
Data exploration and modelling is a process that produces many data artifacts: subsets, data aggregates, plots, statistical models, and different versions of data sets and results. Archivist helps store and manage artifacts created in R. It allows you to store selected artifacts as binary files together with their metadata and relations, and to share artifacts with others. It can look up already created artifacts by their class, name, creation date, or other properties, and it makes it easy to restore such artifacts.
This package can be used to predict the r-species accumulation curve (r-SAC), which is the number of species represented at least r times as a function of the sampling effort. When r = 1, the curve is known as the species accumulation curve, or the library complexity curve in high-throughput genomic sequencing. The package includes both parametric and nonparametric methods, as described by Deng C, et al. (2018).
This package provides tools to obtain estimated marginal means (EMMs) for many linear, generalized linear, and mixed models. It can be used to compute contrasts or linear functions of EMMs, trends, and comparisons of slopes.
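A minimal sketch, assuming this describes the emmeans package and using the built-in npk data set:

library(emmeans)

fit <- lm(yield ~ N + P, data = npk)   # simple factorial model
emm <- emmeans(fit, ~ N)               # estimated marginal means for factor N
pairs(emm)                             # pairwise comparisons of the EMMs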
This package provides functions to fit kernel density functions to data on temporal activity patterns of animals; estimate coefficients of overlapping of densities for two species; and calculate bootstrap estimates of confidence intervals.
This package provides an implementation of efficient approximate leave-one-out (LOO) cross-validation for Bayesian models fit using Markov chain Monte Carlo, as described in doi:10.1007/s11222-016-9696-4. The approximation uses Pareto smoothed importance sampling (PSIS), a new procedure for regularizing importance weights. As a byproduct of the calculations, we also obtain approximate standard errors for estimated predictive errors and for the comparison of predictive errors between models. The package also provides methods for using stacking and other model weighting techniques to average Bayesian predictive distributions.
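A minimal sketch, assuming this describes the loo package; the log-likelihood matrix is simulated here purely for illustration:

library(loo)

set.seed(1)
log_lik <- matrix(rnorm(4000 * 30, mean = -1), nrow = 4000)  # posterior draws x observations
fit_loo <- loo(log_lik)   # PSIS-LOO estimate with Pareto k diagnostics
print(fit_loo)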
This package contains a function that imports data from a CSV file, or uses manually entered data in the format (x, y, weight), and plots the appropriate ACC vs. LOI graph and LMA graph. The main function is plotLMA(sourcefile, header), which takes a data set and plots the appropriate LMA and ACC graphs. If no source file (a string) is passed, a manual data entry window is opened. The header parameter indicates with TRUE/FALSE (FALSE by default) whether the source CSV file has a header row. The data set should contain only one independent variable (x) and one dependent variable (y) and can contain a weight for each observation.
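For example, based only on the description above (the file name is a placeholder and the package is assumed to be attached):

plotLMA("observations.csv", header = TRUE)   # CSV with columns x, y, weight and a header row
plotLMA()                                    # no source file: opens the manual data entry window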
This package visualises complex relations in texts by providing functionality for displaying text co-occurrence networks, text correlation networks, dependency relationships, and text clustering. Feel free to join the effort of providing interesting text visualisations.
This package contains routines and documentation for solving quadratic programming problems.
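A minimal sketch, assuming this describes the quadprog package: solve.QP() minimises -d'b + (1/2) b'Db subject to A'b >= b0.

library(quadprog)

Dmat <- diag(3)
dvec <- c(0, 5, 0)
Amat <- matrix(c(-4, -3, 0, 2, 1, 0, 0, -2, 1), nrow = 3)
bvec <- c(-8, 2, 0)
solve.QP(Dmat, dvec, Amat, bvec)   # returns the solution, objective value and Lagrange multipliers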
Fit Bayesian generalized (non-)linear multivariate multilevel models using Stan for full Bayesian inference. A wide range of distributions and link functions are supported, allowing users to fit -- among others -- linear, robust linear, count data, survival, response times, ordinal, zero-inflated, hurdle, and even self-defined mixture models all in a multilevel context. Further modeling options include non-linear and smooth terms, auto-correlation structures, censored data, meta-analytic standard errors, and quite a few more. In addition, all parameters of the response distribution can be predicted in order to perform distributional regression. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. Model fit can easily be assessed and compared with posterior predictive checks and leave-one-out cross-validation.
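A minimal sketch, assuming this describes the brms package (fitting compiles and samples a Stan model, so it takes a few minutes); the epilepsy data set ships with the package:

library(brms)

fit <- brm(count ~ zAge + zBase * Trt + (1 | patient),
           data = epilepsy, family = poisson())
summary(fit)
pp_check(fit)   # posterior predictive check
loo(fit)        # leave-one-out cross-validation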
This package contains functions useful for data screening, testing moderation and mediation, and estimating power.
This package provides an optimizer that combines a genetic search algorithm with derivative-based optimization.
This package provides functions and vignettes to update data sets in Ecdat and to create, manipulate, plot, and analyze those and similar data sets.
This package serves two purposes: it provides a comfortable R interface for querying the Google server for static maps, and it lets the map be used as a background image for overlaying plots within R. The latter requires proper coordinate scaling.
This package builds isoscapes using mixed models and infers the geographic origin of samples based on their isotopic ratios. It is essentially a simplified interface to several other packages, implementing a new statistical framework based on mixed models. It uses spaMM for fitting and predicting isoscapes and for assigning an organism's origin depending on its isotopic ratio. IsoriX also relies heavily on the package rasterVis for plotting the maps produced with terra using lattice.
This package provides a set of S3 generics to axe components of fitted model objects and help reduce the size of model objects saved to disk.
This package provides an R interface to the GNU Linear Programming Kit, software for solving large-scale linear programming (LP), mixed integer linear programming (MILP) and other related problems.
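A minimal sketch, assuming this describes the Rglpk package: maximise 2x + 4y + 3z subject to three linear constraints.

library(Rglpk)

obj <- c(2, 4, 3)
mat <- matrix(c(3, 2, 1, 4, 1, 3, 2, 2, 2), nrow = 3)   # one constraint per row
dir <- c("<=", "<=", "<=")
rhs <- c(60, 40, 80)
Rglpk_solve_LP(obj, mat, dir, rhs, max = TRUE)   # optimum and solution vector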
This is a package for converting natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank, regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8 encoding.
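A minimal sketch, assuming this describes the tokenizers package:

library(tokenizers)

txt <- "The quick brown fox jumps over the lazy dog. It barks."
tokenize_words(txt)                      # word tokens
tokenize_sentences(txt)                  # sentence tokens
tokenize_ngrams(txt, n = 3, n_min = 2)   # 2-grams and 3-grams
count_words(txt)                         # one of the counting helpers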
This package provides functions related to L-moments: computation of L-moments and trimmed L-moments of distributions and data samples; parameter estimation; L-moment ratio diagram; plot vs. quantiles of an extreme-value distribution.
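A minimal sketch, assuming this describes the lmom package; the data are simulated here purely for illustration:

library(lmom)

set.seed(42)
x <- rexp(200, rate = 0.5)
lmoments <- samlmu(x)   # sample L-moments: l_1, l_2, t_3, t_4
pelgev(lmoments)        # GEV parameters estimated by matching L-moments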