Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
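As a minimal illustration, here is a Python sketch of calling this endpoint (the host below is a placeholder, and the exact pagination header names and JSON layout are not specified here, so the example simply prints what the server returns):

import requests

# Search for packages matching "hello", requesting the first page of 20 results.
resp = requests.get(
    "https://example.org/api/packages",  # replace with this site's host
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

# Pagination information is returned in the response headers.
print(dict(resp.headers))

# The body contains the matching packages for the requested page.
print(resp.json())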
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Define distribution families and fit them to interval-censored and interval-truncated data, where the truncation bounds may depend on the individual observation. The defined distributions feature density, probability, sampling and fitting methods as well as efficient implementations of the log-density log f(x) and log-probability log P(x0 <= X <= x1) for use in TensorFlow neural networks via the tensorflow package. Allows training parametric neural networks on interval-censored and interval-truncated data with flexible parameterization. Applications include claims development in non-life insurance, e.g. modelling reporting delay distributions from incomplete data; see Bücher and Rosenstock (2022) <doi:10.1007/s13385-022-00314-4>.
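To make the censoring and truncation terms concrete, here is a minimal Python sketch (not this package's R API) of the per-observation log-likelihood for an interval-censored observation [x0, x1] under observation-specific truncation bounds [t0, t1], with a normal distribution standing in for the fitted family:

import numpy as np
from scipy.stats import norm

def interval_loglik(x0, x1, t0, t1, mu, sigma):
    # log P(x0 <= X <= x1 | t0 <= X <= t1) for X ~ Normal(mu, sigma).
    dist = norm(loc=mu, scale=sigma)
    # Probability mass inside the censoring interval.
    log_num = np.log(dist.cdf(x1) - dist.cdf(x0))
    # Normalise by the mass inside this observation's truncation bounds.
    log_den = np.log(dist.cdf(t1) - dist.cdf(t0))
    return log_num - log_den

# Example: a value known only to lie in [1.0, 2.5], observed under truncation to [0.0, 10.0].
print(interval_loglik(1.0, 2.5, 0.0, 10.0, mu=2.0, sigma=1.5))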
Quickly imports, processes, analyzes, and visualizes mass-spectrometric data. Includes functions for easily extracting specific data and measurements from large (multi-gigabyte) raw Bruker data files, as well as a set of S3 object classes for manipulating and measuring mass spectrometric peaks and plotting peaks and spectra using the ggplot2 package.
This package provides a comprehensive suite of statistical tools for Quality Management, designed around the Define, Measure, Analyze, Improve, and Control (DMAIC) cycle used in Six Sigma methodology. Based on the discontinued CRAN package qualityTools, this package refactors its original design by incorporating R6 object-oriented programming for increased flexibility and performance. It replaces traditional graphics with modern, interactive visualizations using ggplot2 and plotly. Built on tidyverse principles, it simplifies data manipulation and visualization, offering an intuitive approach to quality science.
Collect your data on digital marketing campaigns from Reddit Ads using the Windsor.ai API <https://windsor.ai/api-fields/>.
Implementation of robust regression tailored to deal with asymmetric noise distributions, originally proposed by Takeuchi, Bengio, and Kanamori (2002) <doi:10.1162/08997660260293300>. In addition, this implementation is extended to introduce optional feature regularization via LASSO, etc.
This package provides a collection of HTML, JavaScript, CSS and font assets that generate Redoc documentation from an OpenAPI Specification: <https://redocly.com/redoc/>.
Based on a pool of real user-agent strings, you can set filtering conditions and randomly sample user-agent strings from that pool.
This package provides functionality for carrying out estimation with data collected using Respondent-Driven Sampling. This includes Heckathorn's RDS-I and RDS-II estimators as well as Gile's Sequential Sampling estimator. The package is part of the "RDS Analyst" suite of packages for the analysis of respondent-driven sampling data. See Gile and Handcock (2010) <doi:10.1111/j.1467-9531.2010.01223.x>, Gile and Handcock (2015) <doi:10.1111/rssa.12091> and Gile, Beaudry, Handcock and Ott (2018) <doi:10.1146/annurev-statistics-031017-100704>.
Implementation of Nelson rules for control charts in R. Rspc implements several Statistical Process Control methods, namely a Levey-Jennings-type I (individuals) chart, the Shewhart C (count) chart, and Nelson rules (as described in Montgomery, D. C. (2013) Introduction to statistical quality control. Hoboken, NJ: Wiley.). The typical workflow is to take a time series, specify the control limits, and list the Nelson rules you want to evaluate. There are several options for modifying the rules (one-sided limits, numerical rule parameters, etc.). The package can also calculate the control limits from the data (so far implemented only for the I-chart and C-chart).
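As a generic illustration of the limit calculations mentioned above (a sketch of the textbook formulas in Python, not Rspc's R interface), I-chart limits are commonly derived from the average moving range and C-chart limits from the mean count:

import numpy as np

def i_chart_limits(x):
    # Individuals (I) chart: limits at mean(x) +/- 2.66 * average moving range,
    # where 2.66 = 3 / 1.128 (the d2 constant for subgroups of size 2).
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))
    center = x.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

def c_chart_limits(counts):
    # Shewhart C (count) chart: limits at c_bar +/- 3 * sqrt(c_bar), floored at zero.
    c_bar = float(np.mean(counts))
    lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))
    return lcl, c_bar, c_bar + 3 * np.sqrt(c_bar)

# Example: individual measurements and per-unit defect counts.
print(i_chart_limits([10.1, 9.8, 10.4, 10.0, 9.7, 10.3]))
print(c_chart_limits([4, 7, 3, 5, 6, 2]))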
This package provides a comprehensive R API for querying Apache Solr databases. A Solr core is represented as a data frame or list that supports Solr-side filtering, sorting, transformation and aggregation, all through the familiar base R API. Queries are processed lazily, i.e., a query is only sent to the database when the data are required.
Calculate endogenous network effects in event sequences and fit relational event models (REM): Using network event sequences (where each tie between a sender and a target in a network is time-stamped), REMs can measure how networks form and evolve over time. Endogenous patterns such as popularity effects, inertia, similarities, cycles or triads can be calculated and analyzed over time.
This package performs species distribution modeling for rare species with unprecedented accuracy (Mondanaro et al., 2023 <doi:10.1111/2041-210X.14066>) and finds the area of origin of species and past contact between them, taking climatic variability fully into consideration (Mondanaro et al., 2025 <doi:10.1111/2041-210X.14478>).
Sample size and confidence interval calculations in reversible catalytic models, with applications in malaria research. Further details can be found in the paper by Sepúlveda and Drakeley (2015, <doi:10.1186/s12936-015-0661-z>).
This package implements robust inference methods for fixed-effect and random-effects models of meta-analysis. The robust methods are developed using the density power divergence, a robust estimating criterion developed in machine learning theory, and can effectively circumvent biases and misleading results caused by influential outliers. The density power divergence was originally introduced by Basu et al. (1998) <doi:10.1093/biomet/85.3.549>, and the meta-analysis methods were developed by Noma et al. (2022) <forthcoming>.
The aim of the report package is to bridge the gap between R's output and the formatted results contained in your manuscript. This package converts statistical models and data frames into textual reports suited for publication, ensuring standardization and quality in results reporting.
This package provides a set of functions to perform pathway analysis and meta-analysis from multiple gene expression datasets, as well as visualization of the results. This package wraps functionality from the following packages: Ritchie et al. (2015) <doi:10.1093/nar/gkv007>, Love et al. (2014) <doi:10.1186/s13059-014-0550-8>, Robinson et al. (2010) <doi:10.1093/bioinformatics/btp616>, Korotkevich et al. (2016) <arxiv:10.1101/060012>, Efron et al. (2015) <https://CRAN.R-project.org/package=GSA>, and Gu et al. (2012) <https://CRAN.R-project.org/package=CePa>.
This package provides functions to read and write ImageJ (<https://imagej.net>) Region of Interest (ROI) files, to plot the ROIs and to convert them to spatstat (<https://spatstat.org/>) spatial patterns.
An R Commander plug-in for the WorldFlora package. It was mainly developed to show workflows and scripts for first-time users.
Calculate the probability density functions (PDFs) for two-threshold evidence accumulation models (EAMs). These are defined by the stochastic differential equation (SDE) dx(t) = v(x(t),t)*dt + D(x(t),t)*dW, where x(t) is the accumulated evidence at time t, v(x(t),t) is the drift rate, D(x(t),t) is the noise scale, and W is the standard Wiener process. The boundary conditions of this process are the upper and lower decision thresholds, represented by b_u(t) and b_l(t), respectively, with b_u(t) > 0 and b_l(t) < 0. The initial condition is x(0) = z, where b_l(t) < z < b_u(t); we represent this as the relative start point w = z/(b_u(0)-b_l(0)), defined as a ratio of the initial threshold locations. This package generates the PDF using the same approach as the Python package it is based upon, PyBEAM by Murrow and Holmes (2023) <doi:10.3758/s13428-023-02162-w>. First, it converts the SDE model into the forward Fokker-Planck equation dp(x,t)/dt = -d(v(x,t)*p(x,t))/dx + 0.5*d^2(D(x,t)^2*p(x,t))/dx^2, then solves this equation using the Crank-Nicolson method to determine p(x,t). Finally, it calculates the flux at the decision thresholds, f_i(t) = 0.5*d(D(x,t)^2*p(x,t))/dx evaluated at x = b_i(t), where i is the relevant decision threshold, either upper (i = u) or lower (i = l). The flux f_i(t) at each threshold is the PDF for that threshold. Further details of this approach are given in this package's and PyBEAM's publications. Additionally, one can calculate the cumulative distribution functions of the EAMs and sample from them.
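Restating the formulas above in LaTeX for readability (using the standard sign convention for the forward Fokker-Planck equation):

\begin{align*}
  \mathrm{d}x(t) &= v(x(t),t)\,\mathrm{d}t + D(x(t),t)\,\mathrm{d}W,\\
  \frac{\partial p(x,t)}{\partial t}
    &= -\frac{\partial}{\partial x}\bigl[v(x,t)\,p(x,t)\bigr]
       + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[D(x,t)^2\,p(x,t)\bigr],\\
  f_i(t) &= \frac{1}{2}\,\frac{\partial}{\partial x}\bigl[D(x,t)^2\,p(x,t)\bigr]\Big|_{x=b_i(t)},
    \qquad i \in \{u, l\}.
\end{align*}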
ENA (Shaffer, D. W. (2017) Quantitative Ethnography. ISBN: 0578191687) is a method used to identify meaningful and quantifiable patterns in discourse or reasoning. ENA moves beyond traditional frequency-based assessments by examining the structure of co-occurrences, or connections, in coded data. Moreover, compared to other methodological approaches, ENA has the novelty of (1) modeling whole networks of connections and (2) affording both quantitative and qualitative comparisons between different network models. See also Shaffer, D.W., Collier, W., & Ruis, A.R. (2016).
Robust methods for estimating the parameters of multivariate Gaussian linear models.
Validating sub-national statistical typologies, re-coding across standard typologies of sub-national statistics, and performing valid aggregate-level imputation, re-aggregation, re-weighting, and projection down to lower hierarchical levels to create meaningful data panels and time series.
TRUST4 or MiXCR is used to identify clonotypes. The goal of rTCRBCRr is to process the results from these clonotyping tools and analyze clonotype repertoire metrics based on chain names and IGH isotypes. The manuscript is still in preparation; references describing the methods in this package will be added later.
This package provides functions to compile and load Rust code from R, similar to how Rcpp or cpp11 allow easy interfacing with C++ code. Also provides helper functions to create R packages that use Rust code. Under the hood, the Rust crate extendr is used to do all the heavy lifting.