Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
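For example, a minimal sketch of the call from R using the httr package; the base URL is a placeholder for this site's address, and the exact pagination header names are an assumption:

library(httr)

## Query the package search endpoint; substitute this site's address.
resp <- GET("https://<this-site>/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
packages <- content(resp, as = "parsed")  # parsed JSON body
headers(resp)                             # pagination metadata is returned here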
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Enables the user to calculate Value at Risk (VaR) and Expected Shortfall (ES) by means of various types of historical simulation. Currently, plain, age-weighted, volatility-weighted, and filtered historical simulation are implemented in this package. Volatility weighting can be carried out via an exponentially weighted moving average (EWMA) model or other GARCH-type models. Performance can be assessed via the Traffic Light Test, Coverage Tests, and Loss Functions. The methods of the package are described in Gurrola-Perez, P. and Murphy, D. (2015) <https://EconPapers.repec.org/RePEc:boe:boeewp:0525> as well as McNeil, A. J., Frey, R., and Embrechts, P. (2015) <https://ideas.repec.org/b/pup/pbooks/10496.html>.
This package provides different specifications of a Quadrilateral Dissimilarity Model which can be used to fit same-different judgments in order to get a predicted matrix that satisfies regular minimality [Colonius & Dzhafarov, 2006, Measurement and representations of sensations, Erlbaum]. From such a matrix, Fechnerian distances can be computed.
This package provides a set of functions for taking qualitative GIS data, hand drawn on a map, and converting it to a simple features object. These tools are focused on data that are drawn on a map that contains some type of polygon features. For each area identified on the map, the id numbers of these polygons can be entered as vectors and transformed using qualmap.
Given a dataset, the user is invited to use the Empirical Cumulative Distribution Function (ECDF) to interactively guess the mean and the mean deviation. Thereafter, using the quadratic curve, the user can guess the Root Mean Squared Deviation (RMSD) and visualize the standard deviation (SD). For details, see Sarkar and Rashid (2019) <doi:10.3126/njs.v3i0.25574>, "Have You Seen the Standard Deviation?", Nepalese Journal of Statistics, Vol. 3, 1-10.
This package provides three Quarto website templates, commonly used by academics, as an R project. Templates for personal websites and course/workshop websites are included, as well as a template with minimal content for customization.
This package provides functions to calculate the Average Sample Number (ASN), Average Run Length (ARL1), and the values of k, k1, and k2 for quality control charts under repetitive sampling, as given in Aslam et al. (2014) <DOI:10.7232/iems.2014.13.1.101>.
This package provides functions to retrieve survey results directly into R using the Qualtrics API. Qualtrics <https://www.qualtrics.com/about/> is an online survey and data collection software platform. See <https://api.qualtrics.com/> for more information about the Qualtrics API. This package is community-maintained and is not officially supported by Qualtrics.
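For illustration, and assuming this is the qualtRics package, a typical session might look like the sketch below; the function names follow that package's documented interface, but treat the details as assumptions and consult the package help pages:

library(qualtRics)

## Register credentials once; the key and datacenter URL are placeholders.
qualtrics_api_credentials(api_key = "<YOUR-API-KEY>",
                          base_url = "yourdatacenter.qualtrics.com")

surveys <- all_surveys()                      # list available surveys
df <- fetch_survey(surveyID = surveys$id[1])  # pull one survey into a data frame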
Test whether equality and order constraints hold for all individuals simultaneously by comparing Bayesian mixed models through Bayes factors. A tutorial-style vignette and a quickstart guide are available via vignette("manual", "quid") and vignette("quickstart", "quid"), respectively. See Haaf and Rouder (2017) <doi:10.1037/met0000156>; Haaf, Klaassen and Rouder (2019) <doi:10.31234/osf.io/a4xu9>; and Rouder & Haaf (2021) <doi:10.5334/joc.131>.
Retrieve protein information from the UniProtKB REST API (see <https://www.uniprot.org/help/api_queries>).
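The underlying REST endpoint can also be exercised directly; a minimal sketch in R with httr against the documented UniProtKB search endpoint (the query string and the selected fields are illustrative):

library(httr)

## Search UniProtKB for human BRCA1 entries, returning a small TSV table.
resp <- GET("https://rest.uniprot.org/uniprotkb/search",
            query = list(query  = "gene:BRCA1 AND organism_id:9606",
                         format = "tsv",
                         fields = "accession,gene_names,length"))
cat(content(resp, as = "text", encoding = "UTF-8"))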
An implementation of Quantitative Fatty Acid Signature Analysis (QFASA) in R. QFASA is a method of estimating the diet composition of predators. The fundamental unit of information in QFASA is a fatty acid signature (signature), which is a vector of proportions describing the composition of fatty acids within lipids. Signature data from at least one predator and from samples of all potential prey types are required. Calibration coefficients, which adjust for the differential metabolism of individual fatty acids by predators, are also required. Given those data inputs, a predator signature is modeled as a mixture of prey signatures and its diet estimate is obtained as the mixture that minimizes a measure of distance between the observed and modeled signatures. A variety of estimation options and simulation capabilities are implemented. Please refer to the vignette for additional details and references.
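In symbols: writing y for the observed predator signature, x_1, ..., x_K for the calibration-adjusted prey signatures, and D for the chosen distance measure, the diet estimate is the mixing vector

\hat{\pi} = \operatorname*{arg\,min}_{\pi_k \ge 0,\ \sum_k \pi_k = 1} D\!\Big(y,\ \sum_{k=1}^{K} \pi_k x_k\Big)

(a restatement of the description above; the particular distance D is one of the package's estimation options).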
Programmatically access the Quickbase JSON API <https://developer.quickbase.com>. You supply parameters for an API call; qbr delivers an HTTP request to the API endpoint and returns its response. Outputs follow the tidyverse philosophy.
An easy framework for setting up a quality control workflow on a dataset. Includes a wide range of functions for establishing adaptable data quality control.
The queueing model of visual search describes the accuracy and response-time data in a visual search experiment using queueing models with a finite customer population and a stopping criterion of completing service for a finite number of customers. It implements the conceptualization of a hybrid model proposed by Moore and Wolfe (2001), in which visual stimuli enter processing one after the other and are then identified in parallel. This package provides functions that simulate the specified queueing process and calculate the Wasserstein distance between the empirical response times and the model prediction.
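For one-dimensional distributions such as response times, the first-order Wasserstein distance between the empirical distribution function F and the model-predicted G reduces to (assuming the first-order distance is the one used)

W_1(F, G) = \int_0^1 \left| F^{-1}(u) - G^{-1}(u) \right| \, du = \int_{-\infty}^{\infty} \left| F(x) - G(x) \right| \, dx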
This package provides functions for estimating ploidy levels and detecting aneuploidy in individuals using allele intensities or allele count data from high-throughput genotyping platforms, including single nucleotide polymorphism (SNP) arrays and sequencing-based technologies. Implements an extended version of the PennCNV signal standardization method by Wang et al. (2007) <doi:10.1101/gr.6861907> for higher ploidy levels. Computes B-allele frequencies (BAF) and z-scores, and identifies copy number variation patterns.
For QTL mapping, this package comprises several functions designed to execute diverse tasks, such as simulating or analyzing data, calculating significance thresholds, and visualizing QTL mapping results. The single-QTL or multiple-QTL method, which enables the fitting and comparison of various statistical models, is employed to analyze the data for estimating QTL parameters. The models encompass linear regression, permutation tests, normal mixture models, and truncated normal mixture models. The Gaussian stochastic process is utilized to compute significance thresholds for QTL detection on a genetic linkage map within experimental populations. Two types of data, complete genotyping data and selective genotyping data, from various experimental populations, including backcross, F2, recombinant inbred (RI), and advanced intercrossed (AI) populations, are considered in the QTL mapping analysis. For QTL hotspot detection, statistical methods can be developed based on either individual-level data or summarized data. We have proposed a statistical framework capable of handling both individual-level data and summarized QTL data for QTL hotspot detection. Our statistical framework can overcome the underestimation of thresholds resulting from ignoring the correlation structure among traits. Additionally, it can identify different types of hotspots with minimal computational cost during the detection process. Here, we provide the R code for our QTL mapping and hotspot detection methods, intended for general use in genetics and genomics studies. The QTL mapping methods for the complete and selective genotyping designs are based on the multiple interval mapping (MIM) model proposed by Kao, C.-H., Z.-B. Zeng and R. D. Teasdale (1999) <doi:10.1534/genetics.103.021642> and H.-I Lee, H.-A. Ho and C.-H. Kao (2014) <doi:10.1534/genetics.114.168385>, respectively. The QTL hotspot detection analysis is based on the method by Wu, P.-Y., M.-H. Yang, and C.-H. Kao (2021) <doi:10.1093/g3journal/jkab056>.
Syntax for defining complex filtering expressions in a programmatic way. A filtering query, built as a nested list configuration, can easily be stored in other formats like YAML or JSON. What's more, it is possible to convert such a configuration to a valid expression that can be applied to operations from the popular dplyr package.
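As a hypothetical sketch of the idea (not this package's actual interface): a nested-list query can be walked recursively into an unevaluated R call and spliced into dplyr::filter().

library(dplyr)
library(rlang)

## A nested-list filter meaning: Species == "setosa" & Sepal.Length > 5.
query <- list(op = "&", args = list(
  list(field = "Species",      cmp = "==", value = "setosa"),
  list(field = "Sepal.Length", cmp = ">",  value = 5)
))

## Recursively turn the configuration into an unevaluated call.
to_expr <- function(q) {
  if (!is.null(q$op)) {
    Reduce(function(a, b) call(q$op, a, b), lapply(q$args, to_expr))
  } else {
    call(q$cmp, sym(q$field), q$value)
  }
}

filter(iris, !!to_expr(query))  # setosa rows with Sepal.Length > 5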
The letters qe in the package title stand for "quick and easy," alluding to the convenience goal of the package. We bring together a variety of machine learning (ML) tools from standard R packages, providing wrappers with a simple, convenient, and uniform interface.
Fuel economy, size, performance, and price data for cars in Qatar in 2025. Mirrors many of the columns in mtcars, but uses (1) non-US-centric makes and models, (2) 2025 prices, and (3) metric measurements, making it more appropriate for use as an example dataset outside the United States.
Analysis of Q methodology, used to identify distinct perspectives existing within a group. This methodology is used across social, health and environmental sciences to understand diversity of attitudes, discourses, or decision-making styles (for more information, see <https://qmethod.org/>). A single function runs the full analysis. Each step can be run separately using the corresponding functions: for automatic flagging of Q-sorts (manual flagging is optional), for statement scores, for distinguishing and consensus statements, and for general characteristics of the factors. The package allows the user to choose either principal components or centroid factor extraction, manual or automatic flagging, a number of mathematical methods for rotation (or none), and a number of correlation coefficients for the initial correlation matrix, among many other options. Additional functions are available to import and export data (from raw *.CSV, HTMLQ and FlashQ *.CSV, PQMethod *.DAT and easy-htmlq *.JSON files), to print and plot, to import raw data from individual *.CSV files, and to make printable cards. The package also offers functions to print Q cards and to generate Q distributions for study administration. See further details in the package documentation, and in the web pages below, which include a cookbook, guidelines for more advanced analysis (how to perform manual flagging or change the sign of factors), data management, and a graphical user interface (GUI) for online and offline use.
This function produces both numerical and graphical summaries of QTL hotspot detection in genomes available on the World Wide Web, including the flanking markers of QTLs.
This package provides functions and data sets for reproducing selected results from the book "Quantitative Risk Management: Concepts, Techniques and Tools". It also includes new developments and auxiliary functions for Quantitative Risk Management practice.
Scaling models and classifiers for sparse matrix objects representing textual data in the form of a document-feature matrix. Includes original implementations of Laver, Benoit, and Garry's (2003) <doi:10.1017/S0003055403000698> Wordscores model, the Perry and Benoit (2017) <doi:10.48550/arXiv.1710.08963> class affinity scaling model, and the Slapin and Proksch (2008) <doi:10.1111/j.1540-5907.2008.00338.x> wordfish model, as well as methods for correspondence analysis, latent semantic analysis, and fast Naive Bayes and linear SVMs specially designed for sparse textual data.
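If this is the quanteda.textmodels package, fitting the wordfish model might look like the sketch below; the bundled example corpus and exact arguments are assumptions, so check the package documentation:

library(quanteda)
library(quanteda.textmodels)

## Build a document-feature matrix from a bundled example corpus.
dfmat <- dfm(tokens(data_corpus_irishbudget2010))

## Fit the Slapin and Proksch (2008) wordfish scaling model.
tmod <- textmodel_wordfish(dfmat)
summary(tmod)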
Densitometric evaluation of photo-archived quantitative thin-layer chromatography (TLC) plates.
Quasi-Cauchy quantile regression, proposed by de Oliveira, Ospina, Leiva, Figueroa-Zuniga, and Castro (2023) <doi:10.3390/fractalfract7090667>. This regression model is useful when you want to model data restricted to the intervals [0,1], (0,1], [0,1), or (0,1) using a quantile approach.