Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
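For example, a minimal Python sketch of calling this endpoint (the base URL below is a placeholder for wherever this site is hosted, and a JSON response body is assumed):

```python
import requests

BASE_URL = "https://example.org"  # placeholder: substitute the address of this site

# Search for packages matching "hello", 20 results per page.
resp = requests.get(f"{BASE_URL}/api/packages",
                    params={"search": "hello", "page": 1, "limit": 20})
resp.raise_for_status()

packages = resp.json()   # matching packages (assuming the endpoint returns JSON)
print(resp.headers)      # pagination details (e.g. number of pages) live in the headers
```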
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Plotting functions for visualising textual data. Extends quanteda and related packages with plot methods designed specifically for text data, textual statistics, and models fit to textual data. Plot types include word clouds, lexical dispersion plots, scaling plots, network visualisations, and word keyness plots.
Provide a variety of Q-matrix validation methods for the generalized cognitive diagnosis models, including the method based on the generalized deterministic input, noisy, and gate model (G-DINA) by de la Torre (2011) <DOI:10.1007/s11336-011-9207-7> discrimination index (the GDI method) by de la Torre and Chiu (2016) <DOI:10.1007/s11336-015-9467-8>, the Hull method by Najera et al. (2021) <DOI:10.1111/bmsp.12228>, the stepwise Wald test method (the Wald method) by Ma and de la Torre (2020) <DOI:10.1111/bmsp.12156>, the multiple logistic regression-based Q-matrix validation method (the MLR-B method) by Tu et al. (2022) <DOI:10.3758/s13428-022-01880-x>, the beta method based on signal detection theory by Li and Chen (2024) <DOI:10.1111/bmsp.12371>, and Q-matrix validation based on relative fit index by Chen et al. (2013) <DOI:10.1111/j.1745-3984.2012.00185.x>. Different research methods and iterative procedures for Q-matrix validation are available <DOI:10.3758/s13428-024-02547-5>.
This package produces quality scores for each of the US companies from the Russell 3000, following the approach described in "Quality Minus Junk" (Asness, Frazzini, & Pedersen, 2013) <http://www.aqr.com/library/working-papers/quality-minus-junk>. The package includes datasets for users who wish to view the most recently uploaded quality scores. It also provides tools to automatically gather relevant financials and stock price information, allowing users to update their data and customize their universe for further analysis.
This package provides a quantum computer simulator framework with up to 24 qubits. It allows you to define general single-qubit gates and general controlled single-qubit gates. For convenience, it currently provides the most common gates (X, Y, Z, H, S, T, Rx, Ry, Rz, CNOT, SWAP, Toffoli or CCNOT, Fredkin or CSWAP). qsimulatR also implements noise models. qsimulatR supports plotting of circuits and is able to export circuits to Qiskit <https://qiskit.org/>, a Python package which can be used to run on IBM's hardware <https://quantum-computing.ibm.com/>.
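As an illustration of what such a simulator does internally (a generic sketch, not qsimulatR's interface), applying a single-qubit gate to an n-qubit state vector looks roughly like this:

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    # View the 2^n amplitudes as n axes of size 2, contract the gate
    # with the target axis, then restore the axis order.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.zeros(2 ** 2)
state[0] = 1.0                                # start in |00>
state = apply_single_qubit_gate(state, H, target=0, n_qubits=2)
print(state)                                  # (|00> + |10>) / sqrt(2)
```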
Basic functions for building parsers, with an application to PC-AXIS format files.
Developed to perform the estimation and inference for regression coefficient parameters in longitudinal marginal models using the method of quadratic inference functions. Like generalized estimating equations, this method is also a quasi-likelihood inference method. It has been shown that the method gives consistent estimators of the regression coefficients even if the correlation structure is misspecified, and it is more efficient than GEE when the correlation structure is misspecified. Based on Qu, A., Lindsay, B.G. and Li, B. (2000) <doi:10.1093/biomet/87.4.823>.
An implementation of Quantitative Fatty Acid Signature Analysis (QFASA) in R. QFASA is a method of estimating the diet composition of predators. The fundamental unit of information in QFASA is a fatty acid signature (signature), which is a vector of proportions describing the composition of fatty acids within lipids. Signature data from at least one predator and from samples of all potential prey types are required. Calibration coefficients, which adjust for the differential metabolism of individual fatty acids by predators, are also required. Given those data inputs, a predator signature is modeled as a mixture of prey signatures and its diet estimate is obtained as the mixture that minimizes a measure of distance between the observed and modeled signatures. A variety of estimation options and simulation capabilities are implemented. Please refer to the vignette for additional details and references.
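A toy sketch of the core estimation step (illustrative only: the signatures, calibration coefficients, and the squared-error distance below are made up, and this is not the package's interface):

```python
import numpy as np
from scipy.optimize import minimize

prey = np.array([[0.6, 0.3, 0.1],      # prey type A fatty acid signature
                 [0.2, 0.5, 0.3]])     # prey type B fatty acid signature
predator = np.array([0.4, 0.4, 0.2])   # observed predator signature
cc = np.array([1.0, 1.0, 1.0])         # calibration coefficients (identity for simplicity)

def distance(diet):
    modeled = (diet @ prey) * cc       # mixture of prey signatures, calibration applied
    modeled = modeled / modeled.sum()  # renormalise to proportions
    return np.sum((predator - modeled) ** 2)

res = minimize(distance, x0=np.full(2, 0.5),
               bounds=[(0, 1)] * 2,
               constraints={"type": "eq", "fun": lambda d: d.sum() - 1})
print(res.x)                           # estimated diet proportions for prey A and B
```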
Conduct multiple quantitative trait loci (QTL) mapping under the framework of random-QTL-effect linear mixed model. First, each position on the genome is detected in order to obtain a negative logarithm P-value curve against genome position. Then, all the peaks on each effect (additive or dominant) curve are viewed as potential QTL, all the effects of the potential QTL are included in a multi-QTL model, their effects are estimated by empirical Bayes in doubled haploid population or by adaptive lasso in F2 population, and true QTL are identified by likelihood ratio test. See Wen et al. (2018) <doi:10.1093/bib/bby058>.
Collect your data on digital marketing campaigns from Quora Ads using the Windsor.ai API <https://windsor.ai/api-fields/>.
This package provides functions to compute quasi variances and associated measures of approximation error.
Option pricing (financial derivatives) techniques mainly following the textbook 'Options, Futures and Other Derivatives', 9th ed., by John C. Hull, 2014, Prentice Hall. Implementations are via the binomial tree option model (BOPM), the Black-Scholes model, Monte Carlo simulations, etc. This package is a result of the Quantitative Financial Risk Management course (STAT 449 and STAT 649) at Rice University, Houston, TX, USA, taught by Oleg Melnikov, statistics PhD student, as of Spring 2015.
High-throughput analysis of growth curves and fluorescence data using three methods: linear regression, growth model fitting, and smooth spline fit. Analysis of dose-response relationships via smoothing splines or dose-response models. Complete data analysis workflows can be executed in a single step via user-friendly wrapper functions. The results of these workflows are summarized in detailed reports as well as intuitively navigable R data containers. A shiny application provides access to all features without requiring any programming knowledge. The package is described in further detail in Wirth et al. (2023) <doi:10.1038/s41596-023-00850-7>.
Shewhart quality control charts for continuous, attribute and count data. Cusum and EWMA charts. Operating characteristic curves. Process capability analysis. Pareto chart and cause-and-effect chart. Multivariate control charts.
Dynamically generate tabset panels <https://quarto.org/docs/output-formats/html-basics.html#tabsets> in Quarto HTML documents using a data frame as input.
This package provides advanced functionality for performing configurational comparative research with Qualitative Comparative Analysis (QCA), including crisp-set, multi-value, and fuzzy-set QCA. It also offers advanced tools for sensitivity diagnostics and methodological evaluations of QCA.
Function that implements the Quantum Genetic Algorithm, first proposed by Han and Kim in 2000. This is an R implementation of the Python application developed by Lahoz-Beltra (<https://github.com/ResearchCodesHub/QuantumGeneticAlgorithms>). Each optimization problem is represented as a maximization one, where each solution is a sequence of (qu)bits. Following the quantum paradigm, these qubits are in a superposition state: when measured, they collapse to a 0 or 1 state. After measurement, the fitness of the solution is calculated as in usual genetic algorithms. The evolution at each iteration is oriented by the application of two quantum gates to the amplitudes of the qubits: (1) a rotation gate (always); (2) a Pauli-X gate (optionally). The rotation is based on the theta angle values: higher values allow a quicker evolution, and lower values avoid local maxima. The Pauli-X gate is equivalent to the classical mutation operator and determines the swap between the alpha and beta amplitudes of a given qubit. The package has been developed in such a way as to permit a complete separation between the engine and the particular problem subject to combinatorial optimization.
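A compressed sketch of the measurement-and-rotation loop described above (modeled loosely on the general QGA scheme rather than on this package's code; the toy fitness function, population size, and rotation angle are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, theta = 8, 10, 0.05 * np.pi

# Each chromosome is a string of qubits with amplitudes (alpha, beta),
# initialised in an equal superposition.
alpha = np.full((pop_size, n_bits), 1 / np.sqrt(2))
beta = np.full((pop_size, n_bits), 1 / np.sqrt(2))

def measure(alpha):
    # Collapse each qubit: probability alpha^2 of observing 0, otherwise 1.
    return (rng.random(alpha.shape) >= alpha ** 2).astype(int)

def fitness(bits):
    return bits.sum(axis=1)            # toy maximisation problem: count the ones

for _ in range(50):
    bits = measure(alpha)
    best = bits[np.argmax(fitness(bits))]
    # Rotation gate: nudge each qubit's amplitudes toward the best solution's bit
    # (no rotation where the bit already matches).
    sign = np.where(bits == best, 0, np.where(best == 1, 1, -1))
    a, b = alpha.copy(), beta.copy()
    alpha = np.cos(sign * theta) * a - np.sin(sign * theta) * b
    beta = np.sin(sign * theta) * a + np.cos(sign * theta) * b

print(best, fitness(best[None, :])[0])
```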
Nomograms are constructed to predict the cumulative incidence rate, which is calculated after adjusting for competing causes of the event of interest. K-fold cross-validation is implemented to validate predictive accuracy using a competing-risk version of the concordance index. Methods are as described in: Kattan MW, Heller G, Brennan MF (2003).
Retrieve protein information from the UniProtKB REST API (see <https://www.uniprot.org/help/api_queries>).
Conduct multiple quantitative trait loci (QTL) and QTL-by-environment interaction (QEI) mapping via ordinary or compressed variance component mixed models with random or fixed QTL/QEI effects. First, each position on the genome is detected in order to obtain a negative logarithm P-value curve against genome position. Then, all the peaks on each effect (additive or dominant) curve or on each locus curve are viewed as potential main-effect QTLs and QEIs, all their effects are included in a multi-locus model, their effects are estimated by both least angle regression and empirical Bayes (or adaptive lasso) in backcross and F2 populations, and true QTLs and QEIs are identified by likelihood ratio test. See Zhou et al. (2022) <doi:10.1093/bib/bbab596> and Wen et al. (2018) <doi:10.1093/bib/bby058>.
G-computation for a set of time-fixed exposures with quantile-based basis functions, possibly under linearity and homogeneity assumptions. This approach estimates a regression line corresponding to the expected change in the outcome (on the link basis) given a simultaneous increase in the quantile-based category for all exposures. Works with continuous, binary, and right-censored time-to-event outcomes. Reference: Alexander P. Keil, Jessie P. Buckley, Katie M. O'Brien, Kelly K. Ferguson, Shanshan Zhao, and Alexandra J. White (2019) A quantile-based g-computation approach to addressing the effects of exposure mixtures; <doi:10.1289/EHP5838>.
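A toy sketch of the idea under the linearity assumption (not the package's interface; the simulated data and number of quantiles are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 500, 4
x = rng.normal(size=(n, 3))                       # three exposures
y = 0.5 * x[:, 0] + 0.2 * x[:, 1] + rng.normal(size=n)

# Replace each exposure by its quantile category 0..q-1.
xq = np.column_stack([
    np.digitize(col, np.quantile(col, np.arange(1, q) / q)) for col in x.T
])

X = np.column_stack([np.ones(n), xq])             # intercept + quantized exposures
beta = np.linalg.lstsq(X, y, rcond=None)[0]
psi = beta[1:].sum()                              # expected change in y for a simultaneous
print(psi)                                        # one-quantile increase in all exposures
```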
Primarily, the qcv package computes key indices related to the Quantifying Construct Validity procedure (QCV; Westen & Rosenthal, 2003 <doi:10.1037/0022-3514.84.3.608>; see also Furr & Heuckeroth, in press). The qcv() function is the heart of the qcv package, but additional functions in the package provide useful ancillary information related to the QCV procedure.
This package implements the nonparametric quantile regression method developed by Belloni, Chernozhukov, and Fernandez-Val (2011) for partially linear quantile models. Provides point estimates of the conditional quantile function and its derivatives based on series approximations to the nonparametric part of the model. Provides pointwise and uniform confidence intervals using analytic and resampling methods.
Fits non-crossing regression quantiles as a function of linear covariates and multiple smooth terms, including varying coefficients, via B-splines with L1-norm difference penalties. Random intercepts and variable selection are allowed via the lasso penalties. The smoothing parameters are estimated as part of the model fitting, see Muggeo and others (2021) <doi:10.1177/1471082X20929802>. Monotonicity and concavity constraints on the fitted curves are allowed; see Muggeo and others (2013) <doi:10.1007/s10651-012-0232-1>, and also <doi:10.13140/RG.2.2.12924.85122> or <doi:10.13140/RG.2.2.29306.21445> for some code examples.
This package provides functions for the joint analysis of Q sets of p-values obtained for the same list of items. This joint analysis is performed by querying a composite hypothesis, i.e. an arbitrary complex combination of simple hypotheses, as described in Mary-Huard et al. (2021) <doi:10.1093/bioinformatics/btab592> and De Walsche et al. (2023) <doi:10.1101/2024.03.17.585412>. In this approach, the Q-uplet of p-values associated with each item is distributed as a multivariate mixture, where each of the 2^Q components corresponds to a specific combination of simple hypotheses. The dependence between the p-value series is considered using a Gaussian copula function. A p-value for the composite hypothesis test is derived from the posterior probabilities.