Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
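For example, the endpoint can be queried from R with the httr package. This is a minimal sketch assuming the response body is JSON; the base URL below is a placeholder, so substitute this site's actual host:

    library(httr)

    ## Placeholder base URL -- replace with this site's actual host
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    headers(resp)                  # pagination info (e.g. number of pages)
    content(resp, as = "parsed")   # the matching packages (assumed to be JSON)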
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
These guidelines are meant to provide pragmatic, yet rigorous, guidance to drug developers and decision makers. They are shaped by three fundamental ingredients: the clinically determined margin of detriment on OS that is unacceptably high (delta null); the benefit on OS that is plausible given the mechanism of action of the novel intervention (delta alt); and the quantity of information (i.e. survival events) it is feasible to accrue given the clinical and drug development setting. The proposed guidelines facilitate transparent discussions between stakeholders, focusing on the risks of erroneous decisions and on what might be an acceptable trade-off between power and the false positive error rate.
This package implements a Monte Carlo based heterogeneity test for standardized mean differences (d), Fisher-transformed Pearson's correlations (r), and natural-logarithm-transformed odds ratios (OR) in meta-analysis studies. Depending on the presence of moderators, this Monte Carlo based test can be implemented in the random- or mixed-effects model. This package uses the rma() function from the R package metafor to obtain parameter estimates and the likelihood, so installation of the R package metafor is required. This approach refers to the studies of Hedges (1981) <doi:10.3102/10769986006002107>, Hedges & Olkin (1985, ISBN:978-0123363800), Silagy, Lancaster, Stead, Mant, & Fowler (2004) <doi:10.1002/14651858.CD000146.pub2>, Viechtbauer (2010) <doi:10.18637/jss.v036.i03>, and Zuckerman (1994, ISBN:978-0521432009).
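For context, the metafor call this package builds on looks roughly as follows. This is plain metafor usage with one of its bundled example datasets, not this package's own interface:

    library(metafor)

    ## Standardized mean differences from metafor's dat.normand1999 example data
    dat <- escalc(measure = "SMD",
                  m1i = m1i, sd1i = sd1i, n1i = n1i,
                  m2i = m2i, sd2i = sd2i, n2i = n2i,
                  data = dat.normand1999)

    ## Random-effects model; logLik() returns the likelihood used downstream
    res <- rma(yi, vi, data = dat, method = "REML")
    summary(res)
    logLik(res)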
This package performs the Multiple Factor Analysis method for quantitative, categorical, frequency, and mixed data. In addition to generating many graphics, it also provides other useful functions.
Large-scale matrix-variate data are now widely observed in various research areas such as finance, signal processing, and medical imaging. Modelling matrix-valued data with the matrix-elliptical family not only provides a flexible way to handle heavy tails and tail dependencies, but also maintains the intrinsic row and column structure of random matrices. We propose a new tool named matrix Kendall's tau which is efficient for analyzing random elliptical matrices. By applying this new type of Kendall's tau to the matrix elliptical factor model, we propose a Matrix-type Robust Two-Step (MRTS) method to estimate the loading and factor spaces. See the details in He et al. (2022) <arXiv:2207.09633>. In this package, we provide the algorithms for calculating the sample matrix Kendall's tau, the MRTS method, and the Matrix Kendall's tau Eigenvalue-Ratio (MKER) method, which is used for determining the number of factors.
This package implements a novel density-based approach for estimating unknown parameters, visualising distributions, and meta-analysing quantiles and their functions. A detailed vignette with example datasets and code to prepare data and run analyses is available at <https://bookdown.org/a2delivera/metaquant/>. The methods are described in the pre-print by De Livera, Prendergast and Kumaranathunga (2024, <doi:10.48550/arXiv.2411.10971>).
This package contains the datasets for use with the book Salvan, Sartori and Pace (2020, ISBN:978-88-470-4002-1) "Modelli Lineari Generalizzati".
Complements the book "Using R for Modelling and Quantitative Methods in Fisheries" (ISBN 9780367469894), published in 2021 by Chapman & Hall in their "Using R" series. There are numerous functions and datasets that are used in the book's many practical examples.
This package provides functions to install other languages, such as Java and Python.
Allows various models for multivariate response variables where each response is assumed to follow a double hierarchical generalized linear model. In double hierarchical generalized linear models, the mean, the dispersion parameters for the variance of random effects, and the residual variance can be further modeled as random-effect models.
Generates multivariate subgaussian stable probabilities using the QRSVN algorithm as detailed in Genz and Bretz (2002) <DOI:10.1198/106186002394>, but sampling positive stable variates rather than chi/sqrt(nu).
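For context, a sub-Gaussian stable random vector is a Gaussian vector rescaled by a positive stable variate, whereas the multivariate t setting of Genz and Bretz (2002) rescales by chi/sqrt(nu); schematically (notation mine, not the package's):

    X = \sqrt{A}\, Z, \qquad A \sim S^{+}_{\alpha/2}, \quad Z \sim N(0, \Sigma),

compared with X = Z / \sqrt{W/\nu}, W \sim \chi^{2}_{\nu}, in the multivariate t case.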
This package provides functions for measuring population divergence from genotypic data.
Calculates k-best solutions and costs for an assignment problem following the method outlined in Murty (1968) <doi:10.1287/opre.16.3.682>.
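As a point of reference, the single best assignment can be computed with clue::solve_LSAP(); Murty's method then partitions the problem to enumerate the second-best, third-best, and further solutions. The sketch below uses the clue package, not this package's own interface:

    library(clue)

    set.seed(1)
    cost <- matrix(runif(16), nrow = 4)          # 4 agents x 4 tasks

    best <- solve_LSAP(cost, maximum = FALSE)    # optimal task for each agent
    best
    sum(cost[cbind(seq_along(best), best)])      # cost of the best assignment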
Many tools for making, modifying, marking, and measuring many different types of networks, and for analysing their motifs and memberships. All functions operate with matrices, edge lists, and igraph, network, and tidygraph objects, on directed, multiplex, multimodal, signed, and other networks. The package includes functions for importing and exporting, creating and generating networks, modifying networks and node and tie attributes, and describing networks with sensible defaults.
Meta-package for statistical and machine learning with a unified interface for model fitting, prediction, performance assessment, and presentation of results. Approaches for model fitting and prediction of numerical, categorical, or censored time-to-event outcomes include traditional regression models, regularization methods, tree-based methods, support vector machines, neural networks, ensembles, data preprocessing, filtering, and model tuning and selection. Performance metrics are provided for model assessment and can be estimated with independent test sets, split sampling, cross-validation, or bootstrap resampling. Resample estimation can be executed in parallel for faster processing and nested in cases of model tuning and selection. Modeling results can be summarized with descriptive statistics; calibration curves; variable importance; partial dependence plots; confusion matrices; and ROC, lift, and other performance curves.
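A rough sketch of how such a unified interface is typically used, based on my reading of the package's fit/predict/resample functions; the exact model constructors and arguments may differ, so treat this as an assumption to check against the package documentation:

    library(MachineShop)

    ## Fit a regression model to a base-R dataset (GLMModel assumed available)
    mod <- fit(mpg ~ ., data = mtcars, model = GLMModel)

    ## Predict and assess performance on the training data (illustration only)
    pred <- predict(mod, newdata = mtcars)
    performance(mtcars$mpg, pred)

    ## Cross-validated resample estimate of performance
    res <- resample(mpg ~ ., data = mtcars, model = GLMModel, control = CVControl())
    summary(res)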
Tests the marginal correlation between a scalar response variable and a vector of explanatory variables using a max-type test with bootstrap. The test is based on the max-type statistic and its asymptotic distribution under the null hypothesis of no marginal correlation; a bootstrap procedure is used to approximate the null distribution of the test statistic. The package provides a function for performing the test. For more technical details, refer to Zhang and Laber (2014) <doi:10.1080/01621459.2015.1106403>.
Used for general multiple mediation analysis with big data sets.
The IRLS (Iteratively Reweighted Least Squares) and GMM (Generalized Method of Moments) methods are applied to estimate the mixed correlation coefficient matrix (Pearson, polyserial, polychoric), which can be estimated in pairs or simultaneously. For more information see Peng Zhang and Ben Liu (2024) <doi:10.1080/10618600.2023.2257251>; Ben Liu and Peng Zhang (2024) <doi:10.48550/arXiv.2404.06781>.
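For orientation, the kind of mixed correlation matrix being estimated can be illustrated with polycor::hetcor(), which computes Pearson, polyserial, and polychoric correlations pairwise; this is a separate package, not the interface described above:

    library(polycor)

    set.seed(1)
    x <- rnorm(200)                                              # continuous
    y <- cut(x + rnorm(200), breaks = 3, ordered_result = TRUE)  # ordinal
    z <- cut(rnorm(200), breaks = 4, ordered_result = TRUE)      # ordinal

    hetcor(data.frame(x, y, z))   # mixed Pearson/polyserial/polychoric matrix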
Perform library searches against electron ionization mass spectral databases using either the API provided by MS Search software (<https://chemdata.nist.gov/dokuwiki/doku.php?id=chemdata:nistlibs>) or custom implementations of the Identity and Similarity algorithms.
This package provides access to teaching materials for various statistics courses, including R and Python programs, Shiny apps, data, and PDF/HTML documents. These materials are stored on the Internet as a ZIP file (e.g., in a GitHub repository) and can be downloaded and displayed or run locally. The content of the ZIP file is stored temporarily or permanently. By default, the package uses the GitHub repository sigbertklinke/mmstat4.data. Additionally, the package includes association_measures.R from the archived package ryouready by Mark Heckmann and some auxiliary functions.
Calculates exact hypothesis tests to compare a treatment and a reference group with respect to multiple binary endpoints. The tested null hypothesis is an identical multidimensional distribution of successes and failures in both groups. The alternative hypothesis is a larger success proportion in the treatment group in at least one endpoint. The tests are based on the multivariate permutation distribution of subjects between the two groups. For this permutation distribution, rejection regions are calculated that satisfy one of several possible optimization criteria. In particular, regions with maximal exhaustion of the nominal significance level, maximal power under a specified alternative, or a maximal number of elements can be found. Optimization is achieved by a branch-and-bound algorithm. By application of the closed testing principle, the global hypothesis tests are extended to multiple testing procedures.
Simulation-based sensitivity analysis for causal mediation studies. It numerically and graphically evaluates the sensitivity of causal mediation analysis results to the presence of unmeasured pretreatment confounding. The proposed method has three primary advantages over existing methods. First, by using an unmeasured pretreatment confounder's conditional associations with the treatment, mediator, and outcome as sensitivity parameters, the method enables users to intuitively assess sensitivity in reference to prior knowledge about the strength of a potential unmeasured pretreatment confounder. Second, the method accurately reflects the influence of unmeasured pretreatment confounding on the efficiency of estimation of the causal effects. Third, the method can be implemented in different causal mediation analysis approaches, including regression-based, simulation-based, and propensity-score-based methods. It is applicable to both randomized experiments and observational studies.
Collection of functions to compute within-study covariances for different effect sizes, visualize data, and perform single and multiple imputations for missing data. Effect sizes include correlation (r), mean difference (MD), standardized mean difference (SMD), log odds ratio (logOR), log risk ratio (logRR), and risk difference (RD).
It offers random-forest-based functions to impute clustered incomplete data. The package is tailored for, but not limited to, imputing multitissue expression data, in which a gene's expression is measured on the collected tissues of an individual but missing on the uncollected tissues.
This package provides a hybrid of the K-means algorithm and a Majorization-Minimization method that yields robust clustering. The reference paper is Julien Mairal (2015) <doi:10.1137/140957639>. The two most important functions in the MajKMeans package are cluster_km() and cluster_MajKm(): cluster_km() clusters data without Majorization-Minimization, while cluster_MajKm() clusters data with the Majorization-Minimization method. Both of these functions calculate the sum of squares (SS) of the clustering.
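A minimal usage sketch, assuming cluster_km() and cluster_MajKm() accept a data matrix and a number of clusters; the actual argument names may differ, so check the package help pages:

    library(MajKMeans)

    set.seed(1)
    X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
               matrix(rnorm(100, mean = 3), ncol = 2))

    fit_plain <- cluster_km(X, k = 2)     # K-means without MM (assumed signature)
    fit_mm    <- cluster_MajKm(X, k = 2)  # with Majorization-Minimization (assumed signature)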