Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
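For example, a minimal sketch in R using httr (the host is a placeholder and the exact pagination header names are an assumption; inspect the response to see what the service actually returns):

    library(httr)

    resp <- GET("https://example.org/api/packages",  # placeholder host
                query = list(search = "hello", page = 1, limit = 20))
    results <- content(resp)  # parsed body: the matching packages
    headers(resp)             # pagination details arrive in these headers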
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Eurostat is the statistical office of the European Union and provides high-quality statistics for Europe. A large set of the data is disseminated through the Eurostat database (<https://ec.europa.eu/eurostat/web/main/data/database>). The tools use the REST API with the Statistical Data and Metadata eXchange (SDMX) Web Services (<https://ec.europa.eu/eurostat/web/user-guides/data-browser/api-data-access/api-detailed-guidelines/sdmx2-1>) to search and download data from the Eurostat database using the SDMX standard.
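A minimal sketch using the eurostat R package's main entry points (the dataset code below is only an example):

    library(eurostat)

    hits <- search_eurostat("unemployment")  # search the table of contents
    dat  <- get_eurostat("une_rt_m")         # download one dataset via the SDMX API
    head(dat)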
Create plots to visualize the alignment of a corporate lending financial portfolio to climate change scenarios based on climate indicators (production and emission intensities) across key climate-relevant sectors of the PACTA methodology (Paris Agreement Capital Transition Assessment; <https://www.transitionmonitor.com/>). Financial institutions use PACTA to study how their capital allocation decisions align with climate change mitigation goals.
Represents high-dimensional data as tables of features, samples, and measurements, plus a design list for tracking the meaning of individual variables. Using this format, filtering, normalization, and other transformations of a dataset can be carried out flexibly. romic takes advantage of these transformations to create interactive Shiny apps for exploratory data analysis, such as an interactive heatmap.
Interface to the MIXMOD software for supervised, unsupervised, and semi-supervised classification with mixture modelling <doi:10.18637/jss.v067.i06>.
This package provides an Rcmdr "plug-in" based on the TeachingDemos package, and is primarily for illustrative purposes.
Create and manipulate hypergraph objects. This early version of rhype allows for the output of matrices associated with the hypergraphs themselves. It also uses these matrices to calculate hypergraph spectra and perform spectral comparison. Functionality coming soon includes calculation of hyperpaths and hypergraph centrality measures.
Nonparametric maximum likelihood estimation methods for random coefficient binary response models and some related functionality for sequential processing of hyperplane arrangements. See J. Gu and R. Koenker (2020) <DOI:10.1080/01621459.2020.1802284>.
Model fitting, model selection and parameter tuning procedures for a class of random network models. Many useful network modeling, estimation, and processing methods are included. The work to build and improve this package is partially supported by the NSF grants DMS-2015298 and DMS-2015134.
Calculates relevance and significance values for simple models and for many types of regression models. These are introduced in Stahel, Werner A. (2021) "Measuring Significance and Relevance instead of p-values." <https://stat.ethz.ch/~stahel/relevance/stahel-relevance2103.pdf>. These notions are also applied to replication studies, as described in the manuscript Stahel, Werner A. (2022) "'Replicability': Terminology, Measuring Success, and Strategy" available in the documentation.
The commonly used methods for relative quantification of gene expression levels obtained in real-time PCR (Polymerase Chain Reaction) experiments are the delta Ct methods, encompassing the 2^-dCt and 2^-ddCt methods, originally proposed by Kenneth J. Livak and Thomas D. Schmittgen (2001) <doi:10.1006/meth.2001.1262>. The main idea is to normalise gene expression values using an endogenous control gene, present gene expression levels in linear form by using the 2^-value transformation, and calculate differences in gene expression levels between groups of samples (or technical replicates of a single sample). The RQdeltaCT package offers functions that cover both methods for comparison of either independent groups of samples or groups with paired samples, together with importing expression datasets, performing multi-step quality control of data, enabling numerous data visualisations, enriching the standard workflow with additional useful analyses (correlation analysis, Receiver Operating Characteristic analysis, logistic regression), and conveniently exporting obtained results in table and image formats. The package has been designed to be friendly to non-experts in R programming.
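A worked example of the 2^-ddCt arithmetic in base R (the Ct values are made up; this illustrates the calculation, not the RQdeltaCT API):

    ct_target_treated <- 24.1; ct_ref_treated <- 18.0  # illustrative Ct values
    ct_target_control <- 26.3; ct_ref_control <- 18.2

    dct_treated <- ct_target_treated - ct_ref_treated  # normalise to the control gene
    dct_control <- ct_target_control - ct_ref_control
    ddct <- dct_treated - dct_control                  # difference between groups
    2^(-ddct)                                          # 4: a 4-fold up-regulation in the treated group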
Using the efficient implementation in the Boost C++ library, functions are provided to generate vectors of Universally Unique Identifiers (UUIDs) from R, supporting random (version 4), name (version 5) and time (version 7) UUIDs. The initial repository was at <https://gitlab.com/artemklevtsov/rcppuuid>.
Non-parametric clustering of joint pattern multi-genetic/epigenetic factors. This package contains functions designed to cluster subjects based on gene features including single nucleotide polymorphisms (SNPs), DNA methylation (CPG), gene expression (GE), and covariate data. The novel concept follows the general K-means (Hartigan and Wong (1979) <doi:10.2307/2346830>) framework but uses weighted Euclidean distances across the gene features to cluster subjects. This approach is unique in that it attempts to capture all pairwise interactions in an effort to cluster based on their complex biological interactions.
Processes and visualizes the output of complex phylogenetic analyses from the RevBayes phylogenetic graphical modeling software.
Analyzing the performance of artificial intelligence (AI) systems/algorithms characterized by a search-and-report strategy. Historically, observer performance has dealt with measuring radiologists' performance in search tasks, e.g., searching for lesions in medical images and reporting them, but the implicit location information has been ignored. The implemented methods apply to analyzing the absolute and relative performances of AI systems, comparing AI performance to a group of human readers, or optimizing the reporting threshold of an AI system. In addition to performing historical receiver operating characteristic (ROC) analysis (localization information ignored), the software also performs free-response receiver operating characteristic (FROC) analysis, where lesion localization information is used. A book using the software has been published: Chakraborty DP: Observer Performance Methods for Diagnostic Imaging - Foundations, Modeling, and Applications with R-Based Examples, Taylor-Francis LLC; 2017: <https://www.routledge.com/Observer-Performance-Methods-for-Diagnostic-Imaging-Foundations-Modeling/Chakraborty/p/book/9781482214840>. Online updates to this book, which use the software, are at <https://dpc10ster.github.io/RJafrocQuickStart/>, <https://dpc10ster.github.io/RJafrocRocBook/> and at <https://dpc10ster.github.io/RJafrocFrocBook/>. Supported data collection paradigms are the ROC, FROC and the location ROC (LROC). ROC data consists of a single rating per image, where the rating is the perceived confidence level that the image is that of a diseased patient. An ROC curve is a plot of true positive fraction vs. false positive fraction. FROC data consists of a variable number (zero or more) of mark-rating pairs per image, where a mark is the location of a reported suspicious region and the rating is the confidence level that it is a real lesion. LROC data consists of a rating and a location of the most suspicious region, for every image. Four models of observer performance, and curve-fitting software, are implemented: the binormal model (BM), the contaminated binormal model (CBM), the correlated contaminated binormal model (CORCBM), and the radiological search model (RSM). Unlike the binormal model, CBM, CORCBM and RSM predict proper ROC curves that do not inappropriately cross the chance diagonal. Additionally, RSM parameters are related to search performance (not measured in conventional ROC analysis) and classification performance. Search performance refers to finding lesions, i.e., true positives, while simultaneously not finding false positive locations. Classification performance measures the ability to distinguish between true and false positive locations. Knowing these separate performances allows principled optimization of reader or AI system performance. This package supersedes Windows JAFROC (jackknife alternative FROC) software V4.2.1, <https://github.com/dpc10ster/WindowsJafroc>. Package functions are organized as follows. Data file related function names are preceded by 'Df', curve fitting functions by 'Fit', included data sets by 'dataset', plotting functions by 'Plot', significance testing functions by 'St', sample size related functions by 'Ss', data simulation functions by 'Simulate' and utility functions by 'Util'. Implemented are figures of merit (FOMs) for quantifying performance and functions for visualizing empirical or fitted operating characteristics: e.g., ROC, FROC, alternative FROC (AFROC) and weighted AFROC (wAFROC) curves.
For fully crossed study designs significance testing of reader-averaged FOM differences between modalities is implemented via either the Dorfman-Berbaum-Metz or the Obuchowski-Rockette methods. Also implemented is single treatment analysis, which allows comparison of the performance of a group of radiologists to a specified value, or comparison of AI to a group of radiologists interpreting the same cases. Crossed-modality analysis is implemented, wherein there are two crossed treatment factors and the aim is to determine performance in each treatment factor averaged over all levels of the second factor. Sample size estimation tools are provided for ROC and FROC studies; these use estimates of the relevant variances from a pilot study to predict the required numbers of readers and cases in a pivotal study to achieve the desired power. Utility and data file manipulation functions allow data to be read in any of the currently used input formats, including Excel, and the results of the analysis can be viewed in text or Excel output files. The methods are illustrated with several included datasets from the author's collaborations. This update includes improvements to the code, some as a result of user-reported bugs and new feature requests, and others discovered during ongoing testing and code simplification.
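A hedged sketch following the naming scheme above ('dataset' objects and the 'Util', 'St' and 'Plot' prefixes); the exact arguments are assumptions, so consult the package documentation before use:

    library(RJafroc)

    fom <- UtilFigureOfMerit(dataset02, FOM = "Wilcoxon")      # reader-averaged figures of merit
    st  <- StSignificanceTesting(dataset02, FOM = "Wilcoxon")  # DBM/OR significance testing
    PlotEmpiricalOperatingCharacteristics(dataset02)           # empirical ROC curves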
This package provides several metrics for assessing relative importance in linear models. These can be printed, plotted and bootstrapped. The recommended metric is lmg, which provides a decomposition of the model explained variance into non-negative contributions. There is a version of this package available that additionally provides a new and also recommended metric called pmvd. If you are a non-US user, you can download this extended version from Ulrike Groemping's web site.
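A minimal sketch with relaimpo's exported helpers (the model below is illustrative):

    library(relaimpo)

    fit <- lm(mpg ~ wt + hp + qsec, data = mtcars)
    calc.relimp(fit, type = "lmg", rela = TRUE)      # lmg decomposition of explained variance
    boot <- boot.relimp(fit, type = "lmg", b = 500)  # bootstrap the metric
    booteval.relimp(boot)                            # confidence intervals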
The metrics() function calculates measures of scholarly impact. These include conventional measures, such as the number of publications and the total citations to all publications, as well as modern and robust metrics based on the vector of citations associated with each publication, such as the h index and many of its variants or rivals. These methods are described in Ruscio et al. (2012) <doi:10.1080/15366367.2012.711147>.
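A worked example of the h index in base R (illustrative citation counts; metrics() wraps this and many related measures):

    citations <- c(45, 32, 12, 9, 7, 4, 2, 1, 0)  # made-up citation vector
    sorted <- sort(citations, decreasing = TRUE)
    h <- sum(sorted >= seq_along(sorted))  # largest h such that h papers have >= h citations
    h  # 5: five papers each have at least five citations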
Implementation of Taylor Regression Estimator (TRE), Tulip Extreme Finding Estimator (TEFE), Bell Extreme Finding Estimator (BEFE), Integration Extreme Finding Estimator (IEFE) and Integration Root Finding Estimator (IRFE) for roots, extrema and inflections of a curve. Christopoulos, DT (2019) <doi:10.13140/RG.2.2.17158.32324>. Christopoulos, DT (2016) <doi:10.2139/ssrn.3043076>. Christopoulos, DT (2016) <https://demovtu.veltech.edu.in/wp-content/uploads/2016/04/Paper-04-2016.pdf>. Christopoulos, DT (2014) <doi:10.48550/arXiv.1206.5478>.
This package contains three functions that query the AuriQ Systems Essentia Database and return the results in R. essQuery takes a single Essentia command and captures the output in R, where you can save it to a dataframe or stream it directly into additional analysis. read.essentia takes an Essentia script and captures the output CSV data in the same way. capture.essentia takes a file containing any number of Essentia commands and captures the output of the specified statements into R dataframes. Essentia can be downloaded for free at <http://www.auriq.com/documentation/source/install/index.html>.
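A hedged sketch using the three documented entry points (the package name in the library() call and the argument forms are assumptions; the command and file names are placeholders):

    library(Ressentia)  # package name assumed from the description

    df1 <- essQuery("ess exec 'select * from sales'")  # single Essentia command
    df2 <- read.essentia("script.ess")                 # capture a script's CSV output
    dfs <- capture.essentia("commands.txt")            # many commands -> R dataframes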
Compiling regression results into a publishable format, conducting post-hoc hypothesis testing, and plotting moderating effects (the effect of X on Y becomes stronger/weaker as Z increases).
This package provides tools to help developers and producers manipulate R objects and outputs. It includes tools for displaying results and objects, and for formatting them consistently.
This package provides and extends the Fuzzy Coco algorithm by wrapping the FuzzyCoCo C++ library, cf. <https://github.com/Lonza-RND-Data-Science/fuzzycoco>. Fuzzy Coco constructs systems that predict the outcome of a human decision-making process while providing an understandable explanation of a possible reasoning leading to it. The constructed fuzzy systems are composed of rules and linguistic variables. This package provides a classic S3 interface (fit_xy()/fit()/predict()/evaluate()) and a 'tidymodels'/'parsnip' interface, a custom engine with a custom iteration stop criterion and progress bar support, as well as a systematic implementation that does not rely on genetic programming but rather explores all possible combinations.
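A hedged sketch of the classic S3 interface named above (the package name in the library() call and the argument details are assumptions based on the description):

    library(fuzzycoco)  # package name assumed from the repository name

    model <- fit_xy(x = iris[, 1:4], y = iris$Species)  # construct a fuzzy system
    preds <- predict(model, iris[, 1:4])                # apply the fitted rules
    evaluate(model, x = iris[, 1:4], y = iris$Species)  # assess fit quality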
Build regular expressions piece by piece using human readable code. This package contains Unicode functionality, and is primarily intended to be used by package developers.
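A hedged sketch of piece-by-piece construction in the style of the rebus family (the building blocks follow rebus core; treat the details as assumptions):

    library(rebus)

    # ^\w+@\d+$ assembled from readable pieces
    pattern <- START %R% one_or_more(WRD) %R% literal("@") %R% one_or_more(DGT) %R% END
    grepl(pattern, c("gcc@10", "10@gcc"), perl = TRUE)  # TRUE FALSE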
Processes logical operations such as AND/OR/NOT dynamically, and handles nesting in the operations.
Allows for easy-to-use testing and evaluation of linear equality and inequality restrictions on parameters and effects in (generalized) linear statistical models.
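A hedged sketch with restriktor's main entry point (the constraint syntax shown is an assumption; check the package documentation):

    library(restriktor)

    fit  <- lm(mpg ~ wt + hp, data = mtcars)
    rfit <- restriktor(fit, constraints = "wt < 0; hp < 0")  # inequality-restricted fit
    summary(rfit)
    iht(fit, constraints = "wt < 0; hp < 0")  # informative hypothesis test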