Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
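For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL is a placeholder; substitute the host serving this search form):

    # Query the package search API; pagination info is in the headers.
    library(httr)

    base_url <- "https://example.org"   # placeholder: use the real host
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    results <- content(resp)    # parsed response body
    headers(resp)               # pagination information lives here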
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a platform for the Vedic calendar system, with several functions that facilitate conversion between the Gregorian and Vedic calendar systems and help examine its impact in the time series analysis domain.
Full model selection (detection of the relevant features and estimation of the number of clusters) for model-based clustering (see <doi:10.1007/s11222-016-9670-1>). The data to analyze can be continuous, categorical, integer, or mixed. Moreover, missing values can occur and do not necessitate any pre-processing. A Shiny application permits an easy interpretation of the results.
ProPublica <https://projects.propublica.org/represent/> makes United States Congress member votes available and has developed its own unique cartogram to visually represent this data. Tools are provided to retrieve voting data, prepare voting data for plotting with ggplot2, create vote cartograms, and theme them.
This package provides an easy-to-calculate local variable importance measure based on Ceteris Paribus profiles and a global variable importance measure based on Partial Dependence Profiles.
Visual 2D point and contour plots for binary classification modeling under algorithms such as glm, rf, gbm, nnet, and svm, presented over two dimensions generated by the famd and mca methods. The package FactoMineR is used for multivariate reduction functions and the package MBA for interpolation functions. The package can be used to visualize the discriminant power of input variables and algorithmic modeling, explore outliers, compare algorithm behaviour, etc. It was created initially for teaching purposes, but it also has many practical uses under the XAI paradigm.
This is a sparklyr extension integrating VariantSpark and R. VariantSpark is a framework based on Scala and Spark for analyzing genome datasets; see <https://bioinformatics.csiro.au/>. It was tested on datasets with 3,000 samples, each containing 80 million features, in both unsupervised clustering approaches and supervised applications such as classification and regression. Genome datasets are usually written in VCF, a text file format used in bioinformatics for storing gene sequence variations. VariantSpark is thus a useful tool for genome research, because it is able to read VCF files, run analyses, and return the output in a Spark data frame.
Applies Monte Carlo permutation to generate pointwise variogram envelopes and checks for spatial dependence at different scales using a permutation test. The empirical Brown's method and Fisher's method are used to compute an overall p-value for the hypothesis test.
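For reference, Fisher's method combines k independent p-values into an overall statistic X = -2 * sum(log(p_i)), which is chi-squared with 2k degrees of freedom under the null; Brown's method generalizes this to correlated tests. A minimal sketch of the Fisher step in R (illustrative, not this package's internal code):

    # Fisher's method: combine independent p-values into one overall p-value.
    fisher_method <- function(p) {
      stopifnot(all(p > 0 & p <= 1))
      stat <- -2 * sum(log(p))
      pchisq(stat, df = 2 * length(p), lower.tail = FALSE)
    }

    fisher_method(c(0.01, 0.20, 0.40))   # overall p-value across scales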
Static and dynamic 3D plots to be used with ordination results and in diversity analysis, especially with the vegan package.
Implementation of Azure DevOps <https://azure.microsoft.com/> API calls. It enables the extraction of information about repositories, build and release definitions, and individual releases. It also helps create repositories and work items within a project without logging into Azure DevOps. A shell is also provided for calling any API service not covered by the predefined calls.
Automatically selects and visualises statistical hypothesis tests between two vectors, based on their class, distribution, sample size, and a user-defined confidence level (conf.level). Visual outputs - including box plots, bar charts, regression lines with confidence bands, mosaic plots, residual plots, and Q-Q plots - are annotated with relevant test statistics, assumption checks, and post-hoc analyses where applicable. The algorithmic workflow helps the user focus on the interpretation of test results rather than test selection. It is particularly suited for quick data analysis, e.g., in statistical consulting projects or educational settings. The test selection algorithm proceeds as follows: Input vectors of class numeric or integer are considered numerical; those of class factor are considered categorical. Assumptions of residual normality and homogeneity of variances are considered met if the corresponding test yields a p-value greater than the significance level alpha = 1 - conf.level. (1) When the response vector is numerical and the predictor vector is categorical, a test of central tendencies is selected. If the categorical predictor has exactly two levels, t.test() is applied when group sizes exceed 30 (Lumley et al. (2002) <doi:10.1146/annurev.publhealth.23.100901.140546>). For smaller samples, normality of residuals is tested using shapiro.test(); if met, t.test() is used; otherwise, wilcox.test(). If the predictor is categorical with more than two levels, an aov() is initially fitted. Residual normality is evaluated using both shapiro.test() and ad.test(); residuals are considered approximately normal if at least one test yields a p-value above alpha. If this assumption is met, bartlett.test() assesses variance homogeneity. If variances are homogeneous, aov() is used; otherwise oneway.test(). Both tests are followed by TukeyHSD(). If residual normality cannot be assumed, kruskal.test() is followed by pairwise.wilcox.test(). (2) When both the response and predictor vectors are numerical, a simple linear regression model is fitted using lm(). (3) When both vectors are categorical, Cochran's rule (Cochran (1954) <doi:10.2307/3001666>) is applied to test independence either by chisq.test() or fisher.test().
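As an illustration of branch (1), a condensed sketch in base R of the two-level case (a hypothetical helper, not the package's actual source):

    # Two-group branch: t.test() for large samples; otherwise test
    # residual normality with shapiro.test() and fall back to wilcox.test().
    select_two_group_test <- function(y, g, conf.level = 0.95) {
      alpha <- 1 - conf.level
      if (min(table(g)) > 30) return(t.test(y ~ g, conf.level = conf.level))
      residuals <- unlist(tapply(y, g, function(x) x - mean(x)))
      if (shapiro.test(residuals)$p.value > alpha) {
        t.test(y ~ g, conf.level = conf.level)
      } else {
        wilcox.test(y ~ g, conf.level = conf.level)
      }
    }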
Facilitates modeling species ecological niches and geographic distributions based on occurrences and environments that have a vertical as well as horizontal component, and projecting models into three-dimensional geographic space. Working in three dimensions is useful in an aquatic context when the organisms one wishes to model can be found across a wide range of depths in the water column. The package also contains functions to automatically generate marine model training regions using machine learning, and to interpolate and smooth patchily sampled environmental rasters using thin plate splines. Davis Rabosky AR, Cox CL, Rabosky DL, Title PO, Holmes IA, Feldman A, McGuire JA (2016) <doi:10.1038/ncomms11484>. Nychka D, Furrer R, Paige J, Sain S (2021) <doi:10.5065/D6W957CT>. Pateiro-Lopez B, Rodriguez-Casal A (2022) <https://CRAN.R-project.org/package=alphahull>.
The biomarker data set by Vermeulen et al. (2009) <doi:10.1016/S1470-2045(09)70154-8> is provided. The data source, however, is by Ruijter et al. (2013) <doi:10.1016/j.ymeth.2012.08.011>. The original data set may be downloaded from <https://medischebiologie.nl/wp-content/uploads/2019/02/qpcrdatamethods.zip>. This data set is for a real-time quantitative polymerase chain reaction (PCR) experiment that comprises the raw fluorescence data of 24,576 amplification curves. This data set comprises 59 genes of interest and 5 reference genes. Each gene was assessed on 366 neuroblastoma complementary DNA (cDNA) samples and on 18 standard dilution series samples (10-fold 5-point dilution series x 3 replicates + no template controls (NTC) x 3 replicates).
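The quoted counts are internally consistent, as a quick check shows:

    # (59 genes of interest + 5 reference genes) x (366 cDNA samples
    #  + 18 dilution-series samples) = 24,576 amplification curves
    (59 + 5) * (366 + 18)   # 24576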
Collection of common methods to determine growing season length in a simple manner. Start and end dates of the vegetation periods are calculated solely based on daily mean temperatures and the day of the year.
This package performs analysis of various genetic parameters, such as the genotypic and phenotypic coefficients of variation, heritability, genetic advance, and genetic advance as a percentage of the mean. The package also has functions for genotypic and phenotypic covariance, correlation, and path analysis. A dataset has been added to facilitate the examples. For more information refer to Singh, R.K. and Chaudhary, B.D. (1977, ISBN:8176633070).
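For orientation, the standard quantitative-genetics definitions of these parameters can be sketched as follows (hedged textbook formulas, not necessarily this package's exact implementation):

    # Vg, Vp: genotypic and phenotypic variance; m: trait mean;
    # k: selection intensity (2.06 at 5% selection).
    gcv <- function(Vg, m) 100 * sqrt(Vg) / m           # genotypic CV (%)
    pcv <- function(Vp, m) 100 * sqrt(Vp) / m           # phenotypic CV (%)
    h2  <- function(Vg, Vp) Vg / Vp                     # broad-sense heritability
    ga  <- function(Vg, Vp, k = 2.06) k * h2(Vg, Vp) * sqrt(Vp)   # genetic advance
    gam <- function(Vg, Vp, m, k = 2.06) 100 * ga(Vg, Vp, k) / m  # GA as % of mean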
Designed to help the user determine the sensitivity of a proposed causal effect to unconsidered common causes. Users can create visualizations of sensitivity and effect sizes, and determine which pattern of effects would support a causal claim for between-group differences. The number needed to treat formula is from Kraemer H.C. & Kupfer D.J. (2006) <doi:10.1016/j.biopsych.2005.09.014>.
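The cited paper relates the number needed to treat to Cohen's d via the area under the ROC curve, NNT = 1 / (2 * pnorm(d / sqrt(2)) - 1); a one-line sketch (check against the package's own implementation):

    # Kraemer & Kupfer (2006): AUC = pnorm(d / sqrt(2)); NNT = 1 / (2 * AUC - 1)
    nnt_from_d <- function(d) 1 / (2 * pnorm(d / sqrt(2)) - 1)
    nnt_from_d(0.5)   # a medium effect corresponds to an NNT of about 3.6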
This package provides a variational Bayesian finite mixture model for the clustering of categorical data, and can implement variable selection and semi-supervised outcome guiding if desired. Incorporates an option to perform model averaging over multiple initialisations to reduce the effects of local optima and improve the automatic estimation of the true number of clusters. For further details, see the paper by Rao and Kirk (2024) <doi:10.48550/arXiv.2406.16227>.
This package provides tools for analyzing the relationship between direct prices (based on labor values) and prices of production using Bayesian generalized linear models, panel data methods, partial least squares regression, canonical correlation analysis, and panel vector autoregression. It includes functions for model comparison, out-of-sample validation, and structural break detection. The methods use raw accounting data with explicit temporal structure, following Gomez Julian (2023) <doi:10.17605/OSF.IO/7J8KF> and standard econometric techniques for panel data analysis.
Video interactivity within shiny applications using video.js. Enables the status of the video to be sent from the UI to the server, and allows events such as playing and pausing the video to be triggered from the server.
An R client for the vatcheckapi.com VAT number validation API. The API requires registration of an API key. Basic features are free; some require a paid subscription. You can find the full API documentation at <https://vatcheckapi.com/docs>.
Estimates joint marker (longitudinal) and survival (time-to-event) outcomes using variational approximations. The package supports multivariate markers allowing for correlated error terms and multiple types of survival outcomes which may be left-truncated, right-censored, and recurrent. Time-varying fixed and random covariate effects are supported along with non-proportional hazards.
This package provides an R interface for interacting with the Tableau Server. It allows users to perform various operations such as publishing workbooks, refreshing data extracts, and managing users using the Tableau REST API (see <https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref.htm> for details). Additionally, it includes functions to perform manipulations on local Tableau workbooks.
This package implements D-vine quantile regression models with parametric or nonparametric pair-copulas. See Kraus and Czado (2017) <doi:10.1016/j.csda.2016.12.009> and Schallhorn et al. (2017) <doi:10.48550/arXiv.1705.08310>.
Although model selection is ubiquitous in scientific discovery, the stability and uncertainty of the selected model are often hard to evaluate. Characterizing the random behavior of the model selection procedure is the key to understanding and quantifying model selection uncertainty. This R package offers several graphical tools to visualize the distribution of the selected model, for example Gplot(), Hplot(), VDSM_scatterplot(), and VDSM_heatmap(). To the best of our knowledge, this is the first attempt to visualize such a distribution. For what the distribution of the selected model is and how it works, please see Qin, Y. and Wang, L. (2021) "Visualization of Model Selection Uncertainty" <https://homepages.uc.edu/~qinyn/VDSM/VDSM.html>.
The qda() function from the package MASS is extended to calculate weighted linear (LDA) and quadratic discriminant analyses (QDA) by changing the group variances and group means based on cell-wise uncertainties. The uncertainties can be derived, e.g., from relative errors for each individual measurement (cell), not only row-wise or column-wise uncertainties. The method can be applied to compositional data (e.g. portions of substances, concentrations) and non-compositional data.