Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
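For example, a minimal sketch of calling the API from R with the httr and jsonlite packages (the host name below is a placeholder for wherever the API is served):

library(httr)     # HTTP client
library(jsonlite) # JSON parsing

# Replace example.org with the actual host serving the API.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)  # pagination information, e.g. the total number of pages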
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Craft polished tables and plots in Markdown reports. Simply choose whether to treat your data as counts or metrics, and the package will automatically generate well-designed default tables and plots for you. The defaults are boiled down to the basics, with labeling features and simple interactive reports. All functions are tidyverse compatible.
This package provides a way of visualizing collections of time series and, optionally, their future values, forecasts for those values, and prediction intervals for the forecasts. A web-based GUI can be used to display the information in a collection of time series.
This package provides a collection of statistical tests for the martingale difference hypothesis, including the automatic portmanteau test (Escanciano and Lobato, 2009) <doi:10.1016/j.jeconom.2009.03.001> and the automatic variance ratio test (Kim, 2009) <doi:10.1016/j.frl.2009.04.003>.
Position adjustments for ggplot2 to implement "visualize as you randomize" principles, which can be especially useful when plotting experimental data.
This package provides a set of basic tools to transform functions into functions with input validation checks, in a manner suitable for both programmatic and interactive use.
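As an illustration of the general idea (not this package's actual interface), a minimal base-R sketch of wrapping a function with an input check:

# Hypothetical helper: returns a copy of `f` that validates its
# first argument before calling f. Not this package's real API.
with_check <- function(f, check, msg = "invalid input") {
  function(x, ...) {
    if (!isTRUE(check(x))) stop(msg, call. = FALSE)
    f(x, ...)
  }
}

safe_log <- with_check(log, function(x) is.numeric(x) && all(x > 0),
                       "x must be a positive numeric vector")
safe_log(10)    # works
# safe_log(-1)  # errors with a clear message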
Gaze data from the Visual World Paradigm requires significant preprocessing prior to plotting and analyzing the data. This package provides functions for preparing visual world eye-tracking data for statistical analysis and plotting. It can prepare data for linear analyses (e.g., ANOVA, Gaussian-family LMER, Gaussian-family GAMM) as well as logistic analyses (e.g., binomial-family LMER and binomial-family GAMM). Additionally, it contains various plotting functions for creating grand average and conditional average plots. See the vignette for samples of the functionality. Currently, the functions in this package are designed for handling data collected with SR Research Eyelink eye trackers using Sample Reports created in SR Research Data Viewer. While we would like to add functionality for data collected with other systems in the future, the current package is considered to be feature-complete; further updates will mainly entail maintenance and the addition of minor functionality.
Describes in words the genealogical relationship between two members of a given pedigree, using the algorithm in Vigeland (2022) <doi:10.1186/s12859-022-04759-y>. verbalisr is part of the pedsuite collection of packages for pedigree analysis. For a demonstration of verbalisr, see the online app QuickPed at <https://magnusdv.shinyapps.io/quickped>.
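A short usage sketch, assuming verbalise() is the package's main entry point, with pedigree helpers from pedtools (also part of the pedsuite):

library(pedtools)   # pedigree construction
library(verbalisr)

x <- cousinPed(1)              # pedigree containing two first cousins
verbalise(x, ids = leaves(x))  # describe their relationship in words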
Implementation of Azure DevOps <https://azure.microsoft.com/> API calls. It enables the extraction of information about repositories, build and release definitions, and individual releases. It also helps create repositories and work items within a project without logging into Azure DevOps. A generic shell is also provided for calling any API service that is not predefined.
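As a hedged sketch of the kind of REST call involved (using httr directly rather than this package's wrappers; the organization and project names are placeholders), Azure DevOps authenticates with a personal access token via basic auth:

library(httr)

org  <- "my-organization"               # placeholder
proj <- "my-project"                    # placeholder
pat  <- Sys.getenv("AZURE_DEVOPS_PAT")  # personal access token

# List the repositories in a project (a standard Azure DevOps endpoint).
resp <- GET(
  sprintf("https://dev.azure.com/%s/%s/_apis/git/repositories?api-version=6.0",
          org, proj),
  authenticate("", pat)  # empty user name, PAT as password
)
content(resp)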
Computation of the volatility impulse response function for a multivariate time series model, using the algorithm of Jin, Lin and Tamvakis (2012) <doi:10.1016/j.eneco.2012.03.003>.
The Vega-Lite JavaScript framework provides a higher-level grammar for visual analysis, akin to ggplot2 or Tableau, that generates complete Vega specifications. Functions exist which enable building a valid spec from scratch or importing a previously created spec file. Functions also exist to export spec files and to generate code which will enable plots to be embedded in properly configured web pages. The default behavior is to generate an htmlwidget.
An RcppArmadillo implementation of the Matlab code for Variational Mode Decomposition and Two-Dimensional Variational Mode Decomposition. For more information, see (i) Variational Mode Decomposition by K. Dragomiretskiy and D. Zosso, IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 531-544, Feb. 2014, <doi:10.1109/TSP.2013.2288675>; (ii) Two-Dimensional Variational Mode Decomposition by Dragomiretskiy, K. and Zosso, D. (2015), in Tai, X.-C., Bae, E., Chan, T.F. and Lysaker, M. (eds), Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2015, Lecture Notes in Computer Science, vol. 8932, Springer, <doi:10.1007/978-3-319-14612-6_15>.
This package provides a binding for the valection program, which offers various ways to sample the outputs of competing algorithms or parameterizations and to fairly assess their performance against each other. The valection C library is required to use this package and can be downloaded from <http://labs.oicr.on.ca/boutros-lab/software/valection>. Cooper, C.I., et al.; Valection: Design Optimization for Validation and Verification Studies; bioRxiv 2018; <doi:10.1101/254839>.
This package provides functions for validating the structure and properties of data frames. It answers essential questions about a data set after initial import or modification. What are the unique or missing values? What columns form a primary key? What are the properties of the numeric or categorical columns? What kind of overlap or mapping exists between two columns?
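For orientation, the kinds of questions listed above can be sketched in base R; the package bundles such checks behind dedicated functions:

df <- data.frame(id = c(1, 2, 2, 4), group = c("a", "a", NA, "b"))

sapply(df, function(x) sum(is.na(x)))      # missing values per column
sapply(df, function(x) length(unique(x)))  # unique values per column
anyDuplicated(df$id) == 0                  # does `id` form a primary key?
table(df$group, useNA = "ifany")           # profile of a categorical column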
Converts Vietnam's province names and IDs across different formats. Handles diacritics and different spellings.
Designed to help the user determine the sensitivity of a proposed causal effect to unconsidered common causes. Users can create visualizations of sensitivity and effect sizes, and determine which pattern of effects would support a causal claim for between-group differences. The number-needed-to-treat formula is from Kraemer, H.C. & Kupfer, D.J. (2006) <doi:10.1016/j.biopsych.2005.09.014>.
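For reference, a commonly cited form of the Kraemer and Kupfer (2006) conversion from Cohen's d to the number needed to treat; whether the package uses exactly this form is an assumption:

# NNT from Cohen's d via the area under the ROC curve:
# AUC = pnorm(d / sqrt(2)), NNT = 1 / (2 * AUC - 1).
nnt_from_d <- function(d) 1 / (2 * pnorm(d / sqrt(2)) - 1)
nnt_from_d(0.5)  # a medium effect corresponds to an NNT of roughly 3.6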
Constructs a virtual population from fertility and mortality rates for any country, calendar year and birth cohort in the Human Mortality Database <https://www.mortality.org> and the Human Fertility Database <https://www.humanfertility.org>. Fertility histories are simulated for every individual and their offspring, producing a multi-generation virtual population.
Interactive adverse event (AE) volcano plot for monitoring clinical trial safety. This tool allows users to view the overall distribution of AEs in a clinical trial using standard (e.g. MedDRA preferred term) or custom (e.g. gender) categories, using a volcano plot similar to the proposal by Zink et al. (2013) <doi:10.1177/1740774513485311>. This tool provides a stand-alone shiny application and flexible shiny modules, allowing it to be used as part of a more robust safety monitoring framework, such as the Shiny app from the safetyGraphics R package.
This package provides pedagogical tools for visualization and numerical computation in vector calculus. Includes functions for parametric curves, scalar and vector fields, gradients, divergences, curls, line and surface integrals, and dynamic 2D/3D graphical analysis to support teaching and learning. The implemented methods follow standard treatments in vector calculus and multivariable analysis as presented in Marsden and Tromba (2011) <ISBN:9781429215084>, Stewart (2015) <ISBN:9781285741550>, Thomas, Weir and Hass (2018) <ISBN:9780134438986>, Larson and Edwards (2016) <ISBN:9781285255869>, Apostol (1969) <ISBN:9780471000051>, Spivak (1971) <ISBN:9780805390216>, Schey (2005) <ISBN:9780071369080>, Colley (2019) <ISBN:9780321982384>, Lizarazo Osorio (2020) <ISBN:9789585450103>, Sievert (2020) <ISBN:9780367180165>, and Borowko (2013) <ISBN:9781439870791>.
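As a flavor of the numerical side, a generic central-difference gradient in base R (a sketch of the technique, not one of the package's own functions):

# Central-difference gradient of a scalar field f at point x.
num_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

f <- function(p) p[1]^2 + 3 * p[2]  # f(x, y) = x^2 + 3y
num_grad(f, c(1, 2))                # approximately c(2, 3)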
A three-step variable selection procedure based on random forests. Initially developed to handle high-dimensional data (for which the number of variables largely exceeds the number of observations), the package is very versatile and can treat data of most dimensions, for regression and supervised classification problems. The first step eliminates irrelevant variables from the dataset. The second step selects all variables related to the response, for interpretation purposes. The third step refines the selection by eliminating redundancy in the set of variables selected by the second step, for prediction purposes. Genuer, R., Poggi, J.-M. and Tuleau-Malot, C. (2015) <https://journal.r-project.org/articles/RJ-2015-018/>.
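As a flavor of the first step (ranking variables by random-forest importance so the irrelevant ones can be dropped), a generic sketch with the randomForest package; the package's own procedure is more elaborate, with data-driven thresholds:

library(randomForest)

fit <- randomForest(Species ~ ., data = iris, importance = TRUE)
imp <- importance(fit, type = 1)   # mean decrease in accuracy
sort(imp[, 1], decreasing = TRUE)  # rank variables; the lowest are candidates for elimination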
This package performs 20 omnibus tests of the composite hypothesis of variance homogeneity.
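For orientation, this is the hypothesis addressed by classical procedures such as Bartlett's test in base R; the package's 20 tests are alternatives to it:

# Classical omnibus test of variance homogeneity across groups (base R).
bartlett.test(count ~ spray, data = InsectSprays)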
R functions are not supposed to print text without giving the user the option to turn the printing on or off, typically via a Boolean verbose argument in a construct like if (verbose) print(...). But this black/white approach is rather rigid, and an approach with shades of gray might be more appropriate in many circumstances.
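The rigid construct described above looks like this in base R:

# The black/white pattern: printing is all or nothing.
summarise_data <- function(x, verbose = TRUE) {
  if (verbose) print(summary(x))
  invisible(mean(x))
}
summarise_data(rnorm(100), verbose = FALSE)  # silent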
Implementation of the Weighted Fast Greedy algorithm for community detection in networks with mixed types of attributes.
Generates, on demand or in batch, any R documentation file, whatever its kind: data, function, class, or package. It populates documentation sections, either automatically or from your input. Input code can be standard R code or offensive programming code. Documentation completeness depends on the type of code you use. With offensive programming code, expect the generated documentation to be fully complete, in both format and content. With standard R code, you will have to activate post-processing to fill in any section that requires complements. The validity of the produced manual pages is automatically tested against R documentation compliance rules. Documentation language proficiency, wording style, and phrasal adjustments remain your job.
The distributions of the weight of evidence (log Bayes factor) favouring case over noncase status in a test dataset (or test folds generated by cross-validation) can be used to quantify the performance of a diagnostic test (McKeigue (2019), <doi:10.1177/0962280218776989>). The package can be used with any test dataset on which you have observed case-control status and have computed prior and posterior probabilities of case status using a model learned on a training dataset. To quantify how the predictor will behave as a risk stratifier, the quantiles of the distributions of weight of evidence in cases and controls can be calculated and plotted.
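Since the weight of evidence is the log Bayes factor, for a binary outcome it equals the change from prior to posterior log odds; a minimal sketch in base R:

# Weight of evidence favouring case status: log posterior odds
# minus log prior odds (qlogis is the logit / log-odds function).
woe <- function(prior, posterior) qlogis(posterior) - qlogis(prior)
woe(prior = 0.1, posterior = 0.5)  # about 2.2 natural-log units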