Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
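For example, here is a minimal sketch of calling this endpoint from R with the httr package; the base URL is a placeholder for this site's address.

    library(httr)

    ## Placeholder base URL -- replace with this site's address.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    results <- content(resp)   # parsed list of matching packages
    headers(resp)              # pagination details are in the response headers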
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The Variable Infiltration Capacity (VIC) model is a macroscale hydrologic model that solves full water and energy balances, originally developed by Xu Liang at the University of Washington (UW). The version of the VIC source code used is 5.0.1, from <https://github.com/UW-Hydro/VIC/>; see Hamman et al. (2018). Development and maintenance of the current official version of the VIC model is led by the UW Hydro (Computational Hydrology) group in the Department of Civil and Environmental Engineering at UW. VIC is a research model and, in its various forms, has been applied to most of the major river basins around the world, as well as globally <http://vic.readthedocs.io/en/master/Documentation/References/>. References: "Liang, X., D. P. Lettenmaier, E. F. Wood, and S. J. Burges (1994), A simple hydrologically based model of land surface water and energy fluxes for general circulation models, J. Geophys. Res., 99(D7), 14415-14428, <doi:10.1029/94JD00483>"; "Hamman, J. J., Nijssen, B., Bohn, T. J., Gergel, D. R., and Mao, Y. (2018), The Variable Infiltration Capacity model version 5 (VIC-5): infrastructure improvements for new applications and reproducibility, Geosci. Model Dev., 11, 3481-3496, <doi:10.5194/gmd-11-3481-2018>".
Offers a wide range of functions for reading and writing data in various file formats, including CSV, RDS, Excel and ZIP files. Additionally, it provides functions for retrieving metadata associated with files, such as file size and creation date, making it easy to manage and organize large data sets. This package is designed to simplify data import and export tasks and to provide users with a comprehensive set of tools for working with different types of data files.
This package provides easy-to-use tools for data analysis and visualization for hyperspectral remote sensing (also known as imaging spectroscopy), with a particular focus on vegetation hyperspectral data analysis. It consists of a set of functions ranging from organizing hyperspectral data in the proper data structure, through spectral feature selection, vegetation index calculation, and multivariate analysis, to visualizing spectra and analysis results in the ggplot2 style.
This package provides methods to calculate diagnostics for multicollinearity among predictors in a linear or generalized linear model. It also provides methods to visualize those diagnostics following Friendly & Kwan (2009), "Where's Waldo: Visualizing Collinearity Diagnostics", <doi:10.1198/tast.2009.0012>. These include better tabular presentation of collinearity diagnostics that highlight the important numbers, a semi-graphic tableplot of the diagnostics to make warning and danger levels more salient, and a "collinearity biplot" of the smallest dimensions of predictor space, where collinearity is most apparent.
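As background, a standard diagnostic in this setting is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors. The base-R sketch below only illustrates that idea; it is not this package's own interface.

    ## Illustrative VIF computation for one predictor (base R only).
    set.seed(1)
    x1 <- rnorm(100)
    x2 <- x1 + rnorm(100, sd = 0.1)   # nearly collinear with x1
    x3 <- rnorm(100)
    r2_x1  <- summary(lm(x1 ~ x2 + x3))$r.squared
    vif_x1 <- 1 / (1 - r2_x1)
    vif_x1   # a large value flags collinearity involving x1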
Constructs a virtual population from fertility and mortality rates for any country, calendar year and birth cohort in the Human Mortality Database <https://www.mortality.org> and the Human Fertility Database <https://www.humanfertility.org>. Fertility histories are simulated for every individual and their offspring, producing a multi-generation virtual population.
An R interface to the Project VoteSmart <https://justfacts.votesmart.org/> API.
Procedures for the manipulation, normalization, and plotting of phonetic and sociophonetic vowel formant data. vowels is the backend for the NORM website.
This package provides a set of functions for generating HTML to embed hosted video in your R Markdown documents or Shiny applications.
This package provides direct access to linked names for the same entity across the world's major name authority files, including national and regional variations in language, character set, and spelling. For more information go to <https://viaf.org/>.
Automatically selects and visualises statistical hypothesis tests between two vectors, based on their class, distribution, sample size, and a user-defined confidence level (conf.level). Visual outputs - including box plots, bar charts, regression lines with confidence bands, mosaic plots, residual plots, and Q-Q plots - are annotated with relevant test statistics, assumption checks, and post-hoc analyses where applicable. The algorithmic workflow helps the user focus on the interpretation of test results rather than test selection. It is particularly suited for quick data analysis, e.g., in statistical consulting projects or educational settings. The test selection algorithm proceeds as follows: Input vectors of class numeric or integer are considered numerical; those of class factor are considered categorical. Assumptions of residual normality and homogeneity of variances are considered met if the corresponding test yields a p-value greater than the significance level alpha = 1 - conf.level. (1) When the response vector is numerical and the predictor vector is categorical, a test of central tendencies is selected. If the categorical predictor has exactly two levels, t.test() is applied when group sizes exceed 30 (Lumley et al. (2002) <doi:10.1146/annurev.publhealth.23.100901.140546>). For smaller samples, normality of residuals is tested using shapiro.test(); if met, t.test() is used; otherwise, wilcox.test(). If the predictor is categorical with more than two levels, an aov() is initially fitted. Residual normality is evaluated using both shapiro.test() and ad.test(); residuals are considered approximately normal if at least one test yields a p-value above alpha. If this assumption is met, bartlett.test() assesses variance homogeneity. If variances are homogeneous, aov() is used; otherwise oneway.test(). Both tests are followed by TukeyHSD(). If residual normality cannot be assumed, kruskal.test() is followed by pairwise.wilcox.test(). (2) When both the response and predictor vectors are numerical, a simple linear regression model is fitted using lm(). (3) When both vectors are categorical, Cochran's rule (Cochran (1954) <doi:10.2307/3001666>) is applied to test independence either by chisq.test() or fisher.test().
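As an illustration of the two-group branch described above, here is a simplified base-R sketch of that selection logic; it is not the package's actual implementation or interface.

    ## Simplified sketch: choose a two-group test of central tendencies.
    select_two_group_test <- function(response, group, conf.level = 0.95) {
      stopifnot(is.numeric(response), is.factor(group), nlevels(group) == 2)
      alpha <- 1 - conf.level
      if (all(table(group) > 30)) {                 # large groups: use t.test() directly
        return(t.test(response ~ group, conf.level = conf.level))
      }
      res <- residuals(aov(response ~ group))       # small groups: check residual normality
      if (shapiro.test(res)$p.value > alpha) {
        t.test(response ~ group, conf.level = conf.level)
      } else {
        wilcox.test(response ~ group)
      }
    }

    ## Example: select_two_group_test(mtcars$mpg, factor(mtcars$am))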
This package provides a set of visual input controls for Shiny apps to facilitate filtering across multiple outputs.
Under a different representation of the multivariate normal (MVN) probability, we can use the Vecchia approximation to sample the integrand at linear complexity with respect to n. Additionally, both the SOV algorithm from Genz (1992) and the exponential-tilting method from Botev (2017) can be adapted to linear complexity. The reference for the method implemented in this package is Jian Cao and Matthias Katzfuss (2024) "Linear-Cost Vecchia Approximation of Multivariate Normal Probabilities" <doi:10.48550/arXiv.2311.09426>. Two major references for the development of our method are Alan Genz (1992) "Numerical Computation of Multivariate Normal Probabilities" <doi:10.1080/10618600.1992.10477010> and Z. I. Botev (2017) "The Normal Law Under Linear Restrictions: Simulation and Estimation via Minimax Tilting" <doi:10.48550/arXiv.1603.04166>.
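For context, the quantity being approximated is the n-dimensional MVN rectangle probability, written here for a zero-mean distribution (standard background, not the papers' exact notation):

$$
\Phi_n(\mathbf{a}, \mathbf{b}; \Sigma)
  = \int_{a_1}^{b_1} \cdots \int_{a_n}^{b_n}
    \frac{1}{\sqrt{(2\pi)^n \lvert \Sigma \rvert}}
    \exp\!\left(-\tfrac{1}{2}\, \mathbf{x}^\top \Sigma^{-1} \mathbf{x}\right)
    \, d\mathbf{x},
$$

whose evaluation cost the Vecchia-based approach reduces to linear in n.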
This package provides functions for metrics and plots for model evaluation, based on vectors of observed and predicted values. Method: Kristin Piikki, Johanna Wetterlind, Mats Soderstrom and Bo Stenberg (2021) <doi:10.1111/SUM.12694>.
This package provides fast sampling from the von Mises-Fisher distribution using the method proposed by Andrew T. A. Wood (1994) <doi:10.1080/03610919408813161>.
Alternative splicing produces a variety of different protein products from a given gene. VALERIE enables visualisation of alternative splicing events from high-throughput single-cell RNA-sequencing experiments. VALERIE computes percent spliced-in (PSI) values for user-specified genomic coordinates corresponding to alternative splicing events. PSI is the proportion of sequencing reads supporting the included exon/intron as defined by Shiozawa (2018) <doi:10.1038/s41467-018-06063-x>. PSI values are inferred from sequencing read data based on specialised infrastructures for representing and computing annotated genomic ranges by Lawrence (2013) <doi:10.1371/journal.pcbi.1003118>. Computed PSI values for each single cell are subsequently presented in the form of a heatmap implemented using the pheatmap package by Kolde (2010) <https://CRAN.R-project.org/package=pheatmap>. A broad overview of the mean PSI difference and associated p-values across different user-defined groups of single cells is presented in the form of a line graph using the ggplot2 package by Wickham (2007) <https://CRAN.R-project.org/package=ggplot2>.
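In formula form, the PSI value described above is (notation ours, following the definition in the text):

$$
\mathrm{PSI} = \frac{N_{\text{inclusion}}}{N_{\text{inclusion}} + N_{\text{exclusion}}},
$$

where N_inclusion and N_exclusion are the numbers of reads supporting inclusion and exclusion of the exon/intron, respectively.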
Historical results for the state of Virginia lottery draw games. Data were downloaded from <https://www.valottery.com/>.
This package provides a suite of analytical functionalities to process and analyze visual meteor observations from the Visual Meteor Database of the International Meteor Organization <https://www.imo.net/>.
This package provides an R interface for interacting with the Tableau Server. It allows users to perform various operations such as publishing workbooks, refreshing data extracts, and managing users using the Tableau REST API (see <https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref.htm> for details). Additionally, it includes functions to perform manipulations on local Tableau workbooks.
This package provides a set of functions for data transformations. Transformations are performed on character and numeric data. As the scope of the package is within Student Analytics, there are functions focused on the academic year.
This package provides a reference implementation of the Vertical Weighted Strips method explored by Raim, Livsey, and Irimata (2025) <doi:10.48550/arXiv.2401.09696> for rejection sampling.
Variance function estimation for models proposed by W. Sadler in his variance function program ('VFP', www.aacb.asn.au/AACB/Resources/Variance-Function-Program). The idea is to fit multiple variance functions to a data set and then assess which function best reflects the relationship Var ~ Mean. For in-vitro diagnostic ('IVD') assays, modeling this relationship is of great importance when individual test results are used for defining follow-up treatment of patients.
Utilities for verifying discrete, continuous, and probabilistic forecasts, as well as forecasts expressed as parametric distributions, are included.
This package provides a library for creating time-based charts, like Gantt charts or timelines. Possible outputs include ggplot2 diagrams, plotly.js graphs, Highcharts.js widgets and data.frames. Results can be used in the RStudio viewer pane, in R Markdown documents or in Shiny apps. In the interactive outputs created by vistime() and hc_vistime(), you can interact with the plot using mouse hover or zoom.
Empirical models for runoff, erosion, and phosphorus loss across a vegetated filter strip, given slope, soils, climate, and vegetation (Gall et al., 2018) <doi:10.1007/s00477-017-1505-x>. It also includes functions for deriving climate parameters from measured daily weather data, and for simulating rainfall. Models implemented include MUSLE (Williams, 1975) and APLE (Vadas et al., 2009 <doi:10.2134/jeq2008.0337>).