Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
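For example, a minimal Python sketch of calling this endpoint, assuming the response body is JSON; the base URL below is a placeholder for wherever this page is hosted, and the exact pagination header names depend on the server:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://example.org"  # placeholder: replace with the host serving this page
params = urlencode({"search": "hello", "page": 1, "limit": 20})

with urlopen(f"{BASE}/api/packages?{params}") as resp:
    packages = json.load(resp)               # matching packages for this page (assumed JSON)
    pagination = dict(resp.headers.items())  # pagination details arrive in these headers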
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Helps users standardise data to the Darwin Core Standard, a global data standard to store, document, and share biodiversity data like species occurrence records. The package provides tools to manipulate data to conform with, and check validity against, the Darwin Core Standard. Using corella allows users to verify that their data can be used to build Darwin Core Archives using the galaxias package.
Allows clinicians to predict survival probabilities over the next two years for cystic fibrosis patients, based on the clinical prediction models published in Stanojevic et al. (2019) <doi:10.1183/13993003.00224-2019>.
This package provides a collection of functions for modeling fissile material operations in nuclear facilities, based on Zywiec et al (2021) <doi:10.1016/j.ress.2020.107322>.
Images are cropped to a circle with a transparent background. The function takes a vector of images, either local or from a link, and circle crops each image. Paths to the cropped images are returned for plotting with ggplot2. Also includes cropping to a hexagon, heart, parallelogram, and square.
Explore and normalize American campaign finance data. Created by the Investigative Reporting Workshop to facilitate work on The Accountability Project, an effort to collect public data into a central, standard database that is more easily searched: <https://publicaccountability.org/>.
This package provides conversion functionality between a broad range of scientific, historical, and industrial unit types.
Allows for the easy computation of complexity: the proportion of the parameter space that is in line with the hypothesis by chance. The package also comes with a Shiny application in which the calculations can be carried out.
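As a language-agnostic illustration of that definition (not this package's R interface), a small Monte Carlo sketch in Python: draw parameters from an exchangeable distribution and count how often a hypothetical ordering hypothesis holds by chance.

import numpy as np

# Hypothetical informative hypothesis H: b1 > b2 > b3.
# Complexity ~ the chance that a random point in the parameter space satisfies H.
rng = np.random.default_rng(1)
theta = rng.normal(size=(100_000, 3))   # exchangeable draws over three parameters
in_line = (theta[:, 0] > theta[:, 1]) & (theta[:, 1] > theta[:, 2])
complexity = in_line.mean()             # approx. 1/3! ~= 0.167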
This package performs simple correspondence analysis on a two-way contingency table, or multiple correspondence analysis (homogeneity analysis) on data with p categorical variables, and produces bootstrap-based elliptical confidence regions around the projected coordinates for the category points. Includes routines to plot the results in a variety of styles. Also reports the standard numerical output for correspondence analysis.
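The core of simple correspondence analysis can be sketched in a few lines of Python/numpy, shown only to illustrate the standard SVD-based computation on a hypothetical table; the bootstrap confidence ellipses and plotting are the package's own additions and are not reproduced here.

import numpy as np

N = np.array([[20, 10,  5],     # hypothetical two-way contingency table of counts
              [ 8, 25, 12],
              [ 4,  6, 30]], dtype=float)

P = N / N.sum()                                       # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                   # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
U, d, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * d) / np.sqrt(r)[:, None]            # principal coordinates of row categories
col_coords = (Vt.T * d) / np.sqrt(c)[:, None]         # principal coordinates of column categories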
Fast application of the Continuous Wavelet Transformation ('CWT') to time series, with special attention to spectroscopy. It is written using data.table and C++, and some functions can use parallel processing to speed up computation across samples. Currently, only the second derivative of a Gaussian wavelet function is implemented.
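A minimal Python sketch of that wavelet and a naive convolution-based CWT, assuming the standard (unnormalized) second derivative of a Gaussian; it illustrates the transform itself, not this package's data.table/C++ implementation.

import numpy as np

def gauss2_wavelet(width, scale):
    # Second derivative of a Gaussian ("Mexican hat"), up to normalization and sign.
    t = np.arange(-(width // 2), width // 2 + 1, dtype=float)
    a = t / scale
    return (1.0 - a**2) * np.exp(-0.5 * a**2)

def cwt(signal, scales, width=101):
    # Naive CWT: convolve the signal with the wavelet at each scale.
    return np.vstack([np.convolve(signal, gauss2_wavelet(width, s), mode="same")
                      for s in scales])

x = np.linspace(0, 10, 500)
spectrum = np.exp(-(x - 3)**2 / 0.05) + 0.5 * np.exp(-(x - 7)**2 / 0.2)  # synthetic spectrum
coeffs = cwt(spectrum, scales=[2, 5, 10, 20])                            # one row per scale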
Implementation of a cross-validation method for testing the forecasting accuracy of several multi-population mortality models. The family of multi-population models includes several models proposed in the actuarial and demographic literature. The package includes functions for fitting and forecasting the mortality rates of several populations, as well as functions for testing the forecasting accuracy of different multi-population models. References: <https://journal.r-project.org/articles/RJ-2025-018/>. Atance, D., Debon, A., and Navarro, E. (2020) <doi:10.3390/math8091550>. Bergmeir, C. & Benitez, J.M. (2012) <doi:10.1016/j.ins.2011.12.028>. Debon, A., Montes, F., & Martinez-Ruiz, F. (2011) <doi:10.1007/s13385-011-0043-z>. Lee, R.D. & Carter, L.R. (1992) <doi:10.1080/01621459.1992.10475265>. Russolillo, M., Giordano, G., & Haberman, S. (2011) <doi:10.1080/03461231003611933>. Santolino, M. (2023) <doi:10.3390/risks11100170>.
Enables creation of visualizations using the CanvasXpress framework in R. CanvasXpress is a standalone JavaScript library for reproducible research with complete tracking of data and end-user modifications stored in a single PNG image that can be played back. See <https://www.canvasxpress.org> for more information.
Download imagery tiles to a standard cache and load the data into raster objects. Facilities for AWS terrain tiles <https://registry.opendata.aws/terrain-tiles/> and Mapbox <https://www.mapbox.com/> servers are provided.
Performs multivariate random forests with compositional responses and Euclidean predictors. The compositional data are first transformed using the additive log-ratio transformation, or the alpha-transformation of Tsagris, Preston and Wood (2011) <doi:10.48550/arXiv.1106.1451>, and then the multivariate random forest of Rahman R., Otridge J. and Pal R. (2017) <doi:10.1093/bioinformatics/btw765> is applied.
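For reference, the additive log-ratio step looks roughly like this in Python/numpy; this is a sketch of the standard alr definition with the last part as reference, not the package's R functions, and the alpha-transformation is not shown.

import numpy as np

def alr(X, ref=-1):
    # Additive log-ratio: log of each part relative to a chosen reference part.
    X = np.asarray(X, dtype=float)
    X = X / X.sum(axis=1, keepdims=True)        # closure to proportions
    return np.log(np.delete(X, ref, axis=1) / X[:, [ref]])

Y = np.array([[0.2, 0.5, 0.3],                  # hypothetical compositional responses
              [0.1, 0.6, 0.3]])
Z = alr(Y)   # unconstrained values that a multivariate random forest can then model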
This package provides a consistent interface for connecting R to various data sources including file systems and databases. Designed for clinical research, connector streamlines access to, for example, ADaM and SDTM datasets. It helps deal with multiple data formats through a standardized API and centralized configuration.
Wraps cytoscape.js as a Shiny widget. cytoscape.js <https://js.cytoscape.org/> is a JavaScript-based graph theory (network) library for visualization and analysis. This package supports the visualization of networks with custom visual styles and several available layouts. Demo Shiny applications are provided in the package code.
Extends cxxfunction by saving the dynamic shared objects for reuse across R sessions.
This package provides a tidied subset of the US College Scorecard dataset, containing institutional characteristics, enrollment, student aid, costs, and student outcomes at institutions of higher education in the United States.
This package provides functions to simplify the process of preparing event and transaction data for cohort analysis.
Access chemical, hazard, bioactivity, and exposure data from the Computational Toxicology and Exposure ('CTX') APIs <https://www.epa.gov/comptox-tools/computational-toxicology-and-exposure-apis>. ctxR was developed to streamline the process of accessing the information available through the CTX APIs without requiring prior knowledge of how to use APIs. Most data is also available on the CompTox Chemical Dashboard ('CCD') <https://comptox.epa.gov/dashboard/> and other resources found at the EPA Computational Toxicology and Exposure Online Resources <https://www.epa.gov/comptox-tools>.
This package provides methods to help select General Circulation Models (GCMs) when projecting models to future scenarios. It provides clustering algorithms, distance and correlation metrics, as well as a tailor-made algorithm to detect the optimum subset of GCMs that recreates the environment of all GCMs, as proposed in Esser et al. (2025) <doi:10.1111/gcb.70008>.
Calculate confidence and consistency that measure the goodness-of-fit and transferability of predictive/potential distribution models (including species distribution models) as described by Somodi & Bede-Fazekas et al. (2024) <doi:10.1016/j.ecolmodel.2024.110667>.
Cross-validate one or multiple regression and classification models and get relevant evaluation metrics in a tidy format. Validate the best model on a test set and compare it to a baseline evaluation. Alternatively, evaluate predictions from an external model. Currently supports regression and classification (binary and multiclass). Described in chp. 5 of Jeyaraman, B. P., Olsen, L. R., & Wambugu M. (2019, ISBN: 9781838550134).
Provides functions for overlapping clustering, fuzzy clustering, and interval-valued data manipulation. The package implements the following algorithms: OKM (Overlapping Kmeans) from Cleuziou, G. (2007) <doi:10.1109/icpr.2008.4761079>; NEOKM (Non-exhaustive overlapping Kmeans) from Whang, J. J., Dhillon, I. S., and Gleich, D. F. (2015) <doi:10.1137/1.9781611974010.105>; Fuzzy Cmeans from Bezdek, J. C. (1981) <doi:10.1007/978-1-4757-0450-1>; Fuzzy I-Cmeans from de A.T. De Carvalho, F. (2005) <doi:10.1016/j.patrec.2006.08.014>.
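As an illustration of the fuzzy Cmeans idea (Bezdek's alternating membership and centre updates), a compact Python sketch follows; it is not this package's R API and omits the overlapping and interval-valued variants.

import numpy as np

def fuzzy_cmeans(X, k, m=2.0, n_iter=100, seed=0):
    # Alternate the classic fuzzy C-means membership and centre updates.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=X.shape[0])           # soft memberships (n x k)
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]         # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                          # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])  # toy data
centres, memberships = fuzzy_cmeans(X, k=2)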
Solves optimal pairing and matching problems using linear assignment algorithms. Provides implementations of the Hungarian method (Kuhn 1955) <doi:10.1002/nav.3800020109>, Jonker-Volgenant shortest path algorithm (Jonker and Volgenant 1987) <doi:10.1007/BF02278710>, Auction algorithm (Bertsekas 1988) <doi:10.1007/BF02186476>, cost-scaling (Goldberg and Kennedy 1995) <doi:10.1007/BF01585996>, scaling algorithms (Gabow and Tarjan 1989) <doi:10.1137/0218069>, push-relabel (Goldberg and Tarjan 1988) <doi:10.1145/48014.61051>, and Sinkhorn entropy-regularized transport (Cuturi 2013) <doi:10.48550/arxiv.1306.0895>. Designed for matching plots, sites, samples, or any pairwise optimization problem. Supports rectangular matrices, forbidden assignments, data frame inputs, batch solving, k-best solutions, and pixel-level image morphing for visualization. Includes automatic preprocessing with variable health checks, multiple scaling methods (standardized, range, robust), greedy matching algorithms, and comprehensive balance diagnostics for assessing match quality using standardized differences and distribution comparisons.
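To make the underlying problem concrete, here is a small Python example solved with SciPy's generic linear_sum_assignment solver rather than this package's interface: match four hypothetical treated plots to five candidate controls by minimizing total cost over a rectangular cost matrix.

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 1.0, 3.0, 2.5, 6.0],     # hypothetical plot-to-control distances
                 [2.0, 0.5, 5.0, 3.0, 4.0],
                 [3.0, 2.0, 2.0, 1.0, 5.0],
                 [4.5, 3.0, 1.5, 2.0, 2.5]])

rows, cols = linear_sum_assignment(cost)         # rectangular matrices are supported
pairs = list(zip(rows.tolist(), cols.tolist()))  # optimal plot -> control pairing
total_cost = cost[rows, cols].sum()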