Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
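As a rough sketch of how this endpoint might be queried, the R snippet below uses httr to request a page of results and inspect the pagination headers; the base URL is a placeholder and should be replaced with the actual host of this service.

    library(httr)

    # Placeholder base URL -- substitute the real host serving this API.
    base_url <- "https://example.org/api/packages"

    # Ask for the first page of results matching "hello", 20 items per page.
    resp <- GET(base_url, query = list(search = "hello", page = 1, limit = 20))

    content(resp, as = "parsed")   # the matching packages
    headers(resp)                  # pagination metadata lives in the response headers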
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Given a CSV file with titles and abstracts, the package creates a document-term matrix that is lemmatized and stemmed and can be used directly to train machine learning methods for automatic title-abstract screening in the preparation of a meta-analysis.
This package contains functions intended to facilitate the production of plant taxonomic monographs. The package includes functions to convert tables into taxonomic descriptions, lists of collectors, examined specimens, and identification keys (dichotomous and interactive), and it can generate a monograph skeleton. Additionally, wrapper functions to batch the production of phenology histograms and of distributional and diversity maps are available.
Offers an easy and automated way to scale up individual-level space use analysis to that of groups. Contains a function from the move package to calculate a dynamic Brownian bridge movement model from movement data for individual animals, as well as functions to visualize and quantify space use for individuals aggregated in groups. Originally written with passive acoustic telemetry in mind, this package also provides functionality to account for unbalanced acoustic receiver array designs and to handle satellite tag data.
This package implements methods to normalize multiplexed imaging data, including statistical metrics and visualizations to quantify technical variation in this data type. Reference for methods listed here: Harris, C., Wrobel, J., & Vandekar, S. (2022). mxnorm: An R Package to Normalize Multiplexed Imaging Data. Journal of Open Source Software, 7(71), 4180, <doi:10.21105/joss.04180>.
This package implements three parallel forecast combinations of Markov-switching GARCH and extreme learning machine models, along with selection of the appropriate model for volatility forecasting. For method details see Hsiao C, Wan SK (2014) <doi:10.1016/j.jeconom.2013.11.003>, Hansen BE (2007) <doi:10.1111/j.1468-0262.2007.00785.x>, and Elliott G, Gargano A, Timmermann A (2013) <doi:10.1016/j.jeconom.2013.04.017>.
According to a phenomenon known as "the wisdom of the crowds," combining point estimates from multiple judges often provides a more accurate aggregate estimate than using a point estimate from a single judge. However, if the judges use shared information in their estimates, the simple average will over-emphasize this common component at the expense of the judges' private information. Asa Palley & Ville Satopää (2021), "Boosting the Wisdom of Crowds Within a Single Judgment Problem: Selective Averaging Based on Peer Predictions" <https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=3504286>, propose a procedure for calculating a weighted average of the judges' individual estimates such that the resulting aggregate estimate appropriately combines the judges' collective information within a single estimation problem. The authors use both simulation and data from six experimental studies to illustrate that the weighting procedure outperforms existing averaging-like methods, such as the equally weighted average, trimmed average, and median. This aggregate estimate -- known as "the knowledge-weighted estimate" -- takes as inputs a) the judges' estimates of a continuous outcome (E) and b) their predictions of the others' average estimate of this outcome (P). In this R package, the function knowledge_weighted_estimate(E, P) implements the knowledge-weighted estimate. Its use is illustrated with a simple stylized example and on real-world experimental data.
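A minimal sketch of the call described above follows; the estimate vector E and prediction vector P are purely illustrative values, and the package providing the function is assumed to be installed and loaded.

    # Judges' estimates of the outcome (E) and their predictions of the
    # others' average estimate (P); six illustrative judges.
    E <- c(50, 134, 206, 290, 326, 374)
    P <- c(26, 92, 116, 218, 218, 206)

    # Knowledge-weighted aggregate of the six estimates.
    knowledge_weighted_estimate(E, P)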
This package provides a multi-action button for use in shiny applications.
The sample mean and standard deviation are two commonly used statistics in meta-analyses, but some trials use other summary statistics, such as the median and quartiles, to report the results. Therefore, researchers need to transform that information back to the sample mean and standard deviation. This package implements sample mean estimators by Luo et al. (2016) <arXiv:1505.05687>, sample standard deviation estimators by Wan et al. (2014) <arXiv:1407.8038>, and the best linear unbiased estimators (BLUEs) of location and scale parameters by Yang et al. (2018, submitted), based on summaries derived from sample quantiles in a meta-analysis.
Simplifies Brazilian names phonetically using a custom metaphoneBR algorithm that preserves ending vowels. Useful for name-matching pipelines, as it preserves the gender information generally carried by ending vowels in Portuguese. Mation (2025) <doi:10.6082/uchicago.15104>.
This package provides a collection of methods for large scale single mediator hypothesis testing. The six included methods for testing the mediation effect are Sobel's test, Max P test, joint significance test under the composite null hypothesis, high dimensional mediation testing, divide-aggregate composite null test, and Sobel's test under the composite null hypothesis. Du et al (2023) <doi:10.1002/gepi.22510>.
There are two functions, meta2d and meta3d, for detecting rhythmic signals from time-series datasets. For analyzing time-series datasets without individual information, meta2d is suggested; it can incorporate multiple methods from ARSER, JTK_CYCLE, and Lomb-Scargle in the detection of rhythms of interest. For analyzing time-series datasets with individual information, meta3d is suggested; it applies any one of these three methods to the time-series data of each individual separately and reports integrated values based on the analysis results for each individual.
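The sketch below shows roughly how meta2d might be invoked on a group-level dataset; the argument names (infile, filestyle, timepoints, cycMethod), their values, and the input file are assumptions about this package's interface, so check the documentation before use.

    # Illustrative call only: argument names are assumptions about the
    # interface described above and should be verified against the manual.
    meta2d(infile = "expression_timeseries.csv",   # hypothetical input file
           filestyle = "csv",
           timepoints = seq(0, 44, by = 4),        # sampling times, e.g. in hours
           cycMethod = c("ARS", "JTK", "LS"))      # ARSER, JTK_CYCLE, Lomb-Scargle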
Constructs the normalized Laplacian matrix of a square matrix, returns the eigenvectors (singular vectors), and provides a visualization of the normalized Laplacian map.
Regression methods to quantify the relation between two measurement methods are provided by this package. In particular, it addresses regression problems with errors in both variables and without repeated measurements. It implements the Clinical and Laboratory Standards Institute (CLSI) recommendations (see J. A. Budd et al. (2018) <https://clsi.org/standards/products/method-evaluation/documents/ep09/>) for analytical method comparison and bias estimation using patient samples. Furthermore, algorithms for Theil-Sen and equivariant Passing-Bablok estimators are implemented; see F. Dufey (2020) <doi:10.1515/ijb-2019-0157> and J. Raymaekers and F. Dufey (2022) <arXiv:2202.08060>. The robust M-Deming and (experimental) MM-Deming estimators are also available; see G. Pioda (2021) <arXiv:2105.04628>. A comprehensive overview of the implemented methods and references can be found in the manual pages mcrPioda-package and mcreg.
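As an illustrative sketch of the mcreg function referenced in the manual pages above: the argument name method.reg and its "PaBa" value for Passing-Bablok regression are assumptions about the interface, and x and y are made-up measurement vectors.

    # Two hypothetical sets of measurements of the same samples by two methods.
    x <- c(1.2, 2.4, 3.1, 4.0, 5.3, 6.1)
    y <- c(1.1, 2.6, 3.0, 4.2, 5.1, 6.4)

    # Fit an equivariant Passing-Bablok regression; argument name and value
    # are assumptions -- consult the mcreg manual page.
    fit <- mcreg(x, y, method.reg = "PaBa")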
Quickly and conveniently create interactive visualisations of spatial data with or without background maps. Attributes of displayed features are fully queryable via pop-up windows. Additional functionality includes methods to visualise true- and false-color raster images and bounding boxes.
Supplementary materials and datasets for the book "Modern Psychometrics With R" (Mair, 2018, Springer useR! series).
Extends the mlr3 ecosystem to functional analysis by adding support for irregular and regular functional data as defined in the tf package. The package provides PipeOps for preprocessing functional columns and for extracting scalar features, thereby allowing standard machine learning algorithms to be applied afterwards. Available operations include simple functional features such as the mean or maximum, smoothing, interpolation, flattening, and functional PCA.
Distance between multivariate t distributions, as presented by N. Bouhlel and D. Rousseau (2023) <doi:10.1109/LSP.2023.3324594>.
Incorporates a Bayesian monotonic single-index mixed-effect model with a multivariate skew-t likelihood, specifically designed to handle survey weight adjustments. Features include a simulation program and an associated Gibbs sampler for model estimation. The single-index function is constrained to be monotonic increasing, utilizing a customized Gaussian process prior for precise estimation. The model assumes random effects follow a canonical skew-t distribution, while residuals are represented by a multivariate Student-t distribution. Offers robust Bayesian adjustments to integrate survey weight information effectively.
This package provides a way to estimate and test marginal mediation effects for zero-inflated compositional mediators. Estimates of the Natural Indirect Effect (NIE) and Natural Direct Effect (NDE) of each taxon, as well as their standard errors and confidence intervals, are provided as outputs. Zeros will not be imputed during analysis. See Wu et al. (2022) <doi:10.3390/genes13061049>.
This package provides a complement to all editions of *Modern Data Science with R* (ISBN: 978-0367191498, publisher URL: <https://www.routledge.com/Modern-Data-Science-with-R/Baumer-Kaplan-Horton/p/book/9780367191498>). This package contains data and code to complete exercises and reproduce examples from the text. It also facilitates connections to the SQL database server used in the book. All editions of the book are supported by this package.
Visualization of multi-dimensional data arising in multi-objective optimization, including plots of the empirical attainment function (EAF), M. López-Ibáñez, L. Paquete, and T. Stützle (2010) <doi:10.1007/978-3-642-02538-9_9>, and symmetric Vorob'ev expectation and deviation, M. Binois, D. Ginsbourger, O. Roustant (2015) <doi:10.1016/j.ejor.2014.07.032>, among others.
This package provides functions for diagnostic meta-analysis. In addition to basic analysis and visualization, the bivariate model of Reitsma et al. (2005), which is equivalent to the HSROC model of Rutter & Gatsonis (2001), can be fitted. A newer approach to diagnostic meta-analysis by Holling et al. (2012) is also available. Standard methods such as summary, plot, and so on are provided.
Functionality for generating and plotting random mazes. The mazes are based on matrices, so they can only consist of vertical and horizontal lines along a regular grid. But there is no need to use every possible space, so they can take on many different shapes.
Estimation and comparison of the performances of diagnostic tests in multi-reader multi-case studies where true case statuses (or ground truths) are known and one or more readers provide test ratings for multiple cases. Reader performance metrics are provided for area under and expected utility of ROC curves, likelihood ratio of positive or negative tests, and sensitivity and specificity. ROC curves can be estimated empirically or with binormal or binormal likelihood-ratio models. Statistical comparisons of diagnostic tests are based on the ANOVA model of Obuchowski-Rockette and the unified framework of Hillis (2005) <doi:10.1002/sim.2024>. The ANOVA can be conducted with data from a full factorial, nested, or partially paired study design; with random or fixed readers or cases; and covariances estimated with the DeLong method, jackknifing, or an unbiased method. Smith and Hillis (2020) <doi:10.1117/12.2549075>.