Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
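For example, a minimal query from Python (the base URL below is a placeholder for wherever this site is hosted; the exact pagination header names are whatever the server returns):

import requests

BASE = "https://example.org"  # placeholder: substitute this site's address

resp = requests.get(f"{BASE}/api/packages",
                    params={"search": "hello", "page": 1, "limit": 20})
print(resp.json())     # the packages matching this page of results
print(resp.headers)    # pagination information is exposed here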
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Enables computationally efficient parameter estimation by variational Bayesian methods for various diagnostic classification models (DCMs). DCMs are a class of discrete latent variable models for classifying respondents into latent classes that typically represent distinct combinations of skills they possess. Recently, to meet the growing need for large-scale diagnostic measurement in educational, psychological, and psychiatric settings, variational Bayesian inference has been developed as a computationally efficient alternative to Markov chain Monte Carlo methods, e.g., Yamaguchi and Okada (2020a) <doi:10.1007/s11336-020-09739-w>, Yamaguchi and Okada (2020b) <doi:10.3102/1076998620911934>, Yamaguchi (2020) <doi:10.1007/s41237-020-00104-w>, Oka and Okada (2023) <doi:10.1007/s11336-022-09884-4>, and Yamaguchi and Martinez (2023) <doi:10.1111/bmsp.12308>. To facilitate their application, variationalDCM provides a collection of recently proposed variational Bayesian estimation methods for various DCMs.
This package implements the Vine Copula Change Point (VCCP) methodology for the estimation of the number and location of multiple change points in the vine copula structure of multivariate time series. The method uses vine copulas, various state-of-the-art segmentation methods to identify multiple change points, and a likelihood ratio test or the stationary bootstrap for inference. The vine copulas allow for various forms of dependence between time series including tail, symmetric and asymmetric dependence. The functions have been extensively tested on simulated multivariate time series data and fMRI data. For details on the VCCP methodology, please see Xiong & Cribben (2021).
Data from the United Nations' World Population Prospects 2008.
An interface to WordNet using the Jawbone Java API to WordNet. WordNet (<https://wordnet.princeton.edu/>) is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. Please note that WordNet(R) is a registered tradename. Princeton University makes WordNet available to research and commercial users free of charge provided the terms of their license (<https://wordnet.princeton.edu/license-and-commercial-use>) are followed, and proper reference is made to the project using an appropriate citation (<https://wordnet.princeton.edu/citing-wordnet>). The WordNet database files need to be made available separately, either via the wordnetDicts package from <https://datacube.wu.ac.at>, by installing system packages where available, or by direct download from <https://wordnetcode.princeton.edu/3.0/WNdb-3.0.tar.gz>.
Simulates the results of completed randomized controlled trials as if they had been conducted as adaptive Multi-Arm Bandit (MAB) trials instead. Augmented inverse probability weighted (AIPW) estimation, outlined by Hadad et al. (2021) <doi:10.1073/pnas.2014602118>, is used to robustly estimate the probability of success for each treatment arm under the adaptive design. Provides customization options to simulate perfect or imperfect information, stationary or non-stationary bandits, and blocked treatment assignments, along with control augmentation and other hybrid strategies for assigning treatment arms. The methods used in the simulations were inspired by Offer-Westort et al. (2021) <doi:10.1111/ajps.12597>.
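As a rough illustration of the AIPW idea, a minimal sketch of the generic doubly robust score (not this package's implementation; variable names are invented):

import numpy as np

def aipw_scores(y, assigned, arm, e, m_hat):
    """Generic AIPW (doubly robust) scores for one treatment arm.

    y        : observed outcomes over T steps
    assigned : arm actually assigned at each step
    arm      : the arm whose mean outcome we want to estimate
    e        : probability that `arm` was assigned at each step
    m_hat    : model-based estimate of the arm's mean outcome at each step
    """
    picked = (assigned == arm).astype(float)
    return m_hat + picked / e * (y - m_hat)

# Averaging the scores gives an estimate of the arm's success probability;
# Hadad et al. (2021) additionally reweight the scores to obtain valid
# confidence intervals under adaptive assignment.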
This package provides a collection of functions related to novel methods for estimating R(t), created by the lab of Professor Laura White. Currently implemented methods include two-step Bayesian back-calculation and nowcasting for line-list data with missing reporting delays, adapted to Stan from Li (2021) <doi:10.1371/journal.pcbi.1009210>, and calculation of the time-varying reproduction number assuming a flux between various adjacent states, adapted to Stan from Zhou (2021) <doi:10.1371/journal.pcbi.1010434>.
K-means clustering, hierarchical clustering, and PCA with observational weights and/or variable weights. It also includes the corresponding functions for data nuggets, which serve as representative samples of large datasets. See Cherasia et al. (2022) <doi:10.1007/978-3-031-22687-8_20> and Amaratunga et al. (2009) <doi:10.1002/9780470317129>.
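For intuition, a weighted k-means update simply replaces each centroid with the weighted mean of its assigned points (a minimal sketch only, not this package's code; data nuggets and variable weights are not handled here):

import numpy as np

def weighted_kmeans(X, w, k, n_iter=100, seed=0):
    """k-means with observational weights: each centroid is the weighted
    mean of the observations assigned to it."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign every observation to its nearest centroid.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the weighted mean of its members.
        for j in range(k):
            members = labels == j
            if members.any():
                centers[j] = np.average(X[members], axis=0, weights=w[members])
    return labels, centers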
This package provides a fast visualization tool for creating word clouds using wordcloud2.js. wordcloud2.js is a JavaScript library for creating wordle-style presentations on a 2D canvas or in HTML <https://timdream.org/wordcloud2.js/>.
New tools for the imputation of missing values in high-dimensional data using non-parametric nearest neighbor methods. The package includes weighted nearest neighbor imputation methods that use specific distances for selected variables, provides an automatic cross-validation procedure, and does not require prespecified values of the tuning parameters. It can be used to impute missing values in high-dimensional data when the sample size is smaller than the number of predictors. For more information see Faisal and Tutz (2017) <doi:10.1515/sagmb-2015-0098>.
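The basic idea behind weighted nearest neighbor imputation, ignoring the package's automatic cross validation and variable-specific distances, looks roughly like this (illustrative sketch with invented names):

import numpy as np

def knn_impute_column(X, col, k=5):
    """Fill missing entries of one column with a distance-weighted average
    of the k nearest rows that have that column observed."""
    missing = np.isnan(X[:, col])
    others = np.delete(np.arange(X.shape[1]), col)
    donors = X[~missing]
    for i in np.where(missing)[0]:
        # Distance to each donor on the remaining variables.
        d = np.sqrt(np.nansum((donors[:, others] - X[i, others]) ** 2, axis=1))
        nearest = np.argsort(d)[:k]
        weights = 1.0 / (d[nearest] + 1e-8)   # closer donors count more
        X[i, col] = np.average(donors[nearest, col], weights=weights)
    return X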
Helper functions to easily add functionality to functions. The package can give functions lazy evaluation, allowing you to save and update their arguments before and after each function call. You can also set a temporary working directory within functions and wrap console messages around other functions.
Evaluates the prediction performance of smaller regions of spectra for chemometrics. Selection methods include segmentation of spectra, evolving dimension regions, and sliding windows. The best model among those computed is selected based on error metrics. Chen et al. (2017) <doi:10.1007/s00216-017-0218-9>.
The Wordle game. Players have six attempts to guess a five-letter word. After each guess, the player is told which letters in their guess appear anywhere in the word and which are in the right position. This information can be used to inform the next guess. The game can be played interactively in the console or programmatically. Based on Josh Wardle's game <https://www.powerlanguage.co.uk/wordle/>.
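The scoring rule can be sketched as follows (an illustrative Python version of the game's feedback, not this package's interface):

def wordle_feedback(guess, target):
    """'G' = right letter in the right position, 'Y' = letter occurs
    elsewhere in the word, '-' = letter not in the word."""
    feedback = ["-"] * len(guess)
    remaining = list(target)
    # First pass: exact position matches.
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            feedback[i] = "G"
            remaining.remove(g)
    # Second pass: right letter, wrong position (respecting letter counts).
    for i, g in enumerate(guess):
        if feedback[i] == "-" and g in remaining:
            feedback[i] = "Y"
            remaining.remove(g)
    return "".join(feedback)

print(wordle_feedback("crane", "cache"))  # prints "G-Y-G"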
This package provides a convenient data set, a set of helper functions, and a benchmark function for economically (profit) driven wind farm layout optimization. It enables researchers working on the NP-hard (non-deterministic polynomial-time hard) problem of wind farm layout optimization to focus on their methodological contributions, and it provides a realistic benchmark setting for comparing contributions. See Croonenbroeck, Carsten & Hennecke, David (2020) <doi:10.1016/j.energy.2020.119244>.
An adaptation for estuaries (tidal waters) of weighted regression on time, discharge, and season to evaluate trends in water quality time series. Please see Beck and Hagy (2015) <doi:10.1007/s10666-015-9452-8> for details.
Access Wikipedia through several MediaWiki APIs (<https://www.mediawiki.org/wiki/API>), as well as through the XTools API (<https://www.mediawiki.org/wiki/XTools/API>). Ensure your API calls are correct, and receive results in tidy tibbles.
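For context, the underlying MediaWiki Action API can be queried directly like this (a raw HTTP example against the public API, not the package's own interface):

import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={"action": "query", "list": "search",
            "srsearch": "World Wide Web", "format": "json"},
)
for hit in resp.json()["query"]["search"][:3]:
    print(hit["title"])   # titles of the top matching articles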
Queries online WikiPathway graphics and allows mapping user data (e.g., expression values) onto the graph. The package provides a grammar-of-graphics style syntax that uses the pipe operator to add graphic layers.
Create plots and tables in a consistent style with WaSHI (Washington Soil Health Initiative) branding. Use washi to easily style your ggplot2 plots and flextable tables.
Weighted Piecewise Kernel Density Estimation for large data.
Implementation of integrative weighting approaches for multiple observational studies and causal inference. The package features three weighting approaches, each representing a special case of the unified weighting framework introduced by Guha and Li (2024) <doi:10.1093/biomtc/ujae070>, which includes an extension of inverse probability weights for data integration settings.
This package performs a sensitivity analysis using weighted rank tests in observational studies with I blocks of size J; see Rosenbaum (2024) <doi:10.1080/01621459.2023.2221402>. The package can perform adaptive inference in block designs; see Rosenbaum (2012) <doi:10.1093/biomet/ass032>. The package can increase design sensitivity using the conditioning tactic in Rosenbaum (2025) <doi:10.1093/jrsssb/qkaf007>. The main functions are wgtRank(), wgtRankCI(), wgtRanktt() and wgtRankC().
This package provides a collection of tools to fit and work with trophic Species Distribution Models. Trophic Species Distribution Models combine knowledge of trophic interactions with Bayesian structural equation models that model each species as a function of its prey (or predators) and environmental conditions. It exploits the topological ordering of the known trophic interaction network to predict species distribution in space and/or time, where the prey (or predator) distribution is unavailable. The method implemented by the package is described in Poggiato, Andréoletti, Pollock and Thuiller (2022) <doi:10.22541/au.166853394.45823739/v1>.
Adds ... to a function's argument list so that it can tolerate non-matching arguments.
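The same idea in Python terms, for intuition only (a hypothetical example, not this package's mechanism): a catch-all parameter lets a call succeed even with arguments the function does not use.

def summarise(x, digits=2, **ignored):
    # Extra keyword arguments are silently absorbed by **ignored.
    return round(sum(x) / len(x), digits)

summarise([1, 2, 3], digits=1, colour="red")   # the unused argument is tolerated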
Assess water quality trends for long-term monitoring data in estuaries using generalized additive models following Wood (2017) <doi:10.1201/9781315370279> and error propagation with mixed-effects meta-analysis following Sera et al. (2019) <doi:10.1002/sim.8362>. Methods are available for model fitting, assessment of fit, annual and seasonal trend tests, and visualization of results.
This package provides functions for the import, transformation, and analysis of data from muscle physiology experiments. The work loop technique is used to evaluate the mechanical work and power output of muscle. Josephson (1985) <doi:10.1242/jeb.114.1.493> modernized the technique for application in comparative biomechanics. Although our initial motivation was to provide functions to analyze work loop experiment data, as we developed the package we incorporated the ability to analyze data from experiments that are often complementary to work loops. There are currently three supported experiment types: work loops, simple twitches, and tetanus trials. Data can be imported directly from .ddf files or via an object constructor function. Through either method, data can then be cleaned or transformed via methods typically used in studies of muscle physiology. Data can then be analyzed to determine the timing and magnitude of force development and relaxation (for isometric trials) or the magnitude of work, net power, and instantaneous power among other things (for work loops). Although we do not provide plotting functions, all resultant objects are designed to be friendly to visualization via either base-R plotting or tidyverse functions. This package has been peer-reviewed by rOpenSci (v. 1.1.0).