Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
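For example, the endpoint can be queried from R with the httr package; the base URL below is a placeholder, and since the pagination header names are not documented here the sketch simply prints all response headers:

    library(httr)

    # Base URL is a placeholder; substitute the address of this site
    base_url <- "https://example.org"

    # Query the package search endpoint documented above
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    # The matching packages are in the response body
    matches <- content(resp, as = "parsed")

    # Pagination details are carried in the response headers; the exact
    # header names are not listed above, so inspect them all
    str(headers(resp))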
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An implementation of the Weighted Portmanteau Tests described in "New Weighted Portmanteau Statistics for Time Series Goodness-of-Fit Testing", published in the Journal of the American Statistical Association, Volume 107, Issue 498, pages 777-787, 2012.
This package estimates precise weaning ages for a given skeletal population by analyzing its stable nitrogen isotope ratios. It adopts newly estimated bone collagen turnover rates and approximate Bayesian computation (ABC).
Adds ... to a function's argument list so that it can tolerate non-matching arguments.
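The package's own interface is not shown here, but the underlying idea can be illustrated in base R by appending ... to a function's formals (purely an illustration, not the package's API):

    # A function that normally rejects unknown arguments
    f <- function(x) x^2
    # f(2, verbose = TRUE)  # error: unused argument (verbose = TRUE)

    # Appending ... to the formals makes extra arguments tolerated
    formals(f) <- c(formals(f), alist(... = ))
    f(2, verbose = TRUE)    # now returns 4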
Takes screenshots of web pages, including Shiny applications and R Markdown documents. webshot2 uses headless Chrome or Chromium as the browser back-end.
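A minimal usage sketch, assuming webshot2 exports a webshot() function with url and file arguments (as in the original webshot package):

    library(webshot2)

    # Capture a web page to a PNG file using headless Chrome/Chromium
    webshot("https://www.r-project.org/", file = "r-project.png")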
Predicts individual race/ethnicity using surname, first name, middle name, geolocation, and other attributes, such as gender and age. The method utilizes Bayes Rule (with optional measurement error correction) to compute the posterior probability of each racial category for any given individual. The package implements methods described in Imai and Khanna (2016) "Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records" Political Analysis <DOI:10.1093/pan/mpw001> and Imai, Olivella, and Rosenman (2022) "Addressing census data problems in race imputation via fully Bayesian Improved Surname Geocoding and name supplements" <DOI:10.1126/sciadv.adc9824>. The package also incorporates the data described in Rosenman, Olivella, and Imai (2023) "Race and ethnicity data for first, middle, and surnames" <DOI:10.1038/s41597-023-02202-2>.
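A brief sketch of a typical call, assuming the predict_race() entry point with a voter.file data frame and a surname column (argument and column names are assumptions):

    library(wru)

    # A toy voter file; the surname column holds last names
    voters <- data.frame(surname = c("Khanna", "Imai"))

    # Posterior probability of each racial category, surname-only model
    predict_race(voter.file = voters, surname.only = TRUE)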
This is a collection of tools for conducting both basic and advanced statistical power analysis including correlation, proportion, t-test, one-way ANOVA, two-way ANOVA, linear regression, logistic regression, Poisson regression, mediation analysis, longitudinal data analysis, structural equation modeling and multilevel modeling. It also serves as the engine for conducting power analysis online at <https://webpower.psychstat.org>.
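For instance, a sample-size calculation for a two-sample t-test might look like the following, assuming a wp.t() function with these argument names:

    library(WebPower)

    # Sample size for a two-sample t-test with effect size d = 0.5
    # and target power 0.8 (alpha defaults to 0.05)
    wp.t(d = 0.5, power = 0.8, type = "two.sample")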
Analyzes pedigree data of wild populations. While primarily designed to process outputs from the COLONY (Jones & Wang (2010) <doi:10.1111/j.1755-0998.2009.02787.x>) pedigree reconstruction software, it can also accommodate data from other sources. By linking reconstructed pedigrees with genetic sample metadata, wpeR produces spatial and temporal visualizations as well as tabular summaries that support interpretation of family structures and dynamics. The main goal of the package is to provide a solution for the analysis of complex wild pedigree data and to help the user gain insight into genetic relationships within wild animal populations.
Speech-to-text transcription using a native R torch implementation of the OpenAI Whisper model <https://github.com/openai/whisper>. Supports multiple model sizes from tiny (39M parameters) to large-v3 (1.5B parameters), with integrated download from HuggingFace <https://huggingface.co/> via the hfhub package. Provides automatic speech recognition with optional language detection and translation to English. Audio preprocessing, mel spectrogram computation, and transformer-based encoder-decoder inference are all implemented in R using the torch package.
This is a set of minimization tools (maximum likelihood estimation and least-squares fitting) to solve examples in Johan Gabrielsson and Dan Weiner's book "Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications", 5th ed. (ISBN:9198299107). Examples include linear and nonlinear compartmental models, turn-over models, single or multiple dosing bolus/infusion/oral models, allometry, toxicokinetics, reversible metabolism, in-vitro/in-vivo extrapolation, enterohepatic circulation, metabolite modeling, Emax models, inhibitory models, tolerance models, oscillating response models, enantiomer interaction models, effect compartment models, drug-drug interaction models, receptor occupancy models, and rebound phenomena models.
Construct a Canonical Variate Analysis Biplot via the Generalised Singular Value Decomposition, for cases when the number of samples is less than the number of variables. For more information on biplots, see Gower JC, Lubbe SG, Le Roux NJ (2011) <doi:10.1002/9780470973196> and for more information on the generalised singular value decomposition, see Edelman A, Wang Y (2020) <doi:10.1137/18M1234412>.
Fast computation of Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) for weighted binary classification problems (weights are example-specific cost values).
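A short sketch, assuming the package exposes WeightedROC() and WeightedAUC() taking scores, labels, and example-specific weights (treat the names and signatures as assumptions):

    library(WeightedROC)

    # Scores, binary labels, and example-specific cost weights
    guess  <- c(0.9, 0.8, 0.3, 0.1)
    label  <- c(1, 1, 0, 0)
    weight <- c(2, 1, 1, 3)

    roc <- WeightedROC(guess, label, weight)  # weighted TPR/FPR curve
    WeightedAUC(roc)                          # weighted area under the curve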
This package provides a user-friendly factor-like interface for converting strings of text into numeric vectors and rectangular data structures.
This package provides a new inverse probability of selection weighted Cox model to deal with outcome-dependent sampling in survival analysis.
Generates balancing weights for causal effect estimation in observational studies with binary, multi-category, or continuous point or longitudinal treatments by easing and extending the functionality of several R packages and providing in-house estimation methods. Available methods include those that rely on parametric modeling, optimization, and machine learning. Also allows for assessment of weights and checking of covariate balance by interfacing directly with the cobalt package. Methods for estimating weighted regression models that take into account uncertainty in the estimation of the weights via M-estimation or bootstrapping are available. See the vignette "Installing Supporting Packages" for instructions on how to install any package WeightIt uses, including those that may not be on CRAN.
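A minimal sketch of the central weightit() call for a binary treatment, paired with balance checking via cobalt; the method name and data set used here are assumptions for illustration:

    library(WeightIt)
    library(cobalt)   # balance assessment; also provides the lalonde data

    data("lalonde", package = "cobalt")

    # Propensity-score ("glm") weights targeting the ATT of a binary treatment
    w <- weightit(treat ~ age + educ + race + married + re74,
                  data = lalonde, method = "glm", estimand = "ATT")

    # Check covariate balance through the cobalt interface
    bal.tab(w)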
Lossless webp images are 26% smaller in size compared to PNG. Lossy webp images are 25-34% smaller in size compared to JPEG. This package reads and writes webp images into a 3 (rgb) or 4 (rgba) channel bitmap array using conventions from the jpeg and png packages.
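A round-trip sketch, assuming read_webp() and write_webp() as the exported reader/writer pair (names assumed to mirror the jpeg and png conventions mentioned above):

    library(webp)
    library(png)

    # Read the png package's logo into an RGBA bitmap array
    img <- readPNG(system.file("img", "Rlogo.png", package = "png"))

    # Write it as WebP and read it back into a bitmap array
    write_webp(img, "Rlogo.webp", quality = 80)
    bitmap <- read_webp("Rlogo.webp")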
Using a time-varying random parameters model developed in Koutchade et al. (2024) <https://hal.science/hal-04318163>, this package allocates variable input costs among crops produced by farmers, based on panel data containing input expenditure aggregated at the farm level and acreage shares. It also takes the weighting of the data into account and allows the integration of time-varying and time-constant control variables.
Access and analyze the World Bank's International Debt Statistics (IDS) <https://www.worldbank.org/en/programs/debt-statistics/ids>. IDS provides creditor-debtor relationships between countries, regions, and institutions. wbids enables users to download, process and work with IDS series across multiple geographies, counterparts, and time periods.
Meta testing is the ability to test a function without having to provide its parameter values. Those values are generated from the semantic naming of parameters, as introduced by the package wyz.code.offensiveProgramming. The value generation logic can be extended with your own data types and generation schemes, to meet your most specific requirements and to cover a wide variety of usages, from general use cases to very specific ones. With meta testing it becomes easier to build stress test campaigns, non-regression test campaigns, and robustness test campaigns, as generated tests can be saved and reused from session to session. The main benefits of wyz.code.metaTesting are the ability to discover valid and invalid function parameter combinations, the ability to infer valid parameter values, and smart summaries that let you focus on dysfunctional cases.
Assess water quality trends for long-term monitoring data in estuaries using generalized additive models following Wood (2017) <doi:10.1201/9781315370279> and error propagation with mixed-effects meta-analysis following Sera et al. (2019) <doi:10.1002/sim.8362>. Methods are available for model fitting, assessment of fit, annual and seasonal trend tests, and visualization of results.
Supports systematic scrutiny, modification, and integration of data. The function status() counts rows that have missing values in grouping columns (returned by na()), that have non-unique combinations of grouping columns (returned by dup()), and that are not locally sorted (returned by unsorted()). Functions enumerate() and itemize() give sorted unique combinations of columns, with or without occurrence counts, respectively. Function ignore() drops columns in x that are present in y, and informative() drops columns in x that are entirely NA; constant() returns values that are constant, given a key. Data that have defined unique combinations of grouping values behave more predictably during merge operations.
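A minimal sketch of the workflow described above, assuming the functions accept a data frame whose key is declared through dplyr grouping (the grouping mechanism is an assumption):

    library(dplyr)
    library(wrangle)

    # Toy longitudinal data keyed by subject and visit
    dat <- data.frame(subject = c(1, 1, 2, 2, NA),
                      visit   = c(1, 1, 2, 1, 1),
                      value   = c(5, 6, 7, 8, 9))

    # Assuming the key is declared via dplyr grouping
    dat <- group_by(dat, subject, visit)

    status(dat)     # rows with missing, duplicated, or unsorted keys
    enumerate(dat)  # sorted unique key combinations with counts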
Graphical data analysis of accelerated life tests. Methods derived from Wayne Nelson (1990, ISBN: 9780471522775) and William Q. Meeker and Lois A. Escobar (1998, ISBN: 0-471-14328-6).
Efficiently read and write Waveform (WAV) audio files <https://en.wikipedia.org/wiki/WAV>. Support for unsigned 8 bit Pulse-code modulation (PCM), signed 12, 16, 24 and 32 bit PCM and other encodings.
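A short sketch, assuming the package exports a read_wav()/write_wav() pair with a sample_rate argument (function and argument names are assumptions):

    library(wav)

    # One second of a 440 Hz sine tone at 44.1 kHz
    fs <- 44100
    tone <- sin(2 * pi * 440 * seq(0, 1, length.out = fs))

    # Assumed writer/reader pair and argument names
    write_wav(tone, "tone.wav", sample_rate = fs)
    audio <- read_wav("tone.wav")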
This package provides a utility for working with women's basketball data: a scraping and aggregating interface for the WNBA Stats API <https://stats.wnba.com/> and ESPN's <https://www.espn.com> women's college basketball and WNBA statistics. It provides users with the capability to access game play-by-plays, box scores, standings, and results to analyze the data for themselves.
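For example, loading a season of WNBA play-by-play data, assuming a load_wnba_pbp() loader with a seasons argument:

    library(wehoop)

    # Load WNBA play-by-play data for the 2022 season
    pbp <- load_wnba_pbp(seasons = 2022)
    head(pbp)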
In multidimensional systems, the 'when' dimension allows us to express when the analysed facts occurred. The purpose of this package is to provide support for implementing this dimension in the form of date and time tables for Relational On-Line Analytical Processing star database systems.