Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
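For example, the endpoint can be queried from R roughly as follows (a minimal sketch using the httr and jsonlite packages; the base URL is a placeholder, and only the path and parameters documented above are assumed):

    library(httr)        # HTTP client
    library(jsonlite)    # JSON parsing

    base_url <- "https://example.org"   # placeholder: use the host serving the form above

    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    results <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    headers(resp)   # pagination details (e.g. the number of pages) are reported here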
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Forest Many-Objective Robust Decision Making ('FoRDM') is an R toolkit for supporting robust forest management under deep uncertainty. It provides a forestry-focused application of Many-Objective Robust Decision Making ('MORDM') to forest simulation outputs, enabling users to evaluate robustness using regret- and satisficing-based measures. FoRDM identifies robust solutions, generates Pareto fronts, and offers interactive 2D, 3D, and parallel-coordinate visualizations.
R implementations of standard financial engineering codes: vanilla option pricing models such as Black-Scholes, Bachelier, CEV, and SABR.
This package provides a friendly (flexible) Markov Chain Monte Carlo (MCMC) framework for implementing the Metropolis-Hastings algorithm in a modular way, allowing users to specify an automatic convergence checker, personalized transition kernels, and out-of-the-box multiple MCMC chains using parallel computing. Most of the methods implemented in this package can be found in Brooks et al. (2011, ISBN 9781420079425). Among the methods included, we have: Haario (2001) <doi:10.1007/s11222-011-9269-5> Adaptive Metropolis, Vihola (2012) <doi:10.1007/s11222-011-9269-5> Robust Adaptive Metropolis, and Thawornwattana et al. (2018) <doi:10.1214/17-BA1084> Mirror transition kernels.
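As a point of reference, the core Metropolis-Hastings loop that such a framework wraps can be sketched in a few lines of base R (a generic random-walk sampler for a toy normal target; this illustrates the algorithm only and is not this package's API):

    set.seed(1)
    log_target <- function(x) dnorm(x, mean = 2, sd = 1, log = TRUE)  # toy target density

    n_steps <- 5000
    chain <- numeric(n_steps)
    chain[1] <- 0                                    # starting value
    for (i in 2:n_steps) {
      proposal  <- chain[i - 1] + rnorm(1, sd = 0.5)         # symmetric random-walk proposal
      log_alpha <- log_target(proposal) - log_target(chain[i - 1])
      chain[i]  <- if (log(runif(1)) < log_alpha) proposal else chain[i - 1]
    }
    mean(chain)   # should be close to the target mean of 2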
This package provides tools and features for "Exploratory Landscape Analysis (ELA)" of single-objective continuous optimization problems. These features can quantify rather complex properties of the optimization problems, such as the global structure, separability, etc.
Proposes non-parametric estimates of the Fisher information measure and the Shannon entropy power. More theoretical and implementation details can be found in Guignard et al. <doi:10.3389/feart.2020.00255>. A Python version of this work is available on GitHub and PyPI ('FiShPy').
This package contains functions for operations with fuzzy cognitive maps using t-norm and s-norm operators. T-norms and S-norms are described by Dov M. Gabbay and George Metcalfe (2007) <doi:10.1007/s00153-007-0047-1>. System indicators are described by Cox, Earl D. (1995) <isbn:1886801010>. Executable examples are provided in the "inst/examples" folder.
Developed for the following tasks. 1) Computing the probability density function, cumulative distribution function, random generation, and estimating the parameters of the eleven mixture models. 2) Point estimation of the parameters of the two-parameter Weibull distribution using twelve methods and the three-parameter Weibull distribution using nine methods. 3) Bayesian inference for the three-parameter Weibull distribution. 4) Estimating parameters of the three-parameter Birnbaum-Saunders, generalized exponential, and Weibull distributions fitted to grouped data using three methods: approximated maximum likelihood, expectation maximization, and maximum likelihood. 5) Estimating the parameters of the gamma, log-normal, and Weibull mixture models fitted to grouped data through the EM algorithm. 6) Estimating parameters of the nonlinear height curve fitted to height-diameter observations. 7) Estimating parameters, computing the probability density function and cumulative distribution function, and generating realizations from the gamma shape mixture model introduced by Venturini et al. (2008) <doi:10.1214/07-AOAS156>. 8) Bayesian inference, computing the probability density function and cumulative distribution function, and generating realizations from the univariate and bivariate Johnson SB distributions. 9) Robust multiple linear regression analysis when the error term follows a skewed t distribution. 10) Estimating parameters of a given distribution fitted to grouped data using the method of maximum likelihood. 11) Estimating parameters of the Johnson SB distribution through the Bayesian, method-of-moments, conditional maximum likelihood, and two-percentile methods.
Comparisons of floating point numbers are problematic due to errors associated with the binary representation of decimal numbers. Despite being aware of these problems, people still use numerical methods that fail to account for these and other rounding errors (this pitfall is the first to be highlighted in Circle 1 of Burns (2012) The R Inferno <https://www.burns-stat.com/pages/Tutor/R_inferno.pdf>). This package provides new relational operators useful for performing floating point number comparisons with a set tolerance.
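The underlying problem is easy to reproduce, and a tolerance-based comparison avoids it; the sketch below uses only base R (not this package's operators):

    0.1 + 0.2 == 0.3                      # FALSE: binary representation error
    isTRUE(all.equal(0.1 + 0.2, 0.3))     # TRUE: comparison with a tolerance
    abs((0.1 + 0.2) - 0.3) < 1e-8         # TRUE: explicit tolerance check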
This package provides an alternative way to construct a phylogeny for fish species from a list of species or a community matrix, using as a backbone the phylogenetic tree proposed by Rabosky et al. (2018) <doi:10.1038/s41586-018-0273-1>.
Read and process a large delimited file block by block. A block consists of all the contiguous rows that have the same value in the first field. The result can be returned as a list or a data.table, or even directly printed to an output file.
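The grouping rule behind a block can be illustrated with a small base-R sketch (this shows the rule only, not the package's reading functions):

    df <- data.frame(key = c("a", "a", "b", "b", "a"), value = 1:5)

    # A new block starts whenever the first field changes between consecutive rows.
    block_id <- cumsum(c(TRUE, df$key[-1] != df$key[-nrow(df)]))
    blocks   <- split(df, block_id)

    length(blocks)  # 3: the trailing "a" row is not contiguous with the first run of "a"s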
Unified regularized estimating equation solver. Currently the package includes one solver with the l1 penalty only. More solvers and penalties are under development. Reference: Yi Yang, Yuwen Gu, Yue Zhao, Jun Fan (2021) <doi:10.48550/arXiv.2110.11074>.
Perform factorial analyses with a menu and draw graphs interactively, thanks to FactoMineR and a Shiny application.
This package performs functional regression, and some related approaches, for intensive longitudinal data (see the book by Walls & Schafer, 2006, Models for Intensive Longitudinal Data, Oxford) when such data are not necessarily observed on an equally spaced grid of times. The approach generally follows the ideas of Goldsmith, Bobb, Crainiceanu, Caffo, and Reich (2011) <DOI:10.1198/jcgs.2010.10007> and the approach taken in their sample code, but with some modifications to make it more feasible to use with long rather than wide, non-rectangular longitudinal datasets with unequal and potentially random measurement times. It also allows easy plotting of the correlation between the smoothed covariate and the outcome as a function of time, which can add insight into how to interpret a functional regression. Additionally, it provides several permutation tests for the significance of the functional predictor. The heuristic interpretation of "time" is used to describe the index of the functional predictor, but the same methods can equally be used for another unidimensional continuous index, such as space along a north-south axis. Note that most of the functionality of this package has been superseded by added features after 2016 in the pfr function by Jonathan Gellar, Mathew W. McLean, Jeff Goldsmith, and Fabian Scheipl, in the refund package built by Jeff Goldsmith and co-authors and maintained by Julia Wrobel. The development of the funreg package in 2015 and 2016 was part of a research project supported by Award R03 CA171809-01 from the National Cancer Institute and Award P50 DA010075 from the National Institute on Drug Abuse. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Drug Abuse, the National Cancer Institute, or the National Institutes of Health.
Miscellaneous utilities, tools, and helper functions for finding and searching files on disk, and for searching for and removing R objects from the workspace. Does not import or depend on any third-party package, but on core R only (i.e. it may depend on packages with priority 'base').
Aids in analysing data from a food frequency questionnaire known as the Harvard Service Food Frequency Questionnaire (HSFFQ). Functions from this package use answers from the HSFFQ to generate individual-level estimates of daily consumed micronutrients, calories, and macronutrients. The package also calculates food quotients on individual and group levels. Food quotient calculation is an often tedious step in the calculation of total human energy expenditure (TEE) using the doubly labeled water method, which is the gold standard for measuring TEE.
An implementation of forecasting for locally stationary wavelet processes by examining the local second-order structure of the time series.
Allows maximum likelihood fitting of cluster-weighted models, a class of mixtures of regression models with random covariates. Methods are described in Angelo Mazza, Antonio Punzo, Salvatore Ingrassia (2018) <doi:10.18637/jss.v086.i02>.
An interface to the Fish Tree of Life API to download taxonomies, phylogenies, fossil calibrations, and diversification rate information for ray-finned fishes.
This package produces forest plots using ggplot2 from models produced by functions such as stats::lm(), stats::glm() and survival::coxph().
Computes the power and sample size (PASS) required to test for the difference in the mean function between two groups under a repeatedly measured longitudinal or sparse functional design. See the manuscript by Koner and Luo (2023) <https://salilkoner.github.io/assets/PASS_manuscript.pdf> for details of the PASS formula and computational details. The details of the testing procedure for univariate and multivariate response are presented in Wang (2021) <doi:10.1214/21-EJS1802> and Koner and Luo (2023) <arXiv:2302.05612> respectively.
This package provides a streamlined, standard evaluation-based approach to multivariate function composition. Allows for chaining commands via a forward-pipe operator, %>%.
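For readers unfamiliar with forward piping, the general idea (illustrated here with the magrittr operator, not necessarily this package's own composition helpers) is to chain calls left to right:

    library(magrittr)

    sum(sqrt(c(4, 9, 16)))             # conventional nested call
    c(4, 9, 16) %>% sqrt() %>% sum()   # the same computation, piped left to right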
Average rating and number of votes reported by IMDb for films and shorts with over 100 votes in 2022. The data are analysed in Chapter 3 of the book Getting (more out of) Graphics (Antony Unwin, CRC Press 2024).
Estimation of Rosenthal's fail safe number including confidence intervals. The relevant papers are the following. Konstantinos C. Fragkos, Michail Tsagris and Christos C. Frangos (2014). "Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number". International Scholarly Research Notices, Volume 2014. <doi:10.1155/2014/825383>. Konstantinos C. Fragkos, Michail Tsagris and Christos C. Frangos (2017). "Exploring the distribution for the estimator of Rosenthal's fail-safe number of unpublished studies in meta-analysis". Communications in Statistics-Theory and Methods, 46(11):5672--5684. <doi:10.1080/03610926.2015.1109664>.
This package provides a faster implementation of Bayesian Causal Forests (BCF; Hahn et al. (2020) <doi:10.1214/19-BA1195>), which uses regression tree ensembles to estimate the conditional average treatment effect of a binary treatment on a scalar output as a function of many covariates. This implementation avoids many redundant computations and memory allocations present in the original BCF implementation, allowing the model to be fit to larger datasets. The implementation was originally developed for the 2022 American Causal Inference Conference's Data Challenge. See Kokandakar et al. (2023) <doi:10.1353/obs.2023.0024> for more details.