Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
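For example, the endpoint can be queried from R with the httr package. This is only a sketch: the base URL below is a placeholder (substitute the host this service actually runs on), and the exact pagination header names are whatever the service sets.

# Minimal sketch using httr; base_url is a placeholder, not the real host.
library(httr)

base_url <- "https://example.org"
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

# Pagination details (e.g. number of pages) arrive in the response headers.
str(headers(resp))

# The matching packages are in the response body.
results <- content(resp)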
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package performs backward elimination with similar syntax to the stepAIC() function from the MASS package. A bounding algorithm is used to avoid fitting unnecessary models, making it much faster.
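As a hedged illustration of the intended workflow: the stepAIC() call below is the standard MASS interface the description refers to, while the final call is a hypothetical placeholder for this package's backward-elimination function, since its actual name and arguments are not given here.

# Sketch only. stepAIC() is the real MASS function; the commented call is a
# hypothetical stand-in for this package's routine (name/arguments assumed).
library(MASS)

fit <- glm(mpg ~ ., data = mtcars)

# Classic backward elimination by AIC with MASS:
slow <- stepAIC(fit, direction = "backward", trace = FALSE)

# Hypothetical equivalent using this package's faster bounding algorithm:
# fast <- backward_eliminate(fit)   # placeholder function name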
This package performs alignment, PCA, and modeling of multidimensional and unidimensional functions using the square-root velocity framework (Srivastava et al., 2011 <doi:10.48550/arXiv.1103.3817> and Tucker et al., 2014 <doi:10.1016/j.csda.2012.12.001>). This framework allows for elastic analysis of functional data through phase and amplitude separation.
An R client for the "fixer.io" currency conversion and exchange rate API. The API requires registration and some features are only available on paid accounts. The full API documentation is available at <https://fixer.io/documentation>.
Offers calculation, visualization and comparison of algorithmic fairness metrics. Fair machine learning is an emerging topic with the overarching aim to critically assess whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on various sensitive groups of individuals (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities). Fair algorithms possess the underlying foundation that these groups should be treated similarly or have similar prediction outcomes. The fairness R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. These methods are described by Calders and Verwer (2010) <doi:10.1007/s10618-010-0190-x>, Chouldechova (2017) <doi:10.1089/big.2016.0047>, Feldman et al. (2015) <doi:10.1145/2783258.2783311>, Friedler et al. (2018) <doi:10.1145/3287560.3287589> and Zafar et al. (2017) <doi:10.1145/3038912.3052660>. The package also offers convenient visualizations to help understand fairness metrics.
Allows prophet models from the prophet package to be used in a tidy workflow with the modelling interface of fabletools. This extends prophet to provide enhanced model specification and management, performance evaluation methods, and model combination tools.
Routines for exploratory and descriptive analysis of functional data such as depth measurements, atypical curves detection, regression models, supervised classification, unsupervised classification and functional analysis of variance.
Allows the user to easily add canvas elements within a shiny app or an R Markdown document. The user can create shapes, images, and text elements within the canvas, which can also be used as a drawing tool for taking notes. The package relies on the fabricjs JavaScript library. See <http://fabricjs.com/>.
Supports the extraction and seamless integration of species ecological traits or preferences from the www.freshwaterecology.info database into several ecological model workflows. During data extraction, different taxonomic levels are accepted, including species, genus, and family, depending on the availability of data in the database. The data is cached after the first search and can be accessed during and after online interactions. Only scientific names are accepted in the search; local or English names are not. A user API key is required to start using the package.
Allows users to create and deploy workflows with multiple functions on Function-as-a-Service (FaaS) cloud computing platforms. The FaaSr package makes it simpler for R developers to use FaaS platforms by providing the following functionality: 1) parsing and validating a JSON-based payload compliant with the FaaSr schema, supporting multiple FaaS platforms; 2) invoking user functions written in R in a Docker container (derived from rocker), using a list generated from the parser as the argument; 3) downloading/uploading files from/to S3 buckets using simple primitives; 4) logging to files in S3 buckets; 5) triggering downstream actions, supporting multiple FaaS platforms; 6) generating FaaS-specific API calls to simplify registering a user's workflow with a FaaS platform. Supported FaaS platforms: Apache OpenWhisk <https://openwhisk.apache.org/>, GitHub Actions <https://github.com/features/actions>, and Amazon Web Services (AWS) Lambda <https://aws.amazon.com/lambda/>. Supported cloud data storage for persistent storage: Amazon Web Services (AWS) Simple Storage Service (S3) <https://aws.amazon.com/s3/>.
This package implements the statistic FAVA, an Fst-based Assessment of Variability across vectors of relative Abundances, as well as a suite of helper functions which enable the visualization and statistical analysis of relative abundance data. The FAVA R package accompanies the paper "Quantifying compositional variability in microbial communities with FAVA" by Morrison, Xue, and Rosenberg (2025) <doi:10.1073/pnas.2413211122>.
Download data from the FAOSTAT database of the Food and Agriculture Organization (FAO) of the United Nations. A list of functions to download statistics from FAOSTAT (the database of the FAO <https://www.fao.org/faostat/>) and WDI (the database of the World Bank <https://data.worldbank.org/>), and to perform some harmonization operations.
Efficient approximation of first passage time densities for diffusion processes based on the First Passage Time Location (FPTL) function.
This package provides designs based on the screened selection design (SSD), an exploratory phase II randomized trial with two or more arms but without a concurrent control. The primary aim of an SSD trial is to pick a desirable treatment arm (e.g., in terms of the response rate) to recommend to a subsequent randomized phase IIb trial (with a concurrent control) or phase III trial. The proposed designs can "partially" control or provide the empirical type I error/false positive rate through an optimal algorithm (implemented by the optimal_2arm_binary() or optimal_3arm_binary() function) for each arm. All the components needed for the design (sample size, operating characteristics) are supported.
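A hedged sketch of how such a design call might look: optimal_2arm_binary() is the function named above, but the argument names below are assumptions rather than the documented signature, so the call is left as comments.

# Sketch only: optimal_2arm_binary() is named in the description, but these
# argument names are assumptions, not the documented interface.
# design <- optimal_2arm_binary(
#   p0    = 0.2,   # assumed: uninteresting response rate
#   p1    = 0.4,   # assumed: desirable response rate
#   alpha = 0.10,  # assumed: empirical type I error to control
#   power = 0.80   # assumed: target power
# )
# design   # would report the sample size and operating characteristics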
This package provides classifiers for discrete and continuous variables based on the Naive Bayes and Fuzzy Naive Bayes hypotheses. These methods were developed by researchers belonging to the Laboratory of Technologies for Virtual Teaching and Statistics (LabTEVE) and the Laboratory of Applied Statistics to Image Processing and Geoprocessing (LEAPIG) at the Federal University of Paraiba, Brazil. They considered several statistical distributions, and their papers were published in the scientific literature; for instance, the Gaussian classifier using fuzzy parameters was proposed by Moraes, Ferreira and Machado (2021) <doi:10.1007/s40815-020-00936-4>.
Over sixty clustering algorithms are provided in this package with consistent input and output, which enables the user to try out algorithms swiftly. Additionally, 26 statistical approaches for estimating the number of clusters, as well as the mirrored density plot (MD-plot) of clusterability, are implemented. The package is published in Thrun, M.C., Stier, Q.: "Fundamental Clustering Algorithms Suite" (2021), SoftwareX, <doi:10.1016/j.softx.2020.100642>. Moreover, the fundamental clustering problems suite (FCPS) offers a variety of clustering challenges that any algorithm should handle when facing real-world data; see Thrun, M.C., Ultsch, A.: "Clustering Benchmark Datasets Exploiting the Fundamental Clustering Problems" (2020), Data in Brief, <doi:10.1016/j.dib.2020.105501>.
Anonymized data from surveys conducted by Forwards <https://forwards.github.io/>, the R Foundation task force on women and other under-represented groups. Currently, the package contains a single data set of responses to a survey of attendees at useR! 2016 <https://www.r-project.org/useR-2016/>, the R user conference held at Stanford University, Stanford, California, USA, June 27-30, 2016.
An implementation of the methods presented in Spiegelhalter (2005) <doi:10.1002/sim.1970>, "Funnel plots for comparing institutional performance", for standardised ratios, ratios of counts, and proportions, with additive overdispersion adjustment.
This package provides a set of simplified functions for creating funnel plots for proportion data. This package supports user-defined benchmarks, confidence limits and estimation methods (i.e. exact or approximate) based on Spiegelhalter (2005) <doi:10.1002/sim.1970>. Additional routines for returning scored unit-level data according to a set of specifications are also implemented for convenience. Specifically, both a categorical and a continuous score variable are returned to the sample data frame, identifying which observations are deemed extreme or in control. Typically, such variables are useful as stratifications or covariates in further exploratory analyses. Lastly, the plotting routine returns a base funnel plot ('ggplot2'), which can also be tailored.
This package provides a dynamic programming algorithm for the fast segmentation of univariate signals into piecewise constant profiles. The fpop package is a wrapper to a C++ implementation of the fpop (Functional Pruning Optimal Partitioning) algorithm described in Maidstone et al. 2017 <doi:10.1007/s11222-016-9636-3>. The problem of detecting changepoints in a univariate sequence is formulated in terms of minimising the mean squared error over segmentations. The fpop algorithm exactly minimizes the mean squared error for a penalty linear in the number of changepoints.
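As a sketch of the segmentation task just described, assuming the package's main entry point is Fpop(x, lambda) with a signal x and a per-changepoint penalty lambda (verify the exact name and arguments in the package documentation):

# Sketch assuming an Fpop(x, lambda) interface; check the package docs.
library(fpop)

set.seed(1)
# Piecewise-constant signal with two true changepoints plus Gaussian noise
x <- c(rnorm(100, mean = 0), rnorm(100, mean = 3), rnorm(100, mean = 1))

# Penalty linear in the number of changepoints, e.g. a BIC-style 2*log(n)
lambda <- 2 * log(length(x))

fit <- Fpop(x, lambda)   # assumed interface
str(fit)                 # inspect the estimated segment ends / changepoints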
This package provides a flexible permutation framework for making inferences such as point estimation, confidence intervals, or hypothesis tests on any kind of data, be it univariate, multivariate, or more complex, such as network-valued data, topological data, functional data, or density-valued data.
Implementation of the Factorized Binary Search (FaBiSearch) methodology for the estimation of the number and the location of multiple change points in the network (or clustering) structure of multivariate high-dimensional time series. The method is motivated by the detection of change points in functional connectivity networks for functional magnetic resonance imaging (fMRI) data. FaBiSearch uses non-negative matrix factorization (NMF), an unsupervised dimension reduction technique, and a new binary search algorithm to identify multiple change points. It requires minimal assumptions. Lastly, we provide interactive, 3-dimensional, brain-specific network visualization capability in a flexible, stand-alone function. This function can be conveniently used with any node coordinate atlas, and nodes can be color-coded according to community membership, if applicable. The output is an elegantly displayed network laid over a cortical surface, which can be rotated in 3-dimensional space. The main routines of the package are detect.cps(), for multiple change point detection, est.net(), for estimating a network between stationary multivariate time series, net.3dplot(), for plotting the estimated functional connectivity networks, and opt.rank(), for finding the optimal rank in NMF for a given data set. The functions have been extensively tested on simulated multivariate high-dimensional time series data and fMRI data. For details on the FaBiSearch methodology, please see Ondrus et al. (2021) <arXiv:2103.06347>. For a more detailed explanation and applied examples of the fabisearch package, please see Ondrus and Cribben (2022), preprint.
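A hedged sketch of how the main routines listed above might be used on simulated data; the function names come from the description, but their arguments and defaults are assumptions here, so the package-specific calls are left as comments to check against the documentation.

# Toy multivariate series: 200 time points, 10 nodes, with a variance
# shift halfway through (plain R, runnable as-is).
set.seed(2)
Y <- rbind(matrix(rnorm(100 * 10), ncol = 10),
           matrix(rnorm(100 * 10, sd = 2), ncol = 10))

# Routine names are from the description; arguments/defaults are assumed.
# library(fabisearch)
# cps <- detect.cps(Y)   # multiple change point detection
# net <- est.net(Y)      # network between stationary multivariate series
# net.3dplot(net)        # 3D brain-network plot of the estimated network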
Quantify the serial correlation across lags of a given functional time series using the autocorrelation function and a partial autocorrelation function for functional time series proposed in Mestre et al. (2021) <doi:10.1016/j.csda.2020.107108>. The autocorrelation functions are based on the L2 norm of the lagged covariance operators of the series. Functions are available for estimating the distribution of the autocorrelation functions under the assumption of strong functional white noise.
This package implements instrumental variable estimators for 2^K factorial experiments with noncompliance.
Flipbooks present code step by step and side by side with its output. flipbookr helps creators build flipbooks efficiently: code pipelines are automatically parsed and prepped for presentation in flipbook form.
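A hedged sketch of typical usage inside a xaringan slide deck: to my understanding the package's main helper is chunk_reveal(), called inline with the name of a code chunk, but verify the function name and options against the flipbookr documentation. In the R Markdown source of the deck, define a chunk (the name my_pipeline below is arbitrary) and then reveal it step by step with an inline call:

```{r my_pipeline, include = FALSE}
# An ordinary pipeline to be shown one step at a time
cars4 <- subset(mtcars, cyl == 4)
fit <- lm(mpg ~ wt, data = cars4)
summary(fit)
```

`r chunk_reveal("my_pipeline")`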