Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
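For example, from R (a minimal sketch using the curl and jsonlite packages; example.org stands in for this site's host):

    library(curl)
    library(jsonlite)
    # Fetch one page of search results; replace example.org with the real host.
    h <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")
    parse_headers(h$headers)               # pagination info (e.g. number of pages) lives here
    packages <- fromJSON(rawToChar(h$content))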
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a wide variety of tools for general data analysis, wrangling, spelling, statistics, visualizations, package development, and more. All functions have vectorized implementations whenever possible. Exported names are designed to be readable, with longer names possessing short aliases.
This package provides functionality for testing familial hypotheses. It supports tests of centers belonging to the Huber family, carried out using the Bayesian bootstrap. One- and two-sample tests are supported, as are directional tests. Methods for visualizing output are provided.
Flexible parametric mixture and non-mixture cure models for time-to-event data.
All data sets from "Forecasting: methods and applications" by Makridakis, Wheelwright & Hyndman (Wiley, 3rd ed., 1998) <https://robjhyndman.com/forecasting/>.
Weighted-L2 FPOP (Maidstone et al., 2017 <doi:10.1007/s11222-016-9636-3>) and pDPA/FPSN (Rigaill, 2010 <arXiv:1004.0887>) algorithms for detecting multiple changepoints in the mean of a vector. Also includes a few model selection functions using Lebarbier (2005) <doi:10.1016/j.sigpro.2004.11.012> and the capushe package.
Converts R data frames and sf spatial objects into JSON and GeoJSON strings. The core encoders are implemented in Rust using the extendr framework and are designed to efficiently serialize large tabular and spatial datasets. Returns serialized JSON text, allowing applications such as shiny or web APIs to transfer data to client-side JavaScript libraries without additional encoding overhead.
Interactive forest plot for clinical trial safety analysis using 'metalite', 'reactable', 'plotly', and Analysis Data Model (ADaM) datasets. Includes functionality for adverse event filtering, incidence-based group filtering, hover-over reveals, and search and sort operations. The workflow allows for metadata construction, data preparation, output formatting, and interactive plot generation.
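The described four-step workflow maps onto a pipeline along these lines (a hedged sketch: the function and example data set names are assumptions, inferred from the workflow this blurb describes):

    library(forestly)
    meta_forestly(forestly_adsl, forestly_adae) |>  # metadata construction (names assumed)
      prepare_ae_forestly() |>                      # data preparation
      format_ae_forestly() |>                       # output formatting
      ae_forestly()                                 # interactive forest plot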
This package provides a collection of functions for calculating Floristic Quality Assessment (FQA) metrics using regional FQA databases that have been approved or approved with reservations as ecological planning models by the U.S. Army Corps of Engineers (USACE). For information on FQA see Spyreas (2019) <doi:10.1002/ecs2.2825>. These databases are stored in a sister R package, 'fqadata'. Both packages were developed for the USACE by the U.S. Army Engineer Research and Development Center's Environmental Laboratory.
Computes factorial A-, D- and E-optimal designs for two-colour cDNA microarray experiments.
This package provides functions for printing the contents of a folder as columns in a ragged-bottom data.frame and for viewing the details (size, time created, time modified, etc.) of a folder's top level contents.
This package creates an HTML widget which displays the results of searching for a pattern in the files of a given folder. The results can be viewed in the RStudio viewer pane, included in an R Markdown document, or used in a Shiny application. Also provides a Shiny application allowing the user to run this widget and to navigate the files found by the search. Instead of creating an HTML widget, it is also possible to get the results of the search in a 'tibble'. The search is performed by the grep command-line utility.
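If this blurb describes the findInFiles package (an assumption; the main function would share the package's name), basic usage looks roughly like:

    library(findInFiles)
    # Search R files in the current folder for a pattern; opens the HTML widget
    # in the RStudio viewer. Argument order (extension, then pattern) is assumed.
    findInFiles("R", "myPattern")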
Analysis of Bayesian adaptive enrichment clinical trials using the Free-Knot Bayesian Model Averaging (FK-BMA) method of Maleyeff et al. (2024) for Gaussian data. Maleyeff, L., Golchi, S., Moodie, E. E. M., & Hudson, M. (2024) "An adaptive enrichment design using Bayesian model averaging for selection and threshold-identification of predictive variables" <doi:10.1093/biomtc/ujae141>.
This data set contains a large variety of information on players and their current attributes in Fantasy Premier League <https://fantasy.premierleague.com/>. In particular, it contains a `next_gw_points` (next gameweek points) value for each player given their attributes in the current week. Rows represent player-gameweeks, i.e. for each player there is a row for each gameweek. This makes the data suitable for modelling a player's next gameweek points, given attributes such as form, total points, and cost at the current gameweek. The data can therefore be used to create Fantasy Premier League bots that may use a machine learning algorithm and a linear programming solver (for example) to return the best possible transfers and team to pick for each gameweek, thereby fully automating the decision-making process in Fantasy Premier League. This package simply supplies the required data for such a task.
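A minimal modelling sketch of the task described above; the data frame and its snake_case column names are hypothetical stand-ins for what the package supplies (only `next_gw_points` is named in the description):

    # Toy stand-in for the supplied player-gameweek data (hypothetical columns).
    fpl <- data.frame(next_gw_points = rpois(200, 3),
                      form = runif(200, 0, 10),
                      total_points = rpois(200, 50),
                      cost = runif(200, 4, 13))
    # Predict next gameweek's points from current-week attributes.
    fit <- lm(next_gw_points ~ form + total_points + cost, data = fpl)
    summary(fit)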
Visualise sequential distributions using a range of plotting styles. Sequential distribution data can be input as either simulations or values corresponding to percentiles over time. Plots are added to existing graphic devices using the fan function. Users can choose from four different styles, including fan chart type plots, where a set of coloured polygons, with shadings corresponding to the percentile values, is layered to represent different uncertainty levels. Full details are in the R Journal article Abel (2015) <doi:10.32614/RJ-2015-002>.
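Assuming this is the fanplot package, a minimal sketch of adding a fan to an open plot from a matrix of simulations (rows as simulations, columns as time points; that input convention is an assumption):

    library(fanplot)
    # Toy sequential distribution: 1000 simulations over 50 time points.
    sims <- sapply(1:50, function(t) rnorm(1000, mean = 0.05 * t, sd = sqrt(t)))
    plot(NULL, xlim = c(1, 50), ylim = range(sims), xlab = "time", ylab = "value")
    fan(data = sims)   # layers shaded percentile polygons onto the open device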
Screens daily streamflow time series for temporal trends and change-points. This package has been developed primarily for assessing the quality of daily streamflow time series. It also contains tools for plotting and calculating many different streamflow metrics. The package can be used to produce summary screening plots showing change-points and significant temporal trends for high flow, low flow, and/or baseflow statistics, or it can be used to perform more detailed hydrological time series analyses. The package was designed for screening daily streamflow time series from Water Survey Canada and the United States Geological Survey but will also work with streamflow time series from many other agencies. Version 2.0 updated the read.flows function to allow loading of GRDC and ROBIN streamflow record formats. This package uses the `changepoint` package for change point detection; for more information on change point methods, see <https://cran.r-project.org/package=changepoint>.
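If this blurb describes the FlowScreen package (only read.flows is named above; the other function names are assumptions), a typical screening run looks roughly like this, with a hypothetical station file:

    library(FlowScreen)
    flows <- read.flows("05AA008_daily.csv")  # hypothetical WSC-format daily flow file
    ts <- create.ts(flows)                    # build the daily time series (name assumed)
    res <- metrics.all(ts)                    # compute streamflow metrics (name assumed)
    screen.summary(res, type = "h")           # high-flow screening plot (name assumed)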
Infrastructure for creating rich, dynamic web content using R scripts while maintaining very fast response times.
FastGit <https://doc.fastgit.org/> works like a mirror of GitHub to provide significant acceleration. fgitR is a package for performing git operations through FastGit automatically.
Basic analysis of all penalties taken in the German men's Bundesliga between the start of its inaugural season and May 2017. The main functions provide suitable printing and plotting of the data. Flexible selection of a player is supported via grep. Missed penalties can easily be included or excluded, depending on the user's wishes.
Construction and smart selection of Gaussian process models for analysis of computer experiments with emphasis on treatment of functional inputs that are regularly sampled. This package offers: (i) flexible modeling of functional-input regression problems through the fairly general Gaussian process model; (ii) built-in dimension reduction for functional inputs; (iii) heuristic optimization of the structural parameters of the model (e.g., active inputs, kernel function, type of distance). An in-depth tutorial on the use of funGp is provided in Betancourt et al. (2024) <doi:10.18637/jss.v109.i05>, and metamodeling background is provided in Betancourt et al. (2020) <doi:10.1016/j.ress.2020.106870>. The algorithm for structural parameter optimization is described in <https://hal.science/hal-02532713>.
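A minimal fitting sketch, under the assumption that the main constructor is fgpm(sIn, fIn, sOut) taking scalar inputs, a list of functional inputs, and an output vector:

    library(funGp)
    set.seed(100)
    n <- 25
    sIn  <- matrix(runif(n * 2), ncol = 2)               # two scalar inputs
    fIn  <- list(f1 = matrix(runif(n * 10), ncol = 10))  # one functional input, 10 sample points
    sOut <- rowSums(sIn) + rowMeans(fIn$f1)              # toy response
    m <- fgpm(sIn = sIn, fIn = fIn, sOut = sOut)         # fit the functional-input GP
    m                                                    # print the fitted model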
Supports the most frequently used methods to combine forecasts, among others: simple average, ordinary least squares, least absolute deviation, constrained least squares, variance-based, best individual model, complete subset regressions, and information-theoretic (information criteria based).
Fits probability distributions to data and plugs into the probaverse suite of R packages so distribution objects are ready for further manipulation and evaluation. Supports methods such as maximum likelihood and L-moments, and provides diagnostics including empirical ranking and quantile score.
Offers a set of tools for visualizing and analyzing the size and power properties of the test for equal predictive accuracy, the Diebold-Mariano test based on heteroskedasticity- and autocorrelation-robust (HAR) inference. HAR inference typically involves non-parametric estimation of the long-run variance, and one of its tuning parameters, the truncation parameter, trades off size against power. Lazarus, Lewis, and Stock (2021) <doi:10.3982/ECTA15404> theoretically characterize the size-power frontier for the Gaussian multivariate location model. ForeComp computes and visualizes the finite-sample size-power frontier of the Diebold-Mariano test based on fixed-b asymptotics together with the Bartlett kernel. To compute the finite-sample size and power, it works with the best-approximating ARMA process for the given dataset. It informs the user how their choice of the truncation parameter performs and how robust the testing outcomes are.
Routines for exploratory and descriptive analysis of functional data, such as depth measurements, atypical curve detection, regression models, supervised classification, unsupervised classification, and functional analysis of variance.
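Assuming this blurb describes the fda.usc package (an assumption based on the feature list), a short depth sketch on its bundled tecator data:

    library(fda.usc)
    data(tecator)
    absorp <- tecator$absorp.fdata   # spectrometric curves as an fdata object
    plot(absorp)                     # plot all curves
    d <- depth.FM(absorp)            # Fraiman-Muniz functional depth (name assumed)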
An interface to the fastText <https://github.com/facebookresearch/fastText> library for efficient learning of word representations and sentence classification. The fastText algorithm is explained in detail in (i) "Enriching Word Vectors with Subword Information", Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, 2017, <doi:10.1162/tacl_a_00051>; (ii) "Bag of Tricks for Efficient Text Classification", Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov, 2017, <doi:10.18653/v1/e17-2068>; (iii) "FastText.zip: Compressing text classification models", Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herve Jegou, Tomas Mikolov, 2016, <doi:10.48550/arXiv.1612.03651>.