Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
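For example, a minimal Python sketch of a paginated query (the base URL here is a placeholder for wherever the service is hosted, and requests is just one convenient HTTP client; check the actual response headers for the exact pagination fields):

import requests  # third-party HTTP client

BASE_URL = "https://example.org"  # placeholder; replace with the actual host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.json())     # matching packages for this page
print(resp.headers)    # pagination information is returned in these headers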
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Demographic data on the United States at the county and state levels spanning multiple years.
Calculates one-sample unbiased central moment estimates and two-sample pooled estimates up to 6th order, including estimates of powers and products of central moments. Provides the machinery for obtaining unbiased central moment estimators beyond 6th order by generating expressions for expectations of raw sample moments and their powers and products. Gerlovina and Hubbard (2019) <doi:10.1080/25742558.2019.1701917>.
Unit-Gompertz density, cumulative distribution, quantile functions and random deviate generation for the unit-Gompertz distribution. In addition, there is a function for fitting Generalized Additive Models for Location, Scale and Shape.
This package provides a method for estimating log-normalizing constants (or free energies) and expectations from multiple distributions (such as multiple generalized ensembles).
This package contains a WGS84 datum map of the USA, which includes all Commonwealth and State boundaries as well as Puerto Rico and the U.S. Virgin Islands. This map is a reprojection of the NAD83 datum map from the USGS National Map. This package contains a subset of the data included in the USA.state.boundaries.data package, which is available in a drat repository. To install that data package, please follow the instructions at <https://gitlab.com/iembry/usa.state.boundaries.data>.
Calculate several understandability metrics of BPMN models. BPMN stands for Business Process Model and Notation and is a language for expressing business processes as business process diagrams. Examples of these understandability metrics are: average connector degree, maximum connector degree, sequentiality, cyclicity, diameter, depth, token split, control flow complexity, connector mismatch, connector heterogeneity, separability, structuredness and cross connectivity. See the R documentation and the paper on metric implementation included in this package for more information about the metrics.
The "ussher" data set is drawn from original chronological textual historic events. Commonly known as James Ussher's Annals of the World, the source text was originally written in Latin in 1650, and published in English translation in 1658.The data are classified by index, year, epoch (or one of the 7 ancient "Ages of the World"), Biblical source book if referenced (rarely), as well as alternate dating mechanisms, such as "Anno Mundi" (age of the world) or "Julian Period" (dates based upon the Julian calendar). Additional file "usshfull" includes variables that may be of further interest to historians, such as Southern Kingdom and Northern Kingdom discrepant dates, and the original amalgamated dating mechanic used by Ussher in the original text. The raw data can also be called using "usshraw", as described in: Ussher, J. (1658) <https://archive.org/stream/AnnalsOfTheWorld/Annals_djvu.txt>.
An engine for univariate time series forecasting using different regression models in an autoregressive way. The engine provides a uniform interface for applying the different models. Furthermore, it is extensible, so users can easily apply their own regression models to univariate time series forecasting and benefit from all the features of the engine, such as preprocessing or estimation of forecast accuracy.
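As an illustration of the autoregressive approach described above (a generic sketch, not this package's API), the following Python code, assuming scikit-learn is available, fits a regression model on lagged values and applies it recursively to produce a multi-step forecast:

import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(series, lags):
    # Build a design matrix of lagged values and the matching targets.
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

def forecast(series, lags=4, horizon=5, model=None):
    # Fit any regression model on lagged values, then forecast recursively,
    # feeding each prediction back in as the newest lag.
    model = model or LinearRegression()
    X, y = make_lagged(np.asarray(series, dtype=float), lags)
    model.fit(X, y)
    history = list(series[-lags:])
    preds = []
    for _ in range(horizon):
        nxt = model.predict(np.array(history[-lags:]).reshape(1, -1))[0]
        preds.append(nxt)
        history.append(nxt)
    return preds

print(forecast([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], lags=3, horizon=4))

Any regressor with fit/predict methods could be swapped in for LinearRegression, which is the sense in which such an engine is extensible.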
This package provides a variational mapping approach that reveals and expands future temporal dynamics from folded high-dimensional geometric distance spaces. unfold turns a set of time series into a 4D block of pairwise distances between reframed windows, learns a variational mapper that maps those distances to the next reframed window, and produces horizon-wise predictive functions for each input series. In short: it unfolds the future path of each series from a folded geometric distance representation.
This package provides functions to implement the methods of the Flood Estimation Handbook (FEH), associated updates and the revitalised flood hydrograph model (ReFH). Currently the package uses the NRFA peak flow dataset version 14. Aside from FEH functionality, further hydrological functions are available. Most of the methods implemented in this package are described in one or more of the following: "Flood Estimation Handbook", Centre for Ecology & Hydrology (1999, ISBN:0 948540 94 X). "Flood Estimation Handbook Supplementary Report No. 1", Kjeldsen (2007, ISBN:0 903741 15 7). "Regional Frequency Analysis - an approach based on L-moments", Hosking & Wallis (1997, ISBN: 978 0 521 01940 8). "Making better use of local data in flood frequency estimation", Environment Agency (2017, ISBN: 978 1 84911 387 8). "Sampling uncertainty of UK design flood estimation", Hammond (2021, <doi:10.2166/nh.2021.059>). "The FEH 2025 statistical method update", UK Centre for Ecology and Hydrology (2025). "Low flow estimation in the United Kingdom", Institute of Hydrology (1992, ISBN:0 948540 45 1). Data from the UK National River Flow Archive (<https://nrfa.ceh.ac.uk/>, terms and conditions: <https://nrfa.ceh.ac.uk/help/costs-terms-and-conditions>).
Two Phase I designs are implemented in the package: the classical 3+3 and the Continual Reassessment Method (<doi:10.2307/2531628>). Simulation tools are also available to estimate the operating characteristics of the methods with several user-dependent options.
UpSet.js is a re-implementation of UpSetR to create interactive set visualizations for more than three sets. This is an htmlwidget wrapper around the JavaScript library UpSet.js.
Serves as an R wrapper for the University of California San Francisco's Industry Documents Digital Library (<https://www.industrydocuments.ucsf.edu/>) API. The API, and this wrapper, serve to pull metadata about items within the digital library. For more information about the API, see the API documentation (<https://www.industrydocuments.ucsf.edu/wp-content/uploads/2020/08/IndustryDocumentsDataAPI_v7.pdf>).
Access data from Land Registry Open Data <http://landregistry.data.gov.uk/> through SPARQL queries. uklr supports the house price index, transaction and price paid data.
This package provides an algorithm to detect and characterize disturbances (start and end dates, intensity) that can occur at different hierarchical levels by studying the dynamics of longitudinal observations at the unit level and the group level based on Nadaraya-Watson smoothing curves, as well as a shiny app which allows users to visualize the observations and the detected disturbances. Finally, the package provides a dataframe mimicking a pig farming system subjected to disturbances simulated according to Le et al. (2022) <doi:10.1016/j.animal.2022.100496>.
User-friendly maximum likelihood estimation (Fisher (1921) <doi:10.1098/rsta.1922.0009>) of univariate densities.
When updating major or minor R versions, all packages should be re-installed. The utilities in this package assist in getting a user up and running again by installing all previously installed R packages. The package uses renv to install, immediately replenishing your renv package cache.
This dataset contains selected attributes for each match result since the 1949-1950 season for the UNC men's basketball team.
Model data with a suspected clustering structure (either in covariate space, regression space, or both) using a Bayesian product model with a logistic regression likelihood. Observations are represented graphically and clusters are formed through various edge removals or additions. Cluster quality is assessed through the log Bayesian evidence of the overall model, which is estimated using either a Sequential Monte Carlo sampler or a suitable transformation of the Bayesian Information Criterion as a fast approximation of the former. The internal Iterated Batch Importance Sampling scheme (Chopin (2002) <doi:10.1093/biomet/89.3.539>) is made available as a free-standing function.
The Uniform Error Index is the weighted average of different error measures. The Uniform Error Index utilizes output from different error functions and gives more robust and stable error values. This package has been developed to compute the Uniform Error Index from ten different loss functions, such as Error Square, Square of Square Error, Quasi Likelihood Error, LogR-Square, Absolute Error and Absolute Square Error. The weights are determined using the Principal Component Analysis (PCA) algorithm of Yeasin and Paul (2024) <doi:10.1007/s11227-023-05542-3>.
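As a rough illustration of the general idea (a minimal Python sketch assuming scikit-learn, not this package's code), the error measures below are combined with weights taken from the first principal component; the specific measures and normalisation are simplified assumptions:

# Combine several error measures into one index using PCA-derived weights.
import numpy as np
from sklearn.decomposition import PCA

def uniform_error_index(errors):
    # errors: (n_models, n_measures) matrix of error measures per model.
    errors = np.asarray(errors, dtype=float)
    pca = PCA(n_components=1)
    pca.fit(errors)
    loadings = np.abs(pca.components_[0])   # first principal component
    weights = loadings / loadings.sum()     # normalise weights to sum to 1
    return errors @ weights                 # weighted average per model

# Example: three models scored on four hypothetical error measures.
scores = [[0.9, 0.7, 1.1, 0.8],
          [1.4, 1.0, 1.6, 1.2],
          [0.5, 0.4, 0.6, 0.5]]
print(uniform_error_index(scores))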
This package provides a container for data used by the usmap package. The data used by usmap has been extracted into this package so that the file size of the usmap package can be reduced greatly. The data in this package will be updated roughly once per year as new map data files are provided by the US Census Bureau.
Uniform sampling on various geometric shapes, such as spheres, ellipsoids, and simplices.
Code snippets to fit models using the tidymodels framework can be easily created for a given data set.
Assesses the significance of identified clusters and estimates the true number of clusters by comparing the explained variation due to the clustering of the original data to that produced by clustering a unimodal reference distribution which preserves the covariance structure in the data. The reference distribution is generated using kernel density estimation and a Gaussian copula framework. A dimension reduction strategy and sparse covariance estimation optimize this method for the high-dimensional, low-sample size setting. This method is described in Helgeson, Vock, and Bair (2021) <doi:10.1111/biom.13376>.
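A simplified Python sketch of the comparison idea, assuming numpy and scikit-learn: the variation explained by clustering the observed data is compared with that explained by clustering draws from a unimodal reference with the same covariance. For brevity the reference here is multivariate normal rather than the KDE plus Gaussian copula construction described above:

import numpy as np
from sklearn.cluster import KMeans

def explained_variation(data, k):
    # Fraction of total variation explained by a k-means clustering.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    total_ss = ((data - data.mean(axis=0)) ** 2).sum()
    return 1.0 - km.inertia_ / total_ss

rng = np.random.default_rng(0)
# Toy data with two well-separated groups in five dimensions.
data = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])

obs = explained_variation(data, k=2)
# Unimodal references sharing the sample mean and covariance.
refs = [explained_variation(
            rng.multivariate_normal(data.mean(axis=0), np.cov(data.T), len(data)),
            k=2)
        for _ in range(20)]
p_value = np.mean([r >= obs for r in refs])
print(obs, np.mean(refs), p_value)

A small p_value indicates that the observed clustering explains more variation than clustering a comparable unimodal distribution would.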