Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
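For example, a query can be issued from a script like this (a minimal sketch: the host name, the pagination header names, and the shape of the JSON body are assumptions, since only the path and the query parameters are documented above):

```python
import requests

# Hypothetical host; substitute the actual instance serving /api/packages.
BASE_URL = "https://example.org/api/packages"

# search: the query string, page: page number, limit: items per page.
resp = requests.get(BASE_URL, params={"search": "hello", "page": 1, "limit": 20})
resp.raise_for_status()

# Pagination details come back in the response headers; the filter below is a
# placeholder -- inspect resp.headers to see the exact header names used.
print({k: v for k, v in resp.headers.items() if "page" in k.lower()})

# Assuming the body is a JSON array of package records.
for pkg in resp.json():
    print(pkg)
```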
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Predict Scope 1, 2 and 3 carbon emissions for UK Small and Medium-sized Enterprises (SMEs), using Standard Industrial Classification (SIC) codes and annual turnover data, as well as Scope 1 carbon emissions for UK farms. The carbonpredict package provides single and batch prediction, plotting, and workflow tools for carbon accounting and reporting. The package utilises pre-trained models, leveraging rich classified transaction data to accurately predict Scope 1, 2 and 3 carbon emissions for UK SMEs as well as identifying emissions hotspots. It also provides Scope 1 carbon emissions predictions for UK farms of the following types: Cereals ex. rice, Dairy, Mixed farming, Sheep and goats, Cattle & buffaloes, Poultry, Animal production and Support for crop production. The methodology used to produce the estimates in this package is fully detailed in the following peer-reviewed publication in the Journal of Industrial Ecology: Phillpotts, A., Owen, A., Norman, J., Trendl, A., Gathergood, J., Jobst, N., and Leake, D. (2025) "Bridging the SME Reporting Gap: A New Model for Predicting Scope 1 and 2 Emissions" <doi:10.1111/jiec.70106>.
This package provides functions for calculating conditional power under different models of survival-time analysis in randomized clinical trials that compare two treatments and use survival as the endpoint.
Apply styles to tag elements directly or with the .style pronoun. Using the pronoun, styles are created within the context of a tag element. Change borders, backgrounds, text, margins, layouts, and more.
Create rich command-line applications with colors, headings, lists, alerts, progress bars, etc. It uses CSS for custom themes. This package is now superseded by the cli package; please use cli instead in new projects.
Data on international and other major cricket matches from ESPNCricinfo <https://www.espncricinfo.com> and Cricsheet <https://cricsheet.org>. This package provides some functions to download the data into tibbles ready for analysis.
This package provides tools for evaluating link prediction and clustering algorithms with respect to ground truth. Includes efficient implementations of common performance measures such as pairwise precision/recall, cluster homogeneity/completeness, variation of information, the Rand index, etc.
Maximum likelihood estimation of the Cauchy-Cacoullos (discrete Cauchy) distribution. Probability mass, distribution and quantile functions are also included. The reference paper is: Papadatos N. (2022). "The Characteristic Function of the Discrete Cauchy Distribution in Memory of T. Cacoullos". Journal of Statistical Theory and Practice, 16(3): 47. <doi:10.1007/s42519-022-00268-6>.
Model-free selection of covariates under unconfoundedness for situations where the parameter of interest is an average causal effect. This package is based on model-free backward elimination algorithms proposed in de Luna, Waernbaum and Richardson (2011). Marginal co-ordinate hypothesis testing is used in situations where all covariates are continuous while kernel-based smoothing appropriate for mixed data is used otherwise.
This package provides a modified boxplot with a new fence coefficient determined by Lin et al. (2025). The traditional fence coefficient k=1.5 in Tukey's boxplot is replaced by a coefficient based on Chauvenet's criterion, as described in their formula (9). The new boxplot can be implemented in base R with function chau_boxplot(), and in ggplot2 with function geom_chau_boxplot().
This package provides a flexible tool for calculating carbon-equivalent emissions. Mostly using data from the UK Government's Greenhouse Gas Conversion Factors report <https://www.gov.uk/government/publications/greenhouse-gas-reporting-conversion-factors-2024>, it facilitates transparent emissions calculations for various sectors, including travel, accommodation, and clinical activities. The package is designed for easy integration into R workflows, with additional support for shiny applications and community-driven extensions.
This package provides a system for creating R Markdown reports with a sequential syntax.
This package contains the adaptation of bubblebath from MATLAB, developed by Adam Danz and available through the MATLAB Central File Exchange, and the tools to transform a data frame of radii and points into plottable paths.
This package provides a tool to easily run and visualise state-of-the-art supervised and unsupervised customer segmentation. It is built as a pipeline covering the three main steps in a segmentation project: pre-processing, modelling, and plotting. Users can either run the pipeline as a whole or choose to run any one of the three individual steps. It is equipped with a supervised option (tree optimisation) and an unsupervised option (k-clustering) as default models.
This package provides a new method for identification of clusters of genomic regions within chromosomes. Primarily, it is used for calling clusters of cis-regulatory elements (COREs). CREAM uses genome-wide maps of genomic regions in the tissue or cell type of interest, such as those generated from chromatin-based assays including DNaseI, ATAC or ChIP-Seq. CREAM considers proximity of the elements within chromosomes of a given sample to identify COREs in the following steps: 1) it identifies the window size, i.e. the maximum allowed distance between the elements within each CORE; 2) it identifies the number of elements which should be clustered as a CORE; 3) it calls COREs; 4) it filters out the lowest-order COREs that do not pass the threshold considered in the approach.
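The basic idea of grouping nearby elements by a maximum allowed gap can be illustrated with a short sketch (an independent illustration only, not CREAM's implementation; here the window size is supplied by hand rather than estimated as in step 1):

```python
def max_gap_clusters(starts, window):
    """Group element start positions into clusters whenever the gap
    between consecutive (sorted) positions is at most `window`."""
    positions = sorted(starts)
    clusters, current = [], [positions[0]]
    for pos in positions[1:]:
        if pos - current[-1] <= window:
            current.append(pos)
        else:
            clusters.append(current)
            current = [pos]
    clusters.append(current)
    return clusters

# Example: element positions on one chromosome, maximum allowed gap of 1000 bp.
print(max_gap_clusters([100, 500, 900, 5000, 5400, 20000], window=1000))
# -> [[100, 500, 900], [5000, 5400], [20000]]
```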
Sample size estimation in cluster (group) randomized trials. Contains traditional power-based methods, empirical smoothing (Rotondi and Donner, 2009), and updated meta-analysis techniques (Rotondi and Donner, 2012).
This package provides a chess program which allows the user to create a game, add moves, check for legal moves and game result, plot the board, take back moves, and read and write FEN (Forsyth-Edwards Notation). A basic chess engine based on minimax is implemented.
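As a hedged illustration of the minimax idea only (not this package's engine, which searches chess positions rather than a toy tree), the following evaluates a small game tree given as nested lists, with integer leaves scored from the maximizing player's point of view:

```python
def minimax(node, maximizing=True):
    """Return the minimax value of a game tree given as nested lists;
    integer leaves are terminal evaluations for the maximizing player."""
    if isinstance(node, int):
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Two plies: the opponent minimizes within each branch, we pick the best branch.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # -> 3 (branch [3, 5]: opponent forces 3, better than 2 or 0)
```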
Computes a single scalar metric for diurnal cortisol cycle analysis, the Cortisol Sine Score (CSS). The score is calculated as the sum over time points of concentration multiplied by sin(2 * pi * time / 24), giving positive weights to morning time points and negative weights to evening ones. The method is model-free, robust, and suitable for regression, classification, clustering, and biomarker research.
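The score as described reduces to a single weighted sum; a minimal sketch of that calculation (illustrative only, not the package's own function, and using made-up sampling times and concentrations):

```python
import math

def cortisol_sine_score(times_h, concentrations):
    """Sum of concentration * sin(2*pi*time/24) over all time points,
    weighting morning samples positively and evening samples negatively."""
    return sum(c * math.sin(2 * math.pi * t / 24)
               for t, c in zip(times_h, concentrations))

# Hypothetical diurnal profile: sampling hours (0-24) and cortisol concentrations.
times = [8, 12, 17, 22]
conc = [15.0, 9.0, 5.0, 3.0]
print(round(cortisol_sine_score(times, conc), 2))
```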
Solves multivariate least squares (MLS) problems subject to constraints on the coefficients, e.g., non-negativity, orthogonality, equality, inequality, monotonicity, unimodality, smoothness, etc. Includes flexible functions for solving MLS problems subject to user-specified equality and/or inequality constraints, as well as a wrapper function that implements 24 common constraint options. Also does k-fold or generalized cross-validation to tune constraint options for MLS problems. See ten Berge (1993, ISBN:9789066950832) for an overview of MLS problems, and see Goldfarb and Idnani (1983) <doi:10.1007/BF02591962> for a discussion of the underlying quadratic programming algorithm.
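One of the simplest constraint options mentioned above, non-negativity of the coefficients, can be illustrated outside the package with a standard solver (a sketch only, using scipy's generic NNLS routine rather than this package's wrapper):

```python
import numpy as np
from scipy.optimize import nnls

# Toy least squares problem: minimize ||A x - b|| subject to x >= 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, 0.0, 2.0])          # one coefficient sits on the boundary
b = A @ x_true + 0.05 * rng.normal(size=50)

x_hat, residual_norm = nnls(A, b)
print(x_hat)           # estimates close to [1, 0, 2], all non-negative
print(residual_norm)   # Euclidean norm of the residual at the solution
```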
The implementation of bias-corrected sandwich variance estimators for the analysis of cluster randomized trials with time-to-event outcomes using the marginal Cox model, proposed by Wang et al. (under review).
This package implements a joint cointegration testing approach that combines Engle-Granger, Johansen maximum eigenvalue, Boswijk, and Banerjee tests into a unified test-statistic for the null of non-cointegration. Also see Bayer and Hanck (2013) <doi:10.1111/j.1467-9892.2012.00814.x>.
Draws systematic samples from a population that follows a linear trend. The function returns a matrix comprising the required samples as its column vectors. The samples produced are highly efficient, and the inter-sample variance is minimal. The scheme will be useful in various fields, such as bioinformatics, where samples are expensive and must reflect the population precisely by possessing the least sampling variance.
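Plain (linear) systematic sampling, on which such schemes build, can be sketched as follows (an illustration of ordinary systematic sampling only, not the modified scheme implemented by the package):

```python
import random

def systematic_sample(population, n):
    """Draw one linear systematic sample of size n: pick a random start
    and then take every k-th unit, where k = N // n."""
    N = len(population)
    k = N // n                   # sampling interval
    start = random.randrange(k)  # random start within the first interval
    return [population[start + i * k] for i in range(n)]

# Population whose study variable follows a linear trend.
population = list(range(1, 101))  # units 1..100
print(systematic_sample(population, n=10))
```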
The phenology of plants (i.e. the timing of their annual life phases) depends on climatic cues. For temperate trees and many other plants, spring phases, such as leaf emergence and flowering, have been found to result from the effects of both cool (chilling) conditions and heat. Fruit tree scientists (pomologists) have developed some metrics to quantify chilling and heat (e.g. see Luedeling (2012) <doi:10.1016/j.scienta.2012.07.011>). chillR contains functions for processing temperature records into chilling (Chilling Hours, Utah Chill Units and Chill Portions) and heat units (Growing Degree Hours). Regarding chilling metrics, Chill Portions are often considered the most promising, but they are difficult to calculate. This package makes it easy. chillR also contains procedures for conducting a PLS analysis relating phenological dates (e.g. bloom dates) to either mean temperatures or mean chill and heat accumulation rates, based on long-term weather and phenology records (Luedeling and Gassner (2012) <doi:10.1016/j.agrformet.2011.10.020>). As of version 0.65, it also includes functions for generating weather scenarios with a weather generator, for conducting climate change analyses for temperature-based climatic metrics and for plotting results from such analyses. Since version 0.70, chillR contains a function for interpolating hourly temperature records.
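As a simple illustration of turning hourly temperature records into a chilling metric, here is a sketch of the classic Chilling Hours definition (hours with temperatures between roughly 0 and 7.2 degrees C); this is not chillR's code, and the thresholds are the commonly cited ones rather than values taken from this package:

```python
def chilling_hours(hourly_temps_c, lower=0.0, upper=7.2):
    """Count hours whose temperature falls within the chilling range
    (about 0-7.2 degrees C under the classic Chilling Hours model)."""
    return sum(lower < t <= upper for t in hourly_temps_c)

# Hypothetical hourly temperatures for part of a winter day.
temps = [2.1, 3.5, 6.8, 7.5, 9.0, 5.2, 1.0, -0.5, 4.4]
print(chilling_hours(temps))  # number of hours within the chilling range
```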
Use C++ Standard Template Library containers interactively in R. Includes sets, unordered sets, multisets, unordered multisets, maps, unordered maps, multimaps, unordered multimaps, stacks, queues, priority queues, vectors, deques, forward lists, and lists.
DNA copy number data evaluation using both their initial form (copy number as a noisy function of genomic position) and their approximation by a piecewise-constant function (segmentation), for the purpose of identifying genomic regions where the copy number differs from the norm.