Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
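For example, the endpoint can be queried directly from R (a minimal sketch, assuming the httr package; the base URL is a placeholder to be replaced with this site's address):

library(httr)

base_url <- "https://example.org"   # placeholder for this site's address
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
content(resp)    # matching packages
headers(resp)    # pagination information, e.g. the number of pages

The same request works with any HTTP client; only the query string shown above matters.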
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains the function calendR() for creating fully customizable monthly and yearly calendars (colors, fonts, formats, ...) and even heatmap calendars. In addition, it allows saving the calendars as ready-to-print A4 PDF files.
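A minimal usage sketch (year and month are arguments of calendR(); the pdf argument shown for saving is an assumption based on the description above):

library(calendR)

calendR(year = 2026)                # yearly calendar
calendR(year = 2026, month = 7)     # monthly calendar
# calendR(year = 2026, pdf = TRUE)  # assumed argument for saving a ready-to-print A4 PDF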
Different methods to conduct causal inference for multiple treatments with a binary outcome, including regression adjustment, vector matching, Bayesian additive regression trees, targeted maximum likelihood and inverse probability of treatment weighting using different generalized propensity score models such as multinomial logistic regression, generalized boosted models and super learner. For more details, see the paper by Hu et al. <doi:10.1177/0962280220921909>.
Supporting functionality to run caret with spatial or spatial-temporal data. caret is a frequently used package for model training and prediction using machine learning. CAST includes functions to improve spatial or spatial-temporal modelling tasks using 'caret'. It includes the newly suggested nearest neighbor distance matching cross-validation to estimate the performance of spatial prediction models and allows for spatial variable selection to select suitable predictor variables in view of their contribution to the spatial model performance. CAST further includes functionality to estimate the (spatial) area of applicability of prediction models. Methods are described in Meyer et al. (2018) <doi:10.1016/j.envsoft.2017.12.001>; Meyer et al. (2019) <doi:10.1016/j.ecolmodel.2019.108815>; Meyer and Pebesma (2021) <doi:10.1111/2041-210X.13650>; Milà et al. (2022) <doi:10.1111/2041-210X.13851>; Meyer and Pebesma (2022) <doi:10.1038/s41467-022-29838-9>; Linnenbrink et al. (2023) <doi:10.5194/egusphere-2023-1308>; Schumacher et al. (2024) <doi:10.5194/egusphere-2024-2730>. The package is described in detail in Meyer et al. (2024) <doi:10.48550/arXiv.2404.06978>.
Perform the censored quantile regression of Huang (2010) <doi:10.1214/09-AOS771>, and restore monotonicity, via adaptive interpolation, for the dynamic regression of Huang (2017) <doi:10.1080/01621459.2016.1149070>. The monotonicity-respecting restoration applies to general dynamic regression models, including the (uncensored or censored) quantile regression model, the additive hazards model, and the dynamic survival models of Peng and Huang (2007) <doi:10.1093/biomet/asm058>, among others.
This package contains functions for solving commonly encountered problems while programming in R. It is intended to provide a lightweight supplement to base R and will be useful for almost any R user.
Implementation of the age-period-cohort models for claim development presented in Pittarello G, Hiabu M, Villegas A (2025) "Replicating and Extending Chain-Ladder via an Age-Period-Cohort Structure on the Claim Development in a Run-Off Triangle" <doi:10.1080/10920277.2025.2496725>.
Using polygenic scores (PGS, or PRS/GRS for binary outcomes), this package allows one to investigate shared predisposition between different conditions, perform fast association analyses, and export plots and views of the PGS distribution as ggplot2 objects.
When causal quantities are not identifiable from the observed data, it still may be possible to bound these quantities using the observed data. We outline a class of problems for which the derivation of tight bounds is always a linear programming problem and can therefore, at least theoretically, be solved using a symbolic linear optimizer. We extend and generalize the approach of Balke and Pearl (1994) <doi:10.1016/B978-1-55860-332-5.50011-0> and we provide a user-friendly graphical interface for setting up such problems via directed acyclic graphs (DAGs), which only allow for problems within this class to be depicted. The user can then define linear constraints to further refine their assumptions to meet their specific problem, and then specify a causal query using a text interface. The program converts this user-defined DAG, query, and constraints, and returns tight bounds. The bounds can be converted to R functions to evaluate them for specific datasets, and to LaTeX code for publication. The methods and proofs of tightness and validity of the bounds are described in a paper by Sachs, Jonzon, Gabriel, and Sjölander (2022) <doi:10.1080/10618600.2022.2071905>.
Predict Scope 1, 2 and 3 carbon emissions for UK Small and Medium-sized Enterprises (SMEs), using Standard Industrial Classification (SIC) codes and annual turnover data, as well as Scope 1 carbon emissions for UK farms. The carbonpredict package provides single and batch prediction, plotting, and workflow tools for carbon accounting and reporting. The package utilises pre-trained models, leveraging rich classified transaction data to accurately predict Scope 1, 2 and 3 carbon emissions for UK SMEs as well as identifying emissions hotspots. It also provides Scope 1 carbon emissions predictions for UK farms of the following types: Cereals ex. rice, Dairy, Mixed farming, Sheep and goats, Cattle & buffaloes, Poultry, Animal production, and Support for crop production. The methodology used to produce the estimates in this package is fully detailed in the following peer-reviewed publication in the Journal of Industrial Ecology: Phillpotts, A., Owen, A., Norman, J., Trendl, A., Gathergood, J., Jobst, N., and Leake, D. (2025) <doi:10.1111/jiec.70106> "Bridging the SME Reporting Gap: A New Model for Predicting Scope 1 and 2 Emissions".
Finds a low-dimensional embedding of high-dimensional data, conditioning on available manifold information. The current version supports conditional MDS (based on either conditional SMACOF in Bui (2021) <arXiv:2111.13646> or closed-form solution in Bui (2022) <doi:10.1016/j.patrec.2022.11.007>) and conditional ISOMAP in Bui (2021) <arXiv:2111.13646>.
Accelerate Bayesian analytics workflows in R through interactive modelling, visualization, and inference. Define probabilistic graphical models using directed acyclic graphs (DAGs) as a unifying language for business stakeholders, statisticians, and programmers. This package relies on interfacing with the numpyro Python package.
This package produces statistical indicators of the impact of migration on the socio-demographic composition of an area. Three measures can be used: ratios, percentages and the Duncan index of dissimilarity. The input data files are assumed to be in an origin-destination matrix format, with each cell representing a flow count between an origin and a destination area. Columns are expected to represent origins, and rows are expected to represent destinations. The first row and column are assumed to contain labels for each area. See Rodriguez-Vignoli and Rowe (2018) <doi:10.1080/00324728.2017.1416155> for technical details.
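A minimal sketch of the expected input layout, with made-up area names and flow counts for illustration (when written to a file, the header row carries the origin labels and the first column carries the destination labels):

od <- data.frame(
  area   = c("North", "Centre", "South"),   # destination labels (first column)
  North  = c(120, 45, 30),                  # flows with origin North
  Centre = c(60, 200, 25),                  # flows with origin Centre
  South  = c(15, 35, 90)                    # flows with origin South
)
write.csv(od, "flows.csv", row.names = FALSE)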
Calculates confidence intervals after variable selection using repeated data splits. The package offers methods to address the challenges of post-selection inference, ensuring more accurate confidence intervals in models involving variable selection. The two main functions are 'lmps', which records the different models selected across multiple data splits as well as the corresponding coefficient estimates, and 'cips', which takes the lmps object as input to select variables and perform inference using two types of voting.
This package implements a method for identifying and removing the cell-cycle effect from scRNA-Seq data. The method is described in Barron M. and Li J. (2016), "Identifying and removing the cell-cycle effect from single-cell RNA-Sequencing data" <doi:10.1038/srep33892>. Unlike previous methods, ccRemover implements a mechanism that formally tests whether a component is cell-cycle related or not, and thus, while it often thoroughly removes the cell-cycle effect, it preserves other features/signals of interest in the data.
This project provides a group of new functions to calculate the outputs of the two main components of the Canadian Forest Fire Danger Rating System (CFFDRS; Van Wagner and Pickett (1985) <https://ostrnrcan-dostrncan.canada.ca/entities/publication/29706108-2891-4e5d-a59a-a77c96bc507c>) at various time scales: the Fire Weather Index (FWI) System, Van Wagner (1985) <https://ostrnrcan-dostrncan.canada.ca/entities/publication/d96e56aa-e836-4394-ba29-3afe91c3aa6c>, and the Fire Behaviour Prediction (FBP) System, Forestry Canada Fire Danger Group (1992) <https://cfs.nrcan.gc.ca/pubwarehouse/pdfs/10068.pdf>. Some functions have two versions, table-based and raster-based.
An implementation of robust estimation for the Cox model. Functionality includes fitting the Cox proportional hazards regression model efficiently and robustly in its basic form, where explanatory variables are time-independent with one event per subject. The method is based on a smooth modification of the partial likelihood.
This package provides tools for advanced analysis of continuous glucose monitoring (CGM) time-series, implementing GRID (Glucose Rate Increase Detector) and GRID-based algorithms for postprandial peak detection, and detection of hypoglycemic and hyperglycemic episodes (Levels 1/2/Extended) aligned with international consensus CGM metrics. Core algorithms are implemented in optimized C++ using Rcpp to provide accurate and fast analysis on large datasets.
Compare double-precision floating point vectors using relative differences. All equality operations are calculated using 'cpp11'.
Implementation of Tobit type I and type II families for censored regression using the mgcv package, based on methods detailed in Wood et al. (2016) <doi:10.1080/01621459.2016.1180986>.
Variable selection for Gaussian model-based clustering as implemented in the mclust package. The methodology allows one to find the (locally) optimal subset of variables in a data set that carry group/cluster information. A greedy or headlong search can be used, either in a forward-backward or backward-forward direction, with or without sub-sampling at the hierarchical clustering stage for starting mclust models. By default the algorithm uses a sequential search, but parallelisation is also available.
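A minimal usage sketch of the search options described above (X stands for a user-supplied numeric data frame or matrix; the argument names are recalled from the package documentation, so treat the exact call as an assumption):

library(clustvarsel)

out <- clustvarsel(X, G = 1:5, search = "greedy", direction = "forward")
out   # selected variable subset and the corresponding mclust model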
This package produces descriptive interpretations of confidence intervals. Includes (extensible) support for various test types, specified as sets of interpretations dependent on where the lower and upper confidence limits sit. Provides plotting functions for graphical display of interpretations.
Allows one to assess the stability of individual objects, clusters and whole clustering solutions based on repeated runs of the K-means and K-medoids partitioning algorithms.
This package provides a pair of functions for renaming and encoding data frames using external crosswalk files. It is especially useful when constructing master data sets from multiple smaller data sets that do not name or encode variables consistently across files. Based on similar commands in 'Stata'.
This package provides the facility to perform the chi-square and G-square tests of independence, calculate the retrospective power of the traditional chi-square test, compute permutation and Monte Carlo p-values, and obtain measures of association for tables of any size, such as Phi, Phi corrected, odds ratio with 95 percent CI and p-value, Yule's Q and Y, adjusted contingency coefficient, Cramer's V, V corrected, V standardised, bias-corrected V, W, Cohen's w, Goodman-Kruskal's lambda, and tau. It also calculates standardised, moment-corrected standardised, and adjusted standardised residuals and their significance, as well as the Quetelet Index, IJ association factor, and adjusted standardised counts. It also computes the chi-square-maximising version of the input table. Different outputs are returned in nicely formatted tables.