Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
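For example, a minimal sketch of calling this endpoint from R with the httr and jsonlite packages (the host below is a placeholder for this site's address):

```r
# Sketch only: replace the placeholder host with this site's address.
library(httr)
library(jsonlite)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
pkgs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)  # pagination details, e.g. the number of pages, come back here
```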
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Access and manipulate spatial tracking data, with straightforward coercion from and to other formats. Filter for speed and create time-spent maps from tracking data. There are coercion methods to convert between trip and 'ltraj' from 'adehabitatLT', and between trip and 'psp' and 'ppp' from 'spatstat'. Trip objects can be created from raw or grouped data frames, and from types in the 'sp', 'sf', 'amt', 'trackeR', 'mousetrap', and other packages; see Sumner, MD (2011) <https://figshare.utas.edu.au/articles/thesis/The_tag_location_problem/23209538>.
The TEQR package contains software to calculate the operating characteristics for the TEQR and the ACT designs. The TEQR (toxicity equivalence range) design is a toxicity-based cumulative cohort design with added safety rules. The ACT (Activity constrained for toxicity) design is also a cumulative cohort design with additional safety rules. The unique feature of this design is that dose is escalated based on lack of activity rather than on lack of toxicity, and is de-escalated only if an unacceptable level of toxicity is experienced.
This package provides functions for extracting tidy data from Bayesian treatment effect models, in particular BART, but extensions are possible. Functionality includes extracting tidy posterior summaries as in tidybayes <https://github.com/mjskay/tidybayes>, estimating (average) treatment effects, calculating common support, and plotting useful summaries of these.
Efficient estimation of the population-level causal effects of stochastic interventions on a continuous-valued exposure. Both one-step and targeted minimum loss estimators are implemented for the counterfactual mean value of an outcome of interest under an additive modified treatment policy, a stochastic intervention that may depend on the natural value of the exposure. To accommodate settings with outcome-dependent two-phase sampling, procedures incorporating inverse probability of censoring weighting are provided to facilitate the construction of inefficient and efficient one-step and targeted minimum loss estimators. The causal parameter and its estimation were first described by Díaz and van der Laan (2013) <doi:10.1111/j.1541-0420.2011.01685.x>, while the multiply robust estimation procedure and its application to data from two-phase sampling designs are detailed in NS Hejazi, MJ van der Laan, HE Janes, PB Gilbert, and DC Benkeser (2020) <doi:10.1111/biom.13375>. The software package implementation is described in NS Hejazi and DC Benkeser (2020) <doi:10.21105/joss.02447>. Estimation of nuisance parameters may be enhanced through the Super Learner ensemble model in 'sl3', available for download from GitHub using remotes::install_github("tlverse/sl3").
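As a sketch, the installation step mentioned above looks like this (installing remotes first is an extra step not stated in the description):

```r
# Install the sl3 ensemble-learning package from GitHub, as noted above.
install.packages("remotes")             # skip if remotes is already installed
remotes::install_github("tlverse/sl3")
```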
This package provides a suite of descriptive and inferential methods designed to evaluate one or more biomarkers for their ability to guide patient treatment recommendations. The package includes functions to assess the calibration of risk models and to plot, evaluate, and compare markers. Please see the reference Janes H, Brown MD, Huang Y, et al. (2014) <doi:10.1515/ijb-2012-0052> for further details.
Download daily interest rates from the US Treasury XML feed. Leveraging <https://home.treasury.gov/treasury-daily-interest-rate-xml-feed>, this package serves as a wrapper, facilitating the retrieval of daily treasury rates across various categories, including par yield curves, treasury bills, long-term rates, and real yield curves.
This package provides a collection of functions for generating frequency tables and cross-tabulations of categorical variables. The resulting tables can be exported to various formats (Excel, PDF, HTML, etc.) with extensive formatting and layout customization options.
This package provides functions for managing cashflows and interest rate curves.
Transport theory has seen much success in many fields of statistics and machine learning. We provide a variety of algorithms to compute the Wasserstein distance, barycenters, and more. See Peyré and Cuturi (2019) <doi:10.1561/2200000073> for a general exposition of computational optimal transport.
The twelvedata REST service offers access to current and historical data on stocks, standard as well as digital crypto currencies, and other financial assets, covering a wide variety of price series and time spans. See <https://twelvedata.com/> for details, to create an account, and to request an API key for free-but-capped access to the data.
Built on top of the tibble package, tibbletime is an extension that allows for the creation of time-aware tibbles. Some immediate advantages include the ability to perform time-based subsetting on tibbles, to quickly summarise and aggregate results by time period, and to create columns that can be used as dplyr time-based groups.
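A minimal sketch of the idea; the FANG daily-price dataset and the filter_time() helper are assumptions about what ships with tibbletime:

```r
# Sketch: declare a time index, then subset by period.
library(tibbletime)
library(dplyr)

fang <- as_tbl_time(FANG, index = date)  # FANG: bundled daily stock prices
fang %>% filter_time('2015' ~ '2015')    # keep only rows from 2015
```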
Uses the optimal test design approach by Birnbaum (1968, ISBN:9781593119348) and van der Linden (2018) <doi:10.1201/9781315117430> to construct fixed, adaptive, and parallel tests. Supports the following mixed-integer programming (MIP) solver packages: 'Rsymphony', 'highs', 'gurobi', 'lpSolve', and 'Rglpk'. The gurobi package is not available from CRAN; see <https://www.gurobi.com/downloads/>.
Time series methods for intermittent demand forecasting. Includes Croston's method and its variants (Moving Average, SBA), and the TSB method. Users can obtain optimal parameters on a variety of loss functions, or use fixed ones (Kourentzes (2014) <doi:10.1016/j.ijpe.2014.06.007>). Intermittent time series classification methods and the iMAPA method, which uses multiple temporal aggregation levels, are also provided (Petropoulos & Kourentzes (2015) <doi:10.1057/jors.2014.62>).
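A minimal sketch, assuming crost() accepts a plain numeric series and selects the SBA variant via its type argument:

```r
# Sketch: forecast an intermittent demand series with many zero periods.
library(tsintermittent)

demand <- c(0, 3, 0, 0, 5, 0, 2, 0, 0, 0, 4, 0)
fc <- crost(demand, h = 6, type = "sba")  # SBA variant of Croston's method
fc  # the returned list holds in-sample and out-of-sample forecasts
```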
This package provides customizable 3D tree models (as OBJ files) for use in data visualization. Includes both planar and solid tree models, various crown types (columnar, oval, palm, pyramidal, rounded, spreading, vase, weeping), and options to change the diameter, height, and color of the tree's crown and trunk.
This package implements the rank-preserving structural failure time model (RPSFTM), iterative parameter estimation (IPE), inverse probability of censoring weights (IPCW), marginal structural model (MSM), simple two-stage estimation (TSEsimp), and improved two-stage estimation with g-estimation (TSEgest) methods for treatment switching in randomized clinical trials.
Calculates trait moments from trait and community data using the methods developed in Maitner et al. (2021) <doi:10.22541/au.162196147.76797968/v1>.
The Twilio web service provides an API for computer programs to interact with telephony. The included functions wrap the SMS and MMS portions of Twilio's API, allowing users to send and receive text messages from R. See <https://www.twilio.com/docs/> for more information.
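A minimal sketch of sending a message, assuming the tw_send_message() wrapper and the TWILIO_SID/TWILIO_TOKEN environment variables; the credentials and phone numbers are placeholders:

```r
# Sketch: credentials and numbers below are placeholders, not real values.
Sys.setenv(TWILIO_SID = "your_account_sid",
           TWILIO_TOKEN = "your_auth_token")
library(twilio)

tw_send_message(to = "+15551230000",
                from = "+15559870000",
                body = "Hello from R")
```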
This package implements tipping point sensitivity analysis for time-to-event endpoints under different missing data scenarios, as described in Oodally et al. (2025) <doi:10.48550/arXiv.2506.19988>. It supports both model-based and model-free imputation, multiple imputation workflows, plausibility assessment, and visualizations, enabling robust assessment for regulatory and exploratory analyses.
Tune random forests with one line of code. The package is mainly based on the packages 'ranger' and 'mlrMBO'.
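A minimal sketch, assuming tuning is driven by an mlr task (mlr and makeClassifTask() are assumptions not mentioned in the description):

```r
# Sketch: tune a random forest for a classification task in one call.
library(tuneRanger)
library(mlr)

task <- makeClassifTask(data = iris, target = "Species")
res  <- tuneRanger(task, num.trees = 500)
res  # inspect the recommended hyperparameters
```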
Attaches a set of packages commonly used for spatial plotting with 'tmap'. It includes tmap and its extensions ('tmap.glyphs', 'tmap.networks', 'tmap.cartogram', 'tmap.mapgl'), as well as supporting spatial data packages ('sf', 'stars', 'terra') and 'cols4all' for exploring color palettes. The collection is designed for thematic mapping workflows and does not include the full set of packages from the R-spatial ecosystem.
This package creates a framework to store and apply display metadata to Analysis Results Datasets (ARDs). The use of tfrmt allows users to define table format and styling without the data, and later apply the format to the data.
This package implements two tests for whether toolmarks share a common source. The chumbley_non_random() test follows the paper "An Improved Version of a Tool Mark Comparison Algorithm" by Hadler and Morris (2017) <doi:10.1111/1556-4029.13640>. This is an extension of the Chumbley score as previously described in "Validation of Tool Mark Comparisons Obtained Using a Quantitative, Comparative, Statistical Algorithm" by Chumbley et al. (2010) <doi:10.1111/j.1556-4029.2010.01424.x>. fixed_width_no_modeling() is based on correlation measures in a diamond-shaped area of the toolmark, as described in Hadler (2017).
This user-friendly data package is intended to make Turkish higher education statistics more accessible.
This package provides tools to import, clean, validate, and analyze freshwater quality data in Brazil. It implements water quality indices including the Water Quality Index (WQI/IQA), the Trophic State Index (TSI/IET) after Carlson (1977) <doi:10.4319/lo.1977.22.2.0361> and Lamparelli (2004) <https://www.teses.usp.br/teses/disponiveis/41/41134/tde-20032006-075813/publico/TeseLamparelli2004.pdf>, and the National Sanitation Foundation Water Quality Index (NSF WQI) <doi:10.1007/s11157-023-09650-7>. The package also checks compliance with the Brazilian standard CONAMA Resolution 357/2005 <https://conama.mma.gov.br/?id=450&option=com_sisconama&task=arquivo.download> and generates reproducible reports for routine monitoring workflows. The example dataset (`wq_demo`) is a real subset of monitoring data (BURANHEM river, 2020-2024, 4 sampling points, 20 rows, 14 columns including `rio`, `lat`, and `lon`). All core examples and vignettes use this realistic sample, improving reproducibility and documentation value for users.
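A minimal sketch of loading the bundled example data, assuming the package is installed and attached (its name is not stated above, so it is omitted here):

```r
# Sketch: inspect the bundled wq_demo dataset once the package is attached.
data("wq_demo")
str(wq_demo)   # 20 rows, 14 columns, including rio, lat, and lon
head(wq_demo)
```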