Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
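A minimal sketch of calling this endpoint from Python's standard library. The base URL is a placeholder for this site's address, the response body is assumed to be JSON, and the exact pagination header names are whatever the server returns:

import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org"  # placeholder: substitute this site's address

query = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{BASE_URL}/api/packages?{query}") as response:
    print(dict(response.headers))   # pagination info (e.g. number of pages) arrives in the headers
    packages = json.load(response)  # matching packages are assumed to be JSON in the body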
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Easy access to data from Brazil's population censuses. The package provides a simple and efficient way to download and read the data sets and the documentation of all the population censuses taken in and after 1960 in the country. The package is built on top of the Arrow platform <https://arrow.apache.org/docs/r/>, which allows users to work with larger-than-memory census data using familiar dplyr functions (see <https://arrow.apache.org/docs/r/articles/arrow.html#analyzing-arrow-data-with-dplyr>).
While data from randomized experiments remain the gold standard for causal inference, estimation of causal estimands from observational data is possible through various confounding adjustment methods. However, the challenge of unmeasured confounding remains a concern in causal inference, where failure to account for unmeasured confounders can lead to biased estimates of causal estimands. Sensitivity analysis within the framework of causal inference can help adjust for possible unmeasured confounding. In the causens package, three main methods are implemented: adjustment via sensitivity functions (Brumback, Hernán, Haneuse, and Robins (2004) <doi:10.1002/sim.1657> and Li, Shen, Wu, and Li (2011) <doi:10.1093/aje/kwr096>), Bayesian parametric modelling, and Monte Carlo approaches (McCandless and Gustafson (2017) <doi:10.1002/sim.7298>).
The broken adaptive ridge estimator for censored data is used to select variables and estimate their coefficients in the semi-parametric accelerated failure time model for right-censored survival data.
This package provides recent kernel density estimation methods for circular data, including adaptive and higher-order techniques. The implementation is based on recent advances in bandwidth selection and circular smoothing. Key methods include adaptive bandwidth selection methods by Zámečník et al. (2024) <doi:10.1007/s00180-023-01401-0>, complete cross-validation by Hasilová et al. (2024) <doi:10.59170/stattrans-2024-024>, Fourier-based plug-in rules by Tenreiro (2022) <doi:10.1080/10485252.2022.2057974>, and higher-order kernels by Tsuruta & Sagae (2017) <doi:10.1016/j.spl.2017.08.003>.
Constrained ordinary least squares is performed. One constraint is that all beta coefficients (including the constant) must be non-negative: they can be either 0 or strictly positive. Another constraint is that the sum of the beta coefficients equals a constant. References: Hansen, B. E. (2022). Econometrics, Princeton University Press. <ISBN:9780691235899>.
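Written out, the constraints described above define the optimization problem (with c denoting the user-chosen constant):

\hat{\beta} = \arg\min_{\beta}\ \lVert y - X\beta \rVert_2^2 \quad \text{subject to} \quad \beta_j \ge 0 \ \text{for all } j \quad \text{and} \quad \sum_j \beta_j = c.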
Calculate agrometeorological variables for crops including growing degree days (McMaster, GS & Wilhelm, WW (1997) <doi:10.1016/S0168-1923(97)00027-0>), cumulative rainfall, number of stress days and cumulative or mean radiation and evaporation. Convert dates to day of year and vice versa. Also, download curated and interpolated Australian weather data from the Queensland Government DES longpaddock website <https://www.longpaddock.qld.gov.au/>. This data is freely available under the Creative Commons 4.0 licence.
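For reference, growing degree days in the sense of McMaster & Wilhelm (1997) are accumulated from daily temperature extremes; one common formulation, with T_base the crop-specific base temperature, is

\mathrm{GDD} = \max\!\left(\frac{T_{\max} + T_{\min}}{2} - T_{\mathrm{base}},\ 0\right),

and McMaster & Wilhelm discuss how the point at which the lower bound is applied changes the result.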
Cochran-Mantel-Haenszel methods (Cochran (1954) <doi:10.2307/3001616>; Mantel and Haenszel (1959) <doi:10.1093/jnci/22.4.719>; Landis et al. (1978) <doi:10.2307/1402373>) are a suite of tests applicable to categorical data. A competitor to those tests is the procedure of Nonparametric ANOVA, which was initially introduced in Rayner and Best (2013) <doi:10.1111/anzs.12041>. The methodology was then extended in Rayner et al. (2015) <doi:10.1111/anzs.12113>. This package provides functions related to both methodologies and serves as an accompaniment to the book An Introduction to Cochran-Mantel-Haenszel and Non-Parametric ANOVA. The package also contains the data sets used in that text.
Make fake data that looks realistic, supporting addresses, person names, dates, times, colors, coordinates, currencies, digital object identifiers ('DOIs'), jobs, phone numbers, DNA sequences, doubles and integers from distributions and within a range.
This high-level API client provides open access to cryptocurrency market data, sentiment indicators, and interactive charting tools. The data is sourced from major cryptocurrency exchanges via curl and returned in xts format. The data comes in open, high, low, and close (OHLC) format with flexible granularity, ranging from seconds to months. This flexibility makes it ideal for developing and backtesting trading strategies or conducting detailed market analysis.
Plots calibration curves and computes statistics for assessing calibration performance. See Lasai et al. (2025) <doi:10.48550/arXiv.2503.08389>, De Cock Campo (2023) <doi:10.48550/arXiv.2309.08559> and Van Calster et al. (2016) <doi:10.1016/j.jclinepi.2015.12.005>.
Enables: (1) plotting two-dimensional confidence regions, (2) coverage analysis of confidence region simulations, (3) calculating confidence intervals and the associated actual coverage for binomial proportions, (4) calculating the support values and the probability mass function of the Kaplan-Meier product-limit estimator, and (5) plotting the actual coverage function associated with a confidence interval for the survivor function from a randomly right-censored data set. Each is described in greater detail below.

(1) Plots the two-dimensional confidence region for probability distribution parameters (supported distribution suffixes: cauchy, gamma, invgauss, logis, llogis, lnorm, norm, unif, weibull) corresponding to a user-given complete or right-censored dataset and level of significance. The crplot() algorithm plots more points in areas of greater curvature to ensure a smooth appearance throughout the confidence region boundary. An alternative heuristic plots a specified number of points at roughly uniform intervals along its boundary. Both heuristics build upon the radial profile log-likelihood ratio technique for plotting confidence regions given by Jaeger (2016) <doi:10.1080/00031305.2016.1182946>, and are detailed in a publication by Weld et al. (2019) <doi:10.1080/00031305.2018.1564696>.

(2) Performs confidence region coverage simulations for a random sample drawn from a user-specified parametric population distribution, or for a user-specified dataset and point of interest, with coversim().

(3) Calculates confidence interval bounds for a binomial proportion with binomTest(), calculates the actual coverage with binomTestCoverage(), and plots the actual coverage with binomTestCoveragePlot(). Calculates confidence interval bounds for the binomial proportion using an ensemble of constituent confidence intervals with binomTestEnsemble(). Calculates confidence interval bounds for the binomial proportion using a complete enumeration of all possible transitions from one actual coverage acceptance curve to another, which minimizes the root mean square error for n <= 15 and follows the transitions for well-known confidence intervals for n > 15, using binomTestMSE().

(4) The km.support() function calculates the support values of the Kaplan-Meier product-limit estimator (its standard form is recalled after this description) for a given sample size n using an induction algorithm described in Qin et al. (2023) <doi:10.1080/00031305.2022.2070279>. The km.outcomes() function generates a matrix containing all possible outcomes (all possible sequences of failure times and right-censoring times) of the value of the Kaplan-Meier product-limit estimator for a particular sample size n. The km.pmf() function generates the probability mass function for the support values of the Kaplan-Meier product-limit estimator for a particular sample size n and probability of observing a failure h, at the time of interest expressed as the cumulative probability percentile associated with X = min(T, C), where T is the failure time and C is the censoring time under a random-censoring scheme. The km.surv() function generates multiple probability mass functions of the Kaplan-Meier product-limit estimator for the same arguments as those given for km.pmf().

(5) The km.coverage() function plots the actual coverage function associated with a confidence interval for the survivor function from a randomly right-censored data set for one or more of the following confidence intervals: Greenwood, log-minus-log, Peto, arcsine, and exponential Greenwood.
The actual coverage function is plotted for a small number of items on test, stated coverage, failure rate, and censoring rate. The km.coverage() function can print an optional table containing all possible failure/censoring orderings, along with their contribution to the actual coverage function.
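For reference, the Kaplan-Meier product-limit estimator underlying items (4) and (5) above has the standard form

\hat{S}(t) = \prod_{i:\, t_{(i)} \le t} \left(1 - \frac{d_i}{n_i}\right),

where the t_{(i)} are the distinct observed failure times, d_i is the number of failures at t_{(i)}, and n_i is the number of items still at risk just before t_{(i)}; the support and probability mass calculations described above concern the values this estimator can take for a fixed sample size n.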
This package provides methods for difference-in-differences with a continuous treatment and staggered treatment adoption. Includes estimation of treatment effects and causal responses as a function of the dose, event studies indexed by length of exposure to the treatment, and aggregation into overall average effects. Uniform inference procedures are included, along with both parametric and nonparametric models for treatment effects. The methods are based on Callaway, Goodman-Bacon, and Sant'Anna (2025) <doi:10.48550/arXiv.2107.02637>.
Fast categorization of items based on external code data identified by regular expressions. A typical use case considers patients with medically coded data, such as codes from the International Classification of Diseases ('ICD') or the Anatomic Therapeutic Chemical ('ATC') classification system. Functions of the package rely on a triad of objects: (1) case data with unit IDs and possible dates of interest; (2) external code data for corresponding units in (1), with optional dates of interest; and (3) a classification scheme (a 'classcodes' object) with regular expressions to identify and categorize relevant codes from (2). It is easy to introduce new classification schemes ('classcodes' objects) or to use default schemes included in the package. Use cases include patient categorization based on comorbidity indices such as Charlson, Elixhauser, RxRisk V, or the comorbidity-polypharmacy score (CPS), as well as adverse events after hip and knee replacement surgery.
Fits constrained groupwise additive index models and provides functions for inference and interpretation of these models. The method is described in Masselot, Chebana, Campagna, Lavigne, Ouarda, Gosselin (2022) "Constrained groupwise additive index models" <doi:10.1093/biostatistics/kxac023>.
This package provides functions to work with directed (asymmetric) and undirected (symmetric) spatial networks. It makes the creation of connectivity matrices easier, i.e. binary matrices of dimension n x n, where n is the number of nodes (sampling units), indicating the presence (1) or absence (0) of an edge (link) between pairs of nodes. Different network objects can be produced by chessboard: node list, neighbor list, edge list, connectivity matrix. It can also produce objects that will be used later in Moran's Eigenvector Maps (Dray et al. (2006) <doi:10.1016/j.ecolmodel.2006.02.015>) and Asymmetric Eigenvector Maps (Blanchet et al. (2008) <doi:10.1016/j.ecolmodel.2008.04.001>), methods available in the adespatial package (Dray et al. (2023) <https://CRAN.R-project.org/package=adespatial>). This work is part of the FRB-CESAB working group Bridge <https://www.fondationbiodiversite.fr/en/the-frb-in-action/programs-and-projects/le-cesab/bridge/>.
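As a small illustration of the connectivity matrices described above, an undirected network on three nodes in which node 2 is linked to nodes 1 and 3 (and nodes 1 and 3 are not linked) has the symmetric binary matrix

C = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},

whereas a directed network generally yields an asymmetric matrix.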
This package provides functions for visualizing, animating, solving and analyzing the Rubik's cube. Includes data structures for solvable and unsolvable cubes, random moves and random-state scrambles and cubes, 3D displays and animations using OpenGL, patterned cube generation, and lightweight solvers. See Rokicki, T. (2008) <arXiv:0803.3435> for the Kociemba solver.
Calculates daily climate water balance for irrigation purposes and also calculates the reference evapotranspiration (ET) using three methods: Penman and Monteith (Allen et al. 1998, ISBN:92-5-104219-5), Priestley and Taylor (1972) <doi:10/cr3qwn>, or Hargreaves and Samani (1985) <doi:10.13031/2013.26773>. Users may specify a management allowed depletion (MAD), which is used to suggest when to irrigate. The functionality allows for the use of crop and water stress coefficients as well.
Calculate confidence and consistency that measure the goodness-of-fit and transferability of predictive/potential distribution models (including species distribution models) as described by Somodi & Bede-Fazekas et al. (2024) <doi:10.1016/j.ecolmodel.2024.110667>.
An automated and streamlined workflow for predictive climate mapping using climate station data. The workflow operates within an environment at a user-provided path; otherwise tempdir() is used. It enables quick and relatively easy creation of resilient and reproducible climate models, predictions and climate maps, shortening the usually long and complicated work of predictive modelling. For more information, see the provided URL. Many methods in this package are new, but the main method is based on a workflow from Meyer (2019) <doi:10.1016/j.ecolmodel.2019.108815> and Meyer (2022) <doi:10.1038/s41467-022-29838-9>; however, it has been generalized and adjusted in the context of this package.
Contrast trees represent a new approach for assessing the accuracy of many types of machine learning estimates that are not amenable to standard (cross-)validation methods; see "Contrast trees and distribution boosting", Jerome H. Friedman (2020) <doi:10.1073/pnas.1921562117>. In situations where inaccuracies are detected, boosted contrast trees can often improve performance. Functions are provided to build such trees, in addition to a special case, distribution boosting, an assumption-free method for estimating the full probability distribution of an outcome variable given any set of joint input predictor variable values.
Dissects a package environment or covr coverage object in order to cross-reference tested code with the lines that are evaluated, and to link those evaluated lines to the documentation in which they are described. Connecting these three pieces of information provides a mechanism for linking tests to documented behaviors.
Interacting with binary files can be difficult because R's types are a subset of what is generally supported by C. This package provides a suite of functions for reading and writing binary data (with files, connections, and raw vectors) using C type descriptions. These functions convert data between C types and R types while checking for values outside the type limits, NA values, etc.
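As a language-neutral illustration of the general idea (reading and writing binary data according to C type descriptions), and not of this package's own R interface, here is a minimal sketch using Python's standard struct module:

import struct

# Describe the layout with C types: little-endian 32-bit int followed by a 64-bit double.
raw = struct.pack("<id", 42, 3.14)

# Reading the bytes back requires the same type description; packing a value
# outside the declared type's limits (e.g. 2**31 into a 32-bit int) raises an error.
value_int, value_float = struct.unpack("<id", raw)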
This package performs the colocalisation tests described in Giambartolomei et al. (2013) <doi:10.1371/journal.pgen.1004383>, Wallace (2020) <doi:10.1371/journal.pgen.1008720>, and Wallace (2021) <doi:10.1371/journal.pgen.1009440>.
Produce forest plots to visualize covariate effects using either the command line or an interactive Shiny application.