Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
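For example, a minimal sketch of calling the endpoint from Python (the base URL is a placeholder, and the exact response shape and header names are assumptions):

    import requests  # third-party HTTP client

    BASE = "https://example.org"  # placeholder: substitute this site's base URL

    # Search for "hello", requesting the first page with 20 items per page.
    resp = requests.get(f"{BASE}/api/packages",
                        params={"search": "hello", "page": 1, "limit": 20})
    resp.raise_for_status()

    print(dict(resp.headers))  # pagination info (e.g. number of pages) is here
    print(resp.json())         # the matching packages, assuming a JSON body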
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
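For reference, a typical Guix channel entry looks like the sketch below; the exact schema used by this site's channels.scm is an assumption, and the name and URL are hypothetical:

    (channel
      (name 'my-channel)                        ; hypothetical channel name
      (url "https://example.org/my-channel"))   ; hypothetical git repository URL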
This package implements the cluster-polarization coefficient for measuring distributional polarization in single or multiple dimensions, as well as associated functions. It includes support for hierarchical clustering, k-means, partitioning around medoids, density-based spatial clustering with noise, and manually imposed cluster membership. See Mehlhaff (2024) <doi:10.1017/S0003055423001041>.
This package provides a framework for modeling relationships between functional traits and both quantitative and qualitative environmental variables at the community level. It includes tools for trait binning, likelihood-based environmental estimation, model evaluation, fossil projection into modern ecometric space, and result visualization. For more details see Vermillion et al. (2018) <doi:10.1007/978-3-319-94265-0_17>, Polly et al. (2011) <doi:10.1098/rspb.2010.2233> and Polly and Head (2015) <doi:10.1017/S1089332600002953>.
The Crunch.io service <https://crunch.io/> provides a cloud-based data store and analytic engine, as well as an intuitive web interface. Using this package, analysts can interact with and manipulate Crunch datasets from within R. Importantly, this allows technical researchers to collaborate naturally with team members, managers, and clients who prefer a point-and-click interface.
This package creates compact letter displays (CLDs) for pairwise comparisons from statistical post-hoc tests. Groups sharing the same letter are not significantly different from each other. Supports multiple input formats, including results from stats pairwise tests, DescTools, PMCMRplus, rstatix, symmetric matrices of p-values, and data frames. Provides a consistent interface for visualizing statistical groupings across different testing frameworks.
Evaluates stimuli using Large Language Model (LLM) APIs, with URL support.
The data and metadata from Statistics Netherlands (<https://www.cbs.nl>) can be browsed and downloaded. The client uses the open data API of Statistics Netherlands.
Estimates latent class vector-autoregressive models via the EM algorithm on time-series data for model-based clustering and classification. Includes model selection criteria for selecting the number of lags and clusters.
This package provides a publication-ready toolkit for modern survival and competing risks analysis with a minimal, formula-based interface. Both nonparametric estimation and direct polytomous regression of cumulative incidence functions (CIFs) are supported. The main functions cifcurve(), cifplot(), and cifpanel() estimate survival and CIF curves and produce high-quality graphics with risk tables, censoring and competing-risk marks, and multi-panel or inset layouts built on ggplot2 and ggsurvfit. The modeling function polyreg() performs direct polytomous regression for coherent joint modeling of all cause-specific CIFs to estimate risk ratios, odds ratios, or subdistribution hazard ratios at user-specified time points. All core functions adopt a formula-and-data syntax and return tidy and extensible outputs that integrate smoothly with modelsummary, broom, and the broader tidyverse ecosystem. Key numerical routines are implemented in C++ via Rcpp.
Obtain coordinate system metadata from various data formats. There are functions to extract a CRS (coordinate reference system, <https://en.wikipedia.org/wiki/Spatial_reference_system>) in EPSG (European Petroleum Survey Group, <http://www.epsg.org/>), PROJ4 <https://proj.org/>, or WKT2 (Well-Known Text 2, <http://docs.opengeospatial.org/is/12-063r5/12-063r5.html>) forms. This is purely for getting simple metadata from in-memory formats; please use other tools for out-of-memory data sources.
The Concordance Test is a non-parametric method for testing whether two or more samples originate from the same distribution. It extends the Kendall Tau correlation coefficient when there are only two groups. For details, see Alcaraz J., Anton-Sanchez L., Monge J.F. (2022) The Concordance Test, an Alternative to Kruskal-Wallis Based on the Kendall-tau Distance: An R Package. The R Journal 14, 26–53 <doi:10.32614/RJ-2022-039>.
Estimation of average treatment effects (ATE) of point interventions on time-to-event outcomes with K competing risks (K can be 1). The method uses propensity scores and inverse probability weighting for emulation of baseline randomization, which is described in Charpignon et al. (2022) <doi:10.1038/s41467-022-35157-w>.
Data stored in text files can be processed chunkwise using dplyr commands. These commands are recorded and executed per data chunk, so large files can be processed with limited memory using the LaF package.
This package provides functions for fitting GEV and POT (via point process fitting) models for extremes in climate data, providing return values, return probabilities, and return periods for stationary and nonstationary models. It also provides differences in return values and differences in log return probabilities for contrasts of covariate values, as well as functions for estimating risk ratios for event attribution analyses, including uncertainty. Under the hood, many of the functions use functions from extRemes, including for fitting the statistical models. Details are given in Paciorek, Stone, and Wehner (2018) <doi:10.1016/j.wace.2018.01.002>.
Simplifies the execution of command line interface (CLI) tools within isolated and reproducible environments. It enables users to effortlessly manage Conda environments, execute command line tools, handle dependencies, and ensure reproducibility in their data analysis workflows.
This package implements higher order likelihood-based inference for logistic and loglinear models.
An implementation of efficiency-first conformal prediction (EFCP) and validity-first conformal prediction (VFCP) that demonstrates both validity (a coverage guarantee) and efficiency (a width guarantee). To learn how to use it, check the vignettes for a quick tutorial. The package is based on the work by Yang, Y. and Kuchibhotla, A. (2021) <arXiv:2104.13871>.
This package provides a cascade select widget for use in Shiny applications. This is useful for selecting hierarchical choices (e.g. continent, country, city). It is taken from the JavaScript library PrimeReact.
This package implements a Ward-like hierarchical clustering algorithm including soft spatial/geographical constraints.
This package provides a wrapper around the COVID Tracking Project API <https://covidtracking.com/api/> providing data on cases of COVID-19 in the US.
This package implements nonparametric methods for estimating copula entropy and transfer entropy, along with statistics for the multivariate normality test and the two-sample test. The methods for estimating transfer entropy and the statistics for the multivariate normality and two-sample tests are based on the method for estimating copula entropy. A method for change point detection with the copula entropy-based two-sample test is also implemented. Please refer to Ma and Sun (2011) <doi:10.1016/S1007-0214(11)70008-6>, Ma (2019) <doi:10.48550/arXiv.1910.04375>, Ma (2022) <doi:10.48550/arXiv.2206.05956>, Ma (2023) <doi:10.48550/arXiv.2307.07247>, and Ma (2024) <doi:10.48550/arXiv.2403.07892> for more information.
This package provides a set of state-of-the-art probabilistic modeling approaches to derive estimates of individual customer lifetime values (CLV). Commonly, probabilistic approaches focus on modelling three processes: individuals' attrition, transaction, and spending processes. Latent customer attrition models, also known as "buy-'til-you-die" models, model both the attrition and the transaction process. They are used to make inferences and predictions about the transactional patterns of individual customers, such as their future purchase behavior. Moreover, these models have also been used to predict individuals' long-term engagement in activities such as playing an online game or posting to a social media platform. The spending process is usually modelled by a separate probabilistic model. Combining these results yields lifetime value estimates for individual customers. This package includes fast and accurate implementations of various probabilistic models for non-contractual settings (e.g., grocery purchases or hotel visits). All implementations support time-invariant covariates, which can be used to control for, e.g., socio-demographics. Where such an extension has been proposed in the literature, we further provide the possibility to include time-varying covariates to control for, e.g., seasonal patterns. Currently, the package includes the following latent attrition models for the attrition and transaction processes: [1] the Pareto/NBD model (Pareto/Negative-Binomial-Distribution), [2] the Extended Pareto/NBD model (Pareto/Negative-Binomial-Distribution with time-varying covariates), [3] the BG/NBD model (Beta-Gamma/Negative-Binomial-Distribution), and [4] the GGom/NBD model (Gamma-Gompertz/Negative-Binomial-Distribution). Further, we provide an implementation of the Gamma/Gamma model for the spending process.
Confirms whether a number is Luhn compliant. It can check whether a credit card number, IMEI number, or any other Luhn-based number is correct. For more info see: <https://en.wikipedia.org/wiki/Luhn_algorithm>.
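For illustration, a minimal Python sketch of the standard Luhn algorithm itself (not this package's implementation):

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(ch) for ch in number if ch.isdigit()]
        total = 0
        # Walk the digits right to left, doubling every second one.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9  # same as summing the two digits of d
            total += d
        return total % 10 == 0

    luhn_valid("79927398713")  # True: the classic Luhn test number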
Direct sparse covariance matrix estimation via the covariance graphical lasso of Bien and Tibshirani (2011) <doi:10.1093/biomet/asr054>, using the fast coordinate descent algorithm of Wang (2014) <doi:10.1007/s11222-013-9385-5>.
This package provides a local haplotyping visualization toolbox to capture major patterns of co-inheritance between clusters of linked variants, whilst connecting findings to phenotypic and demographic traits across individuals. crosshap enables users to explore and understand genomic variation across a trait-associated region. For an example of successful local haplotype analysis, see Marsh et al. (2022) <doi:10.1007/s00122-022-04045-8>.