Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
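For example, the API can be called from R; a minimal sketch using the httr package, where the host is a placeholder for this site's address and the exact pagination header names should be read off the real response:

    library(httr)

    # Placeholder host -- substitute this site's address
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    pkgs <- content(resp, as = "parsed")   # decoded JSON body
    headers(resp)                          # pagination metadata is reported here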
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Enables reading and writing binary and ASCII data over RS232/RS422/RS485 or any other virtual serial interface on the computer.
The skew logistic distribution is a quantile-defined generalisation of the logistic distribution (van Staden and King 2015). Provides random numbers, quantiles, probabilities, densities, and density quantiles for the distribution, as well as quantile-quantile plots and method-of-L-moments estimation (including asymptotic standard errors).
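A bare-bones illustration of the quantile-based definition in plain R (a sketch, not this package's API; the van Staden and King parameterisation with location alpha, scale beta, and skewing parameter delta is assumed):

    # Quantile function, assuming the van Staden-King (2015) form
    # Q(p) = alpha + beta * ((1 - delta) * log(p) - delta * log(1 - p));
    # delta = 0.5 recovers the ordinary logistic distribution.
    qskewlogis <- function(p, alpha = 0, beta = 1, delta = 0.5) {
      alpha + beta * ((1 - delta) * log(p) - delta * log(1 - p))
    }

    # Random numbers by inversion of the quantile function
    rskewlogis <- function(n, alpha = 0, beta = 1, delta = 0.5) {
      qskewlogis(runif(n), alpha, beta, delta)
    }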
This package provides a search interface to look up terms on Google, Bing, DuckDuckGo, Startpage, Ecosia, rseek, Twitter, StackOverflow, RStudio Community, GitHub, and BitBucket. Upon searching, a browser window will open with the aforementioned search results.
This package contains functions that help to determine event boundaries in event segmentation experiments by bootstrapping a critical segmentation magnitude under the null hypothesis that all key presses were randomly distributed across the experiment. Segmentation magnitude is defined as the sum of Gaussians centered at the times of the segmentation key presses performed by the participants. Within a participant, the maximum of the overlaid Gaussians is used to prevent an excessive influence of a single participant on the overall outcome (e.g., a participant pressing the key multiple times in succession). Additional functions are included, e.g. for plotting the results.
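A plain-R sketch of the statistic described above (generic code, not this package's API):

    # Sum over participants of the per-participant maximum of Gaussians
    # centered at that participant's key-press times; 'presses' is a list
    # with one vector of key-press times per participant, 'sigma' is an
    # assumed kernel width.
    segmentation_magnitude <- function(t, presses, sigma = 1) {
      per_participant <- vapply(presses, function(times) {
        max(dnorm(t, mean = times, sd = sigma))
      }, numeric(1))
      sum(per_participant)
    }

    times <- seq(0, 60, by = 0.1)
    presses <- list(c(10, 10.5, 30), c(12, 29, 45))
    magnitude <- vapply(times, segmentation_magnitude, numeric(1),
                        presses = presses)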
This package provides modern tools for causal inference using regression standardization. Four general classes of models are implemented: generalized linear models, conditional generalized estimating equation models, Cox proportional hazards models, and shared frailty gamma-Weibull models. Methodological details are described in Sjölander, A. (2016) <doi:10.1007/s10654-016-0157-3>. Also includes functionality for doubly robust estimation for generalized linear models in some special cases, and the ability to implement custom models.
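The core idea of regression standardization can be illustrated with a GLM in base R (a generic sketch, not this package's interface):

    # Fit an outcome model, then average predictions over the sample with
    # the exposure set to each level (toy simulated data).
    set.seed(1)
    d <- data.frame(z = rnorm(200), x = rbinom(200, 1, 0.5))
    d$y <- rbinom(200, 1, plogis(-1 + d$x + 0.5 * d$z))
    fit <- glm(y ~ x + z, family = binomial, data = d)
    p1 <- mean(predict(fit, transform(d, x = 1), type = "response"))
    p0 <- mean(predict(fit, transform(d, x = 0), type = "response"))
    p1 - p0   # standardized risk difference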
Download data from StatsWales into R. Removes the need for the user to write their own loops when parsing data from the StatsWales API. Provides functions for datasets (<http://open.statswales.gov.wales/en-gb/dataset>) and metadata (<http://open.statswales.gov.wales/en-gb/discover/metadata>) endpoints.
Computes the Akaike information criterion for generalized linear models (logistic regression, Poisson regression, and Gaussian graphical models) estimated by the lasso.
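The underlying idea can be sketched with the glmnet package, using the common approximation that the degrees of freedom equal the number of nonzero coefficients (a sketch, not necessarily this package's exact estimator):

    library(glmnet)   # assumed available
    set.seed(1)
    x <- matrix(rnorm(100 * 5), 100, 5)
    y <- rbinom(100, 1, plogis(x[, 1]))
    fit <- glmnet(x, y, family = "binomial")
    aic <- deviance(fit) + 2 * fit$df   # one value per lambda on the path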
An enterprise-targeted, scalable, and customizable Shiny module providing an easy way to incorporate free-form note taking or discussion boards into applications. The package includes a Shiny module that can be included in any Shiny application to create a panel containing searchable, editable text broken down by section headers. Can be used with a local SQLite database or a compatible remote database of choice.
Copula-based Cox proportional hazards models for survival data subject to dependent censoring. This approach does not assume that the parameter defining the copula is known. The dependency parameter is estimated jointly with the other finite-dimensional model parameters by maximizing a pseudo-likelihood function. The cumulative hazard function is estimated via estimating equations derived from martingale ideas. Available copula functions include the Frank, Gumbel, and normal copulas. Only Weibull and lognormal models are allowed for the censoring model, even though any parametric model that satisfies certain identifiability conditions could be used. Implemented methods are described in the article "Copula based Cox proportional hazards models for dependent censoring" by Deresa and Van Keilegom (2024) <doi:10.1080/01621459.2022.2161387>.
This package provides functions for the analysis of occupational and environmental data with non-detects. Maximum likelihood (ML) methods for censored log-normal data and non-parametric methods based on the product limit estimate (PLE) for left censored data are used to calculate all of the statistics recommended by the American Industrial Hygiene Association (AIHA) for the complete data case. Functions for the analysis of complete samples using exact methods are also provided for the lognormal model.
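The censored-lognormal ML idea can be illustrated with the survival package (a generic sketch, not this package's interface):

    library(survival)
    # Values at the detection limit are treated as left-censored there.
    x        <- c(0.5, 0.5, 1.2, 2.3, 4.1, 7.8)   # 0.5 = detection limit
    detected <- c(FALSE, FALSE, TRUE, TRUE, TRUE, TRUE)
    fit <- survreg(Surv(x, detected, type = "left") ~ 1, dist = "lognormal")
    exp(coef(fit))   # ML estimate of the geometric mean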
Enables the creation of Chain Event Graphs over spatial areas, with an optional Shiny user interface. Allows users to fully customise both the structure and underlying model of the Chain Event Graph, offering a high degree of flexibility for tailored analyses. For more details on Chain Event Graphs, see Freeman, G., & Smith, J. Q. (2011) <doi:10.1016/j.jmva.2011.03.008>, Collazo R. A., Görgen C. and Smith J. Q. (2018, ISBN:9781498729604) and Barclay, L. M., Hutton, J. L., & Smith, J. Q. (2014) <doi:10.1214/13-BA843>.
This package provides a statistical learning method to simultaneously predict a range of target phenotypes using codified and natural language processing (NLP)-derived Electronic Health Record (EHR) data. See Ahuja et al (2020) JAMIA <doi:10.1093/jamia/ocaa079> for details.
This package provides a set of functions implementing the SpiceFP approach, which is iterative. It involves transforming functional predictors into several candidate explanatory matrices (based on contingency tables), with which edge matrices encoding contiguity constraints are associated. Generalized fused lasso regressions are performed to identify the best candidate matrix, the best class intervals, and the related coefficients at each iteration. The approach stops when the maximal number of iterations is reached or when the retained coefficients are all zero. Supplementary functions allow retrieving the coefficients of any candidate matrix, or the mean of the coefficients of several candidates. The methods in this package are described in Girault Gnanguenon Guesse, Patrice Loisel, Bénedicte Fontez, Thierry Simonneau, Nadine Hilgert (2021) "An exploratory penalized regression to identify combined effects of functional variables - Application to agri-environmental issues" <https://hal.archives-ouvertes.fr/hal-03298977>.
This package provides tools for the integration and exploration of data tables measured on the same set of observational units. The package includes methods to assess similarities among tables, extract common patterns across variable blocks, and create visual summaries that highlight shared structures in multiblock data.
Design, build, and deploy R package demo presentations with an interactive wizard. Set up a unique title, logo, and theme. Add personalized tabs showcasing applicability. Deploy as part of a package or as an independent app.
This package provides a comprehensive logging framework for R applications, with hierarchical logging levels, database integration, and contextual logging capabilities. The package supports SQLite storage for persistent logs, colour-coded console output for better readability, parallel processing, and structured error reporting with JSON formatting.
Offers a comprehensive approach for analysing stratified 2x2 contingency tables. It facilitates the calculation of odds ratios, 95% confidence intervals, and conducts chi-squared, Cochran-Mantel-Haenszel, Mantel-Haenszel, and Breslow-Day-Tarone tests. The package is particularly useful in fields like epidemiology and social sciences where stratified analysis is essential. The package also provides interpretative insights into the results, aiding in the understanding of statistical outcomes.
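For comparison, base R already ships the Cochran-Mantel-Haenszel test, which can serve as a cross-check (illustrative data):

    # A stratified 2x2 table as a 2x2xK array
    tab <- array(c(10, 5, 8, 12,
                   7, 9, 6, 11),
                 dim = c(2, 2, 2),
                 dimnames = list(Exposure = c("yes", "no"),
                                 Outcome  = c("yes", "no"),
                                 Stratum  = c("A", "B")))
    mantelhaen.test(tab)   # common odds ratio with 95% CI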
This package provides tools to design experiments, compute Sobol sensitivity indices, and summarise stochastic responses, inspired by the strategy described by Zhu and Sudret (2021) <doi:10.1016/j.ress.2021.107815>. Includes helpers to optimise toy models implemented in C++, visualise indices with uncertainty quantification, and derive reliability-oriented sensitivity measures based on failure probabilities. The approach is further detailed in Logosha, Maumy and Bertrand (2022) <doi:10.1063/5.0246026> and (2023) <doi:10.1063/5.0246024>, and in Bertrand, Logosha and Maumy (2024) <https://hal.science/hal-05371803>, <https://hal.science/hal-05371795> and <https://hal.science/hal-05371798>.
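The first-order index computation can be sketched with a hand-rolled pick-and-freeze Monte Carlo estimator on the Ishigami test function (generic code, not this package's interface):

    set.seed(1)
    n <- 1e4
    f <- function(X) sin(X[, 1]) + 7 * sin(X[, 2])^2 +
                     0.1 * X[, 3]^4 * sin(X[, 1])
    A <- matrix(runif(3 * n, -pi, pi), n, 3)
    B <- matrix(runif(3 * n, -pi, pi), n, 3)
    sobol_first <- function(i) {
      BA <- B; BA[, i] <- A[, i]   # freeze input i at A's values
      (mean(f(A) * f(BA)) - mean(f(A)) * mean(f(B))) / var(f(A))
    }
    sapply(1:3, sobol_first)       # first-order indices S1, S2, S3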
This package provides a scalable Gibbs sampling implementation for high dimensional Bayesian regression with the continuous spike-and-slab prior. Niloy Biswas, Lester Mackey and Xiao-Li Meng, "Scalable Spike-and-Slab" (2022) <arXiv:2204.01668>.
This package provides a suite of functions that allow a full, fast, and efficient Bayesian treatment of the Bradley--Terry model. Prior assumptions about the model parameters can be encoded through a multivariate normal prior distribution. Inference is performed using a latent variable representation of the model.
This package implements several methods to estimate effects of generalized time-varying treatment strategies on the mean of an outcome at one or more selected follow-up times of interest. Specifically, the package implements the time-smoothed inverse probability weighted estimators described in McGrath et al. (2025) <doi:10.48550/arXiv.2509.13971>. Outcomes may be repeatedly, non-monotonically, informatively, and sparsely measured in the data source. The package also supports settings where outcomes are truncated by death, i.e. some individuals die during follow-up which renders the outcome of interest undefined at the follow-up time of interest.
Stock-and-flow models are a computational method from the field of system dynamics. They represent how systems change over time and are mathematically equivalent to ordinary differential equations. sdbuildR (system dynamics builder) provides an intuitive interface for constructing stock-and-flow models without requiring extensive domain knowledge. Models can quickly be simulated and revised, supporting iterative development. sdbuildR simulates models in R and Julia, where Julia offers unit support and large-scale ensemble simulations. Additionally, sdbuildR can import models created in Insight Maker (<https://insightmaker.com/>).
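The equivalence with ordinary differential equations can be illustrated with the deSolve package (a toy stock-and-flow model, independent of sdbuildR's interface):

    library(deSolve)
    # One stock S with a constant inflow and a proportional outflow,
    # i.e. dS/dt = inflow - rate * S
    model <- function(t, state, parms) {
      with(as.list(c(state, parms)), {
        list(inflow - rate * S)
      })
    }
    out <- ode(y = c(S = 100), times = seq(0, 50, by = 1),
               func = model, parms = c(inflow = 10, rate = 0.1))
    head(out)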
This is an implementation of the algorithm described in Section 3 of Hosszejni and Frühwirth-Schnatter (2026) <doi:10.1016/j.jmva.2025.105536>. The algorithm is used to verify that the counting rule CR(r,1) holds for the sparsity pattern of the transpose of a factor loading matrix. As detailed in Section 2 of the same paper, if CR(r,1) holds, then the idiosyncratic variances are generically identified. If CR(r,1) does not hold, then we do not know whether the idiosyncratic variances are identified or not.
This package provides functions for generating Standardized Climate Indices (SCI). SCI is a transformation of (smoothed) climate (or environmental) time series that removes seasonality and forces the data to take values of the standard normal distribution. SCI was originally developed for precipitation. In this case it is known as the Standardized Precipitation Index (SPI).
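The transformation can be sketched in a few lines of plain R (illustration only, not this package's interface):

    # Fit a gamma distribution to one calendar month's precipitation
    # totals, then map the fitted probabilities through the standard
    # normal quantile function (toy data).
    precip <- c(30, 55, 42, 70, 25, 61, 48, 38, 52, 44)
    fit <- MASS::fitdistr(precip, "gamma")   # ML gamma fit
    p <- pgamma(precip, shape = fit$estimate["shape"],
                rate = fit$estimate["rate"])
    spi <- qnorm(p)   # standard-normal index values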