Enter a query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
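For example, a minimal client in Python using the requests library (the base URL below is a placeholder for wherever this page is served, and the code assumes the endpoint returns JSON):

import requests

BASE_URL = "https://example.org"  # replace with this site's base URL

# Search for packages matching "hello": first page, 20 results per page.
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.json())           # matching packages
print(dict(resp.headers))    # pagination details (number of pages, etc.) are in the headers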
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Bland-Altman Plots using either base graphics or ggplot2, augmented with confidence intervals, with detailed return values and a sunflowerplot option for data with ties.
This package implements a backward procedure for single and multiple change point detection proposed by Shin et al. <arXiv:1812.10107>. The backward approach is particularly useful for detecting short and sparse signals, which are common in copy number variation (CNV) detection.
This package provides the facility to calculate the Brainerd-Robinson similarity coefficient for the rows of an input table, and to calculate the significance of each coefficient based on a permutation approach; a heatmap is produced to visually represent the similarity matrix. Optionally, hierarchical agglomerative clustering can be performed and the silhouette method is used to identify an optimal number of clusters; the results of the clustering can be optionally used to sort the heatmap.
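For reference, the Brainerd-Robinson similarity between two rows i and j is conventionally computed from row percentages p_{ik} (each row summing to 100) as

S_{ij} = 200 - \sum_{k} \lvert p_{ik} - p_{jk} \rvert

so that S_{ij} ranges from 0 (no categories shared) to 200 (identical profiles); the package's own parameterization is best confirmed in its documentation.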
Estimates survival and mortality with covariates from census or capture-recapture/recovery data in a Bayesian framework when many individuals are of unknown age. It includes tools for data checking, model diagnostics and outputs such as life-tables and plots, as described in Colchero, Jones, and Rebke (2012) <doi:10.1111/j.2041-210X.2012.00186.x> and Colchero et al. (2021) <doi:10.1038/s41467-021-23894-3>.
This package provides an efficient and robust implementation for estimating the marginal Hazard Ratio (HR) and Restricted Mean Survival Time (RMST) with covariate adjustment, following the approaches of Daniel et al. (2021) <doi:10.1002/bimj.201900297> and Karrison et al. (2018) <doi:10.1177/1740774518759281>.
This package provides methods for detecting and visualizing cladogenic shifts in multivariate trait data on phylogenies. Implements penalized-likelihood multivariate generalized least squares models, enabling analyses of high-dimensional trait datasets and large trees via searchOptimalConfiguration(). Includes a greedy step-wise shift-search algorithm following approaches developed in Smith et al. (2023) <doi:10.1111/nph.19099> and Berv et al. (2024) <doi:10.1126/sciadv.adp0114>. Methods build on multivariate GLS approaches described in Clavel et al. (2019) <doi:10.1093/sysbio/syy045> and implemented in the mvgls() function from the mvMORPH package. Documentation and vignettes are available at <https://jakeberv.com/bifrost/>, including the introductory vignette at <https://jakeberv.com/bifrost/articles/jaw-shape-vignette.html>.
Intended to facilitate acoustic analysis of (animal) sound propagation experiments, which typically aim to quantify changes in signal structure when transmitted in a given habitat by broadcasting and re-recording animal sounds at increasing distances. The package offers a workflow with functions to prepare the data set for analysis as well as to calculate and visualize several degradation metrics, including blur ratio, signal-to-noise ratio, excess attenuation and envelope correlation among others (Dabelsteen et al 1993 <doi:10.1121/1.406682>).
Fast, dependency free, and vectorized base32 encoding and decoding. b32 supports the Crockford, Z, RFC 4648 lower, hex, and lower hex alphabets.
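For illustration of the Crockford alphabet itself (this is a generic Python sketch, not the b32 package's API; crockford_encode is a hypothetical helper):

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # omits I, L, O, U to avoid misreading

def crockford_encode(n: int) -> str:
    """Encode a non-negative integer using Crockford's base32 alphabet."""
    if n < 0:
        raise ValueError("expected a non-negative integer")
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 32)
        digits.append(CROCKFORD[r])
    return "".join(reversed(digits))

print(crockford_encode(1234))  # prints "16J"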
The main purpose of this package is to propose a transparent methodological framework to compare bioregionalization methods based on hierarchical and non-hierarchical clustering algorithms (Kreft & Jetz (2010) <doi:10.1111/j.1365-2699.2010.02375.x>) and network algorithms (Lenormand et al. (2019) <doi:10.1002/ece3.4718> and Leroy et al. (2019) <doi:10.1111/jbi.13674>).
Comprehensive Business Process Analysis toolkit. Creates an S3 class for event log objects, and related handler functions. Imports related packages for filtering event data, computation of descriptive statistics, handling of Petri net objects and visualization of process maps. See also the packages edeaR, processmapR, eventdataR and processmonitR.
This package provides methods for the group testing identification problem: 1) Operating characteristics (e.g., expected number of tests) for commonly used hierarchical and array-based algorithms, and 2) Optimal testing configurations for these same algorithms. Methods for the group testing estimation problem: 1) Estimation and inference procedures for an overall prevalence, and 2) Regression modeling for commonly used hierarchical and array-based algorithms.
Make some distributions from the C++ library Boost available in R. In addition, the normal-inverse Gaussian distribution and the generalized inverse Gaussian distribution are provided. The distributions are represented by R6 classes. The method to sample from the generalized inverse Gaussian distribution is the one given in "Random variate generation for the generalized inverse Gaussian distribution" Luc Devroye (2012) <doi:10.1007/s11222-012-9367-z>.
Statistical decision in proteomics data using a hierarchical Bayesian model. There are two regression models for describing the mean-variance trend, a gamma regression or a latent gamma mixture regression. The regression model is then used as an Empirical Bayes estimator for the prior on the variance in a peptide. Further, it assumes that each measurement has an uncertainty (increased variance) associated with it that is also inferred. Finally, it tries to estimate the posterior distribution (by Hamiltonian Monte Carlo) for the differences in means for each peptide in the data. Once the posterior is inferred, it integrates the tails to estimate the probability of error from which a statistical decision can be made. See Berg and Popescu for details (<doi:10.1016/j.mcpro.2023.100658>).
Subgroup analyses are routinely performed in clinical trial analyses. From a methodological perspective, two key issues of subgroup analyses are multiplicity (even if only predefined subgroups are investigated) and the low sample sizes of subgroups, which lead to highly variable estimates, see e.g. Yusuf et al (1991) <doi:10.1001/jama.1991.03470010097038>. This package implements subgroup estimates based on Bayesian shrinkage priors, see Carvalho et al (2009) <https://proceedings.mlr.press/v5/carvalho09a.html>. In addition, estimates based on penalized likelihood inference are available, based on Simon et al (2011) <doi:10.18637/jss.v039.i05>. The corresponding shrinkage-based forest plots address the aforementioned issues and can complement standard forest plots in practical clinical trial analyses.
This package provides a tuneable and interpretable method for relaxing the instrumental variables (IV) assumptions to infer treatment effects in the presence of unobserved confounding. For a treatment-associated covariate to be a valid IV, it must be (a) unconfounded with the outcome and (b) have a causal effect on the outcome that is exclusively mediated by the exposure. There is no general test of the validity of these IV assumptions for any particular pre-treatment covariate. However, if different pre-treatment covariates give differing causal effect estimates when treated as IVs, then we know at least some of the covariates violate these assumptions. budgetIVr exploits this fact by taking as input a minimum budget of pre-treatment covariates assumed to be valid IVs and identifying the set of causal effects that are consistent with the user's data and budget assumption. The following generalizations of this principle can be used in this package: (1) a vector of multiple budgets can be assigned alongside corresponding thresholds that model degrees of IV invalidity; (2) budgets and thresholds can be chosen using specialist knowledge or varied in a principled sensitivity analysis; (3) treatment effects can be nonlinear and/or depend on multiple exposures (at a computational cost). The methods in this package require only summary statistics. Confidence sets are constructed under the "no measurement error" (NOME) assumption from the Mendelian randomization literature. For further methodological details, please refer to Penn et al. (2024) <doi:10.48550/arXiv.2411.06913>.
Tutorials for statistics, aimed at biological scientists. Subjects range from basic descriptive statistics through to complex linear modelling. The tutorials include text, videos, interactive coding exercises and multiple choice quizzes. The package also includes 19 datasets which are used in the tutorials.
This package provides a Gibbs sampler algorithm developed to estimate change points in constant-wise data sequences while performing clustering simultaneously. The algorithm is described in da Cruz, A. C. and de Souza, C. P. E., "A Bayesian Approach for Clustering Constant-wise Change-point Data" <doi:10.48550/arXiv.2305.17631>.
Finds the best block diagonal matrix approximation of a symmetric matrix. This can be exploited for divisive hierarchical clustering using singular vectors, named HC-SVD. The method is described in Bauer (202Xa) <doi:10.48550/arXiv.2308.06820>.
Bayesian models to estimate causal effects of biological treatments on time-to-event endpoints in clinical trials with principal strata defined by the occurrence of antidrug antibodies. The methodology is based on Frangakis and Rubin (2002) <doi:10.1111/j.0006-341x.2002.00021.x> and Imbens and Rubin (1997) <doi:10.1214/aos/1034276631>, and here adapted to a specific time-to-event setting.
Computation of bootstrap confidence intervals in an almost automatic fashion as described in Efron and Narasimhan (2020, <doi:10.1080/10618600.2020.1714633>).
Evaluate, fit, and analyze Hill dose response models (Goutelle et al., 2008 <doi:10.1111/j.1472-8206.2008.00633.x>), also sometimes referred to as four-parameter log-logistic models. Includes tools to invert Hill models, select models based on the Akaike information criterion (Akaike, 1974 <doi:10.1109/TAC.1974.1100705>) or Bayesian information criterion (Schwarz, 1978 <https://www.jstor.org/stable/2958889>), and construct bootstrapped confidence intervals both on the Hill model parameters and values derived from the Hill model parameters.
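For orientation, a common four-parameter Hill (log-logistic) parameterization is

E(c) = E_0 + (E_{\max} - E_0)\,\frac{c^{n}}{EC_{50}^{n} + c^{n}}

with baseline response E_0, maximal response E_{\max}, half-maximal concentration EC_{50}, and Hill coefficient n; the package's exact parameterization and sign conventions may differ.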
These data contain morphological image measurements for dried beans from Koklu and Ozkan (2020) <doi:10.1016/j.compag.2020.105507>.
Bayesian optimal design with futility and efficacy stopping boundaries (BOP2-FE) is a novel statistical framework for single-arm Phase II clinical trials. It enables early termination for efficacy when interim data are promising, while explicitly controlling Type I and Type II error rates. The design supports a variety of endpoint structures, including single binary endpoints, nested endpoints, co-primary endpoints, and joint monitoring of efficacy and toxicity. The package provides tools for enumerating stopping boundaries prior to trial initiation and for conducting simulation studies to evaluate the design's operating characteristics. Users can flexibly specify design parameters to suit their specific applications. For methodological details, refer to Xu et al. (2025) <doi:10.1080/10543406.2025.2558142>.
An implementation of Bayesian survival models with graph-structured selection priors for sparse identification of omics features predictive of survival (Madjar et al., 2021 <doi:10.1186/s12859-021-04483-z>) and its extension to use a fixed graph via a Markov Random Field (MRF) prior for capturing known structure of omics features, e.g. disease-specific pathways from the Kyoto Encyclopedia of Genes and Genomes database (Hermansen et al., 2025 <doi:10.48550/arXiv.2503.13078>).