Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
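As a minimal sketch, the API can be queried from R with the httr package; the base URL below is a placeholder, so substitute the host serving this page:

    library(httr)

    # Request the first page of results matching "hello"; pagination details
    # (e.g. the number of pages) are returned in the response headers.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    headers(resp)                 # pagination information
    content(resp, as = "parsed")  # the matching packages themselves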
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a comprehensive and easy-to-use R implementation of confirmatory phylogenetic path analysis as described by von Hardenberg and Gonzalez-Voyer (2012) <doi:10.1111/j.1558-5646.2012.01790.x>.
Allows specification and fitting of some parameter estimation examples inspired by time-resolved spectroscopy via a Shiny GUI.
Comprehensive toolkit for generating various numerical features of protein sequences described in Xiao et al. (2015) <DOI:10.1093/bioinformatics/btv042>. For full functionality, the software ncbi-blast+ is needed, see <https://blast.ncbi.nlm.nih.gov/doc/blast-help/downloadblastdata.html> for more information.
The introduction of the broom package has made converting model objects into data frames as simple as a single function. While the broom package focuses on providing tidy data frames that can be used in advanced analysis, it deliberately stops short of providing functionality for reporting models in publication-ready tables. pixiedust provides this functionality with a programming interface intended to be similar to ggplot2's system of layers, with fine-tuned control over each cell of the table. Options for output include printing to the console and to the common markdown formats (markdown, HTML, and LaTeX). With a little pixiedust (and happy thoughts) tables can really fly.
Propagation of uncertainty using higher-order Taylor expansion and Monte Carlo simulation. Calculations of propagated uncertainties are based on matrix calculus including covariance structure according to Arras 1998 <doi:10.3929/ethz-a-010113668> (first order), Wang & Iyer 2005 <doi:10.1088/0026-1394/42/5/011> (second order) and BIPM Supplement 1 (Monte Carlo) <doi:10.59161/JCGM101-2008>.
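As a rough, generic illustration of the Monte Carlo approach mentioned above (plain base R, not this package's interface), uncertainties in x and y can be propagated through the product x * y by simulation:

    # Hypothetical inputs: x = 5.0 +/- 0.1 and y = 2.0 +/- 0.05 (illustrative values only)
    set.seed(1)
    n <- 100000
    x <- rnorm(n, mean = 5, sd = 0.1)
    y <- rnorm(n, mean = 2, sd = 0.05)
    z <- x * y
    c(mean = mean(z), sd = sd(z))  # propagated value and its uncertainty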
Easily visualize and animate tabledap and griddap objects obtained via the rerddap package in a simple one-line command, using either base graphics or ggplot2 graphics. plotdap handles extracting and reshaping the data, map projections and continental outlines. Optionally the data can be animated through time using the gganimate package.
This package implements (1) panel cointegration rank tests, (2) estimators for panel vector autoregressive (VAR) models, and (3) identification methods for panel structural vector autoregressive (SVAR) models as described in the accompanying vignette. The implemented functions allow to account for cross-sectional dependence and for structural breaks in the deterministic terms of the VAR processes. Among the large set of functions, particularly noteworthy are those that implement (1) the correlation-augmented inverse normal test on the cointegration rank by Arsova and Oersal (2021, <doi:10.1016/j.ecosta.2020.05.002>), (2) the two-step estimator for pooled cointegrating vectors by Breitung (2005, <doi:10.1081/ETC-200067895>), and (3) the pooled identification based on independent component analysis by Herwartz and Wang (2024, <doi:10.1002/jae.3044>).
This package provides tools for statistical testing of correlation coefficients through a robust permutation method and a large sample approximation method. Tailored to different types of correlation coefficients including the Pearson correlation coefficient, weighted Pearson correlation coefficient, Spearman correlation coefficient, and Lin's concordance correlation coefficient. The robust permutation test controls type I error under general scenarios when the sample size is small and two variables are dependent but uncorrelated. The large sample approximation test generally controls type I error when the sample size is large (>200).
This package contains functions to calculate power and sample size for various study designs used in bioequivalence studies. Use known.designs() to see the designs supported. Power and sample size can be obtained based on different methods, amongst them prominently the TOST procedure (two one-sided t-tests). See README and NEWS for further information.
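Assuming this is the PowerTOST package, a minimal sketch of a TOST-based sample-size calculation might look as follows (the CV and theta0 values are arbitrary examples, not recommendations):

    library(PowerTOST)

    # List the supported study designs, as suggested above.
    known.designs()

    # Illustrative sample-size estimate for a standard 2x2 crossover design.
    sampleN.TOST(CV = 0.30, theta0 = 0.95, targetpower = 0.80, design = "2x2")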
Creates the density contour plot for the bivariate inverse Gaussian distribution for given non-negative random variables.
Computes the generalized synthetic control estimator described in Powell (2017) <doi:10.7249/WR1142>. Provides both point estimates and hypothesis testing.
Large-scale gene expression studies allow gene network construction to uncover associations among genes. This package is developed for estimating and testing partial correlation graphs with prior information incorporated.
Search CRAN metadata about packages by keyword, popularity, recent activity, package name and more. Uses the R-hub search server (see <https://r-pkg.org>) and the CRAN metadata database, which contains information about CRAN packages. Note that this is _not_ a CRAN project.
Data files and documentation for PEDiatric vALidation oF vAriableS in TBI (PEDALFAST). The data was used in "Functional Status Scale in Children With Traumatic Brain Injury: A Prospective Cohort Study" by Bennett, Dixon, et al (2016) <doi:10.1097/PCC.0000000000000934>.
This package implements the PRIDIT (Principal Component Analysis applied to RIDITs) scoring system described in Brockett et al. (2002) <doi:10.1111/1539-6975.00027>. Provides functions for ridit scoring originally developed by Bross (1958) <doi:10.2307/2527727>, calculating PRIDIT weights, and computing final PRIDIT scores for multivariate analysis of ordinal data.
This package implements a partial linear semiparametric mixed-effects model (PLSMM) featuring a random intercept and applies a lasso penalty to both the fixed effects and the coefficients associated with the nonlinear function. The model also accommodates interactions between the nonlinear function and a grouping variable, allowing for the capture of group-specific nonlinearities. Nonlinear functions are modeled using a set of basis functions. Estimation is conducted using a penalized Expectation-Maximization algorithm, and the package offers flexibility in choosing between various information criteria for model selection. Post-selection inference is carried out using a debiasing method, while inference on the nonlinear functions employs a bootstrap approach.
Reads/writes binary genotype files compatible with PLINK <https://www.cog-genomics.org/plink/1.9/input#bed> into/from an R matrix; traverses genotype data one window of variants at a time, like apply() or a for loop; reads/writes genotype relatedness/kinship matrices created by PLINK <https://www.cog-genomics.org/plink/1.9/distance#make_rel> or GCTA <https://cnsgenomics.com/software/gcta/#MakingaGRM> into/from an R square matrix. It is best used for bringing data produced by PLINK and GCTA into an R workflow.
Finds equivalence classes corresponding to a symmetric relation or undirected graph. Finds a total order consistent with a partial order or directed graph (a so-called topological sort).
Calculate the Bayesian posterior/predictive probability and determine the sample size and stopping boundaries for single-arm Phase II design.
This package contains functions developed to combine the results of querying a plasmid database using short-read sequence typing with the results of a BLAST analysis against the query results.
This package contains functions to obtain the operational characteristics of bioequivalence studies in Two-Stage Designs (TSD) via simulations.
Allows users to find a piecewise linear regression approximation to a given continuous univariate function within a specified error tolerance. Methods based on Warwicker and Rebennack (2025) "Efficient continuous piecewise linear regression for linearising univariate non-linear functions" <doi:10.1080/24725854.2023.2299809>.
This package provides a Shiny application for calculating phytosanitary inspection plans based on risks. It generates a diagram of pallets in a lot, highlights the units to be sampled, and documents them based on the selected sampling method (simple random or systematic sampling).
Three-dimensional systematic conservation planning, conducting nested prioritization analyses across multiple depth levels and ensuring efficient resource allocation throughout the water column. It provides a structured workflow designed to address biodiversity conservation and management challenges in the 3 dimensions, while facilitating users' choices and parameterization (Doxa et al. 2025 <doi:10.1016/j.ecolmodel.2024.110919>).