Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
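For example, here is a minimal Python sketch of the same request (the base URL is a placeholder, and the exact shape of the JSON response depends on the service):

    import requests

    BASE_URL = "https://example.org"  # placeholder -- replace with the host serving this site

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    print(resp.json())     # matching packages (exact shape depends on the service)
    print(resp.headers)    # pagination details (e.g. number of pages) are in the response headers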
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Builds the coincident profile proposed by Martinez, Nieto, and Poncela (2016) <doi:10.1016/j.spl.2015.11.008>. This methodology studies the relationship between a pair of time series based on the set of turning points of each series. The coincident profile establishes whether two time series are coincident or whether one of them leads the other.
This package provides a helpful R6 class and methods for interacting with the Posit Connect Server API, along with utility functions for common tasks. API documentation varies by Posit Connect installation and version, but the latest documentation is also hosted publicly at <https://docs.posit.co/connect/api/>.
Compute expected shortfall (ES) and Value at Risk (VaR) from a quantile function, distribution function, random number generator, probability density function, or data. ES is also known as Conditional Value at Risk (CVaR). Virtually any continuous distribution can be specified. The functions are vectorized over their arguments. The computations are done directly from the definitions; see, e.g., Acerbi and Tasche (2002) <doi:10.1111/1468-0300.00091>. Some support for GARCH models is also provided.
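For orientation, these are the textbook definitions for a continuous profit-and-loss variable X with quantile function F_X^{-1} (sign conventions differ between references):

    \mathrm{VaR}_\alpha(X) = -F_X^{-1}(\alpha), \qquad
    \mathrm{ES}_\alpha(X) = \frac{1}{\alpha} \int_0^\alpha \mathrm{VaR}_p(X)\, dp
                          = -\frac{1}{\alpha} \int_0^\alpha F_X^{-1}(p)\, dp .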
Perform bulk and cell type-specific expression quantitative trait loci mapping with our novel method (Little et al. (2023) <doi:10.1038/s41467-023-38795-w>).
Accelerate Bayesian analytics workflows in R through interactive modelling, visualization, and inference. Define probabilistic graphical models using directed acyclic graphs (DAGs) as a unifying language for business stakeholders, statisticians, and programmers. This package relies on interfacing with the numpyro Python package.
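As a rough illustration of the kind of model this maps onto, here is a sketch written directly against numpyro (not the package's R interface; all variable names are made up):

    import jax.numpy as jnp
    import jax.random as random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    # Hypothetical two-node DAG: sales depends linearly on ad_spend.
    def model(ad_spend, sales=None):
        intercept = numpyro.sample("intercept", dist.Normal(0.0, 10.0))
        slope = numpyro.sample("slope", dist.Normal(0.0, 10.0))
        sigma = numpyro.sample("sigma", dist.Exponential(1.0))
        mu = intercept + slope * ad_spend
        numpyro.sample("sales", dist.Normal(mu, sigma), obs=sales)

    ad_spend = jnp.array([1.0, 2.0, 3.0, 4.0])
    sales = jnp.array([2.1, 3.9, 6.2, 8.1])

    mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
    mcmc.run(random.PRNGKey(0), ad_spend=ad_spend, sales=sales)
    mcmc.print_summary()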
This package provides a flexible tool for calculating carbon-equivalent emissions. Mostly using data from the UK Government's Greenhouse Gas Conversion Factors report <https://www.gov.uk/government/publications/greenhouse-gas-reporting-conversion-factors-2024>, it facilitates transparent emissions calculations for various sectors, including travel, accommodation, and clinical activities. The package is designed for easy integration into R workflows, with additional support for shiny applications and community-driven extensions.
Create Pairwise Comparison Matrices for use in the Analytic Hierarchy Process. The Pairwise Comparison Matrix created will be a logical matrix which, unlike a random comparison matrix, is similar to what a rational decision maker would create on the basis of a preference vector for the alternatives considered.
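As an illustration of the underlying idea (not the package's own interface), a fully consistent comparison matrix can be built from a preference vector by taking ratios of weights:

    import numpy as np

    # Entries a_ij = w_i / w_j for a preference (weight) vector w over three alternatives.
    w = np.array([0.5, 0.3, 0.2])
    A = np.outer(w, 1.0 / w)

    # For a perfectly consistent matrix the principal eigenvalue equals the
    # number of alternatives, so the consistency ratio is zero.
    print(A)
    print(max(np.linalg.eigvals(A).real))   # ~3.0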
Calculates power for assessment of intermediate biomarker responses as correlates of risk in the active treatment group in clinical efficacy trials, as described in Gilbert, Janes, and Huang, Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials (2016, Statistics in Medicine). The methods differ from past approaches by accounting for the level of clinical treatment efficacy overall and in biomarker response subgroups, which enables the correlates of risk results to be interpreted in terms of potential correlates of efficacy/protection. The methods also account for inter-individual variability of the observed biomarker response that is not biologically relevant (e.g., due to technical measurement error of the laboratory assay used to measure the biomarker response), which is important because power to detect a specified correlate of risk effect size is heavily affected by the biomarker's measurement error.

The methods can be used for a general binary clinical endpoint model with a univariate dichotomous, trichotomous, or continuous biomarker response measured in active treatment recipients at a fixed timepoint after randomization, with either case-cohort Bernoulli sampling or case-control without-replacement sampling of the biomarker (a baseline biomarker is handled as a trivial special case). In a specified two-group trial design, the computeN() function can initially be used for calculating additional requisite design parameters pertaining to the target population of active treatment recipients observed to be at risk at the biomarker sampling timepoint. Subsequently, the power calculation employs an inverse probability weighted logistic regression model fitted by the tps() function in the osDesign package.

Power results as well as the relationship between the correlate of risk effect size and treatment efficacy can be visualized using various plotting functions. To link power calculations for detecting a correlate of risk and a correlate of treatment efficacy, a baseline immunogenicity predictor (BIP) can be simulated according to a specified classification rule (for dichotomous or trichotomous BIPs) or correlation with the biomarker response (for continuous BIPs), then outputted along with biomarker response data under assignment to treatment, and clinical endpoint data for both treatment and placebo groups.
With the development of new cross-cultural methods, this package combines multiple functions that automate and simplify commonly employed methods, providing a unified analysis approach.
This package provides methods for interpreting CoDa (Compositional Data) regression models along the lines of "Pairwise share ratio interpretations of compositional regression models" (Dargel and Thomas-Agnan 2024) <doi:10.1016/j.csda.2024.107945>. The new methods include variation scenarios, elasticities, elasticity differences and share ratio elasticities. These tools are independent of log-ratio transformations and allow an interpretation in the original space of shares. CoDaImpact is designed to be used with the compositions package and its ecosystem.
Implementations of recent complex-valued wavelet shrinkage procedures for smoothing irregularly sampled signals; see Hamilton et al. (2018) <doi:10.1080/00401706.2017.1281846>.
Proposes Seq2seq Time-Feature Analysis, using an encoder-decoder to project the series into a latent space and a forward network to predict the next sequence.
Estimation and inference for linear models where some or all of the fixed-effects coefficients are subject to order restrictions. This package uses the robust residual bootstrap methodology for inference, and can handle some structure in the residual variance matrix.
This package infers the causal effect of an intervention on a multivariate response through the use of Multivariate Bayesian Structural Time Series models (MBSTS) as described in Menchetti & Bojinov (2020) <arXiv:2006.12269>. The package also includes functions for model building and forecasting.
Determines the value of Stirling numbers of the first and second kind. Reference: Bóna, Miklós (2017, ISBN 9789813148840).
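For reference, both kinds satisfy well-known recurrences; a generic sketch, independent of the package's interface:

    from functools import lru_cache

    # Unsigned first kind: c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)
    @lru_cache(maxsize=None)
    def stirling1(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

    # Second kind: S(n, k) = S(n-1, k-1) + k * S(n-1, k)
    @lru_cache(maxsize=None)
    def stirling2(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

    print(stirling1(5, 2))   # 50
    print(stirling2(5, 2))   # 15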
This package provides methods and functions to implement a Recommendation System based on Collaborative Filtering Methodology. See Aggarwal (2016) <doi:10.1007/978-3-319-29659-3> for an overview.
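As a generic sketch of user-based collaborative filtering (toy data, not this package's API), a missing rating can be predicted as a similarity-weighted average of other users' ratings:

    import numpy as np

    # Toy ratings matrix: rows = users, columns = items, 0 = unrated.
    R = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [1, 0, 0, 4],
    ], dtype=float)

    def cosine(u, v):
        mask = (u > 0) & (v > 0)            # compare only co-rated items
        if not mask.any():
            return 0.0
        return float(u[mask] @ v[mask] / (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

    def predict(R, user, item):
        sims = np.array([
            cosine(R[user], R[other]) if other != user and R[other, item] > 0 else 0.0
            for other in range(R.shape[0])
        ])
        return float(sims @ R[:, item] / sims.sum()) if sims.sum() > 0 else 0.0

    print(predict(R, user=1, item=1))       # similarity-weighted estimate, roughly 2.4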
Data cleaning functions for classes logical, factor, numeric, character, currency and Date to make data cleaning fast and easy. Relying on very few dependencies, it provides smart guessing, but with user options to override anything if needed.
This package provides additional multiple testing procedures beyond those in p.adjust(), such as the weighted Hochberg method (Tamhane, A. C., & Liu, L., 2008) <doi:10.1093/biomet/asn018>, the ICC-adjusted Bonferroni method (Shi, Q., Pavey, E. S., & Carter, R. E., 2012) <doi:10.1002/pst.1514>, and a new correlation-corrected weighted Hochberg method for correlated endpoints.
Conditional distance correlation <doi:10.1080/01621459.2014.993081> is a novel conditional dependence measurement of two multivariate random variables given a confounding variable. This package provides the conditional distance correlation, performs the conditional distance correlation sure independence screening procedure for ultrahigh dimensional data <https://www3.stat.sinica.edu.tw/statistica/J28N1/J28N114/J28N114.html>, and conducts the conditional distance covariance test for the conditional independence of two multivariate variables.
Sample size estimation in cluster (group) randomized trials. Contains traditional power-based methods, empirical smoothing (Rotondi and Donner, 2009), and updated meta-analysis techniques (Rotondi and Donner, 2012).
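A common ingredient of the traditional power-based approach is the design effect 1 + (m - 1) * ICC, which inflates the sample size needed under individual randomization; a generic sketch (not the package's interface):

    import math

    def clusters_per_arm(n_individual, m, icc):
        # Clusters per arm for cluster size m and intra-cluster correlation icc.
        deff = 1 + (m - 1) * icc
        return math.ceil(n_individual * deff / m)

    # e.g. 128 subjects per arm under individual randomization, clusters of 20, ICC = 0.05
    print(clusters_per_arm(128, 20, 0.05))   # 13 clusters per arm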
Directory reads and summaries are provided for one or more of the subdirectories of the <https://cran.r-project.org/incoming/> directory, and a compact summary object is returned. The package name is a contraction of 'CRAN Incoming Watcher'.
Fit and apply ComBat, linear mixed-effects models (LMM), or prescaling to harmonize magnetic resonance imaging (MRI) data from different sites. Briefly, these methods remove differences between sites that arise from using different scanning devices, and LMM additionally tests linear hypotheses. As detailed in the manual, the original ComBat function was first modified for the harmonization of MRI data (Fortin et al. (2017) <doi:10.1016/j.neuroimage.2017.11.024>) and then modified again to create separate functions for fitting and applying the harmonization and to allow missing values and constant rows for its use within the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium (Radua et al. (2020) <doi:10.1016/j.neuroimage.2020.116956>); this package includes the latter version. LMM calls lme repeatedly, taking specific brain-imaging details into account. Finally, prescaling is a good option for fMRI, where different devices can have varying units of measurement.
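For orientation, the standard ComBat model underlying this kind of harmonization assumes, for feature v of subject j scanned at site i,

    y_{ijv} = \alpha_v + X_{ij}^{\top}\beta_v + \gamma_{iv} + \delta_{iv}\,\varepsilon_{ijv},

where \gamma_{iv} and \delta_{iv} are additive and multiplicative site effects estimated by empirical Bayes; harmonization subtracts \hat{\gamma}_{iv} and divides by \hat{\delta}_{iv} before adding back the site-free part of the model.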
Variance estimation on indicators of income concentration and poverty using complex sample survey designs. Wrapper around the survey package.
This package provides a reliable and efficient tool that automates the process of cleaning univariate time series data. It integrates with already developed and deployed tools for missing value imputation and outlier detection, and provides a way of visualizing large time-series data at different resolutions.