Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
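For example, a minimal Python sketch of a query (the base URL is a placeholder for this site's address, and the response body is assumed to be JSON):

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's address

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.headers)  # pagination information (e.g. total number of pages)
print(resp.json())   # the matching packages for the requested page (assuming JSON)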
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Item response theory (IRT) parameter estimation using marginal maximum likelihood and the expectation-maximization algorithm (Bock & Aitkin, 1981 <doi:10.1007/BF02293801>). Within the parameter estimation algorithm, several methods for latent distribution estimation are available. By reflecting features of the true latent distribution, these methods can enhance estimation accuracy and relax the normality assumption on the latent distribution.
Implementation of the methodology proposed in 'Data-driven design of targeted gene panels for estimating immunotherapy biomarkers', Bradley and Cannings (2021) <arXiv:2102.04296>. This package allows the user to fit generative models of mutation from an annotated mutation dataset and then produce tunable linear estimators of exome-wide biomarkers. It also contains functions to simulate Mutation Annotation Format (MAF) data, as well as to analyse the output and performance of models.
Interfaces for choosing important predictors in supervised regression, classification, and censored regression models. Permuted importance scores (Biecek and Burzykowski (2021) <doi:10.1201/9780429027192>) can be computed for tidymodels model fits.
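For illustration, a generic permutation-importance sketch (not this package's interface; model stands for any fitted model exposing a predict method, and metric for an error function such as mean squared error):

import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    # importance of each column of X, measured as the average increase in
    # prediction error when that column is randomly permuted (generic sketch)
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    scores = {}
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the predictor-outcome link
            losses.append(metric(y, model.predict(Xp)))
        scores[j] = np.mean(losses) - baseline
    return scores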
This package provides an R version of the InterVA4 software (<http://www.interva.net>) for coding cause of death from verbal autopsies. It also provides simple graphical representation of individual and population level statistics.
Fits the (randomized drift) inverse Gaussian distribution to survival data. The model is described in Aalen OO, Borgan O, Gjessing HK. Survival and Event History Analysis. A Process Point of View. Springer, 2008. It is based on describing time to event as the barrier hitting time of a Wiener process, where drift towards the barrier has been randomized with a Gaussian distribution. The model allows covariates to influence starting values of the Wiener process and/or average drift towards a barrier, with a user-defined choice of link functions.
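For intuition, a generic sketch of the building block (not the package's code): with a fixed drift mu > 0 toward a barrier c and unit variance, the first hitting time of a Wiener process started at 0 has the inverse Gaussian density below; the package additionally randomizes the drift with a Gaussian and lets covariates enter through link functions.

import numpy as np

def hitting_time_density(t, c=1.0, mu=0.5):
    # density of the first time a unit-variance Wiener process with drift mu,
    # started at 0, reaches the barrier c > 0 (inverse Gaussian distribution)
    t = np.asarray(t, float)
    return c / np.sqrt(2 * np.pi * t**3) * np.exp(-((c - mu * t) ** 2) / (2 * t))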
This software performs Multi-Reader, Multi-Case (MRMC) analyses of data from imaging studies in which clinicians (readers) evaluate patient images (cases). Many imaging studies are designed so that every reader reads every case in all modalities: a fully crossed study. In that case the data are cross-correlated, and we consider the readers and cases to be cross-correlated random effects. An MRMC analysis accounts for the variability and correlations from the readers and cases when estimating variances, confidence intervals, and p-values. The functions in this package can handle arbitrary study designs and studies with missing data, not just fully crossed designs. An overview of this software, including references presenting details on the methods, can be found here: <https://www.fda.gov/medical-devices/science-and-research-medical-devices/imrmc-software-do-multi-reader-multi-case-statistical-analysis-reader-studies>.
This package provides a unified data layer for single-cell, spatial and bulk T-cell and B-cell immune receptor repertoire data. Think AnnData or SeuratObject, but for AIRR data, a.k.a. Adaptive Immune Receptor Repertoire, VDJ-seq, RepSeq, or VDJ sequencing data.
Computes the log likelihood for an inverse gamma stochastic volatility model using a closed-form expression of the likelihood. The details of the computation of this closed-form expression are given in Gonzalez and Majoni (2023) <http://rcea.org/RePEc/pdf/wp23-11.pdf>. The closed-form expression is obtained for a stationary inverse gamma stochastic volatility model by marginalising out the volatility. This allows the user to obtain the maximum likelihood estimator for this non-linear, non-Gaussian state space model. In addition, the user can obtain estimates of the smoothed volatility using the exact smoothing distributions.
This package provides functions for converting time series of spatial abundance or density data in raster format to vector fields of population movement using the digital image correlation technique. More specifically, the functions in the package compute cross-covariance using discrete fast Fourier transforms for computational efficiency. Vectors in the vector fields point in the direction of highest two-dimensional cross-covariance. The package has a novel implementation of the digital image correlation algorithm that is designed to detect persistent directional movement when image time series extend beyond a sequence of two raster images.
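The core idea, as a generic Python sketch (not the package's implementation): the displacement between two rasters can be read off the peak of their cross-covariance surface, computed efficiently with FFTs.

import numpy as np

def displacement(im1, im2):
    # estimate the (row, col) shift that maximizes the circular cross-covariance
    # between two equally sized rasters, using the FFT-based correlation theorem
    a = im1 - im1.mean()
    b = im2 - im2.mean()
    cc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    cc = np.fft.fftshift(cc)
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    return np.array(peak) - np.array(cc.shape) // 2

# toy check: a field shifted by (3, 5) pixels is recovered as the vector [3 5]
rng = np.random.default_rng(1)
field = rng.random((64, 64))
print(displacement(np.roll(field, (3, 5), axis=(0, 1)), field))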
API wrapper that contains functions to retrieve data from the IsoMemo partnership databases. Web services for API: <https://isomemodb.com/api/v1/iso-data>.
Estimate test-retest reliability for complex sampling strategies and extract variances using IntraClass Effect Decomposition, as developed by Brandmaier et al. (2018), "Assessing reliability in neuroimaging research through intra-class effect decomposition (ICED)" <doi:10.7554/eLife.35718>. Also includes functions to simulate data based on the sampling strategy. Unofficial version release name: "Good work squirrels".
Compute several variations of the Implicit Association Test (IAT) scores, including the D scores (Greenwald, Nosek, & Banaji, 2003, <doi:10.1037/0022-3514.85.2.197>) and newer scores developed using robust statistics (Richetin, Costantini, Perugini, and Schonbrodt, 2015, <doi:10.1371/journal.pone.0129601>).
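As a rough illustration of the classic D score (a generic sketch, not this package's function; the published algorithm also handles practice blocks, error penalties, and latency trimming):

import numpy as np

def d_score(compatible_ms, incompatible_ms):
    # mean latency difference between incompatible and compatible blocks,
    # divided by the standard deviation of all latencies from both blocks
    compatible_ms = np.asarray(compatible_ms, float)
    incompatible_ms = np.asarray(incompatible_ms, float)
    pooled_sd = np.concatenate([compatible_ms, incompatible_ms]).std(ddof=1)
    return (incompatible_ms.mean() - compatible_ms.mean()) / pooled_sd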
An implementation of the correction methods proposed by Shu and Yi (2017) <doi:10.1177/0962280217743777> for the inverse probability weighted (IPW) estimation of the average treatment effect (ATE) with misclassified binary outcomes. A logistic regression model is assumed for the treatment model in all implemented correction methods, and for the outcome model in the implemented doubly robust correction method. The misclassification probability given the true value of the outcome is assumed to be the same for all individuals.
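For context, the uncorrected IPW estimator of the ATE looks like the generic sketch below (the package's contribution, the correction for outcome misclassification, is omitted here):

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treatment, outcome):
    # propensity scores from a logistic treatment model, then inverse weighting
    e = LogisticRegression().fit(X, treatment).predict_proba(X)[:, 1]
    t = np.asarray(treatment, float)
    y = np.asarray(outcome, float)
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))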
An implementation of randomization-based hypothesis testing for three different estimands in a cluster-randomized encouragement experiment. The three estimands are (1) a cluster-level constant proportional treatment effect (Fisher's sharp null hypothesis), (2) the pooled effect ratio, and (3) the average cluster effect ratio. To test the third estimand, the user needs to install the Gurobi (>= 9.0.1) optimizer via its R API. Please refer to <https://www.gurobi.com/documentation/9.0/refman/ins_the_r_package.html>.
Characterisation and calibration of single or multiple Ion Selective Electrodes (ISEs); activity estimation of experimental samples. Implements methods described in: Dillingham, P.W., Radu, T., Diamond, D., Radu, A. and McGraw, C.M. (2012) <doi:10.1002/elan.201100510>, Dillingham, P.W., Alsaedi, B.S.O. and McGraw, C.M. (2017) <doi:10.1109/ICSENS.2017.8233898>, Dillingham, P.W., Alsaedi, B.S.O., Radu, A., and McGraw, C.M. (2019) <doi:10.3390/s19204544>, and Dillingham, P.W., Alsaedi, B.S.O., Granados-Focil, S., Radu, A., and McGraw, C.M. (2020) <doi:10.1021/acssensors.9b02133>.
Get image statistics based on processing fluency theory. The functions provide scores for several basic aesthetic principles that facilitate fluent cognitive processing of images: contrast, complexity / simplicity, self-similarity, symmetry, and typicality. See Mayer & Landwehr (2018) <doi:10.1037/aca0000187> and Mayer & Landwehr (2018) <doi:10.31219/osf.io/gtbhw> for the theoretical background of the methods.
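As a minimal illustration of one of these principles (a generic sketch, not this package's scoring function), RMS contrast can be computed as the standard deviation of normalized pixel intensities:

import numpy as np

def rms_contrast(image):
    # RMS contrast of a grayscale image: standard deviation of intensities
    # after rescaling them to the [0, 1] range
    img = np.asarray(image, float)
    img = (img - img.min()) / (img.max() - img.min())
    return img.std()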
Using shiny to demo the igraph package makes learning graph theory easy and fun.
This package provides efficient implementation of the Isolate-Detect methodology for the consistent estimation of the number and location of multiple change-points in one-dimensional data sequences from the "deterministic + noise" model. For details on the Isolate-Detect methodology, please see Anastasiou and Fryzlewicz (2018) <https://docs.wixstatic.com/ugd/24cdcc_6a0866c574654163b8255e272bc0001b.pdf>. Currently implemented scenarios are: piecewise-constant signal with Gaussian noise, piecewise-constant signal with heavy-tailed noise, continuous piecewise-linear signal with Gaussian noise, continuous piecewise-linear signal with heavy-tailed noise.
Calculate false ring proportions from data frames of intra-annual density fluctuations.
This package provides a general-purpose workflow for image segmentation using TensorFlow models based on the U-Net architecture by Ronneberger et al. (2015) <arXiv:1505.04597> and the U-Net++ architecture by Zhou et al. (2018) <arXiv:1807.10165>. We provide pre-trained models for assessing canopy density and understory vegetation density from vegetation photos. In addition, the package provides a workflow for easily creating model input and model architectures for general-purpose image segmentation based on grayscale or color images, both for binary and multi-class image segmentation.
Electricity is not created equal: its carbon footprint (or carbon intensity) varies depending on its source. This package enables access to and querying of data provided by the Carbon Intensity API (<https://carbonintensity.org.uk/>). National Grid's Carbon Intensity API provides an indicative trend of the regional carbon intensity of the electricity system in Great Britain.
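For instance, the national feed can be queried directly (a sketch assuming the /intensity endpoint described in the public API documentation):

import requests

resp = requests.get("https://api.carbonintensity.org.uk/intensity")
resp.raise_for_status()
print(resp.json())   # current half-hourly carbon intensity for Great Britain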
Training datasets for iC10, which implements the classifier described in the paper 'Genome-driven integrated classification of breast cancer validated in over 7,500 samples' (Ali HR et al., Genome Biology 2014). It uses copy number and/or expression data from breast cancer samples, trains a pamr classifier (Tibshirani et al.) with the features available, and predicts the iC10 group. Genomic annotation for the training dataset has been obtained from Mark Dunning's illuminaHumanv3.db package.
Generate plots based on the Item Pool Visualization concept for latent constructs. Item Pool Visualizations are used to display the conceptual structure of a set of items (self-report or psychometric). See Dantlgraber, Stieger, & Reips (2019) <doi:10.1177/2059799119884283>.
Takes in vivo toxicokinetic concentration-time data and fits parameters of 1-compartment and 2-compartment models for each chemical. These methods are described in detail in "Informatics for Toxicokinetics" (2025).
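For orientation, a generic sketch of the simpler of the two models (not the package's estimation procedure): after an intravenous bolus dose, a 1-compartment model predicts C(t) = (dose / volume) * exp(-k_elim * t), whose parameters can be fit to concentration-time data by nonlinear least squares; the values below are toy data.

import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, volume, k_elim, dose=1.0):
    # concentration after an IV bolus dose in a 1-compartment model
    return dose / volume * np.exp(-k_elim * t)

t_obs = np.array([0.25, 0.5, 1, 2, 4, 8, 24.0])          # hours (toy data)
c_obs = np.array([3.6, 3.4, 3.0, 2.4, 1.5, 0.6, 0.02])   # mg/L (toy data)
(volume, k_elim), _ = curve_fit(one_compartment, t_obs, c_obs, p0=[0.3, 0.2])
print(volume, k_elim, np.log(2) / k_elim)  # volume, elimination rate, half-life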