Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
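For example, a minimal sketch of calling this endpoint from Python with the requests library; the base URL here is a placeholder and the exact pagination header names are not documented above, so inspect the headers directly:

    # Query the package search API (the base URL below is an assumed placeholder).
    import requests

    resp = requests.get(
        "https://example.org/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
        timeout=10,
    )
    resp.raise_for_status()
    print(dict(resp.headers))  # pagination information is carried in the response headers
    print(resp.json())         # matching packages for the requested page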
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Phenotypic analysis of data from high-throughput phenotyping (HTP) platforms, including different types of outlier detection, spatial analysis, and parameter estimation. The package is being developed within the EPPN2020 project (<https://cordis.europa.eu/project/id/731013>). Some functions have been created to be used in conjunction with the R package asreml for the ASReml software, which can be obtained upon purchase from VSN international (<https://vsni.co.uk/software/asreml-r/>).
Splines are efficiently represented through their Taylor expansion at the knots. The representation accounts for the support sets and is thus suitable for sparse functional data. Two cases of boundary conditions are considered: zero-boundary or periodic-boundary for all derivatives except the last. The periodic splines are represented graphically using polar coordinates. The B-splines and orthogonal bases of splines that reside on small total support are implemented. The orthogonal bases are referred to as splinets and are utilized for functional data analysis. A random spline generator is implemented, as well as all fundamental algebraic and calculus operations on splines. The optimal functional fit by splinets, in the least squares sense, to data consisting of sampled values of functions as well as to splines built over another set of knots is obtained and used for functional data analysis. The S4 version of object-oriented R is used. <doi:10.48550/arXiv.2102.00733>, <doi:10.1016/j.cam.2022.114444>, <doi:10.48550/arXiv.2302.07552>.
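As an illustration of the Taylor representation at the knots (the notation below is mine, not taken from the package documentation): on each inter-knot interval a spline S of order k is fully determined by its derivative values at the left knot,

    S(t) = \sum_{j=0}^{k} \frac{S^{(j)}(\xi_i)}{j!} (t - \xi_i)^j,   t \in [\xi_i, \xi_{i+1}],

so storing the matrix of derivatives S^{(j)}(\xi_i) over the knots \xi_1 < ... < \xi_n encodes the spline, and rows that vanish outside a spline's support keep the representation compact for sparse functional data.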
Sequential triangular tests for the arithmetic mean in one and two samples, proportions in one and two samples, and Pearson's correlation coefficient.
This package provides a small set of helper functions to convert sjPlot HTML tables to R data.frame objects or knitr::kable tables.
This package provides data frames that hold certain columns and attributes persistently for data processing in dplyr.
Statistical performance measures used in the econometric literature to evaluate conditional covariance/correlation matrix estimates (MSE, MAE, Euclidean distance, Frobenius distance, Stein distance, asymmetric loss function, eigenvalue loss function and the loss function defined in Eq. (4.6) of Engle et al. (2016) <doi:10.2139/ssrn.2814555>). The package additionally computes Eq. (3.1) and (4.2) of Li et al. (2016) <doi:10.1080/07350015.2015.1092975> to compare the factor loading matrix. The statistical performance measures implemented have been previously used in, for instance, Laurent et al. (2012) <doi:10.1002/jae.1248>, Amendola et al. (2015) <doi:10.1002/for.2322> and Becker et al. (2015) <doi:10.1016/j.ijforecast.2013.11.007>.
This package provides functions for fitting, forecasting, and early detection of outbreaks in sparse surveillance count time series. Supports negative binomial (NB), self-exciting NB, generalised autoregressive moving average (GARMA) NB, zero-inflated NB (ZINB), self-exciting ZINB, GARMA ZINB, and hurdle formulations. Climatic and environmental covariates can be included in the regression component and/or the zero-modified components. Includes outbreak-detection algorithms for NB, ZINB, and hurdle models, with utilities for prediction and diagnostics.
Bayesian regression tree models with shrinkage priors on step heights. Supports continuous, binary, and right-censored (survival) outcomes. Used for high-dimensional prediction and causal inference.
This package provides a function that behaves much like base::source() but implements a project-based caching mechanism on disk. It allows you to quasi-source() R scripts that gather data but may fail or take too much time to run even when nothing new is expected. It comes with tools to check the cache and to execute the script on demand or when the cache is invalid.
An analytic framework for the calculation of norm- and criterion-referenced academic growth estimates using large scale, longitudinal education assessment data as developed in Betebenner (2009) <doi:10.1111/j.1745-3992.2009.00161.x>.
This package provides functions for evaluating the stability of low-dimensional embeddings and cluster assignments in single-cell RNA sequencing (scRNA-seq) datasets. Starting from a principal component analysis (PCA) object, users can generate multiple replicates of t-Distributed Stochastic Neighbor Embedding (t-SNE) or Uniform Manifold Approximation and Projection (UMAP) embeddings. Embedding stability is quantified by computing pairwise Kendall's Tau correlations across replicates and summarizing the distribution of correlation coefficients. In addition to dimensionality reduction, scStability assesses clustering consistency using either Louvain or Leiden algorithms and calculating the Normalized Mutual Information (NMI) between all pairs of cluster assignments. For background on UMAP and t-SNE algorithms, see McInnes et al. (2020, <doi:10.21105/joss.00861>) and van der Maaten & Hinton (2008, <https://github.com/lvdmaaten/bhtsne>), respectively.
Perform variable selection for the spatial Poisson regression model under the adaptive elastic net penalty. Spatial count data with covariates is the input. We use a spatial Poisson regression model to link the spatial counts and covariates. For maximization of the likelihood under adaptive elastic net penalty, we implemented the penalized quasi-likelihood (PQL) and the approximate penalized loglikelihood (APL) methods. The proposed methods can automatically select important covariates, while adjusting for possible spatial correlations among the responses. More details are available in Xie et al. (2018, <arXiv:1809.06418>). The package also contains the Lyme disease dataset, which consists of the disease case data from 2006 to 2011, and demographic data and land cover data in Virginia. The Lyme disease case data were collected by the Virginia Department of Health. The demographic data (e.g., population density, median income, and average age) are from the 2010 census. Land cover data were obtained from the Multi-Resolution Land Cover Consortium for 2006.
This package provides a covariance estimator for multivariate normal data that is sparse and positive definite. Implements the majorize-minimize algorithm described in Bien, J., and Tibshirani, R. (2011), "Sparse Estimation of a Covariance Matrix," Biometrika. 98(4). 807--820.
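For orientation, the estimator in the cited paper minimizes a lasso-penalized negative Gaussian log-likelihood; a sketch of the objective (S is the sample covariance and \lambda the penalty weight; the penalty used by the package may differ in detail, e.g. by not penalizing the diagonal):

    \hat{\Sigma} = \arg\min_{\Sigma \succ 0} \; \log\det\Sigma + \operatorname{tr}(S\,\Sigma^{-1}) + \lambda \lVert \Sigma \rVert_1

The majorize-minimize algorithm handles the non-convexity of this objective by iteratively minimizing a convex upper bound.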
Access to the datasets and many of the functions used in "Statistics Using R: An Integrative Approach". These datasets include a subset of the National Education Longitudinal Study, the Framingham Heart Study, as well as several simulated datasets used in the examples throughout the textbook. The functions included in the package reproduce some of the functionality of Stata that is not directly available in R. The package also contains a tutorial on basic data frame management, including how to handle missing data.
This package provides a flexible framework for the definition and application of time/depth-based rules for sets of parameters for single grains that can be used to create artificial sediment profiles. Such profiles can be used for virtual sample preparation and synthetic measurements, for instance luminescence measurements.
Regularized versions of partial least squares approaches, providing sparse, group, and sparse group versions of partial least squares regression models (Liquet, B., Lafaye de Micheaux, P., Hejblum, B., Thiebaut, R. (2016) <doi:10.1093/bioinformatics/btv535>). A version of PLS discriminant analysis is also provided.
Create in-app purchasing and subscriptions through Servicebot payment using the Stripe framework.
This package provides tools for calculating disclosure risk measures for microdata, including record-level and file-level measures. The record-level disclosure risk is estimated primarily using exhaustive tabulation. The file-level disclosure risk is estimated by fitting loglinear models on the observed sample counts in cells formed by key variables and their interactions. Funded by the National Center for Education Statistics. See Skinner and Shlomo (2008) <doi:10.1198/016214507000001328> for a description of the file-level risk measures and the loglinear model approach.
Unequal granularity of cell type annotation makes it difficult to compare scRNA-seq datasets at scale. Leveraging the ontology system for defining cell type hierarchy, scOntoMatch aims to align cell type annotations to make them comparable across studies. The alignment involves two core steps: first, trim the cell type tree within each dataset so that no cell type has descendants; then, map cell type labels across studies by direct matching and by mapping descendants to ancestors. Various functions for plotting cell type trees and manipulating ontology terms are also provided. The Single Cell Expression Atlas hosted at EBI provides a compendium of datasets with curated ontology labels that are great inputs to this package.
Blind source separation for multivariate spatial data based on simultaneous/joint diagonalization of (robust) local covariance matrices. This package is an implementation of the methods described in Bachoc, Genton, Nordhausen, Ruiz-Gazen and Virta (2020) <doi:10.1093/biomet/asz079>.
Generates, plays, and solves Sudoku puzzles. The GUI playSudoku() needs package "tkrplot" if you are not on Windows.
Plots a QQ-Norm Plot with several Gaussian simulations.
Draw syntenic relationships between genome assemblies. There are three functions that take a tab-delimited file containing alignment data for syntenic blocks between genomes to produce either a linear alignment plot, an evolution highway style plot, or a painted ideogram representing syntenic relationships. There is also a function to convert alignment data in the DESCHRAMBLER/inferCAR format to the required data structure.
Parametric survival regression models under the maximum likelihood approach via Stan. Implemented regression models include accelerated failure time models, proportional hazards models, proportional odds models, accelerated hazard models, Yang and Prentice models, and extended hazard models. Available baseline survival distributions include the exponential, Weibull, log-normal, log-logistic, gamma, generalized gamma, Rayleigh, Gompertz and fatigue (Birnbaum-Saunders) distributions. References: Lawless (2002) <ISBN:9780471372158>; Bennett (1982) <doi:10.1002/sim.4780020223>; Chen and Wang (2000) <doi:10.1080/01621459.2000.10474236>; Demarqui and Mayrink (2021) <doi:10.1214/20-BJPS471>.