Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
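For example, the endpoint can be queried from R. This is a minimal sketch: the base URL is a placeholder for wherever this search page is hosted, the response body is assumed to be JSON, and the exact pagination header names may differ.

library(httr)
library(jsonlite)

base_url <- "https://example.org"  # placeholder; replace with the host serving this page
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

headers(resp)                                    # pagination information is in the headers
results <- fromJSON(content(resp, as = "text"))  # parse the (assumed) JSON body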
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides statistical tools for analyzing net and relative survival, with a key feature of relaxing the assumption of independent censoring and incorporating the effect of dependent competing risks. It employs a copula-based methodology, specifically the Archimedean copula, to simulate data, conduct survival analysis, and offer comparisons with other methods. This approach is detailed in the work of Adatorwovor et al. (2022) <doi:10.1515/ijb-2021-0016>.
Simple feature stores and tools for creating personalised feature stores. diseasystore powers feature stores that can automatically link and aggregate features to a given stratification level. These feature stores are automatically time-versioned (powered by the SCDB package) and allow you to easily and dynamically compute features as part of your continuous integration.
Multivariate Gaussian mixture model with a determinantal point process prior to promote the discovery of parsimonious components from observed data. See Xu, Mueller, Telesca (2016) <doi:10.1111/biom.12482>.
Several functions are provided for dose-response (or concentration-response) characterization from omics data. DRomics is especially dedicated to omics data obtained using a typical dose-response design, favoring a great number of tested doses (or concentrations) rather than a great number of replicates (replicates are not needed). DRomics provides functions 1) to check, normalize and/or transform data, 2) to select monotonic or biphasic significantly responding items (e.g. probes, metabolites), 3) to choose the best-fit model among a predefined family of monotonic and biphasic models to describe each selected item, 4) to derive a benchmark dose or concentration and a typology of response from each fitted curve. In the available version, data are expected to be single-channel microarray data in log2, RNAseq data in raw counts, or already pretreated continuous omics data (such as metabolomic data) in log scale. In order to link responses across biological levels based on a common method, DRomics also handles apical data as long as they are continuous and follow a normal distribution for each dose or concentration, with a common standard error. For further details see Delignette-Muller et al (2023) <doi:10.24072/pcjournal.325> and Larras et al (2018) <doi:10.1021/acs.est.8b04752>.
For checking datasets from EDC (Electronic Data Capture) systems in clinical trials. dmtools reshapes your dataset into a tidy view and checks events. You can reshape the dataset and choose the target to check, for example, the laboratory reference range.
This package provides a collection of functions to search and download Digital Surface Model (DSM) and Light Detection and Ranging (LiDAR) data via APIs, including OpenTopography <https://portal.opentopography.org/apidocs/> and TNMAccess <https://apps.nationalmap.gov/tnmaccess/#/>, and canopy tree height data.
This package provides a toolbox to create and manage metadata files and configuration profiles: files used to configure the parameters and initial settings for some computer programs.
Provides a Dens-based method for estimating functional connectivity in large-scale brain networks using partial correlation.
For identifying, estimating, and plotting descriptive multidimensional item response theory models, restricted to 3D and dichotomous or polytomous data that fit the two-parameter logistic model or the graded response model. The method is primarily exploratory and centered around the plot function, which exposes item characteristics and constructs, represented by vector arrows, located in a three-dimensional interactive latent space. The results can be useful for item-level analysis as well as test development.
Analysis of agreement for nominal data between two raters using the Delta model. This model is proposed as an alternative to the widely used Cohen's kappa coefficient, which performs poorly when the marginal distributions are very asymmetric (Martin-Andres and Femia-Marzo (2004) <doi:10.1348/000711004849268>; Martin-Andres and Femia-Marzo (2008) <doi:10.1080/03610920701669884>). The package also contains a function to perform a massive analysis of multiple raters against a gold standard. A shiny app is also provided to obtain the measures of nominal agreement between two raters.
An implementation by Chen, Li, and Zhang (2022) <doi:10.1093/bioadv/vbac041> of the Depth Importance in Precision Medicine (DIPM) method in Chen and Zhang (2022) <doi:10.1093/biostatistics/kxaa021> and Chen and Zhang (2020) <doi:10.1007/978-3-030-46161-4_16>. The DIPM method is a classification tree that searches for subgroups with especially poor or strong performance in a given treatment group.
Diff, patch and merge for data frames. Document changes in data sets and use them to apply patches. Changes to data can be made visible by using render_diff(). The V8 package is used to wrap the daff.js JavaScript library which is included in the package.
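A minimal sketch of that workflow, using diff_data(), render_diff(), and patch_data() from the package (the toy data frames are illustrative):

library(daff)

old <- data.frame(id = 1:3, value = c("a", "b", "c"))
new <- data.frame(id = 1:3, value = c("a", "B", "c"))

d <- diff_data(old, new)       # document the changes between the two data frames
render_diff(d)                 # make the changes visible
patched <- patch_data(old, d)  # apply the recorded patch to the original data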
This package provides methods for fitting nonstationary Gaussian process models by spatial deformation, as introduced by Sampson and Guttorp (1992) <doi:10.1080/01621459.1992.10475181>, and by dimension expansion, as introduced by Bornn et al. (2012) <doi:10.1080/01621459.2011.646919>. Low-rank thin-plate regression splines, as developed in Wood, S.N. (2003) <doi:10.1111/1467-9868.00374>, are used to either transform co-ordinates or create new latent dimensions.
This package provides a wrapper on top of the Domino Data Python SDK library. It lets you query and access Domino Data Sources directly from your R environment. Under the hood, the Domino Data R SDK leverages the API provided by the Domino Data Python SDK, which must be installed as a prerequisite. Domino is a platform that makes it easy to run your code on scalable hardware, with integrated version control and collaboration features designed for analytical workflows. See <https://docs.dominodatalab.com/en/latest/api_guide/140b48/domino-data-api> for more information.
An R interface to the Free Dictionary API <https://dictionaryapi.dev/>, <https://github.com/meetDeveloper/freeDictionaryAPI>. Retrieve dictionary definitions for English words, as well as additional information including phonetics, part of speech, origins, audio pronunciation, example usage, synonyms and antonyms, returned in tidy format for ease of use.
This package implements the DAAREM method for accelerating the convergence of slow, monotone sequences from smooth, fixed-point iterations such as the EM algorithm. For further details about the DAAREM method, see Henderson, N.C. and Varadhan, R. (2019) <doi:10.1080/10618600.2019.1594835>.
This package provides a unified framework for building Area Deprivation Index (ADI), Social Vulnerability Index (SVI), and Neighborhood Deprivation Index (NDI) deprivation measures and for accessing related data from the U.S. Census Bureau, such as Gini coefficient data. Tools are also available for calculating percentiles and quantiles, and for creating clear map breaks for data visualization.
In-line functions for multivariate optimization via desirability functions (Derringer and Suich, 1980, <doi:10.1080/00224065.1980.11980968>) with easy use within dplyr pipelines.
Calculate posterior modes and credible intervals of parameters of the Dixon-Simon model for subgroup analysis (with binary covariates) in clinical trials. For details of the methodology, please refer to D.O. Dixon and R. Simon (1991), Biometrics, 47: 871-881.
Dynamic CUR (dCUR) extends the CUR decomposition (Mahoney MW., Drineas P. (2009) <doi:10.1073/pnas.0803205106>) by varying k, the number of columns and rows used, with the final purpose of helping to find the stage that minimizes the relative error when reducing the matrix dimension. The goal of CUR decomposition is to give a better interpretation of the matrix decomposition by employing proper variable selection in the data matrix, in a way that yields a simplified structure. Its origins come from analysis in genetics. The goal of this package is to offer an alternative for selecting variables (columns) or individuals (rows). The idea consists of adjusting probability distributions to the leverage scores and selecting the best columns and rows that minimize the reconstruction error of the matrix approximation ||A-CUR||. It also includes a method that recalibrates the relative importance of the leverage scores according to an external variable of the user's interest.
Create D3-based SVG ('Scalable Vector Graphics') graphics using a simple R API. The package aims to simplify the creation of many SVG plot types and relies on the r2d3 R package and the D3 JavaScript library. See <https://rstudio.github.io/r2d3/> and <https://d3js.org/> respectively.
This package provides a Bayesian hierarchical model for clustering dissimilarity data using the Dirichlet process. The latent configuration of objects and the number of clusters are automatically inferred during the fitting process. The package supports multiple models that can detect clusters of various shapes and sizes using different covariance structures. Additional functions are included to ensure adequate model fits through prior and posterior predictive checks.
Diagnostic classification models are psychometric models used to categorically estimate respondents' mastery, or proficiency, on a set of predefined skills (Bradshaw, 2016, <doi:10.1002/9781118956588.ch13>). Diagnostic models can be estimated with Stan; however, the necessary scripts can be long and complicated. This package automates the creation of Stan scripts for diagnostic classification models. Specify different types of diagnostic models, define prior distributions, and automatically generate the necessary Stan code for estimating the model.
RStudio recently added the option to define addins and assign shortcuts to them. This package contains addins for a few of the functions most frequently used in a data scientist's (at least the author's) daily work, such as str(), example(), plot(), head(), view(), and Desc(). Most of these addins use the current selection in the editor window and send the specific command to the console while instantly executing it. Assigning shortcuts to these addins will save you quite a few keystrokes.