Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
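For example, here is a minimal sketch in Guile Scheme of calling this endpoint; the base URL below is a placeholder, and only the /api/packages path and its parameters come from the description above:

(use-modules (web client)    ; http-get
             (web response)  ; response-headers
             (web uri))      ; string->uri

;; Placeholder host; substitute the actual address of this site.
(define url
  (string->uri "https://example.org/api/packages?search=hello&page=1&limit=20"))

;; http-get returns two values: the response object and the body.
(define-values (response body) (http-get url))

;; Pagination details (e.g. the number of pages) arrive in the response headers,
;; not in the body.
(display (response-headers response)) (newline)
(display body) (newline)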
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
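For reference, a hypothetical entry might look roughly like the following, assuming channels.scm holds standard Guix channel declarations; the name, URL, and branch are placeholders:

(channel
 (name 'my-channel)                          ; placeholder channel name
 (url "https://example.org/my-channel.git")  ; placeholder repository URL
 (branch "main"))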
Smooth tests of goodness of fit. These tests are data-driven (the alternative hypothesis is selected dynamically based on the data). The package provides tests for the exponential, Gaussian, Gumbel, and uniform distributions.
Model-based methods for the detection of disease clusters using GLMs, GLMMs and zero-inflated models. These methods are described in V. Gómez-Rubio et al. (2019) <doi:10.18637/jss.v090.i14> and V. Gómez-Rubio et al. (2018) <doi:10.1007/978-3-030-01584-8_1>.
An implementation by Chen, Li, and Zhang (2022) <doi:10.1093/bioadv/vbac041> of the Depth Importance in Precision Medicine (DIPM) method in Chen and Zhang (2022) <doi:10.1093/biostatistics/kxaa021> and Chen and Zhang (2020) <doi:10.1007/978-3-030-46161-4_16>. The DIPM method is a classification tree that searches for subgroups with especially poor or strong performance in a given treatment group.
Makes deck.gl <https://deck.gl/>, a WebGL-powered open-source JavaScript framework for visual exploratory data analysis of large datasets, available within R via the htmlwidgets package. Furthermore, it supports basemaps from mapbox <https://www.mapbox.com/> via mapbox-gl-js <https://github.com/mapbox/mapbox-gl-js>.
Estimation of a density from grouped (tabulated) summary statistics evaluated in each of the big bins (or classes) partitioning the support of the variable. These statistics include class frequencies and central moments of order one up to four. The log-density is modelled using a linear combination of penalised B-splines. The multinomial log-likelihood involving the frequencies is combined with a roughness penalty based on the differences between coefficients of neighbouring B-splines and with the log of a root-n approximation of the sampling density of the observed vector of central moments in each class. The resulting penalised log-likelihood is maximised using the EM algorithm to obtain an estimate of the spline parameters and, consequently, of the variable density and related quantities such as quantiles; see Lambert, P. (2021) <arXiv:2107.03883> for details.
This package provides a tool to sample data with desired properties. Samples can be drawn by purposive sampling, specifying distributional conditions such as deviation from normality (skewness and kurtosis) and sample size in quantitative research studies. For purposive sampling, a researcher has something in mind, and participants that fit the purpose of the study are included (Etikan, Musa, & Alkassim, 2015) <doi:10.11648/j.ajtas.20160501.11>. Purposive sampling can be useful for answering many research questions (Klar & Leeper, 2019) <doi:10.1002/9781119083771.ch21>.
An intuitive, cross-platform graphical data analysis system. It uses menus and dialogs to guide the user efficiently through the data manipulation and analysis process, and has an Excel-like spreadsheet for easy data frame visualization and editing. Deducer works best when used with the Java-based R GUI JGR, but the dialogs can be called from the command line. Dialogs have also been integrated into the Windows Rgui.
This package provides efficient Markov chain Monte Carlo (MCMC) algorithms for dynamic shrinkage processes, which extend global-local shrinkage priors to the time series setting by allowing shrinkage to depend on its own past. These priors yield locally adaptive estimates, useful for time series and regression functions with irregular features. The package includes full MCMC implementations for trend filtering using dynamic shrinkage on signal differences, producing locally constant or linear fits with adaptive credible bands. Also included are models with static shrinkage and normal-inverse-Gamma priors for comparison. Additional tools cover dynamic regression with time-varying coefficients and B-spline models with shrinkage on basis differences, allowing for flexible curve-fitting with unequally spaced data. There is also some support for heteroscedastic errors, outlier detection, and change point estimation. Methods in this package are described in Kowal et al. (2019) <doi:10.1111/rssb.12325>, Wu et al. (2024) <doi:10.1080/07350015.2024.2362269>, Schafer and Matteson (2024) <doi:10.1080/00401706.2024.2407316>, and Cho and Matteson (2024) <doi:10.48550/arXiv.2408.11315>.
Track and document dplyr data pipelines. As you filter, mutate, and join your way through a data set, dtrackr seamlessly keeps track of your data flow and makes publication-ready documentation of a data pipeline simple.
Differential analysis of short RNA transcripts that can be modeled by either the Poisson or the negative binomial distribution. The statistical methodology implemented in this package is based on the random selection of reference genes (Desaulle et al. (2021) <arXiv:2103.09872>).
We provide three distance metrics for measuring the separation between two clusters in high-dimensional spaces. The first metric is the centroid distance, which calculates the Euclidean distance between the centers of the two groups. The second is a ridge Mahalanobis distance, which incorporates a ridge correction constant, alpha, to ensure that the covariance matrix is invertible. The third metric is the maximal data piling distance, which computes the orthogonal distance between the affine spaces spanned by each class. These three distances are asymptotically interconnected and are applicable in tasks such as discrimination, clustering, and outlier detection in high-dimensional settings.
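For orientation, writing x̄₁ and x̄₂ for the two group means, Σ̂ for the sample covariance matrix, and α for the ridge constant, the first two distances take roughly the following form (a sketch inferred from the description above, not necessarily the package's exact definitions):

d_{\mathrm{centroid}} = \lVert \bar{x}_1 - \bar{x}_2 \rVert_2, \qquad
d_{\mathrm{ridge}} = \sqrt{(\bar{x}_1 - \bar{x}_2)^{\top} (\hat{\Sigma} + \alpha I)^{-1} (\bar{x}_1 - \bar{x}_2)}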
This package contains the function used to create the Dandelion Plot, a visualization method for R-mode Exploratory Factor Analysis.
This package implements an algorithm to split a column in an R data frame whose cells contain multiple values separated by delimiters. It automates the creation of a separate column for each unique value, coding each as a binary outcome.
Perform model selection using distribution and probability-based methods, including standardized AIC, BIC, and AICc. These standardized information criteria allow one to perform model selection in a way similar to the prevalent "Rule of 2" method, but formalize the method to rely on probability theory. A novel goodness-of-fit procedure for assessing linear regression models is also available. This test relies on theoretical properties of the estimated error variance for a normal linear regression model, and employs a bootstrap procedure to assess the null hypothesis that the fitted model shows no lack of fit. For more information, see Koeneman and Cavanaugh (2023) <arXiv:2309.10614>. Functionality to perform all subsets linear or generalized linear regression is also available.
Constructs dynamic optimal shrinkage estimators for the weights of the global minimum variance portfolio, which are reconstructed at given reallocation points, as derived in Bodnar, Parolya, and Thorsén (2021) <arXiv:2106.02131>. Two dynamic shrinkage estimators are available in this package: one uses overlapping samples, while the other uses nonoverlapping samples.
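As background (a standard result, not specific to this package), the global minimum variance weights being estimated are

w_{\mathrm{GMV}} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}},

where Σ is the covariance matrix of asset returns and 1 is a vector of ones; the package shrinks sample-based estimators of these weights at each reallocation point.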
Allows structure learning, parameter learning, and forecasting for univariate time series. Implements a model of Dynamic Bayesian Networks with temporal windows, with collections of linear regressors for Gaussian nodes, based on the introductory texts of Korb and Nicholson (2010) <doi:10.1201/b10391> and Nagarajan, Scutari and Lèbre (2013) <doi:10.1007/978-1-4614-6446-4>.
An add-on package to DImodels for the fitting of biodiversity and ecosystem function relationship study data with multiple ecosystem function responses and/or time points. This package uses the multivariate and repeated measures Diversity-Interactions (DI) methods developed by Kirwan et al. (2009) <doi:10.1890/08-1684.1>, Finn et al. (2013) <doi:10.1111/1365-2664.12041>, and Dooley et al. (2015) <doi:10.1111/ele.12504>.
Analyses gene expression data derived from experiments to detect differentially expressed genes by employing the concept of majority voting with five different statistical models. It includes functions for differential expression analysis, significance testing, etc. It simplifies the process of uncovering meaningful patterns and trends within gene expression data, aiding researchers in downstream analysis. Boyer, R.S., Moore, J.S. (1991) <doi:10.1007/978-94-011-3488-0_5>.
Computationally efficient tools for comparing all pairs of profiles in a DNA database. The expectation and covariance of the summary statistic are implemented for fast computing. Routines for estimating proportions of closely related individuals are available. The use of wildcards (also called F-designation) is implemented. Dedicated functions ease plotting the results. See Tvedebrink et al. (2012) <doi:10.1016/j.fsigen.2011.08.001>. The distribution of the number of alleles in DNA mixtures can also be computed. See Tvedebrink (2013) <doi:10.1016/j.fsigss.2013.10.142>.
This package contains data organized by topic: categorical data, regression models, means comparisons, independent and repeated measures ANOVA, mixed ANOVA, and ANCOVA.
An easy-to-use yet powerful system for plotting grouped data effect sizes. Various types of effect size can be estimated, then plotted together with a representation of the original data. Select from many possible data representations (box plots, violin plots, raw data points etc.), and combine as desired. Durga plots are implemented in base R, so they are compatible with base R methods for combining plots, such as layout(). See Khan & McLean (2023) <doi:10.1101/2023.02.06.526960>.
In practice, we will encounter problems where the longitudinal performance of processes needs to be monitored over time. Dynamic screening systems (DySS) are methods that aim to identify and give signals to processes with poor performance as early as possible. This package is designed to implement dynamic screening systems and the related methods. References: Qiu, P. and Xiang, D. (2014) <doi:10.1080/00401706.2013.822423>; Qiu, P. and Xiang, D. (2015) <doi:10.1002/sim.6477>; Li, J. and Qiu, P. (2016) <doi:10.1080/0740817X.2016.1146423>; Li, J. and Qiu, P. (2017) <doi:10.1002/qre.2160>; You, L. and Qiu, P. (2019) <doi:10.1080/00949655.2018.1552273>; Qiu, P., Xia, Z., and You, L. (2020) <doi:10.1080/00401706.2019.1604434>; You, L., Qiu, A., Huang, B., and Qiu, P. (2020) <doi:10.1002/bimj.201900127>; You, L. and Qiu, P. (2021) <doi:10.1080/00224065.2020.1767006>.
This package implements double hierarchical generalized linear models in which the mean, dispersion parameters for variance of random effects, and residual variance (overdispersion) can be further modeled as random-effect models.
Estimates the Dyad Ratios Algorithm for pooling and smoothing poll estimates. The Dyad Ratios Algorithm smooths both forward and backward in time over polling results, allowing for differences in both question type and polling house. The result is an estimate of a single latent variable that describes the systematic trend over time in the (noisy) polling results. See James A. Stimson (2018) <doi:10.1177/0759106318761614> and the package's vignette for more details.