Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
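For example, the same query can be issued from R (a minimal sketch; the base URL is a placeholder, substitute the host serving this site, and the body is assumed to be JSON):

library(httr)

resp <- GET(
  "https://example.org/api/packages",   # hypothetical host, adjust to the real one
  query = list(search = "hello", page = 1, limit = 20)
)

headers(resp)                 # pagination metadata lives in the response headers
content(resp, as = "parsed")  # matching packages, assuming a JSON body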
If you'd like to join our channel webring send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Multilevel models (mixed effects models) are the statistical tool of choice for analyzing multilevel data (Searle et al., 2009). These models account for the correlated nature of observations within higher level units by adding group-level error terms that augment the singular residual error of a standard OLS regression. Multilevel and mixed effects models often require specialized data pre-processing and further post-estimation derivations and graphics to gain insight into model results. The package presented here, 'mlmtools', is a suite of pre- and post-estimation tools for multilevel models in 'R'. The package implements post-estimation tools designed to work with models estimated using the lmer() function from 'lme4' (Bates et al., 2014), which fits linear mixed effects regression models. Searle, S. R., Casella, G., & McCulloch, C. E. (2009, ISBN:978-0470009598). Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014) <doi:10.18637/jss.v067.i01>.
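A minimal sketch of the kind of lmer() fit such post-estimation tools operate on (the sleepstudy data and the formula below come from 'lme4' and are illustrative, not part of 'mlmtools'):

library(lme4)

# Random intercept and slope for Days within each Subject.
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(fit)
# Post-estimation tools in 'mlmtools' take a fitted object like 'fit' as input.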
The MCC-F1 analysis is a method to evaluate the performance of binary classifications. The MCC-F1 curve is more reliable than the Receiver Operating Characteristic (ROC) curve and the Precision-Recall (PR) curve under imbalanced ground truth. The MCC-F1 analysis also provides the MCC-F1 metric that integrates classifier performance over varying thresholds, and the best threshold for binary classification.
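A hedged base-R sketch of the idea behind the MCC-F1 curve (simulated data, not this package's own API): sweep a decision threshold over predicted scores and record the (MCC, F1) pair at each threshold.

set.seed(1)
truth  <- rbinom(200, 1, 0.2)                                        # imbalanced ground truth
scores <- ifelse(truth == 1, rnorm(200, 0.7, 0.2), rnorm(200, 0.4, 0.2))

mcc_f1_at <- function(t) {
  pred <- as.integer(scores >= t)
  tp <- sum(pred == 1 & truth == 1); fp <- sum(pred == 1 & truth == 0)
  fn <- sum(pred == 0 & truth == 1); tn <- sum(pred == 0 & truth == 0)
  # MCC is undefined (NaN) if any factor in the denominator is zero.
  mcc <- (tp * tn - fp * fn) / sqrt(prod(tp + fp, tp + fn, tn + fp, tn + fn))
  f1  <- 2 * tp / (2 * tp + fp + fn)
  c(MCC = mcc, F1 = f1)
}

pts <- sapply(seq(0.1, 0.9, by = 0.05), mcc_f1_at)
# Plotting pts["F1", ] against pts["MCC", ] traces the MCC-F1 curve.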
Age-specific mortality rates are estimated and projected using the Kannisto, Lee-Carter and related methods as described in Sevcikova et al. (2016) <doi:10.1007/978-3-319-26603-9_15>.
Magic functions to obtain results from for loops.
Implementation of Warnes & Raftery's MCGibbsit run-length and convergence diagnostic for a set of (not-necessarily independent) Markov Chain Monte Carlo (MCMC) samplers. It combines the quantile estimate error-bounding approach of the Raftery and Lewis MCMC run-length diagnostic `gibbsit` with the between- versus within-chain approach of the Gelman and Rubin MCMC convergence diagnostic.
This package implements Gibbs sampling and Bayes factors for multinomial models with linear inequality constraints on the vector of probability parameters. As special cases, the model class includes models that predict a linear order of binomial probabilities (e.g., p[1] < p[2] < p[3] < .50) and mixture models assuming that the parameter vector p must be inside the convex hull of a finite number of predicted patterns (i.e., vertices). A formal definition of inequality-constrained multinomial models and the implemented computational methods is provided in: Heck, D.W., & Davis-Stober, C.P. (2019). Multinomial models with linear inequality constraints: Overview and improvements of computational methods for Bayesian inference. Journal of Mathematical Psychology, 91, 70-87. <doi:10.1016/j.jmp.2019.03.004>. Inequality-constrained multinomial models have applications in the area of judgment and decision making to fit and test random utility models (Regenwetter, M., Dana, J., & Davis-Stober, C.P. (2011). Transitivity of preferences. Psychological Review, 118, 42-56, <doi:10.1037/a0021150>) or to perform outcome-based strategy classification to select the decision strategy that provides the best account for a vector of observed choice frequencies (Heck, D.W., Hilbig, B.E., & Moshagen, M. (2017). From information processing to decisions: Formalizing and comparing probabilistic choice models. Cognitive Psychology, 96, 26-40. <doi:10.1016/j.cogpsych.2017.05.003>).
Matrix is a universal and sometimes primary object/unit in applied mathematics and statistics. We provide a number of algorithms for selected problems in optimization and statistical inference. For a general exposition of the topic with a focus on the statistical context, see the book by Banerjee and Roy (2014, ISBN:9781420095388).
Computation of standardized interquartile range (IQR), Huber-type skipped mean (Hampel (1985), <doi:10.2307/1268758>), robust coefficient of variation (CV) (Arachchige et al. (2019), <doi:10.48550/arXiv.1907.01110>), robust signal to noise ratio (SNR), z-score, standardized mean difference (SMD), as well as functions that support graphical visualization such as boxplots based on quartiles (not hinges), negative logarithms and generalized logarithms for ggplot2 (Wickham (2016), ISBN:978-3-319-24277-4).
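Two of the quantities above have simple closed forms; a hedged base-R sketch under those common definitions (the package's own implementations may differ in detail):

x <- c(rnorm(50, mean = 20, sd = 2), 40)   # illustrative sample with one outlier

# Standardized IQR: IQR rescaled to be consistent with the SD under normality.
sIQR <- IQR(x) / (2 * qnorm(0.75))

# Robust z-scores built from the median and the standardized IQR.
z_robust <- (x - median(x)) / sIQR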
This package provides statistical tests and algorithms for the detection of change points in time series and point processes - particularly for changes in the mean in time series and for changes in the rate and in the variance in point processes. References - Michael Messer, Marietta Kirchner, Julia Schiemann, Jochen Roeper, Ralph Neininger and Gaby Schneider (2014), A multiple filter test for the detection of rate changes in renewal processes with varying variance <doi:10.1214/14-AOAS782>. Stefan Albert, Michael Messer, Julia Schiemann, Jochen Roeper, Gaby Schneider (2017), Multi-scale detection of variance changes in renewal processes in the presence of rate change points <doi:10.1111/jtsa.12254>. Michael Messer, Kaue M. Costa, Jochen Roeper and Gaby Schneider (2017), Multi-scale detection of rate changes in spike trains with weak dependencies <doi:10.1007/s10827-016-0635-3>. Michael Messer, Stefan Albert and Gaby Schneider (2018), The multiple filter test for change point detection in time series <doi:10.1007/s00184-018-0672-1>. Michael Messer, Hendrik Backhaus, Albrecht Stroh and Gaby Schneider (2019+) Peak detection in time series.
This package provides a simple way to construct and maintain functions that keep state, i.e., remember their argument lists. This can be useful when one needs to repeatedly invoke the same function with only a small number of argument changes at each invocation.
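A hedged base-R illustration of the idea (a plain closure, not this package's API): the wrapper remembers the last argument list, and only the arguments supplied in a new call are updated.

make_stateful <- function(f) {
  last_args <- list()
  function(...) {
    new_args <- list(...)
    last_args[names(new_args)] <<- new_args   # overwrite only the supplied arguments
    do.call(f, last_args)
  }
}

splot <- make_stateful(plot)
splot(x = 1:10, y = (1:10)^2)   # first call supplies both arguments
splot(y = sqrt(1:10))           # later call changes only 'y'; 'x' is remembered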
Advanced methods for quantitative environmental risk assessment using Bayesian inference of survival and reproduction data. Among others, it facilitates Bayesian inference of the general unified threshold model of survival (GUTS). See our companion paper Baudrot and Charles (2021) <doi:10.21105/joss.03200>, as well as complementary details in Baudrot et al. (2018) <doi:10.1021/acs.est.7b05464> and Delignette-Muller et al. (2017) <doi:10.1021/acs.est.6b05326>.
This package implements the method to analyse weighted mobility networks or distribution networks as outlined in: Block, P., Stadtfeld, C., & Robins, G. (2022) <doi:10.1016/j.socnet.2021.08.003>. The purpose of the model is to analyse the structure of mobility, incorporating exogenous predictors pertaining to individuals and locations known from classical mobility analyses, as well as modelling emergent mobility patterns akin to structural patterns known from the statistical analysis of social networks.
Multi-Dimensional Analysis (MDA) is an adaptation of factor analysis developed by Douglas Biber (1992) <doi:10.1007/BF00136979>. Its most common use is to describe language as it varies by genre, register, and use. This package contains functions for carrying out the calculations needed to describe and plot MDA results: dimension scores, dimension means, and factor loadings.
Clustering via parsimonious Gaussian Mixtures of Experts using the MoEClust models introduced by Murphy and Murphy (2020) <doi:10.1007/s11634-019-00373-8>. This package fits finite Gaussian mixture models with a formula interface for supplying gating and/or expert network covariates using a range of parsimonious covariance parameterisations from the GPCM family via the EM/CEM algorithm. Visualisation of the results of such models using generalised pairs plots and the inclusion of an additional noise component is also facilitated. A greedy forward stepwise search algorithm is provided for identifying the optimal model in terms of the number of components, the GPCM covariance parameterisation, and the subsets of gating/expert network covariates.
According to a phenomenon known as "the wisdom of the crowds," combining point estimates from multiple judges often provides a more accurate aggregate estimate than using a point estimate from a single judge. However, if the judges use shared information in their estimates, the simple average will over-emphasize this common component at the expense of the judges' private information. Asa Palley & Ville Satopää (2021) "Boosting the Wisdom of Crowds Within a Single Judgment Problem: Selective Averaging Based on Peer Predictions" <https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=3504286> propose a procedure for calculating a weighted average of the judges' individual estimates such that the resulting aggregate estimate appropriately combines the judges' collective information within a single estimation problem. The authors use both simulation and data from six experimental studies to illustrate that the weighting procedure outperforms existing averaging-like methods, such as the equally weighted average, trimmed average, and median. This aggregate estimate -- known as "the knowledge-weighted estimate" -- takes as inputs a) the judges' estimates of a continuous outcome (E) and b) their predictions of the others' average estimate of this outcome (P). In this R package, the function knowledge_weighted_estimate(E,P) implements the knowledge-weighted estimate. Its use is illustrated with a simple stylized example and on real-world experimental data.
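A minimal usage sketch with simulated inputs (the package name below is an assumption; E and P are illustrative numeric vectors with one entry per judge):

# library(metaggR)   # assumed package providing knowledge_weighted_estimate()
set.seed(42)
theta <- 100                                # true outcome, unknown to the judges
E <- theta + rnorm(8, mean = 0, sd = 10)    # each judge's own estimate
P <- theta + rnorm(8, mean = -5, sd = 10)   # each judge's prediction of the others' average estimate
knowledge_weighted_estimate(E, P)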
Fits the Multiple Random Dot Product Graph Model and performs a test for whether two networks come from the same distribution. Both methods are proposed in Nielsen, A.M., Witten, D., (2018) "The Multiple Random Dot Product Graph Model", arXiv preprint <arXiv:1811.12172> (Submitted to Journal of Computational and Graphical Statistics).
Inference of a multi-state birth-death model from a phylogeny, comprising a number of states N, birth and death rates for each state and on which edges each state appears. Inference is done using a hybrid approach: states are added progressively in a greedy fashion, and for a fixed number of states N the best model is selected via maximum likelihood. Reference: J. Barido-Sottani, T. G. Vaughan and T. Stadler (2018) <doi:10.1098/rsif.2018.0512>.
Play and record games of minesweeper using a graphics device that supports event handling. Replay recorded games and save GIF animations of them. Based on classic minesweeper as detailed by Crow P. (1997) <https://minesweepergame.com/math/a-mathematical-introduction-to-the-game-of-minesweeper-1997.pdf>.
Access to several Numerical Weather Prediction services both in raster format and as a time series for a location. Currently it works with GFS <https://www.ncei.noaa.gov/products/weather-climate-models/global-forecast>, MeteoGalicia <https://www.meteogalicia.gal/web/modelos/threddsIndex.action>, NAM <https://www.ncei.noaa.gov/products/weather-climate-models/north-american-mesoscale>, and RAP <https://www.ncei.noaa.gov/products/weather-climate-models/rapid-refresh-update>.
Given a CSV file with titles and abstracts, the package creates a document-term matrix that is lemmatized and stemmed and can directly be used to train machine learning methods for automatic title-abstract screening in the preparation of a meta-analysis.
Color palettes inspired by the works of Mexican painters and muralists. The package includes functions that return vectors of colors and also functions to use color and fill scales in ggplot2 visualizations.
Analysis of experimental multi-parent populations to detect regions of the genome (called quantitative trait loci, QTLs) influencing phenotypic traits measured in unique and multiple environments. The population must be composed of crosses between a set of at least three parents (e.g., factorial design, 'diallel', or nested association mapping). The functions cover data processing, QTL detection, and results visualization. The implemented methodology is described in Garin, Wimmer, Mezmouk, Malosetti and van Eeuwijk (2017) <doi:10.1007/s00122-017-2923-3>, in Garin, Malosetti and van Eeuwijk (2020) <doi:10.1007/s00122-020-03621-0>, and in Garin, Diallo, Tekete, Thera, ..., and Rami (2024) <doi:10.1093/genetics/iyae003>.
Offers a general framework of multivariate mixed-effects models for the joint analysis of multiple correlated outcomes with clustered data structures and potential missingness proposed by Wang et al. (2018) <doi:10.1093/biostatistics/kxy022>. The missingness of outcome values may depend on the values themselves (missing not at random and non-ignorable), or may depend on only the covariates (missing at random and ignorable), or both. This package provides functions for two models: 1) mvMISE_b() allows correlated outcome-specific random intercepts with a factor-analytic structure, and 2) mvMISE_e() allows correlated outcome-specific error terms with a graphical lasso penalty on the error precision matrix. Both functions are motivated by multivariate analyses of data with clustered structures from labelling-based quantitative proteomic studies. These models and functions can also be applied to univariate and multivariate analyses of clustered data with balanced or unbalanced design and no missingness.
This package provides a framework for analyzing broth microdilution assays in various 96-well plate designs, visualizing results and providing descriptive and (simple) inferential statistics (i.e. summary statistics and sign test). The functions are designed to add metadata to 8 x 12 tables of absorption values, creating a tidy data frame. Users can choose between clean-up procedures via function parameters (which covers most cases) or user prompts (in cases with complex experimental designs). Users can also choose between two validation methods, i.e. exclusion of absorbance values above a certain threshold or manual exclusion of samples. A function for visual inspection of samples with their absorption values over time for certain group combinations helps with the decision. In addition, the package includes functions to subtract the background absorption (usually at time T0) and to calculate the growth performance compared to a baseline. Core functions of this package (i.e. background subtraction, sample validation and statistics) were inspired by the manual calculations that were applied in Tewes and Muller (2020) <doi:10.1038/s41598-020-67600-7>.