Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
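For example, here is a minimal Python sketch of calling this endpoint; the base URL is a placeholder for this site's address, and the JSON body format is assumed:

import json
import urllib.parse
import urllib.request

# Placeholder host: substitute this site's actual address.
base_url = "https://example.org"
params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})

with urllib.request.urlopen(f"{base_url}/api/packages?{params}") as response:
    # Pagination details (total number of pages, etc.) are reported in the headers.
    print(dict(response.headers))
    results = json.load(response)  # body assumed to be JSON

print(results)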
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Perform user-friendly power analyses for the random intercept cross-lagged panel model (RI-CLPM) and the bivariate stable trait autoregressive trait state (STARTS) model. The strategy proposed by Mulder (2023) <doi:10.1080/10705511.2022.2122467> is implemented. Extensions include the use of parameter constraints over time, bounded estimation, generation of data with skewness and kurtosis, and the option to set up the power analysis for Mplus.
Simulates pooled sequencing data under a variety of conditions. Also allows for the evaluation of the average absolute difference between allele frequencies computed from genotypes and those computed from pooled data. See Carvalho et al. (2022) <doi:10.1101/2023.01.20.524733>.
Translates beliefs into prior information in the form of Beta and Gamma distributions. It can be used to generate priors on the prevalence of disease and the sensitivity/specificity of diagnostic tests, or for any other binomial experiment.
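As a rough illustration of the underlying idea, and not of this package's interface, a Beta prior can be matched to a stated most-likely prevalence and an upper percentile, for example in Python:

# Sketch: find a Beta(a, b) prior with mode 0.10 and 95th percentile 0.25.
# Generic elicitation code, not this package's API.
from scipy.optimize import brentq
from scipy.stats import beta

mode, upper, prob = 0.10, 0.25, 0.95

def b_from_a(a):
    # For a, b > 1 the Beta mode is (a - 1) / (a + b - 2); solve for b.
    return (a - 1) * (1 - mode) / mode + 1

def gap(a):
    return beta.ppf(prob, a, b_from_a(a)) - upper

a = brentq(gap, 1.01, 500)   # shape parameter a that matches the upper percentile
b = b_from_a(a)
print(f"Beta({a:.2f}, {b:.2f})")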
Estimate False Discovery Rates (FDRs) for importance metrics from random forest runs.
When using pooled p-values to adjust for multiple testing, there is an inherent balance that must be struck between rejection based on weak evidence spread among many tests and rejection based on strong evidence in a few, explored in Salahub and Oldford (2023) <arXiv:2310.16600>. This package provides functionality to compute marginal and central rejection levels and the centrality quotient for p-value pooling functions, and provides implementations of the chi-squared quantile pooled p-value (described in Salahub and Oldford (2023)) and a proposal from Heard and Rubin-Delanchy (2018) <doi:10.1093/biomet/asx076> to control the quotient's value.
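For orientation, the classic Fisher combination, which sums two-degree-of-freedom chi-squared quantiles of the p-values, can be computed directly; this is a generic sketch, not this package's API:

# Sketch: Fisher's pooled p-value (generic code, not this package's functions).
import numpy as np
from scipy.stats import chi2

p_values = np.array([0.04, 0.20, 0.01, 0.55])
statistic = -2 * np.sum(np.log(p_values))        # sum of chi2(2) quantiles
pooled_p = chi2.sf(statistic, df=2 * len(p_values))
print(pooled_p)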
In short, this package is a locator for cool, refreshing beverages. It will find and return the nearest location where you can get a cold one.
This package provides partial least squares regression and various regular, sparse, or kernel techniques for fitting Cox models in high-dimensional settings: Bastien, P., Bertrand, F., Meyer, N., and Maumy-Bertrand, M. (2015), "Deviance residuals-based sparse PLS and sparse kernel PLS regression for censored data", Bioinformatics, 31(3):397-404, <doi:10.1093/bioinformatics/btu660>. Cross-validation criteria were studied in Bertrand, F., Bastien, Ph., and Maumy-Bertrand, M. (2018), "Cross validating extensions of kernel, sparse or regular partial least squares regression models to censored data", <doi:10.48550/arXiv.1810.02962>.
The Poisson-lognormal model and variants (Chiquet, Mariadassou and Robin, 2021 <doi:10.3389/fevo.2021.588292>) can be used for a variety of multivariate problems when count data are at play, including principal component analysis for count data, discriminant analysis, model-based clustering, and network inference. Implements variational algorithms to fit such models, accompanied by a set of functions for visualization and diagnostics.
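A minimal sketch of the data-generating mechanism behind the Poisson-lognormal model (generic numpy code, not this package's interface): counts are Poisson draws around an exponentiated latent Gaussian layer.

# Sketch: simulate multivariate counts from a Poisson-lognormal model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5                                        # samples, count variables
mu = np.log(np.array([5.0, 2.0, 8.0, 1.0, 3.0]))     # log-scale means
sigma = 0.4 * np.eye(p) + 0.1                        # latent covariance

latent = rng.multivariate_normal(mu, sigma, size=n)  # Z ~ N(mu, Sigma)
counts = rng.poisson(np.exp(latent))                 # Y_ij ~ Poisson(exp(Z_ij))
print(counts[:3])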
This package provides a set of basic tools for generating, analyzing, summarizing and visualizing finite partially ordered sets. In particular, it implements flexible and very efficient algorithms for the extraction of linear extensions and for the computation of mutual ranking probabilities and other user-defined functionals over them. The package is meant as a computationally efficient "engine" for the implementation of data analysis procedures on systems of multidimensional ordinal indicators and partially ordered data, in the spirit of Fattore, M. (2016), "Partially ordered sets and the measurement of multidimensional ordinal deprivation", Social Indicators Research <doi:10.1007/s11205-015-1059-6>, and Fattore, M. and Arcagni, A. (2018), "A reduced posetic approach to the measurement of multidimensional ordinal deprivation", Social Indicators Research <doi:10.1007/s11205-016-1501-4>.
Prepare pharmacokinetic/pharmacodynamic (PK/PD) data for PK/PD analyses. This package provides functions to standardize infusion and bolus dose data while linking it to drug level or concentration data.
This package provides a shiny app that allows users to access and use the INVEKOS API for field polygons in Austria. API documentation is available at <https://gis.lfrz.gv.at/api/geodata/i009501/ogc/features/v1/>.
R functions to access provenance information collected by rdt or rdtLite. The information is stored inside a ProvInfo object and can be accessed through a collection of functions that return the requested data. The exact format of the JSON created by rdt and rdtLite is described at <https://github.com/End-to-end-provenance/ExtendedProvJson>.
Analysis of protein expression data can be done through Principal Component Analysis (PCA), and this R package is designed to streamline that analysis. It enables users to perform PCA and generates biplots and scree plots for advanced graphical visualization. Optionally, it supports grouping/clustering visualization with PCA loadings and confidence ellipses. With this R package, researchers can quickly explore complex protein datasets, interpret variance contributions, and visualize sample clustering through intuitive biplots. For more details, see Jolliffe (2001) <doi:10.1007/b98835>, Gabriel (1971) <doi:10.1093/biomet/58.3.453>, Zhang et al. (2024) <doi:10.1038/s41467-024-53239-9>, and Anandan et al. (2022) <doi:10.1038/s41598-022-07781-5>.
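As a generic illustration of the workflow, and not of this package's functions, explained-variance ratios (for a scree plot) and scores/loadings (for a biplot) can be computed from a centered data matrix:

# Sketch: PCA on a protein-expression-like matrix via SVD (generic code).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))            # 30 samples x 8 proteins (toy data)
Xc = X - X.mean(axis=0)                 # center columns

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)         # scree-plot heights
scores = U[:, :2] * s[:2]               # sample coordinates on PC1/PC2
loadings = Vt[:2].T                     # variable directions for a biplot

print("explained variance ratio:", np.round(explained, 3))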
An R interface to pikchr (<https://pikchr.org>, pronounced "picture"), a PIC-like markup language for creating diagrams within technical documentation. Originally developed by Brian Kernighan, PIC has been adapted into pikchr by D. Richard Hipp, the creator of SQLite. pikchr is designed to be embedded in fenced code blocks of Markdown or other documentation markup languages, making it ideal for generating diagrams in text-based formats. This package allows R users to seamlessly integrate the descriptive syntax of pikchr for diagram creation directly within the R environment.
Joint frailty models have been widely used to study the associations between recurrent events and a survival outcome. However, existing joint frailty models only consider one or a few recurrent events and cannot deal with high-dimensional recurrent events. This package can be used to fit our recently developed penalized joint frailty model that can handle high-dimensional recurrent events. Specifically, an adaptive lasso penalty is imposed on the parameters for the effects of the recurrent events on the survival outcome, which allows for variable selection. Our algorithm, based on the Gaussian variational approximation method, is also computationally efficient.
Games that can be played in the R console. Includes coin flip, hangman, jumble, magic 8 ball, poker, rock paper scissors, shut the box, spelling bee, and 2048.
This package provides functions for fitting and validation of models for subgroup identification and personalized medicine / precision medicine under the general subgroup identification framework of Chen et al. (2017) <doi:10.1111/biom.12676>. This package is intended for use for both randomized controlled trials and observational studies and is described in detail in Huling and Yu (2021) <doi:10.18637/jss.v098.i05>.
This package provides functions to estimate statistical errors of phylogenetic metrics, particularly to detect the influence of a binary trait on diversification, as well as a function to simulate trees with a fixed number of sampled taxa and a fixed trait prevalence.
This package provides classes for analysing and implementing equity portfolios, including routines for generating tradelists and calculating exposures to user-specified risk factors.
Consider a possibly nonlinear nonparametric regression with p regressors. We provide 13 methods to rank regressors by their practical significance or importance, including machine learning tools. The methods include: m6 = the generalized partial correlation coefficient (GPCC) of Vinod (2021) <doi:10.1007/s10614-021-10190-x> and Vinod (2022) <https://www.mdpi.com/1911-8074/15/1/32>; m7 = a generalization of the psychologists' effect size incorporating nonlinearity and many variables; m8 = local linear partial derivatives (dy/dxi) using the np package for kernel regressions; m9 = partial derivatives (dy/dxi) using the NNS package; m10 = an importance measure using the NNS boost function; m11 = the Shapley value measure of importance (cooperative game theory); m12 and m13 = two versions of the random forest algorithm. Taraldsen's exact density for the sampling distribution of correlations is also included.
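As a generic example of the random-forest style of importance ranking behind m12 and m13 (scikit-learn code, not this package's interface):

# Sketch: rank regressors by random forest importance (generic code).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print("regressors ranked by importance:", ranking)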
Fill missing symmetrical data with mirroring, calculate Procrustes alignments with or without scaling, and compute standard or vector correlation and covariance matrices (congruence coefficients) of 3D landmarks. Tolerates missing data for all analyses.
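For reference, an ordinary (orthogonal) Procrustes alignment of two landmark configurations can be written with plain numpy; this is a generic sketch, not this package's functions:

# Sketch: align landmark set B onto A by translation + rotation (+ optional
# scaling), i.e. ordinary Procrustes analysis (generic numpy code).
import numpy as np

def procrustes_align(A, B, scale=True):
    A0 = A - A.mean(axis=0)                 # center both configurations
    B0 = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)     # optimal rotation via SVD
    R = U @ Vt
    s = np.trace(A0.T @ (B0 @ R)) / np.sum(B0**2) if scale else 1.0
    return s * (B0 @ R) + A.mean(axis=0)    # B aligned onto A

A = np.random.default_rng(3).normal(size=(10, 3))                # 10 landmarks in 3D
B = A @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) + 2.0       # rotated and shifted copy
print(np.allclose(procrustes_align(A, B), A, atol=1e-8))         # True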
Basic functions to fit and predict periodic autoregressive time series models. These models are discussed in the book P.H. Franses (1996), "Periodicity and Stochastic Trends in Economic Time Series", Oxford University Press. The data set analyzed in that book is also provided. NOTE: the package was orphaned for several years. It is now only maintained; no major enhancements are expected, and the maintainer cannot provide any support.
Presents two independence tests for two-way, three-way and four-way contingency tables: the modular test and the logarithmic minimum test. For details on these methods see: Sulewski (2017) <doi:10.18778/0208-6018.330.04>, Sulewski (2018) <doi:10.1080/02664763.2018.1424122>, Sulewski (2019) <doi:10.2478/bile-2019-0003>, Sulewski (2021) <doi:10.1080/00949655.2021.1908286>.
An efficient algorithm for estimating piecewise exponential hazard models for right-censored data, useful for reliable power calculation, study design, and event/timeline prediction for study monitoring.
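As a brief illustration of the piecewise exponential form itself (generic code, not this package's estimation routine): the survival function is the exponential of minus the piecewise-constant cumulative hazard.

# Sketch: survival function of a piecewise exponential hazard model.
# The hazard equals rates[k] on [breaks[k], breaks[k+1]).
import numpy as np

breaks = np.array([0.0, 1.0, 3.0])        # interval start points
rates = np.array([0.20, 0.10, 0.05])      # hazard within each interval

def survival(t):
    edges = np.append(breaks, np.inf)
    # time spent in each interval, capped at t
    exposure = np.clip(t - edges[:-1], 0.0, np.diff(edges))
    return np.exp(-np.sum(rates * exposure))

print(survival(0.5), survival(2.0), survival(5.0))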