Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
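For example, here is a minimal sketch of calling this endpoint from Python with the requests library; the base URL is a placeholder, and the exact pagination header names may differ:

    # Minimal sketch of querying the packages API with Python's "requests" library.
    import requests

    BASE_URL = "https://example.org"  # placeholder; replace with the actual host

    response = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    response.raise_for_status()

    print(response.headers)  # pagination information is returned in the response headers
    print(response.json())   # the matching packages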
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An implementation of the Heroicons icon library for shiny applications and other R web-based projects. You can search, render, and customize icons without CSS or JavaScript dependencies.
Provides a simple mechanism to repeatedly evaluate an expression until it either succeeds or a timeout is exceeded. It is useful in situations where random failures can happen.
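As an illustration of the idea only (in Python rather than this package's R interface; the helper name and arguments below are hypothetical):

    # Hypothetical sketch of retrying an expression until it succeeds or a timeout is exceeded.
    import time

    def retry_until(expr, timeout=10.0, interval=0.5):
        """Call `expr` repeatedly until it returns without raising, or until
        `timeout` seconds have elapsed; re-raise the last error on timeout."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                return expr()
            except Exception:
                if time.monotonic() >= deadline:
                    raise
                time.sleep(interval)

    # Example: keep trying a flaky operation for up to 5 seconds.
    # result = retry_until(lambda: flaky_network_call(), timeout=5)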
This package provides a toolkit for making antigenic maps from immunological assay data, in order to quantify and visualize antigenic differences between different pathogen strains as described in Smith et al. (2004) <doi:10.1126/science.1097211> and used in the World Health Organization influenza vaccine strain selection process. Additional functions allow for the diagnostic evaluation of antigenic maps and an interactive viewer is provided to explore antigenic relationships amongst several strains and incorporate the visualization of associated genetic information.
Adaptation of the Matlab tsEVA toolbox developed by Lorenzo Mentaschi, available here: <https://github.com/menta78/tsEva>. It contains an implementation of the Transformed-Stationary (TS) methodology for non-stationary extreme value analysis (EVA) as described in Mentaschi et al. (2016) <doi:10.5194/hess-20-3527-2016>. In essence, this approach consists of: (i) transforming a non-stationary time series into a stationary one to which the stationary extreme value theory can be applied; and (ii) reverse-transforming the result into a non-stationary extreme value distribution. RtsEva offers several options for trend estimation (mean, extremes, seasonal) and contains multiple plotting functions displaying different aspects of the non-stationarity of extremes.
Robust inference methods for fixed-effect and random-effects models of meta-analysis are implemented. The robust methods are developed using the density power divergence, a robust estimating criterion developed in machine learning theory, and can effectively circumvent biases and misleading results caused by influential outliers. The density power divergence was originally introduced by Basu et al. (1998) <doi:10.1093/biomet/85.3.549>, and the meta-analysis methods were developed by Noma et al. (2022) <forthcoming>.
Analyses the sentiment of a sentence in English and assigns a score to it. It can classify sentences into the following sentiment categories: Positive, Negative, Very Positive, Very Negative, and Neutral. For a vector of sentences, it counts the number of sentences in each sentiment category. In calculating the score, negation and various degrees of adjectives are taken into consideration. It deals only with English sentences.
Carry out principal component analysis (PCA) of very large pedigrees, such as those found in breeding populations. This package exploits sparse matrices and randomised linear algebra to deliver a dramatic speed-up compared to naive singular value decomposition (SVD) and eigendecomposition.
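To illustrate the general idea only (this is not the package's API): a sketch of PCA via randomized SVD on a sparse matrix, using scipy and scikit-learn as stand-ins, with synthetic data:

    # Sketch of the sparse + randomized linear algebra idea behind the speed-up;
    # the data are synthetic and the centering step is omitted to keep X sparse.
    import numpy as np
    from scipy import sparse
    from sklearn.utils.extmath import randomized_svd

    X = sparse.random(5000, 2000, density=0.01, format="csr", random_state=0)

    # Truncated SVD via randomized projections: only the leading components
    # are computed, instead of a full decomposition.
    U, S, Vt = randomized_svd(X, n_components=10, random_state=0)

    scores = U * S        # principal component scores (one row per individual)
    print(scores.shape)   # (5000, 10)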
This package implements the network clustering algorithm described in Newman (2006) <doi:10.1103/PhysRevE.74.036104>. The complete iterative algorithm comprises two steps. In the first step, the network is expressed in terms of its leading eigenvalue and eigenvector and recursively partitioned into two communities. Partitioning occurs if the maximum positive eigenvalue is greater than the tolerance (10e-5) for the current partition, and if it results in a positive contribution to the Modularity. Given an initial separation using the leading-eigenvector step, rSpectral then continues to maximise the change in Modularity using a fine-tuning step, or a variant thereof. The first stage here is to find the node which, when moved from one community to another, gives the maximum change in Modularity. This node's community is then fixed and we repeat the process until all nodes have been moved. The whole process is repeated from this new state until the change in the Modularity, between the new and old state, is less than the predefined tolerance. A slight variant of the fine-tuning step, which can improve the speed of the calculation, is also provided. Instead of moving each node into each community in turn, we only consider moves of neighbouring nodes, found in different communities, to the community of the current node of interest. The two-step process is repeatedly applied to each new community found, subdividing each community into two new communities, until we are unable to find any division that results in a positive change in Modularity.
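A compact sketch (Python/NumPy, not the rSpectral code itself) of the leading-eigenvector step described above; the recursive subdivision and the fine-tuning step are omitted for brevity:

    # Split one network into two communities using the modularity matrix
    # B = A - k k^T / (2m), keeping the split only if it increases Modularity.
    import numpy as np

    def leading_eigenvector_split(A, tol=10e-5):
        k = A.sum(axis=1)                # node degrees
        two_m = k.sum()                  # 2m = total degree
        B = A - np.outer(k, k) / two_m   # modularity matrix
        eigvals, eigvecs = np.linalg.eigh(B)
        if eigvals[-1] <= tol:           # no sufficiently positive eigenvalue: indivisible
            return None
        s = np.where(eigvecs[:, -1] >= 0, 1, -1)   # partition by sign of leading eigenvector
        delta_Q = (s @ B @ s) / (2 * two_m)        # contribution to Modularity
        return s if delta_Q > 0 else None

    # Example: two triangles joined by a single edge split cleanly in two.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    print(leading_eigenvector_split(A))   # e.g. [ 1  1  1 -1 -1 -1]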
Perform structural reliability analysis, including computation and simulation with system signatures, Samaniego (2007) <doi:10.1007/978-0-387-71797-5>, and survival signatures, Coolen and Coolen-Maturi (2013) <doi:10.1007/978-3-642-30662-4_8>. Additionally supports parametric and topological inference given system lifetime data, Aslett (2012) <https://www.louisaslett.com/PhD_Thesis.pdf>.
You can easily share URL pages using React Router in shiny applications and Quarto documents. The package wraps the react-router-dom React library and provides access to hash routing for navigating across multiple URL pages.
Hybrid Mortality Modelling (HMM) provides a framework in which mortality around "the accident hump" and at very old ages can be modelled under a single model. The graphics code necessary for visualization of the model's output is included here. Specifically, the graphics are based on the assumption that the mortality rates can be expressed as a function of the area under the curve between the crude mortality rate plots and the tangential transform of the force of mortality.
This package provides a strong type system for R which supports symbol declaration and assignment with type checking and condition checking.
Validating sub-national statistical typologies, re-coding across standard typologies of sub-national statistics, and making valid aggregate level imputation, re-aggregation, re-weighting and projection down to lower hierarchical levels to create meaningful data panels and time series.
Takes matched and unmatched data and calculates Rosenbaum bounds for the treatment effect. Calculates bounds for binary outcome data, Hodges-Lehmann point estimates, the Wilcoxon signed-rank test for matched data and matched IV estimators, the Wilcoxon rank sum test, and for data with multiple matched controls. The sensitivity analysis methods in this package are documented in Rosenbaum (2002), Observational Studies, <doi:10.1007/978-1-4757-3692-2>, Springer-Verlag.
This package provides a flexible and streamlined pipeline for formatting, analyzing, and visualizing omics data, regardless of omics type (e.g. transcriptomics, proteomics, metabolomics). The package includes tools for shaping input data into analysis-ready structures, fitting linear or mixed-effect models, extracting key contrasts, and generating a rich variety of ready-to-use publication-quality plots. Designed for transparency and reproducibility across a wide range of study designs, with customizable components for statistical modeling.
An integrated package for constructing random forest prediction intervals using the fast implementation package 'ranger'. This package can apply the following three methods described in Haozhe Zhang, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman (2019) <doi:10.1080/00031305.2019.1585288>: the out-of-bag prediction interval, the split conformal method, and the quantile regression forest.
This package is a Bayesian companion to the rms package: rmsb provides Bayesian model fitting, post-fit estimation, and graphics. It implements Bayesian regression models whose fit objects can be processed by rms functions such as contrast(), summary(), Predict(), nomogram(), and latex(). The fitting function currently implemented in the package is blrm() for Bayesian logistic binary and ordinal regression with optional clustering, censoring, and departures from the proportional odds assumption using the partial proportional odds model of Peterson and Harrell (1990) <https://www.jstor.org/stable/2347760>.
The Regional Vulnerability Index (RVI), a statistical measure of brain structural abnormality, quantifies an individual's similarity to the expected pattern (effect size) of deficits in schizophrenia (Kochunov P, Fan F, Ryan MC, et al. (2020) <doi:10.1002/hbm.25045>).
This companion package extends the package robmed (Alfons, Ates & Groenen, 2022b; <doi:10.18637/jss.v103.i13>) in various ways. Most notably, it provides a graphical user interface for the robust bootstrap test ROBMED (Alfons, Ates & Groenen, 2022a; <doi:10.1177/1094428121999096>) to make the method more accessible to less proficient R users, as well as functions to export the results as a table in a Microsoft Word or Microsoft PowerPoint document, or as a LaTeX table. Furthermore, the package contains a shiny app to compare various bootstrap procedures for mediation analysis on simulated data.
We implement linear regression when the outcome of interest and some of the covariates are observed in two different datasets that cannot be linked, based on D'Haultfoeuille, Gaillac, Maurel (2022) <doi:10.3386/w29953>. The package allows for common regressors observed in both datasets, and for various shape constraints on the effect of covariates on the outcome of interest. It also provides the tools to perform a test of point identification. See the associated vignette <https://github.com/cgaillac/RegCombin/blob/master/RegCombin_vignette.pdf> for theory and code examples.
Residual balancing is a robust method of constructing weights for marginal structural models, which can be used to estimate (a) the average treatment effect in a cross-sectional observational study, (b) controlled direct/mediator effects in causal mediation analysis, and (c) the effects of time-varying treatments in panel data (Zhou and Wodtke 2020 <doi:10.1017/pan.2020.2>). This package provides three functions, rbwPoint(), rbwMed(), and rbwPanel(), that produce residual balancing weights for estimating (a), (b), (c), respectively.
Extract the implied risk neutral density from options using various methods.
Calculates periodograms based on (robustly) fitting periodic functions to light curves (irregularly observed time series, possibly with measurement accuracies, occurring in astroparticle physics). Three main functions are included: RobPer() calculates the periodogram. Outlying periodogram bars (indicating a period) can be detected with betaCvMfit(). Artificial light curves can be generated using the function tsgen(). For more details see the corresponding article: Thieler, Fried and Rathjens (2016), Journal of Statistical Software 69(9), 1-36, <doi:10.18637/jss.v069.i09>.
The mixed integer programming library MIPLIB (see <http://miplib.zib.de/>) is commonly used to compare the performance of mixed integer optimization solvers. This package provides functions to access MIPLIB from the R Optimization Infrastructure ('ROI'). More information about MIPLIB can be found in the paper by Koch et al. available at <http://mpc.zib.de/index.php/MPC/article/viewFile/56/28>. The README.md file illustrates how to use this package.