Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
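For example, the first page of results for the query hello could be fetched from R roughly as follows (a minimal sketch; the host name is a placeholder for wherever the service is deployed):

library(httr)

# Placeholder host; substitute the actual server address.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

content(resp, as = "parsed")  # the matching packages
headers(resp)                 # pagination information lives here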
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools for decomposing differences in rate metrics between two groups into contributions from individual subgroups and visualizing them as a "Theseus Plot". Inspired by the story of the Ship of Theseus, the method replaces subgroup data from one group with that of another step by step, recalculating the overall metric at each stage to quantify subgroup contributions. A Theseus Plot combines the stepwise progression of a waterfall plot with the comparative bars of a bar chart, offering an intuitive way to understand subgroup-level effects.
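As a rough base-R sketch of that replacement idea (not this package's API; every name below is invented for illustration), one can swap in one group's subgroup data piece by piece and record how the overall rate moves:

# Hypothetical subgroup event counts and totals for groups A and B.
events_a <- c(10, 40, 25); totals_a <- c(100, 200, 150)
events_b <- c(20, 30, 45); totals_b <- c(110, 190, 160)

rate <- function(e, t) sum(e) / sum(t)

# Replace group A's subgroup data with group B's, one subgroup at a time,
# recomputing the overall rate at each step.
steps <- rate(events_a, totals_a)
e <- events_a; tot <- totals_a
for (i in seq_along(events_a)) {
  e[i] <- events_b[i]; tot[i] <- totals_b[i]
  steps <- c(steps, rate(e, tot))
}
diff(steps)  # stepwise contribution attributed to each subgroup

Note that these contributions generally depend on the replacement order; the package's actual procedure and plotting are more refined than this sketch.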
This package implements statistical tests of whether a dataset comes from a symmetric distribution when the center of symmetry is unknown, including a Wilcoxon test and a sign test procedure. In addition, sample size determination for both tests is provided. The Wilcoxon test procedure is described in Vexler et al. (2023) <https://www.sciencedirect.com/science/article/abs/pii/S0167947323000579>, and the sign test is outlined in Gastwirth (1971) <https://www.jstor.org/stable/2284233>.
You only need to type "why pie charts are bad" into Google to find thousands of articles full of (valid) reasons why other types of charts should be preferred over this one. Perhaps because of that limited use, making pie charts (and related charts) in R is not straightforward, so extra functions are needed to simplify things. This R package provides useful functions to make tasty pie charts immediately by exploiting the many cool templates provided.
This package implements an entropy measure of dependence based on the Bhattacharya-Hellinger-Matusita distance. It can be used as a (nonlinear) autocorrelation/cross-correlation function for continuous and categorical time series. The package includes tests for serial and cross dependence and nonlinearity based on this measure. Some routines have a parallel version that can be used in a multicore/cluster environment. The package makes use of S4 classes.
This package implements the Topic Testlet Model (TTM) as described by Xiong et al. (2025) <doi:10.1111/jedm.70001>. The package integrates Latent Dirichlet Allocation (LDA) with the Partial Credit Model to account for local item dependence in testlets using latent topics from student textual responses.
This package provides a set of exploratory data analysis (EDA) tools for visualizing trends, diagnosing data types for beginner-friendly workflows, and automatically routing to suitable statistical tests or trend exploration models. It includes unified plotting functions for trend lines, grouped boxplots, and comparative scatterplots; automated statistical testing (e.g., t-test, Wilcoxon, ANOVA, Kruskal-Wallis, Tukey, Dunn) with optional effect size calculation; and model-based trend analysis using generalized additive models (GAM) for count data, generalized linear models (GLM) for continuous data, and zero-inflated models (ZIP/ZINB) for count data with potential zero-inflation. It also supports time-window continuity checks, cross-year handling in compare_monthly_cases(), and ARIMA-ready preparation with stationarity diagnostics, ensuring consistent parameter styles for reproducible research and user-friendly workflows. Methods are based on R Core Team (2024) <https://www.R-project.org/>, Wood, S.N. (2017, ISBN:978-1498728331), Hyndman, R.J. and Khandakar, Y. (2008) <doi:10.18637/jss.v027.i03>, Jackman, S. (2024) <https://github.com/atahk/pscl/>, and Zeileis, A., Kleiber, C., and Jackman, S. (2008) <doi:10.18637/jss.v027.i08>.
Allows the user to draw probabilistic samples and make inferences from a finite population based on several sampling designs.
The LSTM (Long Short-Term Memory) model is a Recurrent Neural Network (RNN) architecture widely used for time series forecasting. A min-max transformation is used for data preparation. The model consists of a single LSTM layer with a dense layer as the output layer; it is then compiled with a loss function, optimizer, and metrics. This package is based on the Keras and TensorFlow modules and on the algorithm of Paul and Garai (2021) <doi:10.1007/s00500-021-06087-4>.
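As a hedged illustration of that architecture (not this package's own code; the layer size, loss, and optimizer below are assumptions), such a model could be assembled with the keras R package like so:

library(keras)

# Assumes x_train is a 3-D array [samples, timesteps, features] and
# y_train a numeric vector, both already min-max scaled to [0, 1].
model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(10, 1)) %>%  # one simple LSTM layer
  layer_dense(units = 1)                              # dense output layer

# Compile with a loss function, optimizer, and metrics.
model %>% compile(
  loss = "mean_squared_error",
  optimizer = "adam",
  metrics = "mean_absolute_error"
)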
This package provides functions to estimate the insertion and deletion rates of transposable element (TE) families. The estimation of the insertion rate consists of an improved estimate of the age distribution that takes into account random mutations, and an adjustment by the deletion rate. A hypothesis test for a uniform insertion rate is also implemented. This package implements the methods proposed in Dai et al. (2018).
Graphical interface for text analysis, implementing a few methods such as biplots, correspondence analysis, co-occurrence, clustering, topic models, correlations, and sentiment analysis.
Adds some functions to help with your coding etiquette. tinycodet primarily focuses on four aspects: 1) safer decimal (in)equality testing, standard-evaluated alternatives to with() and aes(), and other functions for safer coding; 2) a new package import system that attempts to combine the benefits of using a package without attaching it with the benefits of attaching a package; 3) extending the string manipulation capabilities of the stringi R package; 4) reducing repetitive code. Besides linking to 'Rcpp', tinycodet has only one other dependency, namely 'stringi'.
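The first of those aspects guards against a classic floating-point pitfall; a minimal base-R illustration of the problem (not tinycodet's own operators):

0.1 + 0.2 == 0.3                   # FALSE: binary floating point rounds
isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE: comparison within a tolerance
abs((0.1 + 0.2) - 0.3) < 1e-8      # TRUE: explicit tolerance check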
This package provides tools for estimating and making inferences about two-way partial areas under receiver operating characteristic curves (two-way pAUC), partial areas under ROC curves (pAUC), and partial areas under ordinal dominance curves (pODC). Methods include the Mann-Whitney statistic and the jackknife, among others.
Implementation of a Bayesian two-way latent structure model for integrative genomic clustering. The model clusters samples in relation to distinct data sources, with each subject-dataset receiving a latent cluster label, though cluster labels have across-dataset meaning because of the model formulation. A common scaling across data sources is unneeded, and inference is obtained by a Gibbs Sampler. The model can fit multivariate Gaussian distributed clusters or a heavier-tailed modification of a Gaussian density. Uniquely among integrative clustering models, the formulation makes no nestedness assumptions of samples across data sources -- the user can still fit the model if a study subject only has information from one data source. The package provides a variety of post-processing functions for model examination including ones for quantifying observed alignment of clusterings across genomic data sources. Run time is optimized so that analyses of datasets on the order of thousands of features on fewer than 5 datasets and hundreds of subjects can converge in 1 or 2 days on a single CPU. See "Swanson DM, Lien T, Bergholtz H, Sorlie T, Frigessi A, Investigating Coordinated Architectures Across Clusters in Integrative Studies: a Bayesian Two-Way Latent Structure Model, 2018, <doi:10.1101/387076>, Cold Spring Harbor Laboratory" at <https://www.biorxiv.org/content/early/2018/08/07/387076.full.pdf> for model details.
Calculates the failure probability of civil engineering problems with Level I up to Level III methods. Have fun and enjoy. References: Spaethe (1991, ISBN:3-211-82348-4) "Die Sicherheit tragender Baukonstruktionen", Au and Beck (2001) "Estimation of small failure probabilities in high dimensions by subset simulation" <doi:10.1016/S0266-8920(01)00019-4>, and Breitung (1989) "Asymptotic approximations for probability integrals" <doi:10.1016/0266-8920(89)90024-6>.
This package provides a suite of descriptive and inferential methods designed to evaluate one or more biomarkers for their ability to guide patient treatment recommendations. The package includes functions to assess the calibration of risk models and to plot, evaluate, and compare markers. Please see Janes H, Brown MD, Huang Y, et al. (2014) <doi:10.1515/ijb-2012-0052> for further details.
This package provides a user-friendly interface for generating booktabs-style tables using 'xtable'.
Instead of nesting function calls, annotate and transform functions using "#." comments.
This package provides an interactive interface to the tfrmt package. Users can import, modify, and export tables and templates with little to no code.
Routines for the analysis of nonlinear time series. This work is largely inspired by the TISEAN project, by Rainer Hegger, Holger Kantz and Thomas Schreiber: <http://www.mpipks-dresden.mpg.de/~tisean/>.
Calculates topic-specific diagnostics (e.g. mean token length, exclusivity) for Latent Dirichlet Allocation and Correlated Topic Models fit using the topicmodels package. For more details, see Chapter 12 in Airoldi et al. (2014, ISBN:9781466504080), Mimno et al. (2011, ISBN:9781937284114, pp. 262-272), and Bischof et al. (2014) <arXiv:1206.4631v1>.
General framework to organize data, methods, and results used in reproducible scientific analyses. A TAF analysis consists of four scripts (data.R, model.R, output.R, report.R) that are run sequentially. Each script starts by reading files from a previous step and ends with writing out files for the next step. Convenience functions are provided to version control the required data and software, run analyses, clean residues from previous runs, manage files, manipulate tables, and produce figures. With a focus on stability and reproducible analyses, the TAF package comes with no dependencies. TAF forms a base layer for the icesTAF package and other scientific applications.
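A hedged sketch of that four-script pattern (file names and contents below are invented for illustration; only the data.R -> model.R -> output.R -> report.R sequence comes from the description):

# data.R -- first step: read raw inputs, end by writing files for model.R
catch <- read.csv("bootstrap/data/catch.csv")   # assumed raw input location
write.csv(catch, "data/catch.csv", row.names = FALSE)

# model.R -- starts by reading the data step's files, ends by writing results
catch <- read.csv("data/catch.csv")
fit <- lm(weight ~ year, data = catch)          # placeholder analysis
saveRDS(fit, "model/fit.rds")

output.R and report.R follow the same read-then-write convention, so each step picks up exactly where the previous one left off.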
This package provides functions for propensity score estimation and weighting for continuous exposures as described in Zhu, Y., Coffman, D. L., & Ghosh, D. (2015). A boosting algorithm for estimating generalized propensity scores with continuous treatments. Journal of Causal Inference, 3(1), 25-40. <doi:10.1515/jci-2014-0022>.
Cleans spectrophotometry data obtained from the Denovix instrument. The package also provides an option to normalize the data in order to compare the quality of the samples obtained.