Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
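For example, here is a minimal sketch of calling the endpoint from Guile Scheme and printing the pagination headers; the host name toys.example.org is a placeholder, not this site's real address:

  (use-modules (ice-9 receive)
               (web client)
               (web response)
               (web uri))

  ;; Fetch the first page of results for the query "hello".
  (receive (resp body)
      (http-get (string->uri
                 "https://toys.example.org/api/packages?search=hello&page=1&limit=20"))
    ;; Pagination details (e.g. the number of pages) are in the response headers.
    (display (response-headers resp))
    (newline))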
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
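An entry typically looks like a standard Guix channel declaration; the name, URL and branch below are placeholders, and the exact fields expected by this channels.scm may differ:

  ;; Hypothetical channel entry -- replace the name and URL with your own.
  (channel
    (name 'my-channel)
    (url "https://git.example.org/my-channel.git")
    (branch "main"))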
Datasets used in "Statistical Methods for the Social Sciences" (SMSS) by Alan Agresti and Barbara Finlay.
Sometimes it is useful to serve up alternative shiny UIs depending on information passed in the request object, such as the value of a cookie or a query parameter. This package facilitates such switches.
Implements a promising yet little-explored protocol for bioacoustical analysis, the eigensound method of MacLeod, Krieger and Jones (2013) <doi:10.4404/hystrix-24.1-6299>. Eigensound is a multidisciplinary method focused on the direct comparison of stereotyped sounds from different species. 'SoundShape', in turn, provides the tools required for anyone to go from sound waves to Principal Components Analysis, using tools drawn from traditional bioacoustics (i.e. the tuneR and seewave packages), geometric morphometrics (i.e. the geomorph package) and multivariate analysis (e.g. the stats package). For more information, please see Rocha and Romano (2021) and check the SoundShape repository on GitHub for news and updates <https://github.com/p-rocha/SoundShape>.
Spectral and Average Autocorrelation Zero Distance Density ('sazed') is a method for estimating the season length of a seasonal time series. sazed is aimed at practitioners, as it employs only domain-agnostic preprocessing and does not depend on parameter tuning or empirical constants. The computation of sazed relies on the efficient autocorrelation computation methods suggested by Thibauld Nion (2012, URL: <https://etudes.tibonihoo.net/literate_musing/autocorrelations.html>) and by Bob Carpenter (2012, URL: <https://lingpipe-blog.com/2012/06/08/autocorrelation-fft-kiss-eigen/>).
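For reference, the FFT-based shortcut those references discuss rests on the Wiener-Khinchin relation: for a mean-centred, zero-padded series x, the full autocorrelation can be obtained as

  r = \mathrm{IFFT}\big(\mathrm{FFT}(x) \cdot \overline{\mathrm{FFT}(x)}\big),

which reduces the cost from O(n^2) for direct lagged sums to O(n log n).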
This package provides drop-in replacements for purrr and furrr mapping functions with built-in fault tolerance, automatic checkpointing, and seamless recovery capabilities. When long-running computations are interrupted due to errors, system crashes, or other failures, simply re-run the same code to automatically resume from the last checkpoint. Ideal for large-scale data processing, API calls, web scraping, and other time-intensive operations where reliability is critical. For purrr methodology, see Wickham and Henry (2023) <https://purrr.tidyverse.org/>.
The computer program is an efficient igneous norm algorithm and rock classification system written in R and run as a shiny app.
The RegLog system provides a set of shiny modules to handle the registration procedure for your users, alongside login, credential editing and password reset functionality. It provides support for popular SQL databases and, optionally, a googlesheet-based database for easy setup. For email sending it supports the emayili and gmailr backends. The architecture makes customization pretty straightforward. The authentication system created with shiny.reglog is designed to be optional: users don't need to be logged in to access your application, but when logged in, their user data can be used to read from and write to relational databases.
Standardized accuracy (staccuracy) is a framework for expressing accuracy scores such that 50% represents a reference level of performance and 100% is a perfect prediction. The staccuracy package provides tools for creating staccuracy functions as well as some recommended staccuracy measures. It also provides functions for some classic performance metrics such as mean absolute error (MAE), root mean squared error (RMSE), and area under the receiver operating characteristic curve (AUCROC), as well as their winsorized versions when applicable.
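As an illustration of the idea only (not necessarily the package's exact definition), a score with these two anchor points can be built by scaling an error metric against a reference error, e.g.

  \mathrm{staccuracy} = 1 - \frac{\mathrm{MAE}_{model}}{2\,\mathrm{MAE}_{ref}},

which equals 50% when the model's MAE matches the reference MAE and 100% when the MAE is zero.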
Scripts and exercises that use card shuffling to teach confidence interval comparisons for different estimators.
Analysis of metacommunities based on functional traits and phylogeny of the community components. The functions offered here implement, for the R environment, methods that have been available in the SYNCSA application written in C++ (by Valerio Pillar, available at <http://ecoqua.ecologia.ufrgs.br/SYNCSA.html>).
Dual interfaces, graphical and programmatic, designed for intuitive applications of Multilevel Regression and Poststratification (MRP). Users can apply the method to a variety of datasets, from electronic health records to sample survey data, through an end-to-end Bayesian data analysis workflow. The package provides robust tools for data cleaning, exploratory analysis, flexible model building, and insightful result visualization. For more details, see Si et al. (2020) <https://www150.statcan.gc.ca/n1/en/pub/12-001-x/2020002/article/00003-eng.pdf?st=iF1_Fbrh> and Si (2025) <doi:10.1214/24-STS932>.
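For reference, the poststratification step in MRP weights the cell-level estimates from the multilevel regression by known population cell counts:

  \hat{\theta}_{post} = \sum_j N_j \hat{\theta}_j \Big/ \sum_j N_j,

where \hat{\theta}_j is the model estimate for poststratification cell j and N_j is that cell's population size.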
Generates and predicts a set of linearly stacked Random Forest models using bootstrap sampling. Individual datasets may be heterogeneous (not all samples have full sets of features). Contains support for parallelization but the user should register their cores before running. This is an extension of the method found in Matlock (2018) <doi:10.1186/s12859-018-2060-2>.
Utility functions that help with common base-R problems relating to lists. Lists in base-R are very flexible. This package provides functions to quickly and easily characterize types of lists. That is, to identify if all elements in a list are null, data.frames, lists, or fully named lists. Other functionality is provided for the handling of lists, such as the easy splitting of lists into equally sized groups, and the unnesting of data.frames within fully named lists.
Bundles functions used to analyze the harmfulness of trial errors in criminal trials. Functions in the Scientific Analysis of Trial Errors ('sate') package help users estimate the probability that a jury will find a defendant guilty given jurors' preferences for a guilty verdict, along with the uncertainty of that estimate. Users can also compare actual and hypothetical trial conditions to conduct harmful error analysis. The conceptual framework is discussed by Barry Edwards, A Scientific Framework for Analyzing the Harmfulness of Trial Errors, UCLA Criminal Justice Law Review (2024) <doi:10.5070/CJ88164341>, and Barry Edwards, If The Jury Only Knew: The Effect Of Omitted Mitigation Evidence On The Probability Of A Death Sentence, Virginia Journal of Social Policy & the Law (2025) <https://vasocialpolicy.org/wp-content/uploads/2025/05/Edwards-If-The-Jury-Only-Knew.pdf>. The relationship between individual jurors' verdict preferences and the probability that a jury returns a guilty verdict has been studied by Davis (1973) <doi:10.1037/h0033951>, MacCoun & Kerr (1988) <doi:10.1037/0022-3514.54.1.21>, and Devine et al. (2001) <doi:10.1037/1076-8971.7.3.622>, among others.
Easily display user feedback in Shiny apps.
The definition of fuzzy random variables and methods for simulating them have been two challenging statistical problems over the past three decades. This package is organized around a particular definition of fuzzy random variables and simulates them using Piecewise Linear Fuzzy Numbers (PLFNs); see Coroianu et al. (2013) <doi:10.1016/j.fss.2013.02.005> for details about PLFNs. Some important statistical functions are provided for obtaining the membership function of the main statistics, such as the mean, variance, sum, standard deviation and coefficient of variation. Some applied advantages of the Sim.PLFN package are: (1) easily generating/simulating a random sample of PLFNs, (2) drawing the membership functions of the simulated PLFNs or of the statistical result, and (3) using the simulated PLFNs in arithmetic operations or importing them into other statistical computations. Finally, the Sim.PLFN package works on the basis of the FuzzyNumbers package.
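For orientation, the simplest piecewise linear fuzzy number is the trapezoidal one with knots a <= b <= c <= d; the PLFNs used here generalise this shape by allowing more linear pieces on each side:

  \mu(x) = \begin{cases} (x-a)/(b-a), & a \le x \le b \\ 1, & b \le x \le c \\ (d-x)/(d-c), & c \le x \le d \\ 0, & \text{otherwise} \end{cases}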
This package provides a set of plotting methods for simmer trajectories and simulations.
Software that leverages the capabilities of Circos by manipulating data, preparing configuration files, and running the Perl-native Circos directly from the R environment with minimal user intervention. Circos is novel software that addresses the challenges of visualizing genetic data by creating circular ideograms composed of tracks of heatmaps, scatter plots, line plots, histograms, links between common markers, glyphs, text, etc. Please see <http://www.circos.ca>.
This package provides a collection of functions to perform Detrended Fluctuation Analysis (DFA exponent), Guedes et al. (2019) <doi:10.1016/j.physa.2019.04.132>, the detrended cross-correlation coefficient (RHODCCA), Guedes & Zebende (2019) <doi:10.1016/j.physa.2019.121286>, and the DMCA cross-correlation coefficient and detrended multiple cross-correlation coefficient (DMC), Guedes, Silva-Filho & Zebende (2018) <doi:10.1016/j.physa.2021.125990>, both with a sliding-windows approach.
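For reference, the DFA exponent is the scaling exponent \alpha in the power-law relation between the detrended fluctuation function and the window size n,

  F(n) \propto n^{\alpha},

estimated as the slope of log F(n) against log n.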
This package implements the smooth LASSO estimator for the function-on-function linear regression model described in Centofanti et al. (2022) <doi:10.1016/j.csda.2022.107556>.
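For reference, the function-on-function linear regression model usually takes the form

  Y(t) = \alpha(t) + \int X(s)\,\beta(s,t)\,ds + \varepsilon(t),

with the smooth LASSO encouraging the coefficient surface \beta(s,t) to be both smooth and sparse (a standard formulation; see the cited paper for the exact estimator).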
Computes sequential A-, MV-, D- and E-optimal or near-optimal block and row-column designs for two-colour cDNA microarray experiments using linear fixed-effects and mixed-effects models where the interest is in a comparison of all possible elementary treatment contrasts. The package also optionally provides a graphical user interface (GUI), built with the tcltk R package, to make it more user friendly.
The Semi-Parametric Piecewise Distribution blends the Generalized Pareto Distribution for the tails with a kernel-based interior.
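For reference, the generalized Pareto tail beyond a threshold u has the standard exceedance form

  P(X - u > y \mid X > u) = (1 + \xi y/\sigma)^{-1/\xi}, \qquad y > 0,

(with the exponential limit as \xi \to 0), while a kernel density estimate covers the interior between the tail thresholds.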
Measures memory and CPU usage of R code by regularly taking snapshots of the output of the system command 'ps'. The package provides an entry point (albeit coarse) for profiling the use of system resources by R code run in parallel.
Stepwise models for the optimal linear combination of continuous variables in binary classification problems under Youden Index optimisation. Information on the models implemented can be found in Aznar-Gimeno et al. (2021) <doi:10.3390/math9192497>.
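For reference, the Youden index maximised here is

  J = \text{sensitivity} + \text{specificity} - 1,

evaluated at the optimal cut-point of the fitted linear combination of variables.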