Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in response headers.
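For example, the endpoint can be queried from R roughly as follows. This is a minimal sketch using the httr package; the host name is a placeholder, and the assumption that the body is JSON should be checked against the actual response.

library(httr)

base_url <- "https://example.org"   # placeholder: replace with this site's host

# Request the first page of results for the query "hello", 20 items per page.
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

# Pagination details (number of pages, etc.) arrive in the response headers.
headers(resp)

# The body lists the matching packages (assumed here to be JSON).
content(resp, as = "parsed")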
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Standardized accuracy (staccuracy) is a framework for expressing accuracy scores such that 50% represents a reference level of performance and 100% is a perfect prediction. The staccuracy package provides tools for creating staccuracy functions as well as some recommended staccuracy measures. It also provides functions for some classic performance metrics such as mean absolute error (MAE), root mean squared error (RMSE), and area under the receiver operating characteristic curve (AUCROC), as well as their winsorized versions when applicable.
This package provides a collection of functions for symbolic computation using the caracas package for structural equation models and other statistical analyses. Among its features is the ability to calculate the model-implied covariance (and correlation) matrix and the sampling covariance matrix of variable functions using the delta method.
Simulates data from model objects (e.g., from lm() or glm()) and plots the simulated data alongside the original data, so that how well the two match can be used to assess model fit.
Ordinary and modified statistics for symmetric linear regression models with small samples. The supported ordinary statistics include Wald, score, likelihood ratio and gradient. The modified statistics include score, likelihood ratio and gradient. Diagnostic tools associated with the fitted model are implemented. For more details see Medeiros and Ferrari (2017) <DOI:10.1111/stan.12107>.
Performs probabilistic linkage of two data files via a scaling procedure, following the methods described in Goldstein, H., Harron, K. and Cortina-Borja, M. (2017) <doi:10.1002/sim.7287>.
This package provides a lightweight tool that provides a reproducible workflow for selecting and executing appropriate statistical analysis in one-way or two-way experimental designs. The package automatically checks for data normality, conducts parametric (ANOVA) or non-parametric (Kruskal-Wallis) tests, performs post-hoc comparisons with Compact Letter Displays (CLD), and generates publication-ready boxplots, faceted plots, and heatmaps. It is designed for researchers seeking fast, automated statistical summaries and visualization. Based on established statistical methods including Shapiro and Wilk (1965) <doi:10.2307/2333709>, Kruskal and Wallis (1952) <doi:10.1080/01621459.1952.10483441>, Tukey (1949) <doi:10.2307/3001913>, Fisher (1925) <ISBN:0050021702>, and Wickham (2016) <ISBN:978-3-319-24277-4>.
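The decision logic described above can be sketched with base R functions; this is an illustration of the underlying methods (normality check, then ANOVA or Kruskal-Wallis with a post-hoc step), not this package's own interface, and the example data are made up.

# Hypothetical one-way design with two groups.
dat <- data.frame(y = c(rnorm(30, 10), rnorm(30, 12)),
                  group = rep(c("A", "B"), each = 30))

fit <- aov(y ~ group, data = dat)

if (shapiro.test(residuals(fit))$p.value > 0.05) {
  # Residuals look normal: parametric ANOVA, then Tukey post-hoc comparisons.
  print(summary(fit))
  print(TukeyHSD(fit))
} else {
  # Normality rejected: fall back to the non-parametric Kruskal-Wallis test.
  print(kruskal.test(y ~ group, data = dat))
}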
Easily display user feedback in Shiny apps.
This package helps users create virtual species for species distribution modelling (SDM). It includes several methods for generating virtual species distribution maps, which can be used in SDM studies. SDM uses environmental data from sites where a species occurs to predict all sites where the environmental conditions are suitable for the species to persist and where it may therefore be expected to occur.
Spike and slab for prediction and variable selection in linear regression models. Uses a generalized elastic net for variable selection.
Using any importation code designed for SAS users to read ASCII files into sas7bdat format, this package parses the INPUT block of a .sas syntax file to derive the parameters needed for a read.fwf() function call. This allows the user to specify the location of the ASCII file (often a .dat file) and the location of the SAS syntax file, and then load the data frame directly into R in a single step.
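A minimal sketch of that one-step import is shown below; the package and function names (read.SAScii() in particular) are assumptions about this package's interface, and the file paths are hypothetical.

library(SAScii)   # assumed package name

ascii_file <- "survey.dat"          # hypothetical ASCII data file
sas_script <- "survey_input.sas"    # hypothetical SAS syntax file with an INPUT block

# Parse the INPUT block and read the fixed-width file in one call; internally
# this derives the widths and column names needed for read.fwf().
survey <- read.SAScii(ascii_file, sas_script)
head(survey)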
An R data package containing setlists from all Bruce Springsteen concerts from 1973 to 2021. Also includes details for all of his songs, such as lyrics and albums. Data extracted from: <http://brucebase.wikidot.com/>.
Create panel data consisting of independent states from 1816 to the present. The package includes the Gleditsch & Ward (G&W) and Correlates of War (COW) lists of independent states, as well as helper functions for working with state panel data and standardizing other data sources to create country-year/month/etc. data.
Sample size calculation to detect dynamic treatment regime (DTR) effects based on change in clinical attachment level (CAL) outcomes from a non-surgical chronic periodontitis treatments study. The experiment is performed under a Sequential Multiple Assignment Randomized Trial (SMART) design. The clustered tooth (sub-unit) level CAL outcomes are skewed, spatially-referenced, and non-randomly missing. The implemented algorithm is available in Xu et al. (2019+) <arXiv:1902.09386>.
This package implements survival-model-based imputation for censored laboratory measurements, including Tobit-type models with several distribution options. Suitable for data with values below detection or quantification limits, the package identifies the best-fitting distribution and produces realistic imputations that respect the censoring thresholds.
The systemPipeShiny (SPS) framework comes with many UI and server components. However, installing the whole framework is heavy and takes some time. If you would like to use UI and server components from SPS in your own Shiny apps, do not hesitate to try this package.
Extracts and summarizes metadata from data frames, including variable names, labels, types, and missing values. Computes compact descriptive statistics, frequency tables, and cross-tabulations to assist with efficient data exploration. Includes an interactive and exportable codebook generator for documenting variable metadata. Facilitates the identification of missing data patterns and structural issues in datasets. Designed to streamline initial data management and exploratory analysis workflows within R.
Extension to the spatstat family of packages, for analysing large datasets of spatial points on a network. The geometrically-corrected K function is computed using a memory-efficient tree-based algorithm described by Rakshit, Baddeley and Nair (2019).
Connect to a remote server over SSH to transfer files via SCP, set up a secure tunnel, or run a command or script on the host while streaming stdout and stderr directly to the client.
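Assuming this is the ssh package built on libssh, a typical session looks roughly like the sketch below; the host, command, and file names are placeholders.

library(ssh)

# Open a connection (key-based or interactive authentication by default).
session <- ssh_connect("user@example.org")

# Run a command on the host, streaming stdout/stderr back to the client.
ssh_exec_wait(session, command = "uptime")

# Transfer files via SCP.
scp_download(session, files = "/var/log/syslog", to = ".")
scp_upload(session, files = "report.pdf", to = "~/reports")

ssh_disconnect(session)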
Phenotypic analysis of field trials using mixed models with and without spatial components. One of a series of statistical genetic packages for streamlining the analysis of typical plant breeding experiments developed by Biometris. Some functions have been created to be used in conjunction with the R package asreml for the ASReml software, which can be obtained upon purchase from VSN international (<https://vsni.co.uk/software/asreml-r/>).
Provides regularized maximum covariance analysis incorporating smoothness, sparseness and orthogonality of coupled patterns by using the alternating direction method of multipliers algorithm. The method can be applied to either regularly or irregularly spaced data, including 1D, 2D, and 3D (Wang and Huang, 2018 <doi:10.1002/env.2481>).
Allows the user to conduct a simulation-based quantitative bias analysis using covariate structures generated with individual-level data to characterize the bias arising from unmeasured confounding. Users can specify their desired data-generating mechanisms to simulate data and quantitatively summarize findings in an end-to-end application using this package.
This implementation of the Empirical Mode Decomposition (EMD) works in two dimensions simultaneously and can be applied to spatial data. It can handle both gridded and un-gridded datasets.
This package provides a convenient interface for formatting SQL queries directly within R. It acts as a wrapper around the sql_format Rust crate. The package allows you to format SQL code with customizable options, including indentation, case formatting, and more, ensuring your SQL queries are clean, readable, and consistent.
This package implements the Smoothness-Penalized Deconvolution method for estimating a probability density under measurement error of Kent and Ruppert (2023) <doi:10.1080/01621459.2023.2259028>. The estimator is formed by computing a histogram of the error-contaminated data, and then finding an estimate that minimizes a reconstruction error plus a smoothness-inducing penalty term. The primary function, sped(), takes the data and error distribution, and returns the estimator as a function.
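Only the sped() entry point is named above; the sketch below illustrates the intended workflow, with the package name, argument names, and error specification being assumptions for illustration.

library(sped)   # assumed package name

set.seed(1)
x <- rgamma(500, shape = 2)      # true (unobserved) values
y <- x + rnorm(500, sd = 0.3)    # error-contaminated observations

# Fit the Smoothness-Penalized Deconvolution estimator from the contaminated
# data and the (assumed normal) error distribution; the return value is
# described as a function.
fhat <- sped(y, error = "normal")   # hypothetical argument names

# Evaluate the estimated density on a grid.
curve(fhat(x), from = 0, to = 8)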