Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
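For example, the endpoint can be queried from R; this is a minimal sketch using the httr package, and the base URL below is a placeholder to replace with the actual host:

    library(httr)

    resp <- GET(
      "https://example.org/api/packages",   # placeholder host
      query = list(search = "hello", page = 1, limit = 20)
    )

    content(resp)   # the matching packages
    headers(resp)   # pagination info (e.g., number of pages) arrives here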
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides text analysis in R, focusing on the use of a tokenized text format. In this format, the positions of tokens are maintained, and each token can be annotated (e.g., part-of-speech tags, dependency relations). Prominent features include advanced Lucene-like querying for specific tokens or contexts (e.g., documents, sentences), similarity statistics for words and documents, exporting to a document-term matrix (DTM) for compatibility with many text analysis packages, and the possibility to reconstruct the original text from tokens to facilitate interpretation.
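As an illustration of what such a tokenized, annotated format looks like (column names here are hypothetical, not the package's actual schema):

    tokens <- data.frame(
      doc_id   = c(1, 1, 1, 1),
      sentence = c(1, 1, 1, 1),
      position = 1:4,
      token    = c("The", "cat", "sat", "down"),
      pos_tag  = c("DET", "NOUN", "VERB", "ADV")
    )
    paste(tokens$token, collapse = " ")   # reconstruct the original text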
This package implements the multiple changepoint algorithm PELT with a nonparametric cost function based on the empirical distribution of the data. This package extends the changepoint package (see Killick, R. and Eckley, I. (2014) <doi:10.18637/jss.v058.i03>).
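A minimal sketch, assuming the package's cpt.np() interface (see its documentation for the authoritative arguments):

    library(changepoint.np)

    set.seed(1)
    x <- c(rnorm(100, mean = 0), rnorm(100, mean = 3))  # one change in distribution
    fit <- cpt.np(x)   # nonparametric PELT on the empirical distribution
    cpts(fit)          # estimated changepoint location(s)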
Simple, fast, and automatic encodings for categorical data using a data.table backend. Most of the methods are an implementation of "Sufficient Representation for Categorical Variables" by Johannemann, Hadad, Athey, Wager (2019) <arXiv:1908.09874>, particularly their mean, sparse principal component analysis, low rank representation, and multinomial logit encodings.
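To illustrate the simplest of these, a mean (target) encoding can be sketched directly in data.table; this shows the idea, not the package's own API:

    library(data.table)

    dt <- data.table(
      category = c("a", "a", "b", "b", "c"),
      y        = c(1, 3, 2, 4, 5)
    )
    dt[, mean_encoding := mean(y), by = category]   # each level replaced by its group mean of y
    dt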
An open-source implementation of the Congruent Matching Cells method for cartridge case identification as proposed by Song (2013) <https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=911193> as well as an extension of the method proposed by Tong et al. (2015) <doi:10.6028/jres.120.008>. Provides a wide range of pre-, inter-, and post-processing options when working with cartridge case scan data and their associated comparisons. See the cmcR package website for more details and examples.
Load Current Population Survey (CPS) microdata into R using the Census Bureau Data API (<https://www.census.gov/data/developers/data-sets.html>), including basic monthly CPS and CPS ASEC microdata.
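A minimal sketch, assuming the package exposes a get_basic() helper roughly as below (argument names are illustrative; a Census API key is required):

    library(cpsR)

    basic <- get_basic(year = 2022, month = 3,
                       vars = c("PRTAGE", "PEMLR"),
                       key  = Sys.getenv("CENSUS_API_KEY"))
    head(basic)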
Analyzes and modifies metabolomics raw data (generated using Gas Chromatography-Atmospheric Pressure Chemical Ionization-Mass Spectrometry) to correct overloaded signals, i.e., ion intensities exceeding detector saturation, which lead to cut-off peaks. Data in xcmsRaw format are accepted as input, and mzXML files can be processed alternatively. Overloaded signals are detected automatically and modified using a Gaussian or an Isotopic-Ratio approach. Quality control plots are generated, and corrected data are stored within the original xcmsRaw object or mzXML file, respectively, to allow further processing.
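The Gaussian idea can be sketched generically: refit the peak model using only the non-saturated points, then reconstruct the apex (a sketch of the principle, not the package's implementation):

    set.seed(42)
    rt  <- seq(-5, 5, by = 0.1)
    tru <- 1000 * exp(-rt^2 / 2)                       # true peak shape
    obs <- pmin(tru, 600) + rnorm(length(rt), sd = 5)  # detector saturates at 600

    ok  <- obs < 580                                   # keep clearly non-saturated points
    fit <- nls(obs ~ A * exp(-(rt - mu)^2 / (2 * s^2)),
               subset = ok,
               start = list(A = 800, mu = 0, s = 1))
    coef(fit)   # recovered amplitude is close to 1000, the cut-off apex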
This package provides functions and data files to help CE Public-Use Microdata (PUMD) users calculate annual estimated expenditure means, standard errors, and quantiles according to the methods used by the CE with PUMD. For more information on the CE please visit <https://www.bls.gov/cex>. For further reading on CE estimate calculations please see the CE Calculation section of the U.S. Bureau of Labor Statistics (BLS) Handbook of Methods at <https://www.bls.gov/opub/hom/cex/calculation.htm>. For further information about CE PUMD please visit <https://www.bls.gov/cex/pumd.htm>.
Set chunk hooks for R Markdown documents <https://rmarkdown.rstudio.com/>, and improve user experience. For example, change units of figure sizes, benchmark chunks, and number lines on code blocks.
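Such hooks are plain knitr machinery; for instance, a source hook that numbers the lines of each code block can be sketched with knitr directly (this uses the underlying mechanism, not the package's own helpers):

    library(knitr)

    knit_hooks$set(source = function(x, options) {
      lines    <- unlist(strsplit(x, "\n"))
      numbered <- sprintf("%2d  %s", seq_along(lines), lines)
      paste0("```\n", paste(numbered, collapse = "\n"), "\n```\n")
    })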
Datasets used in the book "Categorical Data Analysis" by Agresti (2012, ISBN:978-0-470-46363-5) but not printed in the book. Datasets and help pages were automatically produced from the source <https://users.stat.ufl.edu/~aa/cda/data.html> by the R script foo.R, which can be found in the GitHub repository.
An interactive document on the topic of confusion matrix analysis using the rmarkdown and shiny packages. Runtime examples are provided in the package functions as well as at <https://predanalyticssessions1.shinyapps.io/ConfusionMatrixShiny/>.
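The core quantities involved can be sketched in base R from a 2x2 confusion matrix:

    cm <- matrix(c(50, 10,    # TP, FN
                    5, 35),   # FP, TN
                 nrow = 2, byrow = TRUE)
    accuracy    <- (cm[1, 1] + cm[2, 2]) / sum(cm)
    sensitivity <- cm[1, 1] / (cm[1, 1] + cm[1, 2])   # TP / (TP + FN)
    specificity <- cm[2, 2] / (cm[2, 1] + cm[2, 2])   # TN / (FP + TN)
    c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity)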
Clustered covariate regression enables estimation and inference in both linear and non-linear models with linear predictor functions even when the design matrix is column rank deficient. Routines in this package implement algorithms in Soale and Tsyawo (2019) <doi:10.13140/RG.2.2.32355.81441>.
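The problem being addressed is easy to reproduce in base R: with a column rank-deficient design, ordinary lm() cannot estimate all coefficients:

    set.seed(1)
    x1 <- rnorm(50)
    x2 <- 2 * x1                 # x2 is collinear with x1: rank-deficient design
    y  <- 1 + x1 + rnorm(50)
    coef(lm(y ~ x1 + x2))        # the x2 coefficient comes back NA (aliased)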
Imports conversation transcripts into R, concatenates them into a single dataframe appending event identifiers, cleans and formats the text, then yokes user-specified psycholinguistic database values to each word. ConversationAlign then computes alignment indices between two interlocutors across each transcript for >40 possible semantic, lexical, and affective dimensions. In addition to alignment, ConversationAlign also produces a summary table of analytics (e.g., token count, type-token ratio) describing your particular text corpus.
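For instance, the type-token ratio mentioned above is simply:

    tokens <- c("we", "should", "go", "we", "should", "stay")
    length(unique(tokens)) / length(tokens)   # 4 types / 6 tokens ~= 0.67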
Change point tests for joint distributions and copulas using pseudo-observations with multipliers or bootstrap. The processes used here have been defined in Bucher, Kojadinovic, Rohmer & Segers <doi:10.1016/j.jmva.2014.07.012> and Nasri & Remillard <doi:10.1016/j.jmva.2019.03.002>.
This package contains the basic functions to apply the unified framework for partitioning the drivers of stability of ecological communities proposed by Segrestin et al. (2024) <doi:10.1111/geb.13828>.
Computes the Conover-Iman test (1979) for 0th-order stochastic dominance and reports the results among multiple pairwise comparisons after a Kruskal-Wallis omnibus test for 0th-order stochastic dominance among k groups (Kruskal and Wallis, 1952). conover.test makes k(k-1)/2 multiple pairwise comparisons based on the Conover-Iman t-test statistic of the rank differences. The null hypothesis for each pairwise comparison is that the probability of observing a randomly selected value from the first group that is larger than a randomly selected value from the second group equals one half; this null hypothesis corresponds to that of the Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the distributions are assumed identical except for a difference in location, the Conover-Iman test may be understood as a test for median difference and for mean difference. conover.test accounts for tied ranks. The Conover-Iman test is strictly valid if and only if the corresponding Kruskal-Wallis null hypothesis is rejected.
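A minimal sketch, assuming the conover.test() interface of a value vector plus a grouping vector (see the package documentation for the full arguments):

    library(conover.test)

    x <- c(rnorm(20, 0), rnorm(20, 1), rnorm(20, 2))
    g <- rep(c("a", "b", "c"), each = 20)
    conover.test(x, g)   # Kruskal-Wallis omnibus test, then k(k-1)/2 = 3 pairwise comparisons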
The CoTiMA package performs meta-analyses of correlation matrices of repeatedly measured variables taken from studies that used different time intervals. Different time intervals between measurement occasions impose problems for meta-analyses because the effects (e.g., cross-lagged effects) cannot be simply aggregated, for example, by means of common fixed or random effects analysis. However, continuous time math, which is applied in CoTiMA, can be used to extrapolate or intrapolate the results from all studies to any desired time lag. By this, effects obtained in studies that used different time intervals can be meta-analyzed. CoTiMA fits models to empirical data using the structural equation model (SEM) package ctsem; the effects specified in a SEM are related to parameters that are not directly included in the model (i.e., continuous time parameters; together, they represent the continuous time structural equation model, CTSEM). Statistical model comparisons and significance tests are then performed on the continuous time parameter estimates. CoTiMA also allows analysis of publication bias (Egger's test, PET-PEESE estimates, zcurve analysis, etc.) and analysis of statistical power (post hoc power, required sample sizes). See Dormann, C., Guthier, C., & Cortina, J. M. (2019) <doi:10.1177/1094428119847277> and Guthier, C., Dormann, C., & Voelkle, M. C. (2020) <doi:10.1037/bul0000304>.
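The continuous-time identity that makes this possible can be sketched in base R: the discrete-time auto- and cross-lagged effects for an interval dt equal the matrix exponential expm(A * dt) of the drift matrix A, so effects can be moved to any desired lag (values below are made up):

    A <- matrix(c(-0.5,  0.1,
                   0.2, -0.4), nrow = 2, byrow = TRUE)  # hypothetical drift matrix

    expm_via_eigen <- function(M) {   # matrix exponential via eigendecomposition
      e <- eigen(M)
      Re(e$vectors %*% diag(exp(e$values)) %*% solve(e$vectors))
    }

    expm_via_eigen(A * 1)   # implied lagged effects for a 1-unit interval
    expm_via_eigen(A * 3)   # the same effects extrapolated to a 3-unit interval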
Plots the coefficients from model objects, quickly showing the user the point estimates and confidence intervals for fitted models.
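For example, with a fitted linear model:

    library(coefplot)

    fit <- lm(mpg ~ hp + wt, data = mtcars)
    coefplot(fit)   # point estimates with confidence-interval bars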
Processes Digital Cover Photography images of tree canopies to get canopy attributes like Foliage Cover and Leaf Area Index. A detailed description of the methods is given in Chianucci et al. (2022) <doi:10.1007/s00468-018-1666-3>.
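The quantities involved can be sketched generically from a classified canopy image (a common gap-fraction inversion, not necessarily this package's exact algorithm):

    canopy_mask <- matrix(rbinom(100 * 100, 1, 0.7), 100, 100)  # 1 = foliage pixel

    foliage_cover <- mean(canopy_mask)
    gap_fraction  <- 1 - foliage_cover
    k   <- 0.5                          # assumed extinction coefficient
    lai <- -log(gap_fraction) / k       # Beer-Lambert-type inversion
    c(foliage_cover = foliage_cover, LAI = lai)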
Responsive and modern HTML card essentials for shiny applications and dashboards. This novel card component in Bootstrap provides a flexible and extensible content container with multiple variants and options for building robust R-based apps, e.g., for graphing or machine learning projects. The features rely on a combination of jQuery <https://jquery.com> and CSS styles to improve the card functionality.
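Generic Bootstrap card markup of this kind can be sketched with htmltools tags (this is plain Bootstrap markup, not this package's own constructor):

    library(htmltools)

    card <- tags$div(
      class = "card",
      tags$div(class = "card-header", "Model results"),
      tags$div(class = "card-body",
               tags$h5(class = "card-title", "Accuracy: 0.91"),
               tags$p(class = "card-text", "Validation metrics for the latest run."))
    )
    browsable(card)   # preview in a browser / RStudio viewer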
This package provides a toolbox for developing applications, games, simulations, or agent-based models in the R terminal. Included functions allow users to move the cursor around the terminal screen, change text colors and attributes, clear the screen, hide and show the cursor, map key presses to functions, draw shapes and curves, among others. Most functionalities require users to be in a terminal (not the R GUI).
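The underlying mechanism is ANSI escape sequences written to the terminal; a few raw examples (these require a real terminal, not the R GUI):

    cat("\033[2J\033[H")            # clear the screen and home the cursor
    cat("\033[10;5H")               # move the cursor to row 10, column 5
    cat("\033[31mred text\033[0m")  # red text, then reset attributes
    cat("\033[?25l")                # hide the cursor ("\033[?25h" shows it again)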
Functionality for segmenting individual trees from a forest stand scanned with a close-range (e.g., terrestrial or mobile) laser scanner. The complete workflow from a raw point cloud to a complete tabular forest inventory is provided. The package contains several algorithms for detecting tree bases and a graph-based algorithm to attach all remaining points to these tree bases. It builds heavily on the lidR package. A description of the segmentation algorithm can be found in Larysch et al. (2025) <doi:10.1007/s10342-025-01796-z>.
Fits the multiclass classification version of Bayesian Adaptive Smoothing Splines (CBASS) to data using reversible jump MCMC. The multiclass classification problem consists of a response variable that takes on unordered categorical values with at least three levels, and a set of inputs for each response variable. The CBASS model consists of a latent multivariate probit formulation, and the means of the latent Gaussian random variables are specified using adaptive regression splines. The MCMC alternates updates of the latent Gaussian variables and the spline parameters. All the spline parameters (variables, signs, knots, number of interactions), including the number of basis functions used to model each latent mean, are inferred. Functions are provided to process inputs, initialize the chain, run the chain, and make predictions. Predictions are made on a probabilistic basis, where, for a given input, the probabilities of each categorical value are produced. See Marrs and Francom (2023) "Multiclass classification using Bayesian multivariate adaptive regression splines" Under review.
Allows clustering of incomplete observations by addressing missing values using multiple imputation. To achieve this goal, the methodology consists of three steps, following Audigier and Niang (2022) <doi:10.1007/s11634-022-00519-1>. I) Missing data imputation using dedicated models. Four multiple imputation methods are proposed, two based on joint modelling and two fully sequential methods, as discussed in Audigier et al. (2021) <doi:10.48550/arXiv.2106.04424>. II) Cluster analysis of the imputed data sets. Six clustering methods are available (distance-based or model-based), but custom methods can also be easily used. III) Partition pooling. The set of partitions is aggregated using a Non-negative Matrix Factorization based method. An associated instability measure is computed by bootstrap (see Fang, Y. and Wang, J., 2012 <doi:10.1016/j.csda.2011.09.003>). Among applications, this instability measure can be used to choose the number of clusters in the presence of missing values. The package also proposes several diagnostic tools to tune the number of imputed data sets, to tune the number of iterations in fully sequential imputation, to check the fit of imputation models, etc.
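A generic sketch of the three-step idea using mice and kmeans (this illustrates the workflow only, not the package's own functions or its NMF-based pooling):

    library(mice)

    data(airquality)
    imp <- mice(airquality, m = 5, printFlag = FALSE)   # I) multiple imputation

    partitions <- lapply(seq_len(5), function(i) {      # II) cluster each imputed data set
      completed <- complete(imp, i)
      kmeans(scale(completed), centers = 3)$cluster
    })

    # III) pool: a simple consensus (co-clustering) matrix as a stand-in
    co <- Reduce(`+`, lapply(partitions, function(p) outer(p, p, `==`)))
    co <- co / length(partitions)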
Calculates daily climate water balance for irrigation purposes and also calculates the reference evapotranspiration (ET) using three methods: Penman and Monteith (Allen et al. 1998, ISBN:92-5-104219-5); Priestley and Taylor (1972) <doi:10/cr3qwn>; or Hargreaves and Samani (1985) <doi:10.13031/2013.26773>. Users may specify a management allowed depletion (MAD), which is used to suggest when to irrigate. The functionality allows for the use of crop and water stress coefficients as well.
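Of the three, Hargreaves and Samani (1985) is the simplest to sketch; a standard formulation in R (Ra is extraterrestrial radiation in MJ m^-2 day^-1, and 0.408 converts it to mm/day):

    et0_hargreaves <- function(tmax, tmin, ra) {
      tmean <- (tmax + tmin) / 2
      0.0023 * 0.408 * ra * (tmean + 17.8) * sqrt(tmax - tmin)
    }
    et0_hargreaves(tmax = 30, tmin = 18, ra = 35)   # roughly 4.8 mm/day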