Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
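As a quick illustration, the same query could be issued from R with the httr package; the base URL below is a placeholder and the exact shape of the JSON body is not documented here.

library(httr)

## Placeholder base URL -- replace with this site's actual address.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

headers(resp)                             # pagination details live in the response headers
results <- content(resp, as = "parsed")   # parsed list of matching packages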
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides classes and methods for objects whose indexing naturally starts from zero. Subsetting, indexing, and mathematical operations are defined naturally between lagged objects, and between lagged and base R objects. Recycling is not used, except for singletons. The single bracket operator doesn't drop dimensions by default.
R lists, especially nested lists, can be very difficult to visualize or represent. Sometimes str() is not enough, so this suite of htmlwidgets is designed to help you see, understand, and maybe even modify your R lists. The reactjson() function requires the reactR package, which can be installed from CRAN or from <https://github.com/timelyportfolio/reactR>.
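For instance, assuming this is the listviewer package (the package name is not stated above), a nested list can be opened in an interactive widget roughly like this:

library(listviewer)

x <- list(a = list(b = 1:3, c = list(d = "deep")), e = letters[1:5])

jsonedit(x)      # interactive tree view/editor for the list
## reactjson(x)  # alternative widget; requires the reactR package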
Linear dimension reduction subspaces can be uniquely defined using orthogonal projection matrices. This package provides tools to compute distances between such subspaces and to compute the average subspace. For details see Liski E., Nordhausen K., Oja H., Ruiz-Gazen A. (2016) Combining Linear Dimension Reduction Subspaces <doi:10.1007/978-81-322-3643-6_7>.
Simulation and estimation of univariate and multivariate log-GARCH models. The main functions of the package are: lgarchSim(), mlgarchSim(), lgarch() and mlgarch(). The first two functions simulate from a univariate and a multivariate log-GARCH model, respectively, whereas the latter two estimate a univariate and multivariate log-GARCH model, respectively.
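A minimal sketch, assuming the package is loaded as lgarch and using the simulation defaults:

library(lgarch)

set.seed(123)
y   <- lgarchSim(500)   # simulate 500 observations from a univariate log-GARCH
fit <- lgarch(y)        # estimate a univariate log-GARCH model on the series
coef(fit)               # estimated parameters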
Analysis of upward and downward trends in stock data. The technical analysis indicator functions cover trend lines, reversal patterns, and market trend.
Set of tools for analyzing vertical fuel continuity at the tree level using Airborne Laser Scanning data. The workflow consists of: 1) calculating the vertical height profile of each segmented tree; 2) identifying gaps and fuel layers; 3) estimating the distance between fuel layers; and 4) retrieving the base height and depth of the fuel layers. Additional functions recalculate the previous metrics after considering only distances greater than a certain threshold. Moreover, the package i) calculates the percentage of Leaf Area Density (LAD) comprised in each fuel layer, ii) removes fuel layers with a LAD percentage below 10, and iii) recalculates the distances among the remaining ones. It also identifies the crown base height (CBH) based on different criteria: the fuel layer with the highest LAD percentage, and the fuel layers located at the largest and at the last distance. When there is only one fuel layer, it identifies the CBH by performing a segmented linear regression (breaking points) on the cumulative sum of LAD as a function of height. Finally, a collection of plotting functions is provided to represent: i) the initial gaps and fuel layers; ii) the fuel layers' base heights, depths, and gaps with distances greater than a certain threshold; and iii) the CBH based on the different criteria. The methods implemented in this package are original and have not been published elsewhere.
This package provides functions for the longitudinal genetic random field method (He et al., 2015, <doi:10.1111/biom.12310>) to test the association between a longitudinally measured quantitative outcome and a set of genetic variants in a gene/region.
This package provides a Low Rank Correction Variational Bayesian algorithm for high-dimensional multi-source heterogeneous quantile linear models. More details have been written up in a paper submitted to the journal Statistics in Medicine, and the details of variational Bayesian methods can be found in Ray and Szabo (2021) <doi:10.1080/01621459.2020.1847121>. It simultaneously performs parameter estimation and variable selection. The algorithm supports two model settings: (1) local models, where variable selection is only applied to homogeneous coefficients, and (2) global models, where variable selection is also performed on heterogeneous coefficients. Two forms of parameter estimation are output: one is the standard variational Bayesian estimation, and the other is the variational Bayesian estimation corrected with low-rank adjustment.
Utilities for querying plain-text accounting files from Ledger, HLedger, and Beancount.
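Assuming this is the ledger package with its register() reader, a journal file could be pulled into a data frame along these lines (the file path is a placeholder):

library(ledger)

tx <- register("journal.ledger")  # other supported formats should read the same way
head(tx)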
Label-free bottom-up proteomics expression data is often affected by data heterogeneity and missing values. Normalization and missing value imputation are commonly used techniques to address these issues and make the dataset suitable for further downstream analysis. This package provides an optimal combination of normalization and imputation methods for the dataset. The package utilizes three normalization methods and three imputation methods. The statistical evaluation measures named pooled coefficient of variation, pooled estimate of variance, and pooled median absolute deviation are used for selecting the best combination of normalization and imputation methods for the given dataset. The user can also visualize the results using the various plots available in this package, and can perform differential expression analysis between two sample groups with the function included in this package. The three normalization methods, three imputation methods, and three evaluation measures were chosen for this study based on the research papers published by Välikangas et al. (2016) <doi:10.1093/bib/bbw095>, Jin et al. (2021) <doi:10.1038/s41598-021-81279-4>, and Srivastava et al. (2023) <doi:10.2174/1574893618666230223150253>. This work was published by Sakthivel et al. (2025) <doi:10.1021/acs.jproteome.4c00552>.
Generates the Langa-Weir classification of cognitive function for the 2022 Health and Retirement Study (HRS) cognition data. It is particularly useful for researchers studying cognitive aging who wish to work with the most recent release of HRS data. The package provides user-friendly functions for data preprocessing, scoring, and classification, allowing users to easily apply the Langa-Weir classification system. For details, see the HRS <https://hrsdata.isr.umich.edu/> and the Langa-Weir classifications <https://hrsdata.isr.umich.edu/data-products/langa-weir-classification-cognitive-function-1995-2020>.
This package provides a collection of helper functions and illustrative datasets to support learning and teaching of data science with R. The package is designed as a companion to the book <https://book-data-science-r.netlify.app>, making key data science techniques accessible to individuals with minimal coding experience. Functions include tools for data partitioning, performance evaluation, and data transformations (e.g., z-score and min-max scaling). The included datasets are curated to highlight practical applications in data exploration, modeling, and multivariate analysis. An early inspiration for the package came from an ancient Persian idiom about "eating the liveR," symbolizing deep and immersive engagement with knowledge.
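For reference, the two transformations named above reduce to a line of base R each; this is the underlying arithmetic, not the package's own helper functions:

x <- c(2, 5, 9, 14, 20)

z_scaled  <- (x - mean(x)) / sd(x)             # z-score: mean 0, standard deviation 1
mm_scaled <- (x - min(x)) / (max(x) - min(x))  # min-max: rescaled to [0, 1]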
Implementation of the Locally Scaled Density Based Clustering (LSDBC) algorithm proposed by Bicici and Yuret (2007) <doi:10.1007/978-3-540-71618-1_82>. This package also contains some supporting functions such as betaCV() and get_spectral().
These functions take a gene expression value matrix, a primary covariate vector, and a matrix of additional known covariates. A two-stage analysis is applied to counter the effects of latent variables on the rankings of hypotheses. The estimation and adjustment of latent effects are proposed by Sun, Zhang and Owen (2011). "leapp" is developed in the context of microarray experiments, but may be used as a general tool for high-throughput data sets where dependence may be involved.
This package provides methods for assessing agreement between repeated measurements obtained by two or more methods using the longitudinal concordance correlation coefficient (LCC). Polynomial mixed-effects models (via nlme) describe how concordance, Pearson correlation, and accuracy evolve over time. Functions are provided for model fitting, diagnostic plots, extraction of summaries, and non-parametric bootstrap confidence intervals (including parallel computation), following Oliveira et al. (2018) <doi:10.1007/s13253-018-0321-1>.
This package provides a ggplot2 extension that focusses on expanding the plotter's arsenal of guides. Guides in ggplot2 include axes and legends. legendry offers new axes and annotation options, as well as new legends and colour displays.
This package contains LUE_BIOMASS(), LUE_BIOMASS_VPD(), LUE_YIELD(), and LUE_YIELD_VPD() to estimate aboveground biomass and crop yield, first by calculating the Absorbed Photosynthetically Active Radiation (APAR) and then the actual values of light use efficiency, with and without vapour pressure deficit (Shi et al. (2007) <doi:10.2134/agronj2006.0260>).
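The functions follow the standard light use efficiency model (biomass = APAR x conversion efficiency); the sketch below shows that relationship with made-up values and is not the package's own interface:

PAR   <- 10.5   # incident photosynthetically active radiation (MJ m-2 day-1), example value
fAPAR <- 0.6    # fraction of PAR absorbed by the canopy, example value
LUE   <- 1.4    # light use efficiency (g dry matter per MJ APAR), example value

APAR    <- PAR * fAPAR  # absorbed photosynthetically active radiation
biomass <- APAR * LUE   # aboveground biomass increment (g m-2 day-1)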
This package provides tools for maximum likelihood estimation of parameters of scientific models. Based on Goffe et al. (1994) <doi:10.1016/0304-4076(94)90038-8>.
This package provides a largish collection of example datasets, including several classics. Many of these datasets are well suited for regression, classification, and visualization.
Client for programmatic access to the Lake Multi-scaled Geospatial and Temporal database <https://lagoslakes.org>, with functions for accessing lake water quality and ecological context data for the US.
Fast implementations to compute the genetic covariance matrix, the Jaccard similarity matrix, the s-matrix (the weighted Jaccard similarity matrix), and the (classic or robust) genomic relationship matrix of a (dense or sparse) input matrix (see Hahn, Lutz, Hecker, Prokopenko, Cho, Silverman, Weiss, and Lange (2020) <doi:10.1002/gepi.22356>). Full support for sparse matrices from the R package Matrix. Additionally, an implementation of the power method (von Mises iteration) to compute the largest eigenvector of a matrix is included, a function to perform an automated full run of global and local correlations in population stratification data, a function to compute sliding windows, and a function to invert minor alleles and to select those variants/loci exceeding a minimal cutoff value. New functionality in locStra allows one to extract the k leading eigenvectors of the genetic covariance matrix, Jaccard similarity matrix, s-matrix, and genomic relationship matrix via fast PCA without actually computing the similarity matrices. The fast PCA to compute the k leading eigenvectors can now also be run directly from bed + bim + fam files.
This package provides a unified interface to large language models across multiple providers. Supports text generation, structured output with optional JSON Schema validation, and embeddings. Includes tidyverse-friendly helpers, chat sessions, consistent error handling, and parallel batch tools.
Fits look-up tables by filling entries with the mean or median values of observations that fall in partitions of the feature space. Partitions can be determined by the user via the input argument feature.boundaries, and the dimensions of the feature space can be any combination of continuous and categorical features provided by the data set. A Predict function directly fetches the corresponding entry value, and a default value is defined as the mean or median of all available observations. The table and other components are represented using the S4 class lookupTable.
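The fitting step amounts to binning the features at user-chosen boundaries and averaging the response within each cell; a base R sketch of that idea (not the package's own interface) follows:

set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.2)

boundaries   <- c(0, 2.5, 5, 7.5, 10)                   # analogous to feature.boundaries
cell         <- cut(x, breaks = boundaries, include.lowest = TRUE)
table_values <- tapply(y, cell, mean)                   # one entry per partition

## Prediction is a direct fetch of the entry for the cell a new point falls in.
table_values[as.character(cut(6.3, breaks = boundaries, include.lowest = TRUE))]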
Create tables from within R directly on Google Slides presentations. Currently supports matrix, data.frame and flextable objects.