Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
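For example, a minimal client sketch in R using the curl and jsonlite packages; the base URL below is a placeholder, not this service's actual host:

library(curl)      # curl_fetch_memory() returns both the body and the headers
library(jsonlite)

res <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")
packages <- fromJSON(rawToChar(res$content))   # the matching packages
parse_headers(res$headers)                     # pagination information is in the headers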
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
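As a rough sketch, an entry might look like the standard Guix channel record below; the actual schema used in channels.scm may differ, and the name and URL are hypothetical:

(channel
  (name 'example-channel)                           ; hypothetical channel name
  (url "https://example.org/example-channel.git"))  ; hypothetical repository URL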
A set of tools for mapping categorical response variables based on principal component analysis (PCA) and multidimensional unfolding (MDU).
This package provides a framework to load text and Excel files through a Shiny graphical interface. It allows renaming, transforming, ordering, and removing variables. It includes basic exploratory methods such as the mean, median, mode, normality tests, histograms, and correlations.
Computes log-transformed kernel density estimates for positive data using a variety of kernels. It follows the methods described in Jones, Nguyen and McLachlan (2018) <doi:10.21105/joss.00870>.
This package performs recursive partitioning of linear and nonlinear mixed effects models, specifically for longitudinal data. The package is an extension of the original longRPart package by Stewart and Abdolell (2013) <https://cran.r-project.org/package=longRPart>.
This package contains functions for a flexible varying-coefficient landmark model that incorporates multiple short-term events into the prediction of long-term survival probability. For more information about landmark prediction, please see Li, W., Ning, J., Zhang, J., Li, Z., Savitz, S.I., Tahanan, A., Rahbar, M.H. (2023+). "Enhancing Long-term Survival Prediction with Multiple Short-term Events: Landmarking with A Flexible Varying Coefficient Model".
Conveniently generate CSS using R code.
An effortless ndjson (newline-delimited JSON) logger, with two primary log-writing interfaces. It provides a set of wrappers for base R's message(), warning(), and stop() functions that maintain identical functionality, but also log the handler message to an ndjson log file. loggit also exports its internal loggit() function for powerful and configurable custom logging. No change to existing code is necessary to use this package; only additions are needed to fully leverage the power of the logging system. loggit also provides a log reader for reading an ndjson log file into a data frame, log rotation, and live echo of the ndjson log messages to terminal stdout for log capture by external systems (like containers). loggit is ideal for Shiny apps, data pipelines, modeling workflows, and more. Please see the vignettes for detailed example use cases.
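A minimal usage sketch of the interfaces named above; set_logfile() and read_logs() are our assumptions about the helper names, so check the package documentation:

library(loggit)
set_logfile(file.path(tempdir(), "app.loggit"))  # assumed helper for choosing the ndjson file
message("fitting model")        # same behavior as base::message(), plus an ndjson log entry
loggit("INFO", "custom entry")  # the exported internal logger named in the description
logs <- read_logs()             # assumed name of the log reader; returns a data frame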
The graph proposed by Rosenbaum is useful for checking some properties of various sorts of latent scales; this package generates commands to obtain the graph using dot from Graphviz.
Allows the simultaneous analysis of responses and response times in an Item Response Theory (IRT) modelling framework. Supports variable person speed functions (intercept, trend, quadratic), and covariates for item and person (random) parameters. Data missing-by-design can be specified. Parameter estimation is done with an MCMC algorithm. LNIRT replaces the package CIRT, which was written by Rinke Klein Entink. For reference, see the paper by Fox, Klein Entink and Van der Linden (2007), "Modeling of Responses and Response Times with the Package cirt", Journal of Statistical Software, <doi:10.18637/jss.v020.i07>.
Trend filtering is a widely used nonparametric method for knot detection. This package provides an efficient solution for L0 trend filtering, avoiding the traditional methods of using Lagrange duality or Alternating Direction Method of Multipliers algorithms. It employs a splicing approach that minimizes the L0-regularized sparse approximation by transforming the L0 trend filtering problem. The package excels in both efficiency and accuracy of trend estimation and changepoint detection in segmented functions. References: Wen et al. (2020) <doi:10.18637/jss.v094.i04>; Zhu et al. (2020) <doi:10.1073/pnas.2014241117>; Wen et al. (2023) <doi:10.1287/ijoc.2021.0313>.
Data files and a few functions used in the book Linear Models and Regression with R: An Integrated Approach by Debasis Sengupta and Sreenivas Rao Jammalamadaka (2019).
Studies that report shifts in species distributions may be biased by the shape of the study area. The main functionality of this package is to calculate the Latitudinal Bias Index (LBI) for any given shape. The LBI is bounded between +1 (100% probability to exclusively record latitudinal shifts, i.e., range shifts data sampled along a perfectly South-North oriented straight line) and -1 (100% probability to exclusively record longitudinal shifts, i.e., range shifts data sampled along a perfectly East-West oriented straight line).
Split your rmarkdown or quarto files by section into a tibble: titles, text, chunks. Rebuild the file from the tibble.
Simple functions to look up items in key-value pairs. See Mehta (2021) <doi:10.1007/978-1-4842-6613-7_6>.
Bandwidth selection for kernel density estimators of 2-d level sets and highest density regions. It applies a plug-in strategy to estimate the asymptotic risk function and minimizes it to obtain the optimal bandwidth matrix. See Doss and Weng (2018) <arXiv:1806.00731> for more detail.
This package contains functions to help create log files. The package aims to overcome the difficulty of the base R sink() command. The log_print() function will print to both the console and the file log, without interfering with other write operations.
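A hedged sketch; log_print() is named above, while the package name and the log_open()/log_close() pair are assumptions (this description matches the logr package):

library(logr)                                     # package name assumed
lf <- log_open(file.path(tempdir(), "demo.log"))  # log_open() assumed
log_print("Analysis started")                     # prints to console and to the log
log_print(head(mtcars))
log_close()                                       # log_close() assumed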
Non-parametric estimators for causal effects based on longitudinal modified treatment policies as described in Diaz, Williams, Hoffman, and Schenck <doi:10.1080/01621459.2021.1955691>, traditional point treatment, and traditional longitudinal effects. Continuous, binary, categorical, and multivariate treatments are allowed, as are censored outcomes. The treatment mechanism is estimated via a density ratio classification procedure irrespective of treatment variable type. Additive treatment effects can be calculated for both continuous and binary outcomes, and relative risks and odds ratios may be calculated for binary outcomes. Supports survival outcomes with competing risks (Diaz, Hoffman, and Hejazi; <doi:10.1007/s10985-023-09606-7>).
The lognormal distribution (Limpert et al. (2001) <doi:10.1641/0006-3568(2001)051%5B0341:lndats%5D2.0.co;2>) can characterize uncertainty that is bounded by zero. This package provides estimation of distribution parameters, computation of moments and other basic statistics, and an approximation of the distribution of the sum of several correlated lognormally distributed variables (Lo 2013 <doi:10.12988/ams.2013.39511>) and of the difference of two correlated lognormally distributed variables (Lo 2012 <doi:10.1155/2012/838397>).
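As a base-R illustration of the kind of moment computation involved (not this package's API): for X ~ Lognormal(mu, sigma), the mean is exp(mu + sigma^2/2):

mu <- 0; sigma <- 0.5
exp(mu + sigma^2 / 2)          # analytic mean of the lognormal
mean(rlnorm(1e6, mu, sigma))   # Monte Carlo check; approximately equal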
This package implements Cumulative Sum (CUSUM) control charts specifically designed for monitoring processes following a Gamma distribution. Provides functions to estimate distribution parameters, simulate control limits, and apply cautious learning schemes for adaptive thresholding. It supports upward and downward monitoring with guaranteed performance evaluated via Monte Carlo simulations. It is useful for quality control applications in industries where data follows a Gamma distribution. Methods are based on Madrid-Alvarez et al. (2024) <doi:10.1002/qre.3464> and Madrid-Alvarez et al. (2024) <doi:10.1080/08982112.2024.2440368>.
This package provides functions for the l1-ball prior on high-dimensional regression. The main function, l1ball(), yields posterior samples for linear regression, as introduced by Xu and Duan (2020) <arXiv:2006.01340>.
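A hedged sketch of calling the main function; the argument names (y, X) are assumptions, so consult the package documentation for the actual signature:

library(l1ball)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 3), rep(0, p - 3))     # sparse truth: three active coefficients
y <- as.numeric(X %*% beta + rnorm(n))
fit <- l1ball(y, X)                     # assumed call; yields posterior samples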
Shiny apps for the quantitative analysis of images from lateral flow assays (LFAs). The images are segmented and background corrected and color intensities are extracted. The apps can be used to import and export intensity data and to calibrate LFAs by means of linear, loess, or gam models. The calibration models can further be saved and applied to intensity data from new images for determining concentrations.
These functions take a gene expression value matrix, a primary covariate vector, and an additional matrix of known covariates. A two-stage analysis is applied to counter the effects of latent variables on the rankings of hypotheses. The estimation and adjustment of latent effects were proposed by Sun, Zhang, and Owen (2011). "leapp" is developed in the context of microarray experiments, but may be used as a general tool for high-throughput data sets where dependence may be involved.
A Hidden Markov Model (HMM) based on the symmetric lambda distribution framework is implemented for the study of return time series in the financial market. Major features of the S&P 500 index, such as regime identification, volatility clustering, and anti-correlation between return and volatility, can be extracted from the HMM cleanly. The univariate symmetric lambda distribution is essentially a location-scale family of the exponential power distribution. Such a distribution is suitable for describing highly leptokurtic time series obtained from the financial market, and it provides a theoretically solid foundation to explore such data where the normal distribution is not adequate. The HMM implementation follows closely the book "Hidden Markov Models for Time Series" by Zucchini, MacDonald, and Langrock (2016).
Estimation of various extensions of mixed models, including latent class mixed models, joint latent class mixed models, mixed models for curvilinear outcomes, and mixed models for multivariate longitudinal outcomes, using a maximum likelihood estimation method (Proust-Lima, Philipps, Liquet (2017) <doi:10.18637/jss.v078.i02>).
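Assuming this describes the lcmm package, a minimal sketch using the hlme() function and the bundled data_hlme dataset as we recall them from its documentation:

library(lcmm)                     # package name assumed from the description
m1 <- hlme(Y ~ Time, random = ~ Time, subject = "ID", data = data_hlme)
summary(m1)
# Latent class versions add mixture = ~ Time and ng = 2 (initial values may be needed).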