Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
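For example, a minimal sketch of calling this endpoint from R with the httr and jsonlite packages; the base URL below is a placeholder, and a JSON response body is assumed:

library(httr)
library(jsonlite)

## Placeholder base URL -- substitute this site's actual address.
base_url <- "https://example.org"

## Call the search endpoint with query, page and limit parameters.
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

## Pagination details (e.g. the number of pages) come back in the headers.
headers(resp)

## The matching packages are in the response body (assumed to be JSON).
packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
str(packages)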
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to find all matches or non-matches, orphans, and duplicate or other replicated elements.
Create additional rows and columns in broom::tidy() output to allow for easier control over categorical parameter estimates.
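For context, this is the kind of output being extended; a minimal sketch using plain broom::tidy() on a model with a categorical predictor (base broom only, not this package's added columns):

library(broom)

## Linear model with a categorical (factor) predictor.
fit <- lm(mpg ~ factor(cyl) + wt, data = mtcars)

## tidy() gives one row per estimated term; factor levels show up as
## terms such as "factor(cyl)6", which the extra rows/columns help unpack.
tidy(fit)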
This package contains functions to standardize tracheid profiles using the traditional method (Vaganov) and a new method to standardize tracheidograms based on the relative position of tracheids within tree rings.
This interface was created to develop a standard procedure for analysing temporal trends in the framework of the OSPAR convention. The analysis process runs through five successive steps: 1) manipulate your data, 2) select the parameters you want to analyse, 3) build your regulated time series, 4) perform diagnosis and analysis, and 5) read the results. The statistical analyses call functions from other packages, such as Kendall tests or the cusum() function.
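As an illustration of the statistical step only (not this package's interface), a Kendall-type trend test can be sketched in base R:

## Simulated monthly series with a weak upward trend.
set.seed(1)
time  <- 1:120
value <- 0.02 * time + rnorm(120)

## Trend test based on Kendall's tau between the values and time.
cor.test(value, time, method = "kendall")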
Create interactive tables, calendars, charts, and a Markdown WYSIWYG editor with the TOAST UI <https://ui.toast.com/> libraries, for integration into shiny applications or rmarkdown HTML documents.
Fit a threshold regression model based on the first-hitting-time of a boundary by the sample path of a Wiener diffusion process. The threshold regression methodology is well suited to applications involving survival and time-to-event data.
Utilities for rapidly loading specified rows and/or columns of data from large tab-separated value (tsv) files (large: e.g. a 1 GB file holding a 10000 x 10000 matrix). tsvio is an R wrapper around C code that creates an index file for the rows of the tsv file and uses that index to collect rows and/or columns from the tsv file without reading the whole file into memory.
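The indexing idea can be sketched conceptually in base R (this is not tsvio's API, just an illustration of seeking to a row via stored byte offsets):

## Write a tiny tsv file to index.
tsv <- tempfile(fileext = ".tsv")
write.table(matrix(1:20, nrow = 4), tsv, sep = "\t",
            quote = FALSE, row.names = FALSE, col.names = FALSE)

## Pass 1: record the byte offset at which each row starts.
con <- file(tsv, open = "rb")
offsets <- numeric(0)
repeat {
  offsets <- c(offsets, seek(con, NA))
  if (length(readLines(con, n = 1)) == 0) break
}

## Later: jump straight to row 3 without reading rows 1 and 2.
seek(con, offsets[3])
readLines(con, n = 1)
close(con)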
Time Series Qn is a package with applications of the Qn estimator of Rousseeuw and Croux (1993) <doi:10.1080/01621459.1993.10476408> to univariate and multivariate time series in the time and frequency domains. More specifically, the robust estimation of autocorrelation or autocovariance matrix functions from Ma and Genton (2000, 2001) <doi:10.1111/1467-9892.00203>, <doi:10.1006/jmva.2000.1942> and Cotta (2017) <doi:10.13140/RG.2.2.14092.10883> is provided. The robust pseudo-periodogram of Molinares et al. (2009) <doi:10.1016/j.jspi.2008.12.014> is also given. This package also provides the M-estimator of the long-memory parameter d based on the robustification of the GPH estimator proposed by Reisen et al. (2017) <doi:10.1016/j.jspi.2017.02.008>.
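For reference, the underlying Qn scale estimator is also available in the robustbase package; a minimal sketch on a simulated AR(1) series with outliers (this shows the base estimator, not this package's autocovariance and periodogram extensions):

library(robustbase)

## AR(1) series contaminated with two large outliers.
set.seed(42)
x <- arima.sim(model = list(ar = 0.5), n = 200)
x[c(50, 120)] <- x[c(50, 120)] + 10

## Robust Qn scale estimate versus the classical standard deviation.
Qn(x)
sd(x)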
An htmlwidget wrapping Tippy.js to add tooltips to Shiny apps and R Markdown documents.
Allows forecasting time series using nearest neighbors regression, as described in Francisco Martinez, Maria P. Frias, Maria D. Perez-Godoy and Antonio J. Rivera (2019) <doi:10.1007/s10462-017-9593-z>. When the forecasting horizon is greater than 1, two multi-step ahead forecasting strategies can be used. The model built is autoregressive, that is, it is based only on the observations of the time series. The nearest neighbors used in a prediction can be consulted and plotted.
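A minimal sketch of the kind of call involved, assuming the package's main entry point is a knn_forecasting() function (argument names should be checked against the package reference):

library(tsfknn)

## Forecast the next 12 values of a monthly series with k = 3 neighbors,
## using the last 12 observations as autoregressive lags.
pred <- knn_forecasting(AirPassengers, h = 12, lags = 1:12, k = 3)

## Point forecasts; the neighbors behind them can also be inspected and plotted.
pred$prediction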
Estimators for semi-parametric linear regression models with truncated response variables (fixed truncation point). The estimators implemented are the Symmetrically Trimmed Least Squares (STLS) estimator introduced by Powell (1986) <doi:10.2307/1914308>, the Quadratic Mode (QME) estimator introduced by Lee (1993) <doi:10.1016/0304-4076(93)90056-B>, and the Left Truncated (LT) estimator introduced by Karlsson (2006) <doi:10.1007/s00184-005-0023-x>.
This package provides a standardized user interface for column selection that facilitates dataset merging in the teal framework.
This package contains functions for applying the T^2-test for equivalence. The T^2-test for equivalence is a multivariate two-sample equivalence test whose distance measure is the Mahalanobis distance. For multivariate normally distributed data, the T^2-test for equivalence is exact and UMPI. The function T2EQ() implements the T^2-test for equivalence according to Wellek (2010) <DOI:10.1201/ebk1439808184>. The function T2EQ.dissolution.profiles.hoffelder() implements a variant of the T^2-test for equivalence according to Hoffelder (2016) <http://www.ecv.de/suse_item.php?suseId=Z|pi|8430> for the equivalence comparison of highly variable dissolution profiles.
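As a conceptual sketch of the distance measure involved (not this package's functions), the squared Mahalanobis distance between two multivariate sample means with a pooled covariance can be computed in base R:

## Two multivariate samples.
set.seed(1)
x <- matrix(rnorm(100), ncol = 2)
y <- matrix(rnorm(100, mean = 0.1), ncol = 2)

## Pooled covariance matrix and squared Mahalanobis distance of the means.
S <- ((nrow(x) - 1) * cov(x) + (nrow(y) - 1) * cov(y)) / (nrow(x) + nrow(y) - 2)
mahalanobis(colMeans(x), center = colMeans(y), cov = S)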
This package provides tools to perform multiple comparison analyses, based on the well-known Tukey's "Honestly Significant Difference" (HSD) test. In models involving interactions, TukeyC stands out from other R packages by implementing intuitive and easy-to-use functions. In addition to accommodating traditional R methods such as lm() and aov(), it has also been extended to objects of the lmer() class, that is, mixed models with fixed effects. For more details see Tukey (1949) <doi:10.2307/3001913>.
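For comparison, the base R analogue of this procedure on an aov() fit looks like the following (stats::TukeyHSD, not this package's own interface):

## One-way layout from the built-in PlantGrowth data.
fit <- aov(weight ~ group, data = PlantGrowth)

## Tukey HSD pairwise comparisons with 95% family-wise confidence intervals.
TukeyHSD(fit, conf.level = 0.95)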
Link R with Transformers from Hugging Face to transform text variables into word embeddings; the word embeddings are then used to statistically test mean differences between sets of texts, compute semantic similarity scores between texts, predict numerical variables, and visualize statistically significant words according to various dimensions. For more information see <https://www.r-text.org>.
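A minimal sketch of the embedding step described, assuming textEmbed() is the entry point and that the package's Python/transformers backend has been set up (see the package documentation for the exact interface):

library(text)

## Turn a character vector into contextual embeddings from a Hugging Face
## model (assumed call; requires the Python transformers backend).
texts <- c("satisfied with the result", "deeply disappointed")
embeddings <- textEmbed(texts, model = "bert-base-uncased")

## The embeddings can then feed the package's tests, similarity scores,
## prediction and visualization functions.
str(embeddings, max.level = 1)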
Forecasting of long-memory time series in the presence of structural breaks using the TSF algorithm of Papailias and Dias (2015) <doi:10.1016/j.ijforecast.2015.01.006>.
Provide data generation and estimation tools for the truncated positive normal (tpn) model discussed in Gomez, Olmos, Varela and Bolfarine (2018) <doi:10.1007/s11766-018-3354-x>, the slash tpn distribution discussed in Gomez, Gallardo and Santoro (2021) <doi:10.3390/sym13112164>, the bimodal tpn distribution discussed in Gomez et al. (2022) <doi:10.3390/sym14040665>, the flexible tpn model <doi:10.3390/math11214431> and the unit tpn distribution <doi:10.1016/j.chemolab.2025.105322>.
This package provides a screening process utilizing training and testing samples to filter out uninformative DNA methylation sites. Surrogate variables (SVs) of DNA methylation are included in the filtering process to explain unknown factor effects.
Find out who maintains the packages you use in your current session or in your package library, and maybe say thank you.
This package provides a coherent interface to multiple modelling tools for fitting trends along with a standardised approach for generating confidence and prediction intervals.
Test the nullity of covariances in a set of variables using a simple univariate procedure. See Marques, Diago, Norouzirad, Bispo (2023) <doi:10.1002/mma.9130>.
This application provides exploratory and confirmatory factor analysis, classical test theory, unidimensional and multidimensional item response theory, and continuous item response model analysis, through the shiny interactive interface. In addition, it offers rich functionalities for visualizing and downloading results. Users can download figures, tables, and analysis reports via the interactive interface.
This package provides access to datasets, models and preprocessing facilities for deep learning with images. It integrates seamlessly with the torch package, and its API borrows heavily from the PyTorch vision package.
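A minimal sketch of the kind of use this enables, assuming the model_resnet18() and mnist_dataset() helpers (names and arguments should be checked against the package reference):

library(torch)
library(torchvision)

## Pretrained classification model (weights are downloaded on first use).
model <- model_resnet18(pretrained = TRUE)
model$eval()

## A standard dataset with a tensor-conversion transform applied.
ds <- mnist_dataset(root = tempdir(), download = TRUE,
                    transform = transform_to_tensor)
length(ds)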
It allows for mapping proportions and indicators defined on the unit interval. It implements Beta-based small area methods comprising the classical Beta regression models, the Flexible Beta model, and Zero and/or One Inflated extensions (Janicki 2020 <doi:10.1080/03610926.2019.1570266>). Such methods, developed within a Bayesian framework through Stan <https://mc-stan.org/>, come equipped with a set of diagnostics and complementary tools, as well as visualization and export functions. A Shiny application with a user-friendly interface can be launched to further simplify the process. For further details, refer to De Nicolò and Gardini (2024) <doi:10.18637/jss.v108.i01>.