Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
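For example, a minimal sketch of a request from R, assuming the httr and jsonlite packages are available (the host name below is a placeholder; substitute this site's address, and note the JSON body shape is assumed):

    library(httr)
    library(jsonlite)

    # Placeholder host; replace with this site's actual address
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    # Pagination details (number of pages, etc.) arrive in the response headers
    headers(resp)

    # The body is assumed here to be JSON describing the matching packages
    packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))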
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The two-parameter XGamma and Poisson XGamma distributions are analyzed, covering standard distribution and regression functions, maximum likelihood estimation, quantile functions, probability density and mass functions, cumulative distribution functions, and random number generation. References: Sen, S., Chandra, N. and Maiti, S. S. (2018). On properties and applications of a two-parameter XGamma distribution. Journal of Statistical Theory and Applications, 17(4): 674-685 <doi:10.2991/jsta.2018.17.4.9>; Wani, M. A., Ahmad, P. B., Para, B. A. and Elah, N. (2023). A new regression model for count data with applications to health care data. International Journal of Data Science and Analytics <doi:10.1007/s41060-023-00453-1>.
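For orientation, the commonly cited baseline one-parameter XGamma density, which the two-parameter family generalizes, is

    f(x; \theta) = \frac{\theta^2}{1 + \theta} \left(1 + \frac{\theta}{2} x^2\right) e^{-\theta x}, \qquad x > 0,\ \theta > 0.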
This package provides a tidy approach to the analysis of biological sequences. All processing and data-storage functions are heavily optimized for fast processing and efficient data storage.
This package provides a comprehensive suite of statistical tools for analyzing, simulating, and computing properties of the Topp-Leone Cauchy Rayleigh (TLCAR) distribution, a versatile distribution that amalgamates features of the Topp-Leone, Cauchy, and Rayleigh distributions and is well suited for modeling intricate, heterogeneous data across scientific domains. See Atchadé, M.N., Bogninou, M.J., and Djibril, A.M. (2023) <doi:10.1007/s44199-023-00066-4> and Atchadé, M.N., Bogninou, M.J., and Djibril, A.M. (2024) <doi:10.1007/s44199-023-00069-1> for further insights.
Calculates total survey error (TSE) for a survey under multiple, different weighting schemes, using both scale-dependent and scale-independent metrics. The package works directly from the data set, with no hand calculations required: just upload a properly structured data set (see TESTWGT and its documentation), input the column names correctly (see the functions' documentation), and run your functions. For more on TSE, see: Weisberg, Herbert (2005, ISBN:0-226-89128-3); Biemer, Paul (2010) <doi:10.1093/poq/nfq058>; Biemer, Paul et al. (2017, ISBN:9781119041672); and others.
Fit, compare, and visualize Bayesian graphical vector autoregressive (GVAR) network models using Stan. These models are commonly used in psychology to represent temporal and contemporaneous relationships between multiple variables in intensive longitudinal data. Fitted models can be compared with a test based on matrix norm differences of posterior point estimates to quantify the differences between two estimated networks. See also Siepe, Kloft & Heck (2024) <doi:10.31234/osf.io/uwfjc>.
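As a rough sketch (not necessarily the package's exact parameterization), a lag-1 graphical VAR for a vector of measurements y_t is commonly written as

    y_t = B y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma),

where the entries of B form the temporal (lagged) network and the partial correlations derived from \Sigma^{-1} form the contemporaneous network.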
Implementation of the Time-to-Target (TTT) plot based on the work of Ribeiro and Rosseti (2015) <doi:10.1007/s11590-014-0760-8>, which describes a numerical method that gives the probability that an algorithm A finds a solution at least as good as a given target value in a smaller computation time than algorithm B.
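A common construction of such a plot (a sketch of the usual convention, which may differ in detail from this implementation): sort the n observed times t_1 <= ... <= t_n the algorithm needed to reach the target and plot the points

    \hat{P}(T \le t_i) = \frac{i - 1/2}{n}, \qquad i = 1, \dots, n,

i.e., the empirical probability that the target is reached within a given computation time.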
It allows for mapping proportions and indicators defined on the unit interval. It implements Beta-based small area methods comprising the classical Beta regression models, the Flexible Beta model, and Zero and/or One Inflated extensions (Janicki 2020 <doi:10.1080/03610926.2019.1570266>). Such methods, developed within a Bayesian framework through Stan <https://mc-stan.org/>, come equipped with a set of diagnostics and complementary tools, including visualization and export functions. A Shiny application with a user-friendly interface can be launched to further simplify the process. For further details, refer to De Nicolò and Gardini (2024 <doi:10.18637/jss.v108.i01>).
Links R with Transformers from Hugging Face to transform text variables into word embeddings. The word embeddings can then be used to statistically test the mean difference between sets of texts, compute semantic similarity scores between texts, predict numerical variables, and visualize statistically significant words along various dimensions. For more information see <https://www.r-text.org>.
Interacts with a suite of web application programming interfaces (API) for taxonomic tasks, such as getting database-specific taxonomic identifiers, verifying species names, getting taxonomic hierarchies, fetching downstream and upstream taxonomic names, getting taxonomic synonyms, converting scientific to common names and vice versa, and more. Some of the services supported include NCBI E-utilities (<https://www.ncbi.nlm.nih.gov/books/NBK25501/>), Encyclopedia of Life (<https://eol.org/docs/what-is-eol/data-services>), Global Biodiversity Information Facility (<https://techdocs.gbif.org/en/openapi/>), and many more. Links to the API documentation for other supported services are available in the documentation for their respective functions in this package.
This package provides functions to find all matches or non-matches, orphans, and duplicate or other replicated elements.
This package provides functions for visualizing networks with tmap. It supports sfnetworks objects natively but is not limited to them. Useful for adding network layers such as edges and nodes to tmap maps. More features may be added in future versions.
Cluster data without specifying the number of clusters using the Table Invitation Prior (TIP) introduced in the paper "Clustering Gene Expression Using the Table Invitation Prior" by Charles W. Harrison, Qing He, and Hsin-Hsiung Huang (2022) <doi:10.3390/genes13112036>. TIP is a Bayesian prior that uses pairwise distance and similarity information to cluster vectors, matrices, or tensors.
The goal of TailID is to detect sensitive points in the tail of a dataset using techniques from Extreme Value Theory (EVT). It uses the Generalized Pareto Distribution (GPD) to assess tail behavior and to detect points that are inconsistent with the hypothesis that the tail is identically distributed. For more details see Manau (2025) <doi:10.4230/LIPIcs.ECRTS.2025.20>.
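For reference, exceedances over a high threshold are modeled by the GPD, whose cumulative distribution function is

    G(x; \sigma, \xi) = 1 - \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi}, \qquad x \ge 0,\ \sigma > 0,

with the exponential limit 1 - e^{-x/\sigma} as \xi \to 0 (for \xi < 0 the support is bounded above).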
The textrank algorithm is an extension of the PageRank algorithm for text. The algorithm allows you to summarize text by calculating how sentences are related to one another. This is done by looking at overlapping terminology used in sentences in order to set up links between sentences. The resulting sentence network is then plugged into the PageRank algorithm, which identifies the most important sentences in your text and ranks them. In a similar way, textrank can also be used to extract keywords. A word network is constructed by looking at whether words follow one another. The PageRank algorithm is then applied to that network to extract relevant words, after which relevant words that follow one another are combined to get keywords. More information can be found in the paper by Mihalcea, Rada & Tarau, Paul (2004) <https://www.aclweb.org/anthology/W04-3252/>.
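For the sentence network, the similarity score in Mihalcea and Tarau (2004) is the normalized word overlap between two sentences S_i and S_j,

    \mathrm{Similarity}(S_i, S_j) = \frac{|\{ w : w \in S_i \text{ and } w \in S_j \}|}{\log|S_i| + \log|S_j|},

where |S_i| is the number of words in S_i; PageRank is then run on the resulting weighted sentence graph (the implementation here may differ in details).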
This package provides methods for generating modelled parametric Tropical Cyclone (TC) spatial hazard fields and time series output at point locations from TC tracks. R's ability to call fast C++ code via the Rcpp package and its wide range of spatial analysis tools via the terra package make it an attractive open-source environment for studying TCs. This package estimates TC vortex wind and pressure fields using parametric equations originally coded in Python by TCRM <https://github.com/GeoscienceAustralia/tcrm> and later coded in CUDA C++ by TCwindgen <https://github.com/CyprienBosserelle/TCwindgen>.
The classical two-sample t-test works well for normally distributed data or data with large sample sizes. The tcfu() and tt() tests implemented in this package provide better type-I-error control and more accurate power when testing the equality of two-sample means for skewed populations with unequal variances. These tests are especially useful when the sample sizes are moderate. The tcfu() test uses the Cornish-Fisher expansion to achieve a better approximation to the true percentiles. The tt() test provides transformations of Welch's t-statistic so that the sampling distribution becomes more symmetric. For more technical details, please refer to Zhang (2019) <http://hdl.handle.net/2097/40235>.
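For context, Welch's t-statistic, which tt() transforms (standard textbook form, not package-specific notation), is

    t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}},

where \bar{X}_i, s_i^2 and n_i are the sample mean, variance, and size of group i; population skewness makes its null distribution asymmetric, which is what the transformations aim to correct.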
Fits a wide variety of multivariate spatio-temporal models with simultaneous and lagged interactions among variables (including vector autoregressive spatio-temporal ('VAST') dynamics) for areal, continuous, or network spatial domains. It includes time-variable, space-variable, and space-time-variable interactions using dynamic structural equation models ('DSEM') as an expressive interface, and uses the mgcv package to specify splines via the formula interface. See Thorson et al. (2025) <doi:10.1111/geb.70035> for more details.
Uniform random samples from simple manifolds, sometimes with noise, are commonly used to test topological data analytic (TDA) tools. This package includes samplers powered by two techniques: analytic volume-preserving parameterizations, as employed by Arvo (1995) <doi:10.1145/218380.218500>, and rejection sampling, as employed by Diaconis, Holmes, and Shahshahani (2013) <doi:10.1214/12-IMSCOLL1006>.
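As a generic illustration of the rejection-sampling idea (base R only; not this package's API), one can draw uniform points on the unit disc by proposing points in the enclosing square and keeping those that fall inside:

    # Uniform sample from the unit disc via rejection from the square [-1, 1]^2
    sample_disc <- function(n) {
      out <- matrix(numeric(0), ncol = 2)
      while (nrow(out) < n) {
        prop <- cbind(runif(n, -1, 1), runif(n, -1, 1))   # propose in the square
        keep <- rowSums(prop^2) <= 1                      # accept points inside the disc
        out <- rbind(out, prop[keep, , drop = FALSE])
      }
      out[seq_len(n), ]
    }

    pts <- sample_disc(500)   # 500 points uniform on the unit disc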
Implementation of topological data analysis methods based on graph-theoretic approaches for discovering topological structures in data. The core algorithm constructs topological spaces from graphs following Nada et al. (2018) <doi:10.1002/mma.4726>, "New types of topological structures via graphs".
Email Finder R client library. Email search is based on the website: you give one domain name and it returns all the email addresses found on the Internet. Email Finder generates or retrieves the most likely email address from a domain name, a first name, and a last name. Email verification checks the deliverability of a given email address, verifies whether it has been found in our database, and returns its sources.
This package provides functions for the density, cumulative distribution, quantile, and simulation of Tukey g-and-h (1977) distributions. The quantile-based transformation (Hoaglin 1985 <doi:10.1002/9781118150702.ch11>) and its reverse transformation, as well as the letter-value based estimates (Hoaglin 1985), are also provided.
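The quantile-based transformation in question maps a standard normal variate Z to a g-and-h variate via (in its standard form; location and scale constants are usually added)

    X = \frac{e^{gZ} - 1}{g}\, e^{h Z^2 / 2},

where g controls skewness and h controls tail heaviness; the symmetric g = 0 case is taken as the limit Z e^{h Z^2 / 2}.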
The eigenvalues of observed symmetric matrices are often of intense scientific interest. This package offers single-sample tests for the eigenvalues of the population mean or the eigenvalue multiplicity of the population mean. For k samples, this package offers tests for equal eigenvalues between samples. Included is support for matrices with constraints common to geophysical tensors (constant trace, constant sum of squared eigenvalues, or both); eigenvectors are usually considered nuisance parameters. Pivotal bootstrap methods enable these tests to have good performance for small samples (n = 15 for 3x3 matrices). These methods were developed and studied by Hingee, Scealy and Wood (2026, "Nonparametric bootstrap inference for the eigenvalues of geophysical tensors", accepted by the Journal of the American Statistical Association). Also available is a two-sample test using a Gaussian orthogonal ensemble approximation and an eigenvalue-multiplicity test that assumes an orthogonally-invariant covariance.
This package provides a suite of functions for analysing, interpreting, and visualising time-series features calculated from different feature sets from the theft package. Implements statistical learning methodologies described in Henderson, T., Bryant, A., and Fulcher, B. (2023) <doi:10.48550/arXiv.2303.17809>.
Specialized toolkit for processing biological and fisheries data from Peru's anchovy (Engraulis ringens) fishery. Provides functions to analyze fishing logbooks, calculate biological indicators (length-weight relationships, juvenile percentages), generate spatial fishing indicators, and visualize regulatory measures from Peru's Ministry of Production. Features automated data processing from multiple file formats, coordinate validation, spatial analysis of fishing zones, and tools for analyzing fishing closure announcements and regulatory compliance. Includes built-in datasets of Peruvian coastal coordinates and parallel lines for analyzing fishing activities within regulatory zones.