Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
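For scripted access, the same endpoint can be queried from R. A minimal sketch using the httr and jsonlite packages is shown below; the base URL is a placeholder for this site's address, and the pagination header names are an assumption, since only the endpoint and query parameters are given above.

library(httr)
library(jsonlite)

base_url <- "https://example.org"  # placeholder: replace with this site's base URL
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

pkgs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)  # pagination information (e.g. number of pages) is returned here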
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for building customized ready-to-export tables for publication.
If a procedure consists of several stages and several models can be selected for each stage, the uncertainty of the procedure can be decomposed by stage or by model. This package includes the ANOVA-based method, the cumulative uncertainty-based method, and the balanced decomposition method. Yongdai Kim et al. (2019) <doi:10.1016/j.hydroa.2019.100024> is a related paper, accessible via the URL below.
This package provides a time series of the national grid demand (high-voltage electric power transmission network) in the UK since 2011.
Fetch United States Congressional Records, such as congressional speeches, speaker names, metadata about congressional sessions, and detailed granule records, from the govinfo API <https://api.govinfo.gov/docs/>. Optional parameters allow users to specify congressional sessions and the maximum number of speeches to retrieve. Data are parsed, cleaned, and returned in a structured data frame for analysis.
Uses a matrix layout to visualize the unique, common, or individual contribution of each predictor (or matrix of predictors) to the explained variation in different models. These contributions are derived from variation partitioning (VP) and hierarchical partitioning (HP), applying the algorithm of Lai et al. (2022) "Generalizing hierarchical and variation partitioning in multiple regression and canonical analyses using the rdacca.hp R package." Methods in Ecology and Evolution, 13: 782-788 <doi:10.1111/2041-210X.13800>.
Calculate unified measures that quantify the effect of a covariate on a binary dependent variable (e.g., for meta-analyses). This can be particularly important if the estimation results are obtained with different models/estimators (e.g., linear probability model, logit, probit, ...) and/or with different transformations of the explanatory variable of interest (e.g., linear, quadratic, interval-coded, ...). The calculated unified measures are: (a) semi-elasticities of linear, quadratic, or interval-coded covariates and (b) effects of linear, quadratic, interval-coded, or categorical covariates when a linear or quadratic covariate changes between distinct intervals, when the reference category of a categorical variable or the reference interval of an interval-coded variable needs to be changed, or when some categories of a categorical covariate or some intervals of an interval-coded covariate need to be grouped together. Approximate standard errors of the unified measures are also calculated. All methods that are implemented in this package are described in the vignette "Extracting and Unifying Semi-Elasticities and Effect Sizes from Studies with Binary Dependent Variables" that is included in this package.
Demographic data on the United States at the county and state levels spanning multiple years.
This package provides a diverse collection of U.S. datasets encompassing various fields such as crime, economics, education, finance, energy, healthcare, and more. It serves as a valuable resource for researchers and analysts seeking to perform in-depth analyses and derive insights from U.S.-specific data.
In diagnostic contexts, individuals are often assessed using multiple tests that measure the same latent variable (e.g., intelligence). These test scores are typically not exactly identical. Simple averaging neglects the correlation between tests and the reduced variance of their combination. The unifyR package provides functions to compute statistically accurate unified scores, reliabilities and validities of multiple tests. The underlying algorithms build on and extend the method proposed by Evans (1996, <DOI:10.3758/BF03204767>) and have been validated through simulations.
Despite there being a section in RFC 7231 <https://tools.ietf.org/html/rfc7231#section-5.5.3> defining a suggested structure for User-Agent headers, this data is notoriously difficult to parse consistently. Tools are provided that will take in user agent strings and return structured R objects. This is a V8-backed package based on the ua-parser project <https://github.com/ua-parser>.
Most universities use specific color combinations to express their unique brand identity. The unicol package provides the colors and color palettes of various universities for easy plotting and printing in R. We collect and provide a diverse range of color palettes for creating scientific visualizations.
The "ussher" data set is drawn from original chronological textual historic events. Commonly known as James Ussher's Annals of the World, the source text was originally written in Latin in 1650, and published in English translation in 1658.The data are classified by index, year, epoch (or one of the 7 ancient "Ages of the World"), Biblical source book if referenced (rarely), as well as alternate dating mechanisms, such as "Anno Mundi" (age of the world) or "Julian Period" (dates based upon the Julian calendar). Additional file "usshfull" includes variables that may be of further interest to historians, such as Southern Kingdom and Northern Kingdom discrepant dates, and the original amalgamated dating mechanic used by Ussher in the original text. The raw data can also be called using "usshraw", as described in: Ussher, J. (1658) <https://archive.org/stream/AnnalsOfTheWorld/Annals_djvu.txt>.
Intended to be used by the United States Copyright Office Product Management Division Business Analysts. Includes algorithms for the United States Copyright Office Product Management Division SR Audit Data dataset. The algorithm takes in the SR Audit Data excel file and reformats the spreadsheet such that the values and variables fit the format of the online database. Support functions in this package include clean_str(), which cleans instances of the variable AUDIT_LOG; clean_data_to_excel(), which cleans and outputs the reorganized SR Audit Data dataset in excel format; clean_data_to_dataframe(), which cleans and stores the reorganized SR Audit Data dataset in a data frame; format_from_excel(), which reads in the excel file produced by clean_data_to_excel() and formats and returns the data as a dictionary that uses FIELD types as keys and NON-FIELD types as the values of those keys; format_from_dataframe(), which reads in the data frame produced by clean_data_to_dataframe() and formats and returns the data as a dictionary that uses FIELD types as keys and NON-FIELD types as the values of those keys; and support_function(), which takes in the dictionary output by either format_from_dataframe() or format_from_excel() and returns the data as a formatted data frame according to the original U.S. Copyright Office SR Audit Data online database. The main function of this package is clean_format_all(), which takes in an excel file and returns the formatted data in new excel and text files according to the format of the U.S. Copyright Office SR Audit Data online database. A sketch of this workflow is shown below.
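The workflow could look roughly like the following sketch; only the function names are taken from this description, while the package name, file name, and exact argument forms are assumptions.

# library(<SR Audit Data package>)  # package name not stated in this description

# One-step route: clean and reformat an SR Audit Data spreadsheet.
clean_format_all("SR_Audit_Data.xlsx")

# Step-by-step route: clean to a data frame, build the FIELD-keyed dictionary,
# then reshape it to match the online database layout.
audit_df   <- clean_data_to_dataframe("SR_Audit_Data.xlsx")
audit_dict <- format_from_dataframe(audit_df)
audit_tbl  <- support_function(audit_dict)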
This package implements the Gaussian method of first and second order, the Kragten numerical method and the Monte Carlo simulation method for uncertainty estimation and analysis.
A fast and consistent tool for the arrangement of microdata and the visualization of official figures and statistics from the National University of Colombia (Universidad Nacional de Colombia) <https://unal.edu.co>. It includes a library of graphical functions, both static and interactive, offering numerous types of charts with a highly configurable and simple syntax. Among these are the visualization of HTML tables, series, bar and pie charts, maps, etc., all supported by JavaScript libraries. It provides the capability to transition from the interactive to the dynamic world and from one library to another without changing function or syntax.
Predicts a smooth and continuous (individual) utility function from utility points, and computes measures of intensity for risk and higher-order risk measures (or any other measure computed with a user-written function) based on this utility function and its derivatives, according to the method introduced in Schneider (2017) <http://hdl.handle.net/21.11130/00-1735-0000-002E-E306-0>.
Detects values imported from spreadsheets that were auto-converted to Excel date serials and reconstructs the originally intended day.month decimals (for example, 30.3 that Excel displayed as 30/03/2025). The functions work in a vectorized manner, preserve non-serial values, and support both the 1900 and 1904 date systems.
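To illustrate the underlying idea (this is not the package's own interface), an Excel serial can be mapped back to a calendar date with the appropriate origin and then re-expressed as the intended day.month decimal:

# Illustrative sketch only, not this package's API. Excel's 1900 date system
# maps serials to dates with origin 1899-12-30 (accounting for Excel's
# leap-year quirk); the 1904 system uses origin 1904-01-01.
serial_to_day_month <- function(x, system = c("1900", "1904")) {
  system <- match.arg(system)
  origin <- if (system == "1900") as.Date("1899-12-30") else as.Date("1904-01-01")
  d <- as.Date(x, origin = origin)
  as.numeric(paste0(as.integer(format(d, "%d")), ".", as.integer(format(d, "%m"))))
}

serial_to_day_month(45746)  # 45746 is 2025-03-30 in the 1900 system -> 30.3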
This natural language processing toolkit provides language-agnostic tokenization, parts-of-speech tagging, lemmatization and dependency parsing of raw text. Next to text parsing, the package also allows you to train annotation models based on data of treebanks in CoNLL-U format as provided at <https://universaldependencies.org/format.html>. The techniques are explained in detail in the paper "Tokenizing, POS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe", available at <doi:10.18653/v1/K17-3009>. The toolkit also contains functionalities for commonly used data manipulations on texts which are enriched with the output of the parser, namely functionalities and algorithms for collocations, token co-occurrence, document term matrix handling, term frequency inverse document frequency calculations, information retrieval metrics (Okapi BM25), handling of multi-word expressions, keyword detection (Rapid Automatic Keyword Extraction, noun phrase extraction, syntactical patterns), sentiment scoring and semantic similarity analysis.
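As a brief illustration, a typical annotation run with udpipe looks roughly like this; the chosen language model and the exact output columns depend on the treebank model that is downloaded.

library(udpipe)

# Download and load a pre-trained model (an English model is assumed here).
mdl_info <- udpipe_download_model(language = "english")
mdl <- udpipe_load_model(file = mdl_info$file_model)

# Annotate raw text and flatten the CoNLL-U output to a data frame.
txt <- c(doc1 = "The quick brown fox jumps over the lazy dog.")
anno <- as.data.frame(udpipe_annotate(mdl, x = txt))

head(anno[, c("token", "lemma", "upos", "dep_rel")])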
Variance approximations for the Horvitz-Thompson total estimator in Unequal Probability Sampling using only first-order inclusion probabilities. See Matei and Tillé (2005) and Haziza, Mecatti and Rao (2008) for details.
Connect to Uniprot <https://www.uniprot.org/> to retrieve information about proteins using their accession numbers; such information can include protein names or taxonomy information. For detailed information, please read the publication <doi:10.1016/j.jprot.2019.103613>.
Allows the use of two URL-shortening services, which also provide expansion and analytics functions. Specifically developed for Bit.ly (which requires OAuth 2.0) and is.gd (no API key required).
This package provides a collection of parametric quantile regression models for bounded data. At present, the package provides 13 parametric quantile regression models. It can specify a regression structure for any quantile and for the shape parameters. It also provides several S3 methods to extract information from fitted models, such as residual analysis, prediction, plotting, and model comparison. For greater computational efficiency, the [dpqr] functions, as well as the likelihood, score and Hessian functions, are written in C++. For further details see Mazucheli et al. (2022) <doi:10.1016/j.cmpb.2022.106816>.
This package provides an algorithm to detect and characterize disturbances (start and end dates, intensity) that can occur at different hierarchical levels by studying the dynamics of longitudinal observations at the unit and group levels based on Nadaraya-Watson smoothing curves, as well as a shiny app for visualizing the observations and the detected disturbances. Finally, the package provides a data frame mimicking a pig farming system subjected to disturbances simulated according to Le et al. (2022) <doi:10.1016/j.animal.2022.100496>.
Returns a data frame with the names of the input data points and hex colors (or CIELab coordinates). Data can be mapped to colors for use in data visualization. It optimally maps data points into a polygon that represents the CIELab color space. Since Euclidean distance approximates relative perceptual differences in CIELab color space, the result is a color encoding that aims to capture much of the structure of the original data.