Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
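For example, from R this might look as follows (a sketch using the httr package; the host name here is a placeholder, substitute wherever this service is running):

library(httr)  # assumes the httr package is installed
resp <- GET("https://example.org/api/packages",  # placeholder host
            query = list(search = "hello", page = 1, limit = 20))
headers(resp)   # pagination information (number of pages, etc.) arrives here
content(resp)   # the parsed list of matching packages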
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Emissions are the mass of pollutants released into the atmosphere. Air quality models need emissions data, with spatial and temporal distribution, to represent air pollutant concentrations. This package, eixport, creates inputs for the air quality models WRF-Chem Grell et al. (2005) <doi:10.1016/j.atmosenv.2005.04.027>, MUNICH Kim et al. (2018) <doi:10.5194/gmd-11-611-2018>, BRAMS-SPM Freitas et al. (2005) <doi:10.1016/j.atmosenv.2005.07.017> and RLINE Snyder et al. (2013) <doi:10.1016/j.atmosenv.2013.05.074>. See the eixport website (<https://atmoschem.github.io/eixport/>) for more information, documentation and examples. More details in Ibarra-Espinosa et al. (2018) <doi:10.21105/joss.00607>.
This package provides a set of user-friendly functions to aid in organizing, plotting and analyzing event-related potential (ERP) data. Provides an easy-to-learn method to explore ERP data. Should be useful to those without a background in computer programming, and to those who are new to ERPs (or new to the more advanced ERP software available). Emphasis has been placed on highly automated processes using functions with as few arguments as possible. Expects processed (cleaned) data.
This package provides tools for exploratory analysis of tabular data using colour highlighting. Highlighting is displayed in any console supporting ANSI colours, and can be converted to HTML, typst, latex and SVG. quarto and rmarkdown rendering are directly supported. It is also possible to add colour to regular expression matches and highlight differences between two arbitrary R objects.
Interactive data exploration with one line of code, automated reporting, or an easy-to-remember set of tidy functions for low-code exploratory data analysis.
Generates interactive circle plots with the nodes around the circumference and linkages between the connected nodes using hierarchical edge bundling via the D3 JavaScript library. See <http://d3js.org/> for more information on D3.
This package implements Excel functions in R to simplify calculations. You can use most of the aggregate functions, addressing functions, logical functions and text functions. It can also help Excel users who are struggling with R learn how the language works.
Standardizes Brazilian addresses using different criteria. Standardization methods include only basic string manipulation, and do not support probabilistic matching between strings.
Training and prediction functions are provided for the Extreme Learning Machine algorithm (ELM). The ELM uses a Single Hidden Layer Feedforward Neural Network (SLFN) with randomly generated weights and no gradient-based backpropagation. The training time is very short, and the online version allows the model to be updated with small chunks of the training set at each iteration. The only parameters to tune are the hidden layer size and the learning function.
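As a rough illustration of the idea (a generic ELM sketch, not necessarily this package's API; the sigmoid activation and the use of MASS::ginv are assumptions), training reduces to drawing random hidden weights and solving a linear system for the output weights:

# Generic ELM sketch; not this package's documented interface.
elm_train <- function(X, Y, n_hidden) {
  W <- matrix(rnorm(ncol(X) * n_hidden), ncol(X), n_hidden)  # random input weights
  b <- runif(n_hidden)                                       # random biases
  H <- 1 / (1 + exp(-(X %*% W + matrix(b, nrow(X), n_hidden, byrow = TRUE))))
  beta <- MASS::ginv(H) %*% Y  # output weights via the Moore-Penrose pseudoinverse
  list(W = W, b = b, beta = beta)
}
elm_predict <- function(model, X) {
  H <- 1 / (1 + exp(-(X %*% model$W + matrix(model$b, nrow(X), length(model$b), byrow = TRUE))))
  H %*% model$beta
}

The only free choice is n_hidden, matching the single tuning parameter described above.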
This package provides methods for analyzing R by C ecological contingency tables using the extreme case analysis, ecological regression, and Multinomial-Dirichlet ecological inference models. It also provides tools for manipulating higher-dimensional data objects.
This package provides classes and methods for implementing aquatic ecosystem models, for running these models, and for visualizing their results.
An implementation of the ESS algorithm following Amol Deshpande, Minos Garofalakis and Michael I. Jordan (2013) <arXiv:1301.2267>. The ESS algorithm is used for model selection in decomposable graphical models.
This package provides step-by-step automation for integrating biodiversity data from multiple online aggregators, merging and cleaning datasets while addressing challenges such as taxonomic inconsistencies, georeferencing issues, and spatial or environmental outliers. Includes functions to extract environmental data and to define the biogeographic ranges in which species are most likely to occur.
An implementation of multiple-locus association mapping on a genome-wide scale. Eagle can handle inbred and outbred study populations, populations of arbitrary unknown complexity, and data larger than the memory capacity of the computer. Since Eagle is based on linear mixed models, it is best suited to the analysis of data on continuous traits. However, it can tolerate non-normal data. Eagle reports, as its findings, the best set of SNPs in strongest association with a trait. For users unfamiliar with R, an analysis can be performed by running OpenGUI(). This opens a web browser to the menu-driven user interface for the input of data, and for performing genome-wide analysis.
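For example, assuming the Eagle package is installed, the interface can be launched from an R session:

library(Eagle)
OpenGUI()   # opens the menu-driven web interface described above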
This package provides tools for training and practicing epidemiologists including methods for two-way and multi-way contingency tables.
An implementation of the European Forestry Dynamics Model (EFDM) and an estimation algorithm for the transition probabilities. The EFDM is a large-scale forest model that simulates the development of the forest and estimates the volume of wood harvested for any given forested area. This estimate can be broken down by, for example, species, site quality, management regime and ownership category. See Packalen et al. (2015) <doi:10.2788/153990>.
This package provides R access to election results data. Wraps elex (https://github.com/newsdev/elex/), a Python package and command line tool for fetching and parsing Associated Press election results.
This package provides a system for calculating the optimal sampling effort, based on the ideas of "Ecological cost-benefit optimization" as developed by A. Underwood (1997, ISBN 0 521 55696 1). Data is obtained from simulated ecological communities, and the optimization proceeds through the following functions (sketched below): (1) prep_data() formats and arranges the original dataset and creates the simulated sets that serve as a basis for estimating statistical power and type II error. (2) sim_beta() estimates the statistical power for the different sampling efforts specified by the user. (3) sim_cbo() then calculates the optimal sampling effort, based on the statistical power and the sampling costs. Additionally, (4) scompvar() calculates the variation components necessary for (5) Underwood_cbo() to calculate the optimal combination of number of sites and samples, depending on either an economic budget or a desired statistical accuracy. Lastly, (6) plot_power() helps the user visualize the results of sim_beta().
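A hedged sketch of how these steps chain together; the argument names below are hypothetical placeholders, since only the function names are given above:

# Hypothetical workflow; arguments are illustrative, not documented signatures.
sims   <- prep_data(my_community_data)   # simulated sets from the original dataset
power  <- sim_beta(sims)                 # statistical power per sampling effort
effort <- sim_cbo(power)                 # optimal effort from power and sampling costs
vars   <- scompvar(sims)                 # variation components
combo  <- Underwood_cbo(vars)            # optimal sites-by-samples combination
plot_power(power)                        # visualize the sim_beta() results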
This package provides computational tools for working with the Extended Laplace distribution, including the probability density function, cumulative distribution function, quantile function, random variate generation based on convolution with Uniform noise, and quantile-quantile plots. Useful for modeling contaminated Laplace data and other applications in robust statistics. See Saah and Kozubowski (2025) <doi:10.1016/j.cam.2025.116588>.
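The convolution construction can be illustrated in a few lines of base R (a generic sketch, not this package's API; the rate and the Uniform half-width are illustrative assumptions):

# Extended Laplace draw as Laplace plus independent Uniform noise (illustrative parameters).
n <- 10000
laplace <- rexp(n, rate = 1) - rexp(n, rate = 1)  # Laplace via difference of exponentials
x <- laplace + runif(n, min = -0.5, max = 0.5)    # convolve with Uniform noise
hist(x, breaks = 50, main = "Extended Laplace sample")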
This package performs analysis of polynomial regression in simple designs with quantitative treatments.
This package provides a tool for the preparation and enrichment of health datasets for analysis (Toner et al. (2023) <doi:10.1093/gigascience/giad030>). Provides functionality for assessing data quality and for improving the reliability and machine interpretability of a dataset. eHDPrep also enables semantic enrichment of a dataset where metavariables are discovered from the relationships between input variables determined from user-provided ontologies.
The 2-D spatial and temporal Epidemic Type Aftershock Sequence ('ETAS') Model is widely used to decluster earthquake data catalogs. Usually, the calculation of standard errors of the ETAS model parameter estimates is based on the Hessian matrix derived from the log-likelihood function of the fitted model. However, when an ETAS model is fitted to a local data set over a time period that is limited or short, the standard errors based on the Hessian matrix may be inaccurate. It follows that the asymptotic confidence intervals for parameters may not always be reliable. As an alternative, this package allows for the construction of bootstrap confidence intervals based on empirical quantiles for the parameters of the 2-D spatial and temporal ETAS model. This version improves on Version 0.1.0 of the package by enabling the study space window (renamed "study region") to be polygonal rather than merely rectangular. A Japan earthquake data catalog is used in a second example to illustrate this new feature.
Estimation of the parameters in a model for symmetric relational data (e.g., the above-diagonal part of a square matrix), using a model-based eigenvalue decomposition and regression. Missing data is accommodated, and a posterior mean for missing data is calculated under the assumption that the data are missing at random. The marginal distribution of the relational data can be arbitrary, and is fit with an ordered probit specification. See Hoff (2007) <arXiv:0711.1146> for details on the model.
Fully robust versions of the elastic net estimator are introduced for linear, binary and multinomial regression, in particular for high-dimensional data. The algorithm searches for outlier-free subsets on which the classical elastic net estimators can be applied. A reweighting step is added to improve the statistical efficiency of the proposed estimators. Appropriate tuning parameters for the elastic net penalties are selected via cross-validation.
The purpose of this package is to generate trees and validate unverified code. Trees are made by parsing a statement into a verification tree data structure, which makes it easy to port the statement into another language. Safe statement evaluation is done by executing the verification trees (see the sketch below).
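A generic sketch of the parse-then-verify idea in base R (the whitelist and helper names here are illustrative assumptions, not this package's API):

# Parse a statement into a tree, verify it, and only then execute it.
expr <- str2lang("1 + 2 * 3")   # the statement as an AST (a tree)
is_safe <- function(e) {
  if (is.call(e)) {
    fn <- as.character(e[[1]])
    fn %in% c("+", "-", "*", "/") &&
      all(vapply(as.list(e)[-1], is_safe, logical(1)))
  } else {
    is.numeric(e)               # only numeric leaves are allowed
  }
}
if (is_safe(expr)) eval(expr) else stop("statement failed verification")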