Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
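For example, a minimal sketch of querying the API from R; the base URL below is only a placeholder for this site's own host, and the httr package is assumed to be available:

    library(httr)
    resp <- GET("https://example.org/api/packages",        # placeholder host
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)      # the matching packages
    headers(resp)      # pagination details (number of pages, etc.)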
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements the XML-RPC API to NEOS <https://neos-server.org/neos/>, enabling the user to pass optimization problems to NEOS and retrieve the results within R.
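A minimal connectivity sketch; the function names below (NgetVersion(), NlistCategories()) are assumed to match the installed rneos version:

    library(rneos)
    NgetVersion()          # ping the NEOS server and return its version string
    NlistCategories()      # list the available solver categories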
High-resolution vector country boundaries derived from Natural Earth data, which can be plotted with rworldmap.
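For instance, a short sketch of plotting the high-resolution map with rworldmap; the POP_EST attribute is assumed to be carried over from the Natural Earth data:

    library(rworldxtra)
    library(rworldmap)
    data("countriesHigh")                                    # high-resolution country polygons
    mapCountryData(countriesHigh, nameColumnToPlot = "POP_EST")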
This package provides an R scripting interface to the open-source SAGA-GIS (System for Automated Geoscientific Analyses Geographical Information System) software. Rsagacmd dynamically generates R functions for every SAGA-GIS geoprocessing tool based on the user's currently installed SAGA-GIS version. These functions are contained within an S3 object and are accessed as a named list of libraries and tools. This structure facilitates an easier scripting experience by organizing the large number of SAGA-GIS geoprocessing tools (>700) by their respective library. Interactive scripting can take full advantage of code autocompletion tools (e.g. in RStudio), allowing each tool's syntax to be quickly recognized. Furthermore, the most common types of spatial data (via the terra, sp, and sf packages) along with non-spatial data are automatically passed from R to the SAGA-GIS command line tool for geoprocessing operations, and the results are loaded as the appropriate R object. Outputs from individual SAGA-GIS tools can also be chained using pipes from the magrittr and dplyr packages to combine complex geoprocessing operations in a single statement. SAGA-GIS is available under a GPLv2 / LGPLv2 licence from <https://sourceforge.net/projects/saga-gis/>, including Windows x86/x64 and macOS binaries. SAGA-GIS is also included in the default Debian/Ubuntu software repositories. Rsagacmd has currently been tested on SAGA-GIS versions from 2.3.1 to 9.5.1 on Windows, Linux, and macOS.
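A minimal sketch of this workflow; the library and tool names shown are illustrative and depend on the locally installed SAGA-GIS version:

    library(Rsagacmd)
    saga <- saga_gis()                            # generate functions for the installed tools
    # tools are accessed as a named list of libraries, e.g. a random terrain generator
    dem <- saga$grid_calculus$random_terrain(radius = 15)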
Enhances the R Optimization Infrastructure ('ROI') package by registering the quadprog solver. It allows for solving quadratic programming (QP) problems.
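As a sketch, a small QP test problem expressed through ROI and solved with the registered solver:

    library(ROI)
    library(ROI.plugin.quadprog)
    # minimise -5*x2 + 1/2*(x1^2 + x2^2 + x3^2) subject to three linear constraints
    qp <- OP(Q_objective(Q = diag(3), L = c(0, -5, 0)),
             L_constraint(L = matrix(c(-4, -3, 0,
                                        2,  1, 0,
                                        0, -2, 1), nrow = 3, byrow = TRUE),
                          dir = rep(">=", 3), rhs = c(-8, 2, 0)))
    sol <- ROI_solve(qp, solver = "quadprog")
    solution(sol)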
An R interface to Weka (Version 3.9.3). Weka is a collection of machine learning algorithms for data mining tasks written in Java, containing tools for data pre-processing, classification, regression, clustering, association rules, and visualization. Package RWeka contains the interface code; the Weka jar is in the separate package RWekajars. For more information on Weka see <https://www.cs.waikato.ac.nz/ml/weka/>.
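For example, a minimal sketch fitting one of Weka's classifiers from R (assumes Java and the Weka jar from RWekajars are installed):

    library(RWeka)
    fit <- J48(Species ~ ., data = iris)      # Weka's C4.5 decision tree learner
    summary(fit)                              # training-set evaluation reported by Weka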
The GenDataSample() and GenDataPopulation() functions create, respectively, a sample or population of multivariate nonnormal data using methods described in Ruscio and Kaczetow (2008). Both of these functions call a FactorAnalysis() function to reproduce a correlation matrix. The EFACompData() function allows users to determine how many factors to retain in an exploratory factor analysis of an empirical data set using a method described in Ruscio and Roche (2012). The latter function uses populations of comparison data created by calling the GenDataPopulation() function. <DOI: 10.1080/00273170802285693>. <DOI: 10.1037/a0025697>.
Robust Estimation of Variance Component Models by classic and composite robust procedures. The composite procedures are robust against outliers generated by the Independent Contamination Model.
Bootstrap, permutation tests, and jackknife, featuring easy-to-use syntax.
The main purpose of this package is to perform simulation-based estimation of stochastic actor-oriented models for longitudinal network data collected as panel data. Dependent variables can be single or multivariate networks, which can be directed, non-directed, or two-mode, together with associated actor variables. There are also functions for testing parameters and checking goodness of fit. An overview of these models is given in Snijders (2017), <doi:10.1146/annurev-statistics-060116-054035>.
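A minimal sketch of the usual workflow, using the s501-s503 example friendship waves shipped with the package; the argument defaults used here are assumptions about the current RSiena version:

    library(RSiena)
    friendship <- sienaDependent(array(c(s501, s502, s503), dim = c(50, 50, 3)))
    mydata <- sienaDataCreate(friendship)
    myeff  <- getEffects(mydata)              # default rate and structural effects
    myalg  <- sienaAlgorithmCreate(projname = NULL)
    ans    <- siena07(myalg, data = mydata, effects = myeff)
    summary(ans)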
Predict fish year-class strength by calibration regression analysis of multiple recruitment index series.
R functions for generating and/or displaying random Chuck Norris facts. Based on data from the Internet Chuck Norris database ('ICNDb').
Provides an interface for the Drama Corpora Project ('DraCor') API: <https://dracor.org/documentation/api>.
Analysis of corneal data obtained from a Placido disk corneal topographer, with calculation of irregularity indices. This package performs analyses of corneal data obtained from a Placido disk corneal topographer, including the calculation of the Placido irregularity indices and their subsequent analysis. The package is intended to be easy to use by a practitioner, providing a simple interface and yielding easily interpretable results. A corneal topographer is an ophthalmic clinical device that obtains measurements of the cornea (the anterior part of the eye). A Placido disk corneal topographer makes use of the Placido disk [Rowsey et al. (1981)] <doi:10.1001/archopht.1981.03930011093022>, which produces a circular pattern of measurement nodes. The raw information measured by such a topographer is used by practitioners to analyze curvatures, to study optical aberrations, or to diagnose specific conditions of the eye (e.g. keratoconus, an important corneal disease). The rPACI package allows the calculation of the corneal irregularity indices described in [Castro-Luna et al. (2020)] <doi:10.1016/j.clae.2019.12.006>, [Ramos-Lopez et al. (2013)] <doi:10.1097/OPX.0b013e3182843f2a>, and [Ramos-Lopez et al. (2011)] <doi:10.1097/opx.0b013e3182279ff8>. It provides a simple interface to read corneal topography data files as exported by a typical Placido disk topographer, to compute the irregularity indices mentioned before, and to display summary plots that are easy to interpret for a clinician.
This package provides a wrapper for Jagger, a morphological analyzer proposed in Yoshinaga (2023) <arXiv:2305.19045>. Jagger uses patterns derived from morphological dictionaries and training data sets and applies them from the beginning of the input. This simultaneous and deterministic process enables it to effectively perform tokenization, POS tagging, and lemmatization.
This R package connects to SWI-Prolog, <https://www.swi-prolog.org/>, so that R can send deterministic and non-deterministic queries to Prolog (consult, query/submit, once, findall).
Interface for the Google Ads API. Google Ads is an online advertising service that enables advertisers to display advertising to web users (see <https://developers.google.com/google-ads/> for more information).
This package implements a robust Partial Least-Squares (PLS) method that is robust to outliers in the residuals as well as to leverage points. A specific weighting scheme is applied which avoids iterations and leads to a highly efficient robust PLS estimator.
We provide several avenues to predict and account for user-based mortality and tag loss during mark-recapture studies. When planning a study on a target species, the retentionmort_generation() function can be used to produce multiple synthetic mark-recapture datasets to anticipate the error associated with a planned field study and guide method development to reduce that error. Similarly, if field data have already been collected, the retentionmort() function can be used to predict the error in the collected data and adjust for user-based mortality and tag loss. The test_dataset_retentionmort() function provides an example dataset showing how data should be formatted for the functions to run properly. Lastly, the retentionmort_figure() function can be used on any dataset generated from either model function to produce an rmarkdown printout of preliminary analyses associated with the model, including summary statistics and figures. Methods and results pertaining to the formation of this package can be found in McCutcheon et al. (in review, "Predicting tagging-related mortality and tag loss during mark-recapture studies").
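A heavily hedged sketch of how these functions might be chained; only the function names come from the description above, and the argument layout (a single dataset passed positionally) is an assumption:

    library(retentionmort)
    dat <- test_dataset_retentionmort()   # example of the expected input format
    fit <- retentionmort(dat)             # argument layout assumed; check the package docs
    retentionmort_figure(fit)             # rmarkdown printout of summary statistics and figures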
Implementation of the methods described in the paper with the above title: Langsrud, Ø. (2019) <doi:10.1007/s11222-018-9848-9>. The package can be used to generate synthetic or hybrid continuous microdata, and the relationship to the original data can be controlled in several ways. A function for replacing suppressed tabular cell frequencies with decimal numbers is included.
This package provides a framework for estimating ensembles of meta-analytic, meta-regression, and multilevel models (assuming either presence or absence of the effect, heterogeneity, publication bias, and moderators). The RoBMA framework uses Bayesian model-averaging to combine the competing meta-analytic models into a model ensemble, weights the posterior parameter distributions based on posterior model probabilities and uses Bayes factors to test for the presence or absence of the individual components (e.g., effect vs. no effect; Bartoš et al., 2022, <doi:10.1002/jrsm.1594>; Maier, Bartoš & Wagenmakers, 2022, <doi:10.1037/met0000405>; Bartoš et al., 2025, <doi:10.1037/met0000737>). Users can define a wide range of prior distributions for the effect size, heterogeneity, publication bias (including selection models and PET-PEESE), and moderator components. The package provides convenient functions for summary, visualizations, and fit diagnostics.
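For instance, a minimal sketch fitting the default ensemble to a few toy standardized mean differences; the effect sizes are made up purely for illustration, and JAGS is assumed to be installed:

    library(RoBMA)
    fit <- RoBMA(d  = c(0.32, 0.18, 0.45),       # toy effect sizes (Cohen's d)
                 se = c(0.11, 0.09, 0.15),       # their standard errors
                 seed = 1)
    summary(fit)                                 # posterior model probabilities and estimates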
This package provides portable access from R to the biomedical image processing toolbox ANTs by Avants et al. (2009) <doi:10.54294/uvnhin> via seamless integration with the Python implementation ANTsPy. Allows biomedical images to be processed in Python and analyzed in R, and vice versa, via shared memory. See citation("rpyANTs") for more reference information.
This package provides a comprehensive set of tools designed for optimizing likelihood within a tie-oriented (Butts, C., 2008, <doi:10.1111/j.1467-9531.2008.00203.x>) or an actor-oriented modelling framework (Stadtfeld, C., & Block, P., 2017, <doi:10.15195/v4.a14>) in relational event networks. The package accommodates both frequentist and Bayesian approaches. The frequentist approaches that the package incorporates are Maximum Likelihood Estimation (MLE) and Gradient-based Optimization (GDADAMAX). The Bayesian methodologies included in the package are Bayesian Sampling Importance Resampling (BSIR) and Hamiltonian Monte Carlo (HMC). The flexibility of choosing between frequentist and Bayesian optimization approaches allows researchers to select the estimation approach that best aligns with their analytical preferences.
The goal of rFIA is to increase the accessibility and use of the United States Forest Service's (USFS) Forest Inventory and Analysis (FIA) Database by providing a user-friendly, open source toolkit to easily query and analyze FIA data. Designed to accommodate a wide range of potential user objectives, rFIA simplifies the estimation of forest variables from the FIA Database and allows all R users (experts and newcomers alike) to unlock the flexibility inherent to the Enhanced FIA design. Specifically, rFIA improves accessibility to the spatial-temporal estimation capacity of the FIA Database by producing space-time indexed summaries of forest variables within user-defined population boundaries. Direct integration with other popular R packages (e.g., dplyr, tidyr, and sf) facilitates efficient space-time queries and data summaries, and supports common data representations and API design. The package implements the design-based estimation procedures outlined by Bechtold & Patterson (2005) <doi:10.2737/SRS-GTR-80>, and has been validated against estimates and sampling errors produced by FIA EVALIDator. Current development is focused on the implementation of spatially-enabled model-assisted and model-based estimators to improve population, change, and ratio estimates.
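A short sketch of a typical query; it downloads the Rhode Island subset of the FIA Database, chosen here only because it is small:

    library(rFIA)
    ri <- getFIA(states = "RI")    # download and read the Rhode Island FIA tables
    tpa(ri)                        # trees per acre and basal area per acre estimates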
A RUT (Rol Unico Tributario) is a unique personal identification number used in Chile to identify citizens and taxpayers. Rutifier allows users to validate whether a RUT exists and to convert between the different formats a RUT can have.