Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in response headers.
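For example, the API can be queried from R with the httr package (a minimal sketch; the base URL below is a placeholder, replace it with the address of this site):

library(httr)
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
results <- content(resp)     # parsed response body with the matching packages
pagination <- headers(resp)  # pagination details arrive as response headers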
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package performs the Random Subspace Method (RSM) for high-dimensional linear regression to obtain variable importance measures. The final model is chosen based on a validation set or the Generalized Information Criterion.
The open-source data management software Integrated Rule-Oriented Data System ('iRODS') offers solutions for the whole data life cycle (<https://irods.org/>). The loosely constructed and highly configurable architecture of iRODS frees the user from strict formatting constraints and single-vendor solutions. This package provides an interface to the iRODS HTTP API, allowing you to manage your data and metadata in iRODS with R. Storage of annotated files and R objects in iRODS ensures findability, accessibility, interoperability, and reusability of data.
The mappable vector library provides a convenient way to access large datasets. Use all of your data at once, with few limits. Memory-mapped data can be shared between multiple R processes. Access speed depends on the storage medium, so a solid-state drive is recommended, preferably with a PCI Express (or M.2 NVMe) interface or a fast network file system. The data is memory mapped into R and then accessed using the usual R list and array subscripting operators. Convenience functions are provided for merging, grouping and indexing large vectors and data.frames. The layout of the underlying MVL files is optimized for large datasets. The vectors are stored to guarantee alignment for vector intrinsics after memory mapping. The package is built on top of libMVL, which can be used as a standalone C library. libMVL has a simple C API, making it easy to interchange datasets with outside programs. Large MVL datasets are distributed via Academic Torrents <https://academictorrents.com/collection/mvl-datasets>.
Interface to the ChEA3 transcription factor enrichment API. ChEA3 integrates evidence from ChIP-seq, co-expression, and literature resources to prioritize transcription factors regulating a given set of genes. This package provides convenient R functions to query the API, retrieve ranked results across collections (including integrated scores), and standardize output for downstream analysis in R/Bioconductor workflows. See <https://maayanlab.cloud/chea3/> or Keenan (2019) <doi:10.1093/nar/gkz446> for further details.
This package provides a collection of R functions for use with Stock Synthesis, a fisheries stock assessment modeling platform written in ADMB by Dr. Richard D. Methot at the NOAA Northwest Fisheries Science Center. The functions include tools for summarizing and plotting results, manipulating files, visualizing model parameterizations, and various other common stock assessment tasks. This version of r4ss is compatible with Stock Synthesis versions 3.24 through 3.30 (specifically version 3.30.19.01, from April 2022).
This package provides implementations of a classifier based on the "Classification Based on Associations" (CBA) algorithm. It can be used for building classification models from association rules. Rules are pruned in the order of precedence given by the sort criteria and a default rule is added. The final classifier labels provided instances. CBA was originally proposed by Liu, B., Hsu, W. and Ma, Y., Integrating Classification and Association Rule Mining, Proceedings KDD-98, New York, 27-31 August, AAAI, pp. 80-86 (1998, ISBN:1-57735-070-7).
Three robust marginal integration procedures for additive models based on local polynomial kernel smoothers. As a preliminary estimator of the multivariate function for the marginal integration procedure, a first approach uses local constant M-estimators, a second one uses local polynomials of order 1 over all the components of covariates, and the third one uses M-estimators based on local polynomials but only in the direction of interest. For this last approach, estimators of the derivatives of the additive functions can be obtained. All three procedures can compute predictions for points outside the training set if desired. See Boente and Martinez (2017) <doi:10.1007/s11749-016-0508-0> for details.
This package provides a general-purpose optimisation engine that supports i) Monte Carlo optimisation with Metropolis criterion [Metropolis et al. (1953) <doi:10.1063/1.1699114>, Hastings (1970) <doi:10.1093/biomet/57.1.97>] and Acceptance Ratio Simulated Annealing [Kirkpatrick et al. (1983) <doi:10.1126/science.220.4598.671>, Černý (1985) <doi:10.1007/BF00940812>] on multiple cores, and ii) Acceptance Ratio Replica Exchange Monte Carlo Optimisation. In each case, the system pseudo-temperature is dynamically adjusted such that the observed acceptance ratio is kept near to the desired (fixed or changing) acceptance ratio.
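As a rough illustration of the Metropolis criterion this engine builds on (generic base R, not this package's interface; the function names here are invented for the example):

# Downhill moves (improvements) are always accepted; uphill moves are
# accepted with probability exp(-delta / temperature).
metropolis_step <- function(state, objective, propose, temperature) {
  candidate <- propose(state)
  delta <- objective(candidate) - objective(state)
  if (delta <= 0 || runif(1) < exp(-delta / temperature)) candidate else state
}

# Minimise a one-dimensional quadratic with Gaussian proposals.
state <- 5
for (i in seq_len(1000)) {
  state <- metropolis_step(state, function(x) x^2,
                           function(x) x + rnorm(1, sd = 0.5),
                           temperature = 0.1)
}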
Measure single-storage water supply system performance using resilience, reliability, and vulnerability metrics; assess storage-yield-reliability relationships; determine no-fail storage with sequent peak analysis; optimize release decisions for water supply, hydropower, and multi-objective reservoirs using deterministic and stochastic dynamic programming; generate inflow replicates using parametric and non-parametric models; evaluate inflow persistence using the Hurst coefficient.
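The sequent peak calculation mentioned above can be sketched in a few lines of base R (a generic illustration, not this package's own function):

# Accumulate the deficit K[t] = max(0, K[t-1] + demand - inflow[t]);
# the largest deficit reached is the no-fail storage.
sequent_peak <- function(inflow, demand) {
  k <- 0; peak <- 0
  for (q in inflow) {
    k <- max(0, k + demand - q)
    peak <- max(peak, k)
  }
  peak
}

set.seed(1)
inflow <- rlnorm(120, meanlog = 4, sdlog = 0.5)  # synthetic monthly inflows
sequent_peak(inflow, demand = 50)                # storage needed for a constant demand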
R6 class interface for handling relational database connections using the DBI package as a backend. The class allows handling of connections to e.g. PostgreSQL, MariaDB and SQLite. The purpose is to provide an intuitive object that allows straightforward handling of SQL databases.
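For context, these are the kinds of DBI calls such a connection object wraps, shown here directly with DBI and the RSQLite driver (an illustration of the backend, not of this package's R6 interface):

library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")   # in-memory SQLite database
dbWriteTable(con, "mtcars", mtcars)
dbGetQuery(con, "SELECT cyl, AVG(mpg) AS avg_mpg FROM mtcars GROUP BY cyl")
dbDisconnect(con)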
Set of functions for Regression Discontinuity Design ('RDD'), for data visualisation, estimation and testing.
An algorithm which can be used to determine an objective threshold for signal-noise separation in large random matrices (correlation matrices, mutual information matrices, network adjacency matrices) is provided. The package makes use of the results of Random Matrix Theory (RMT). The algorithm increments a suppositional threshold monotonically, thereby recording the eigenvalue spacing distribution of the matrix. According to RMT, that distribution undergoes a characteristic change when the threshold properly separates signal from noise. By using the algorithm, the modular structure of a matrix - or of the corresponding network - can be unraveled.
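The thresholding loop can be sketched generically in base R (this omits the spectrum unfolding a full RMT analysis would apply, and it is not this package's interface):

# Zero out entries below a candidate threshold and return the
# nearest-neighbour spacings of the resulting eigenvalue spectrum.
eigen_spacings <- function(m, threshold) {
  m[abs(m) < threshold] <- 0
  ev <- sort(eigen(m, symmetric = TRUE, only.values = TRUE)$values)
  diff(ev)
}

set.seed(1)
corr <- cor(matrix(rnorm(200 * 50), nrow = 200))
spacings <- lapply(seq(0, 0.5, by = 0.05),
                   function(th) eigen_spacings(corr, th))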
Download, prepare and analyze data from large-scale assessments and surveys with complex sampling and assessment design (see Rutkowski, 2010 <doi:10.3102/0013189X10363170>). Such studies are, for example, international assessments like TIMSS, PIRLS and PISA. A graphical interface is available for the non-technical user. The package includes functions to convert the original data from SPSS into R data sets keeping the user-defined missing values, merge data from different respondents and/or countries, generate variable dictionaries, modify data, produce descriptive statistics (percentages, means, percentiles, benchmarks) and multivariate statistics (correlations, linear regression, binary logistic regression). The number of supported studies and analysis types will increase in the future. For a general presentation of the package, see Mirazchiyski, 2021a (<doi:10.1186/s40536-021-00114-4>). For detailed technical aspects of the package, see Mirazchiyski, 2021b (<doi:10.3390/psych3020018>).
The Reproducible Open Coding Kit (ROCK, and this package, rock) was developed to facilitate reproducible and open coding, specifically geared towards qualitative research methods. It was developed to be both human- and machine-readable, in the spirit of MarkDown and YAML. The idea is that this makes it relatively easy to write other functions and packages to process ROCK files. The rock package contains functions for basic coding and analysis, such as collecting and showing coded fragments and prettifying sources, as well as a number of advanced analyses such as the Qualitative Network Approach and Qualitative/Unified Exploration of State Transitions. The ROCK and this rock package are described in the ROCK book (Zörgő & Peters, 2022; <https://rockbook.org>), in Zörgő & Peters (2024) <doi:10.1080/21642850.2022.2119144> and Peters, Zörgő and van der Maas (2022) <doi:10.31234/osf.io/cvf52>, and more information and tutorials are available at <https://rock.science>.
Frequentist sequential meta-analysis based on Trial Sequential Analysis (TSA), as programmed in Java by the Copenhagen Trial Unit (CTU). The primary function is the calculation of group sequential designs for meta-analysis to be used for planning and analysis of both prospective and retrospective sequential meta-analyses to preserve type-I-error control under sequential testing. RTSA includes tools for sample size and trial size calculation for meta-analysis and core meta-analysis methods such as fixed-effect and random-effects models and forest plots. TSA is described in Wetterslev et al. (2008) <doi:10.1016/j.jclinepi.2007.03.013>. The methods for deriving the group sequential designs are based on Jennison and Turnbull (1999, ISBN:9780849303166).
Represents high-dimensional data as tables of features, samples and measurements, and a design list for tracking the meaning of individual variables. Using this format, filtering, normalization, and other transformations of a dataset can be carried out in a flexible manner. romic takes advantage of these transformations to create interactive shiny apps for exploratory data analysis such as an interactive heatmap.
Downloads Southern Oscillation Index, Oceanic Niño Index, North Pacific Gyre Oscillation, North Atlantic Oscillation and Arctic Oscillation data. Data sources are described in the help files for each function.
An R interface to United States Environmental Protection Agency (EPA) Assessment, Total Maximum Daily Load (TMDL) Tracking and Implementation System ('ATTAINS') data. ATTAINS is the EPA database used to track information provided by states about water quality assessments conducted under federal Clean Water Act requirements. ATTAINS information and API information are available at <https://www.epa.gov/waterdata/attains>.
Companion package for the book: "Robust Statistics: Theory and Methods, second edition", <http://www.wiley.com/go/maronna/robust>. This package contains code that implements the robust estimators discussed in the recent second edition of the book above, as well as the scripts reproducing all the examples in the book.
This package provides a tool for multiply imputing missing data using MIDAS, a deep learning method based on denoising autoencoder neural networks (see Lall and Robinson, 2022; <doi:10.1017/pan.2020.49>). This algorithm offers significant accuracy and efficiency advantages over other multiple imputation strategies, particularly when applied to large datasets with complex features. Alongside interfacing with Python to run the core algorithm, this package contains functions for processing data before and after model training, running imputation model diagnostics, generating multiple completed datasets, and estimating regression models on these datasets. For more information see Lall and Robinson (2023) <doi:10.18637/jss.v107.i09>.
This package contains functions for random generation of R x C and 2 x 2 x K contingency tables. In addition to the generation of contingency tables over predetermined intraclass-correlated clusters, it is possible to generate contingency tables without intraclass correlations under product multinomial, multinomial, and Poisson sampling plans. It also includes a function for generating random data from a given discrete probability distribution function. See Demirhan (2016) <https://journal.r-project.org/archive/2016-1/demirhan.pdf> for more information.
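For instance, a single R x C table under a multinomial sampling plan can be drawn with base R alone (a generic illustration, not this package's functions):

# Cell probabilities for a 2 x 3 table, sampled as one multinomial draw of size 200.
cell_probs <- matrix(c(0.10, 0.20, 0.05,
                       0.15, 0.30, 0.20), nrow = 2, byrow = TRUE)
counts <- rmultinom(1, size = 200, prob = as.vector(cell_probs))
matrix(counts, nrow = 2)  # reshape the draw back into the 2 x 3 layout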
It is widely documented in psychology, economics and other disciplines that socio-economic agents may not pay full attention to all available alternatives, rendering standard revealed preference theory invalid. This package implements the estimation and inference procedures of Cattaneo, Ma, Masatlioglu and Suleymanov (2020) <arXiv:1712.03448> and Cattaneo, Cheung, Ma, and Masatlioglu (2022) <arXiv:2110.10650>, which utilize standard choice data to partially identify and estimate a decision maker's preference and attention. For inference, several simulation-based critical values are provided.
Loads Blackrock <https://blackrockneurotech.com> neural signal data files into memory and provides utility tools to extract the data into common formats such as plain-text TSV and HDF5.
Various functions for querying and reshaping dependency trees, as for instance created with the spacyr or udpipe packages. This enables the automatic extraction of useful semantic relations from texts, such as quotes (who said what) and clauses (who did what). Method proposed in Van Atteveldt et al. (2017) <doi:10.1017/pan.2016.12>.