Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
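For example, here is a minimal sketch of calling this endpoint from R with the httr and jsonlite packages; the base URL is a placeholder for whichever host serves this site, and since the pagination header names are not documented here the sketch simply prints all response headers:

    library(httr)      # HTTP client
    library(jsonlite)  # JSON parsing

    base_url <- "https://example.org"  # placeholder -- substitute the actual host

    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)

    # The response body holds the matching packages ...
    packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

    # ... while pagination information is carried in the response headers.
    print(headers(resp))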
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Tests whether multivariate ordinal data may stem from discretizing a multivariate normal distribution. The test is described by Foldnes and Grønneberg (2019) <doi:10.1080/10705511.2019.1673168>. In addition, an adjusted polychoric correlation estimator is provided that takes marginal knowledge into account, as described by Grønneberg and Foldnes (2022) <doi:10.1037/met0000495>.
View 2D/3D sections, contour plots, and meshes of excursion sets for computer experiment designs, surrogates, or test functions.
Interactively train neural networks on Numerai (<https://numer.ai/>) data. Generate tournament predictions and write them to a CSV file.
This package provides functions to calculate the Divisia monetary aggregates index as given in Barnett, W. A. (1980) <DOI:10.1016/0304-4076(80)90070-6>.
Allows users to quickly and easily detect data containing Personally Identifiable Information (PII) through convenience functions.
Item-focused recursive partitioning for the simultaneous selection of items and variables that induce Differential Item Functioning (DIF) in dichotomous or polytomous items.
Algorithms implementing populations of agents that interact with one another and sense their environment may exhibit emergent behavior such as self-organization and swarm intelligence. Here, a swarm system called Databionic swarm (DBS) is introduced, which was published in Thrun, M.C., Ultsch, A.: "Swarm Intelligence for Self-Organized Clustering" (2020), Artificial Intelligence, <DOI:10.1016/j.artint.2020.103237>. DBS is able to adapt itself to structures of high-dimensional data such as natural clusters characterized by distance- and/or density-based structures in the data space. The first module is the parameter-free projection method called Pswarm (Pswarm()), which exploits the concepts of self-organization and emergence, game theory, swarm intelligence, and symmetry considerations. The second module is the parameter-free high-dimensional data visualization technique, which generates projected points on a topographic map with hypsometric tints defined by the generalized U-matrix (GeneratePswarmVisualization()). The third module is the clustering method itself, with non-critical parameters (DBSclustering()). Clustering can be verified by the visualization and vice versa. The term DBS refers to the method as a whole. It enables even a non-professional in the field of data mining to apply its algorithms for visualization and/or clustering to data sets with completely different structures drawn from diverse research fields. A comparison to common projection methods can be found in the book by Thrun, M.C.: "Projection Based Clustering through Self-Organization and Swarm Intelligence" (2018) <DOI:10.1007/978-3-658-20540-9>.
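As a rough illustration of that three-module workflow, here is a minimal sketch in R. The toy data, the argument order, and the names of the returned list elements (ProjectedPoints, LC, Bestmatches) are assumptions based on a reading of the package's examples and should be checked against the DatabionicSwarm manual:

    library(DatabionicSwarm)

    # Toy data: three well-separated Gaussian clusters (assumption; any
    # numeric matrix with observations in rows should work).
    set.seed(1)
    Data <- rbind(matrix(rnorm(200, mean = 0),  ncol = 2),
                  matrix(rnorm(200, mean = 5),  ncol = 2),
                  matrix(rnorm(200, mean = 10), ncol = 2))

    # Module 1: parameter-free projection.
    proj <- Pswarm(Data)

    # Module 2: topographic map defined by the generalized U-matrix.
    vis <- GeneratePswarmVisualization(Data, proj$ProjectedPoints, proj$LC)

    # Module 3: clustering with non-critical parameters; the result can be
    # verified against the visualization.
    Cls <- DBSclustering(3, Data, vis$Bestmatches, vis$LC)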
This package provides a collection of widely used univariate data sets from various applied domains for applications of distribution theory. The functions allow researchers and practitioners to quickly, easily, and efficiently access and use these data sets. The data come from different applied domains, as follows: bio-medicine, survival analysis, medicine, reliability analysis, hydrology, actuarial science, operational research, meteorology, extreme values, quality control, engineering, finance, sports, and economics. In total, 100 data sets are documented along with associated references for further details and uses.
DAGs With Omitted Objects Displayed (DAGWOOD) is a framework to help reveal key hidden assumptions in a causal DAG. This package provides an implementation of the DAGWOOD algorithm. Further description can be found in Haber et al. (2022) <DOI:10.1016/j.annepidem.2022.01.001>.
This package performs sensitivity analysis for the sharp null, attributable effects, and weak nulls in matched studies with continuous exposures and binary or continuous outcomes as described in Zhang, Small, Heng (2024) <doi:10.48550/arXiv.2401.06909> and Zhang, Heng (2024) <doi:10.48550/arXiv.2409.12848>. Two of the functions require installation of the Gurobi optimizer. Please see <https://docs.gurobi.com/current/#refman/ins_the_r_package.html> for guidance.
Generates simulated data representing the LOX drop testing process (also known as impact testing). A simulated process allows for accelerated study of test behavior. Functions are provided to simulate trials, test series, and groups of test series. Functions for creating plots specific to this process are also included. Test attributes and criteria can be set arbitrarily. This work is not endorsed by or affiliated with NASA. See "ASTM G86-17, Standard Test Method for Determining Ignition Sensitivity of Materials to Mechanical Impact in Ambient Liquid Oxygen and Pressurized Liquid and Gaseous Oxygen Environments" <doi:10.1520/G0086-17>.
Generate balanced factorial designs with crossed and nested random and fixed effects <https://github.com/mmrabe/designr>.
Efficient, low-bias estimation of the total population size from capture-recapture data, implementing the methods of Das M, Kennedy EH, and Jewell NP (2021) <arXiv:2104.14091>. The estimator is doubly robust against errors in the estimation of the intermediate nuisance parameters. Users can choose from the flexible estimation models provided in the package, or use any other preferred model.
Intensive longitudinal data have become increasingly prevalent in various scientific disciplines. Many such data sets are noisy, multivariate, and multi-subject in nature. The change functions may also be continuous, or continuous but interspersed with periods of discontinuities (i.e., showing regime switches). The dynr package (Dynamic Modeling in R) implements a set of computationally efficient algorithms for handling a broad class of linear and nonlinear discrete- and continuous-time models with regime-switching properties under the constraint of linear Gaussian measurement functions. The discrete-time models can generally take on the form of a state-space or difference equation model. The continuous-time models are generally expressed as a set of ordinary or stochastic differential equations. All estimation and computations are performed in C, but users are provided with the option to specify the model of interest via a set of simple and easy-to-learn model specification functions in R. Model fitting can be performed using single-subject time series data or multiple-subject longitudinal data. Ou, Hunter, & Chow (2019) <doi:10.32614/RJ-2019-012> provided a detailed introduction to the interface and more information on the algorithms.
Manipulates date ('Date'), date time ('POSIXct') and time ('hms') vectors. Date/times are considered discrete and are floored whenever encountered. Times are wrapped and time zones are maintained unless explicitly altered by the user.
This package implements S4 classes for probability models, based on the packages 'distr' and 'distrEx'.
The twoStepsBenchmark() and threeRuleSmooth() functions allow you to disaggregate a low-frequency time series with higher-frequency time series, using the French National Accounts methodology. The aggregated sum of the resulting time series is strictly equal to the low-frequency time series within the benchmarking window. Typically, the low-frequency time series is an annual one, unknown for the last year, and the high-frequency one is either quarterly or monthly. See "Methodology of quarterly national accounts", Insee Méthodes N°126, by Insee (2012, ISBN:978-2-11-068613-8, <https://www.insee.fr/en/information/2579410>).
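A minimal sketch of both functions follows; the turnover (high-frequency indicator) and construction (annual series) example series are assumptions taken from the package's published examples and should be checked against its documentation:

    library(disaggR)

    # Benchmark a high-frequency indicator to an annual series
    # (example series assumed to ship with the package).
    benchmark <- twoStepsBenchmark(hfserie = turnover, lfserie = construction)
    smooth    <- threeRuleSmooth(hfserie = turnover, lfserie = construction)

    # The disaggregated series; within the benchmarking window its annual
    # sums equal the low-frequency series.
    head(as.ts(benchmark))
    plot(benchmark)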
This package provides functionality to infer trajectories from single-cell data, represent them in a common format, and adapt them. Other biological information can also be added, such as cellular grouping, RNA velocity, and annotation. See Saelens et al. (2019) <doi:10.1038/s41587-019-0071-9>.
Fast and memory-efficient functions to analyze and manipulate large spatial data sets. It leverages the fast analytical capabilities of DuckDB and its spatial extension (see <https://duckdb.org/docs/stable/core_extensions/spatial/overview>) while maintaining compatibility with R's spatial data ecosystem to work with spatial vector data.
Differential Item Functioning (DIF) Analysis with shiny application interfaces. You can run the functions in this package without any arguments and perform your DIF analysis using user-friendly interfaces.
Offers meta-programming-style tools to generate configurable R functions that produce HTML forms based on table input and SQL metadata. Also generates functions for collecting the parameters of those HTML forms after they are submitted. Useful for quickly generating HTML forms based on existing SQL tables. To use the resulting functions, the output files containing them must be read into the R environment (perhaps using base::source()).
Provides tools for drought monitoring based on univariate and multivariate drought indicators. Statistical drought prediction based on Ensemble Streamflow Prediction (ESP), drought risk assessments, and drought propagation are also provided. Please see Hao Zengchao et al. (2017) <doi:10.1016/j.envsoft.2017.02.008>.
Data and miscellanea to support the book "Introduction to Data Analysis with R for Forensic Scientists", written by James Curran and published by CRC Press in 2010 (ISBN: 978-1-4200-8826-7).
This package provides tools for exploring the topography of 3d triangle meshes. The functions were developed with dental surfaces in mind, but could be applied to any triangle mesh of class 'mesh3d'. More specifically, doolkit allows the user to isolate the border of a mesh, or a subpart of the mesh using the polygon networks method; crop a mesh; compute basic descriptors (elevation, orientation, footprint area); compute slope, angularity and relief index (Ungar and Williamson (2000) <https://palaeo-electronica.org/2000_1/gorilla/issue1_00.htm>; Boyer (2008) <doi:10.1016/j.jhevol.2008.08.002>), inclination and occlusal relief index or gamma (Guy et al. (2013) <doi:10.1371/journal.pone.0066142>), OPC (Evans et al. (2007) <doi:10.1038/nature05433>), OPCR (Wilson et al. (2012) <doi:10.1038/nature10880>), DNE (Bunn et al. (2011) <doi:10.1002/ajpa.21489>; Pampush et al. (2016) <doi:10.1007/s10914-016-9326-0>), form factor (Horton (1932) <doi:10.1029/TR013i001p00350>), basin elongation (Schum (1956) <doi:10.1130/0016-7606(1956)67[597:EODSAS]2.0.CO;2>), lemniscate ratio (Chorley et al. (1957) <doi:10.2475/ajs.255.2.138>), enamel-dentine distance (Guy et al. (2015) <doi:10.1371/journal.pone.0138802>; Thiery et al. (2017) <doi:10.3389/fphys.2017.00524>), absolute crown strength (Schwartz et al. (2020) <doi:10.1098/rsbl.2019.0671>), relief rate (Thiery et al. (2019) <doi:10.1002/ajpa.23916>) and area-relative curvature; draw cumulative profiles of a topographic variable; and map a variable over a 3d triangle mesh.