Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
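For example, here is a minimal Guile Scheme sketch of calling the API and reading the pagination headers; the host name below is a placeholder for this site's address.

(use-modules (web client) (web uri) (web response))

;; Query the package search API; substitute this site's host for example.org.
(define-values (response body)
  (http-get (string->uri
             "https://example.org/api/packages?search=hello&page=1&limit=20")))

;; Pagination details (number of pages and so on) arrive in the response headers.
(display (response-headers response))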
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
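A hypothetical entry could look like the sketch below, assuming channels.scm uses the standard Guix channel record layout; the name and URL are placeholders, so check the existing entries in the file for the exact form expected.

;; Placeholder channel entry (standard Guix channel record).
(channel
 (name 'my-channel)
 (url "https://git.example.org/my-channel.git"))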
Quickly create numeric matrices for machine learning algorithms that require them. Factor columns are converted into one-hot vectors.
This package provides a regression framework for response variables which are continuous self-rating scales such as the Visual Analog Scale (VAS) used in pain assessment, or the Linear Analog Self-Assessment (LASA) scales in quality of life studies. These scales measure subjects' perception of an intangible quantity, and cannot be handled as ratio variables because of their inherent non-linearity. We treat them as ordinal variables, measured on a continuous scale. A function (the g function) connects the scale with an underlying continuous latent variable. The link function is the inverse of the CDF of the assumed underlying distribution of the latent variable. A variety of link functions are currently implemented. Such models are described in Manuguerra et al. (2020) <doi:10.18637/jss.v096.i08>.
Build SVG components using element-based functions. With an svg object, we can modify its graphical elements with a suite of transform functions.
This package creates mock data for testing and package development for the Observational Medical Outcomes Partnership common data model. The package offers pipeline-friendly functions, enabling users to include only the tables needed for their tests.
This package provides tools for checking that the output of an optimization algorithm is indeed at a local mode of the objective function. This is accomplished graphically by calculating all one-dimensional "projection plots" of the objective function, i.e., varying each input variable one at a time with all other elements of the potential solution being fixed. The numerical values in these plots can be readily extracted for the purpose of automated and systematic unit-testing of optimization routines.
Computes A-, MV-, D- and E-optimal or near-optimal row-column designs for two-colour cDNA microarray experiments using the linear fixed effects and mixed effects models where the interest is in a comparison of all pairwise treatment contrasts. The algorithms used in this package are based on the array exchange and treatment exchange algorithms of Debusho, Gemechu and Haines (2018) <doi:10.1080/03610918.2018.1429617>, adjusted for the row-column design setup. The package also provides an optional graphical user interface (GUI), built with the tcltk R package, to make it user friendly.
This package provides a building block for optimization algorithms based on a simplex. The optimsimplex package may be used in the following optimization methods: the simplex method of Spendley et al. (1962) <doi:10.1080/00401706.1962.10490033>, the method of Nelder and Mead (1965) <doi:10.1093/comjnl/7.4.308>, Box's algorithm for constrained optimization (1965) <doi:10.1093/comjnl/8.1.42>, the multi-dimensional search by Torczon (1989) <https://www.cs.wm.edu/~va/research/thesis.pdf>, and others.
This package provides carefully chosen color palettes as used, among others, at OpenAnalytics <http://www.openanalytics.eu>.
Computes optimal cutpoints for diagnostic tests or continuous markers. Various approaches for selecting optimal cutoffs have been implemented, including methods based on cost-benefit analysis and diagnostic test accuracy measures (Sensitivity/Specificity, Predictive Values and Diagnostic Likelihood Ratios). Numerical and graphical output for all methods is easily obtained.
OPCreg implements the online principal component regression method, which is specifically designed to process online datasets efficiently. This method is particularly useful for handling large-scale, streaming data where traditional batch processing methods may be computationally infeasible. The philosophy of the package is described in Guo (2025) <doi:10.1016/j.physa.2024.130308>.
Harvest metadata using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) version 2.0 (for more information, see <https://www.openarchives.org/OAI/openarchivesprotocol.html>).
This package provides functions for optimal policy learning in socioeconomic applications, helping users learn the most effective policies from data in order to maximize empirical welfare. Specifically, OPL allows users to find "treatment assignment rules" that maximize the overall welfare, defined as the sum of the policy effects estimated over all the policy beneficiaries. OPL is documented in several international articles, including Athey et al. (2021, <doi:10.3982/ECTA15732>), Kitagawa et al. (2018, <doi:10.3982/ECTA13288>), Cerulli (2022, <doi:10.1080/13504851.2022.2032577>) and Cerulli (2021, <doi:10.1080/13504851.2020.1820939>), as well as the book by Gareth et al. (2013, <doi:10.1007/978-1-4614-7138-7>).
This package provides a collection of functions to facilitate analysis of proteomic data from Olink, primarily NPX data that has been exported from Olink Software. The functions also work on QUANT data from Olink by log-transforming the QUANT data. The functions are focused on reading data, facilitating data wrangling and quality control analysis, performing statistical analysis and generating figures to visualize the results of the statistical analysis. The goal of this package is to help users extract biological insights from proteomic data run on the Olink platform.
Calculate ocean wave height summary statistics and process data from bottom-mounted pressure sensor data loggers. Derived primarily from MATLAB functions provided by U. Neumeier at <http://neumeier.perso.ch/matlab/waves.html>. Wave number calculation based on the algorithm in Hunt, J. N. (1979, ISSN:0148-9895) "Direct Solution of Wave Dispersion Equation", American Society of Civil Engineers Journal of the Waterway, Port, Coastal, and Ocean Division, Vol 105, pp 457-459.
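For reference, the wave number k referred to above is the solution of the linear wave theory dispersion relation omega^2 = g * k * tanh(k * d), where omega is the angular wave frequency, g the gravitational acceleration and d the water depth; Hunt's method gives a direct approximation of this solution rather than solving it iteratively.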
Calculate the ratio of iron oxides, hematite and goethite, in soil using the diffuse reflectance technique. The Kubelka-Munk theory, second derivative analysis, and spectral region amplitudes related to hematite and goethite content are used for quantification (Torrent, J., & Barron, V. (2008) <doi:10.2136/sssabookser5.5.c13>). Additionally, the package calculates soil color in the visible spectrum using Munsell and RGB color spaces, based on color theory (Viscarra et al. (2006) <doi:10.1016/j.geoderma.2005.07.017>).
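For reference, the Kubelka-Munk theory mentioned above relates the measured diffuse reflectance R to the ratio of the absorption and scattering coefficients through the remission function f(R) = (1 - R)^2 / (2R) = K/S; the second-derivative analysis is typically applied to this transformed spectrum.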
This package provides a set of commands to manage an abstract optimization method. The goal is to provide a building block for a large class of specialized optimization methods. This package manages the number of variables, the minimum and maximum bounds, the number of nonlinear inequality constraints, the cost function, the logging system, various termination criteria, and more.
This package provides a function for fitting various penalized Bayesian cumulative link ordinal response models when the number of parameters exceeds the sample size. These models have been described in Zhang and Archer (2021) <doi:10.1186/s12859-021-04432-w>.
Match, download, convert and import OpenStreetMap data extracts obtained from several providers.
The Ontario Marginalization Index is a socioeconomic model that is built on Statistics Canada census data. The model consists of four dimensions. In 2021, these dimensions were renamed "Material Resources" (previously called "Material Deprivation"), "Households and Dwellings" (previously called "Residential Instability"), "Age and Labour Force" (previously called "Dependency"), and "Racialized and Newcomer Populations" (previously called "Ethnic Concentration"). This update reflects a movement away from deficit-based language. 2021 data will load with these new dimension names, whereas 2011 and 2016 data will load with the historical dimension names. Each of these dimensions is imported for a variety of geographic levels (DA, CD, etc.) for the 2021, 2011 and 2016 administrations of the census. These data sets contribute to community analysis of equity with respect to Ontario's Anti-Racism Act. The Ontario Marginalization Index data is retrieved from the Public Health Ontario website: <https://www.publichealthontario.ca/en/data-and-analysis/health-equity/ontario-marginalization-index>. The shapefile data is retrieved from the Statistics Canada website: <https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-eng.cfm>.
OpenTelemetry is a collection of tools, APIs, and SDKs used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis in order to understand your software's performance and behavior. This package contains the OpenTelemetry SDK and exporters. Use this package to export traces, metrics and logs from instrumented R code. Use the otel package to instrument your R code for OpenTelemetry.
Conduct sensitivity analysis of omitted variable bias in linear econometric models using the methodology presented in Basu (2025) <doi:10.2139/ssrn.4704246>.
This package provides functionalities and data structures to retrieve, analyze and visualize aviation data. It includes a client interface to the OpenSky API <https://opensky-network.org>. It allows retrieval of flight information, as well as aircraft state vectors.
This package provides a unified object-oriented framework for numerical optimizers in R. It allows for both minimization and maximization with any optimizer, optimization over more than one function argument, measurement of computation time, and setting a time limit for long optimization tasks.
This package provides a first implementation of automated parsing of user stories, when used to define functional requirements for operational research mathematical models. It allows reading user stories, splitting them on the who-what-why template, and classifying them according to the parts of the mathematical model that they represent. It also provides semantic grouping of stories for project management purposes.