Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
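For example, here is a minimal sketch of calling this endpoint with Python's requests library; the base URL placeholder and the assumed shape of the JSON response are not part of the documented API, so adjust them to the host serving this site:

    import requests

    BASE_URL = "https://example.org"  # placeholder -- substitute the host serving this site

    # Request the first page of results matching "hello", 20 items per page.
    resp = requests.get(
        BASE_URL + "/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    packages = resp.json()  # assumed: a JSON list of package records
    print(len(packages), "packages on this page")

    # Pagination information comes back in the response headers; print them
    # to see which header names this server uses for the page count.
    for name, value in resp.headers.items():
        print(name + ":", value)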
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The identity provider OneLogin <http://onelogin.com> is used for authentication via Single Sign-On (SSO). This package provides an R interface to their API.
This package provides a collection of general-purpose helper functions that I (and maybe others) find useful when developing data science software. Includes tools for simulation, data transformation, input validation, and more.
Allows code to be run only once on a given computer, using lockfiles. Typical use cases include startup messages shown only when a package is loaded for the very first time.
This package provides a toolbox for working with public opinion data from Argentina. It facilitates access to microdata and the calculation of indicators of the Trust in Government Index (ICG, Índice de Confianza en el Gobierno), prepared by the Torcuato Di Tella University. Although we will try to document as much as possible in English, by its very nature Spanish will be the main language of the package.
Aids in the analysis of genes influencing cancer survival by including a principal function, calculator(), which calculates the P-value for each provided gene under the optimal cutoff in cancer survival studies. Grounded in methodologies from significant works, this package references Therneau's survival package (Therneau, 2024; <https://CRAN.R-project.org/package=survival>) and the survival analysis extensions by Therneau and Grambsch (2000, ISBN 0-387-98784-3). It also integrates the survminer package by Kassambara et al. (2021; <https://CRAN.R-project.org/package=survminer>), enhancing survival curve visualizations with ggplot2.
An approach to outlier detection in RNA-seq and related data based on five statistics. OutSeekR implements an outlier test by comparing the distributions of these statistics in observed data with those of simulated null data.
The online principal component method can process data sets that arrive online (streaming data). The philosophy of the package is described in Guo G. (2018) <doi:10.1080/10485252.2018.1531130>.
Microarray probe IDs are not convenient for further enrichment analysis and target gene selection. This package was created for rice microarray probe ID conversion. It can convert microarray probe IDs from the GPL6864 <https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GPL6864>, GPL8852 <https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GPL8852>, and GPL2025 <https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GPL2025> platforms to RAP-DB IDs. RAP-DB, "The Rice Annotation Project Database" <https://rapdb.dna.affrc.go.jp>, is a well-known database for rice (Oryza sativa), and the gene IDs in this database are widely used in many areas of rice research. For multiple probes representing a single gene, this package can merge them by taking the mean, max, or min value of these probes, or it can keep the multiple probes by appending sequence numbers to the duplicated RAP-DB IDs.
This package provides a data set with the "Orsi" and "Park/Durand" fronts as SpatialLinesDataFrame objects. The Orsi et al. (1995) fronts are published at the Southern Ocean Atlas Database Page, and the Park et al. (2019) fronts are published at the SEANOE Altimetry-derived Antarctic Circumpolar Current fronts page; please see the package CITATION for details.
Use health data in the Observational Medical Outcomes Partnership Common Data Model format in Spark. Functionality includes creating all required tables and fields and creating a single reference to the data. Native Spark functionality is supported.
Function library for the identification and separation of exponentially decaying signal components in continuous-wave optically stimulated luminescence measurements. A special emphasis is laid on luminescence dating with quartz, which is known for systematic errors due to signal components with unequal physical behaviour. Also, this package enables an easy-to-use signal decomposition of data sets imported and analysed with the R package Luminescence. This includes the optional automatic creation of HTML reports. Further information and tutorials can be found at <https://luminescence.de>.
I tend to repeat the same code chunks over and over again. At first, this was fine for me and I paid little attention to such redundancies. A little later, when I got tired of manually replacing Linux file paths with the corresponding Windows versions, and vice versa, I started to stuff some very frequently used work steps into functions and, even later, into a proper R package. And that's what this package is - a hodgepodge of various R functions meant to simplify (my) everyday coding work without, at the same time, being devoted to a particular scope of application.
Useful functions for one-sample (individual-level data) Mendelian randomization and instrumental variable analyses. The package includes implementations of: the Sanderson and Windmeijer (2016) <doi:10.1016/j.jeconom.2015.06.004> conditional F-statistic, the multiplicative structural mean model of Hernán and Robins (2006) <doi:10.1097/01.ede.0000222409.00878.37>, and the two-stage predictor substitution and two-stage residual inclusion estimators explained by Terza et al. (2008) <doi:10.1016/j.jhealeco.2007.09.009>.
Optimal scaling of a data vector, relative to a set of targets, is obtained through a least-squares transformation subject to appropriate measurement constraints. The targets are usually predicted values from a statistical model. If the data are nominal level, then the transformation must be identity-preserving. If the data are ordinal level, then the transformation must be monotonic. If the data are discrete, then tied data values must remain tied in the optimal transformation. If the data are continuous, then tied data values can be untied in the optimal transformation.
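As a rough conceptual illustration (not this package's own interface), the monotone least-squares case described above can be sketched with scikit-learn's IsotonicRegression, which finds the monotone transformation of the data values that is closest, in the least-squares sense, to the targets; the data and target values below are made up:

    from sklearn.isotonic import IsotonicRegression

    # Made-up ordinal data values and the targets (e.g. model predictions)
    # against which they should be optimally scaled.
    data = [1, 2, 2, 3, 4, 4, 5]
    targets = [0.3, 0.1, 0.9, 1.2, 1.0, 1.8, 2.0]

    # Fit a non-decreasing least-squares transformation of the data onto the
    # targets. Tied data values receive a common transformed value here,
    # which mirrors the "tied values remain tied" case described above.
    scaled = IsotonicRegression(increasing=True).fit_transform(data, targets)
    print(scaled)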
This package implements a tree-based method specifically designed for personalized medicine applications. By using genomic and mutational data, ODT efficiently identifies optimal drug recommendations tailored to individual patient profiles. The ODT algorithm constructs decision trees that bifurcate at each node, selecting the most relevant markers (discrete or continuous) and corresponding treatments, thus ensuring that recommendations are both personalized and statistically robust. This iterative approach enhances therapeutic decision-making by refining treatment suggestions until a predefined group size is achieved. Moreover, the simplicity and interpretability of the resulting trees make the method accessible to healthcare professionals. Includes functions for training the decision tree, making predictions on new samples or patients, and visualizing the resulting tree. For detailed insights into the methodology, please refer to Gimeno et al. (2023) <doi:10.1093/bib/bbad200>.
Import data from Our World in Data, an organisation which publishes research and data on global economic and social issues.
This package provides functions for extracting text and tables from PDF-based order documents. It provides an n-gram-based approach for identifying the language of an order document. It furthermore uses the R package pdftools to extract the text from an order document. In the case that the PDF document only contains an image (because it is a scanned document), the R package tesseract is used for OCR. Furthermore, the package provides functionality for identifying and extracting order position tables in order documents based on a clustering approach.
Fits ordinal regression models with an elastic net penalty. Supported model families include cumulative probability, stopping ratio, continuation ratio, and adjacent category. These families are a subset of vector GLMs which belong to a model class we call the elementwise link multinomial-ordinal (ELMO) class. Each family in this class links a vector of covariates to a vector of class probabilities. Each of these families has a parallel form, which is appropriate for ordinal response data, as well as a nonparallel form that is appropriate for an unordered categorical response, or as a more flexible model for ordinal data. The parallel model has a single set of coefficients, whereas the nonparallel model has a set of coefficients for each response category except the baseline category. It is also possible to fit a model with both parallel and nonparallel terms, which we call the semi-parallel model. The semi-parallel model has the flexibility of the nonparallel model, but the elastic net penalty shrinks it toward the parallel model. For details, refer to Wurm, Hanlon, and Rathouz (2021) <doi:10.18637/jss.v099.i06>.
Overture Maps offers free and open geospatial map data sourced from various providers and standardized to a common schema. This tool allows you to download Overture Maps data for a specific region of interest and convert it to several different file formats. For more information, visit <https://overturemaps.org/download/>.
Automates and standardizes the import of raw data from Oregon RFID (radio-frequency identification) ORMR (Oregon RFID Multi-Reader) and ORSR (Oregon RFID Single Reader) antenna readers. Compiled data can then be combined within multi-reader arrays for further analysis, including summarizing tag and reader detections, determining tag direction, and calculating antenna efficiency.
An implementation of the Ordered Forest estimator as developed in Lechner & Okasa (2019) <arXiv:1907.02436>. The Ordered Forest flexibly estimates the conditional probabilities of models with ordered categorical outcomes (so-called ordered choice models). In addition to common machine learning algorithms, the orf package provides functions for estimating marginal effects as well as statistical inference thereof, and thus provides output similar to that of standard econometric models for ordered choice. The core forest algorithm relies on the fast C++ forest implementation from the ranger package (Wright & Ziegler, 2017) <arXiv:1508.04409>.
Data used in compiling the Handbook of UK Urban Tree Allometric Equations and Size Characteristics (Fennell 2024). The data include measurements of height, crown radius, and diameter at breast height (DBH) for UK urban trees. For more details see Fennell (2024), Handbook of UK Urban Tree Allometric Equations and Size Characteristics (Version 1.4), <doi:10.13140/RG.2.2.28745.04961>.
Efficient Monte Carlo algorithms for the price and the sensitivities of Asian and European options under geometric Brownian motion.
This package provides functions to estimate the optimal threshold of diagnostic markers or treatment selection markers. The optimal threshold is the marker value that maximizes the utility of the marker-based strategy (for diagnosis or treatment selection) in a given population. The utility function depends on the type of marker (diagnostic or treatment selection), but always takes into account the preferences of the patients or the physician in the decision process. For estimating the optimal threshold, one must specify the distributions of the marker in different groups (defined according to the type of marker, diagnostic or treatment selection) and provide data to estimate the parameters of these distributions. One must also provide some features of the target populations (disease prevalence or treatment efficacies) as well as the preferences of patients or physicians. The functions rely on Bayesian inference, which helps produce several indicators derived from the optimal threshold. See Blangero, Y, Rabilloud, M, Ecochard, R, and Subtil, F (2019) <doi:10.1177/0962280218821394> for the original article that describes the estimation method for treatment selection markers and Subtil, F, and Rabilloud, M (2019) <doi:10.1002/bimj.200900242> for diagnostic markers.