Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
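For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL below is a placeholder; substitute this site's address):

    library(httr)

    # Query the packages endpoint; search and pagination parameters go in the query string.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    content(resp)   # the matching packages
    headers(resp)   # pagination information is returned in the response headers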
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
R Interface to ONNX - Open Neural Network Exchange <https://onnx.ai/>. ONNX provides an open source format for machine learning models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
This package implements online Bayesian methods for change point analysis. It can also perform missing data imputation with methods from the VIM package. The reference is Yigiter A, Chen J, An L, Danacioglu N (2015) <doi:10.1080/02664763.2014.1001330>. The link to the package is <https://CRAN.R-project.org/package=onlineBcp>.
This package provides a database containing the names of babies born in Ontario between 1917 and 2018. Names with counts of fewer than 5 were suppressed for privacy.
Offers a gene-based meta-analysis test with filtering to detect gene-environment interactions (GxE) with association data, proposed by Wang et al. (2018) <doi:10.1002/gepi.22115>. It first conducts a meta-filtering test to filter out unpromising SNPs by combining all samples in the consortia data. It then runs a test of omnibus-filtering-based GxE meta-analysis (ofGEM) that combines the strengths of the fixed- and random-effects meta-analysis with meta-filtering. It can also analyze data from multiple ethnic groups.
Converts odds ratios to relative risks in cohort studies with partial data information (Wang (2013) <doi:10.18637/jss.v055.i05>).
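As a rough illustration only (not this package's own interface), the widely used Zhang-Yu approximation performs such a conversion when the outcome risk in the unexposed group is known; the helper below is hypothetical:

    # Hypothetical helper: Zhang-Yu conversion of an odds ratio to a relative risk.
    # p0 is the outcome risk in the unexposed (reference) group.
    or_to_rr <- function(or, p0) {
      or / (1 - p0 + p0 * or)
    }

    or_to_rr(or = 2.5, p0 = 0.1)   # approximately 2.17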
An interface to the Apache OpenNLP tools (version 1.5.3). The Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text written in Java. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. See <https://opennlp.apache.org/> for more information.
Algorithms for D-, A-, I-, and c-optimal designs. For more details, see the package description. Some of the functions in this package require the gurobi software and its accompanying R package. For their installation, please follow the instructions at <https://www.gurobi.com> and the file gurobi_inst.txt, respectively.
An implementation of several functions for feature extraction in ordinal time series datasets. Specifically, some of the features proposed by Weiss (2019) <doi:10.1080/01621459.2019.1604370> can be computed. These features can be used to perform inferential tasks or to feed machine learning algorithms for ordinal time series, among others. The package also includes some interesting datasets containing financial time series. Practitioners from a broad variety of fields could benefit from the general framework provided by otsfeatures.
Analyze repertory grids, a qualitative-quantitative data collection technique devised by George A. Kelly in the 1950s. Today, grids are used across various domains ranging from clinical psychology to marketing. The package contains functions to quantitatively analyze and visualize repertory grid data (e.g. Fransella, Bell, & Bannister, 2004, ISBN: 978-0-470-09080-0). The package is part of the <https://openrepgrid.org/> project.
This package provides functions for optimal policy learning in socioeconomic applications, helping users to learn the most effective policies based on data in order to maximize empirical welfare. Specifically, OPL allows users to find "treatment assignment rules" that maximize the overall welfare, defined as the sum of the policy effects estimated over all the policy beneficiaries. Documentation about OPL is provided in several international articles: Athey et al. (2021, <doi:10.3982/ECTA15732>), Kitagawa et al. (2018, <doi:10.3982/ECTA13288>), Cerulli (2022, <doi:10.1080/13504851.2022.2032577>), Cerulli (2021, <doi:10.1080/13504851.2020.1820939>), and the book by Gareth et al. (2013, <doi:10.1007/978-1-4614-7138-7>).
Simplified odds ratio calculation for GAM(M)s & GLM(M)s. Provides structured output (data frame) of all predictors and their corresponding odds ratios and confidence intervals for further analyses. It helps to avoid false references of predictors and increments by specifying these parameters in a list, instead of using exp(coef(model)) (the standard approach to odds ratio calculation for GLMs), which just returns a plain numeric output. For GAM(M)s, odds ratio calculation is highly simplified with this package, since it takes care of the multiple predict() calls for the chosen predictor while holding the other predictors constant. The package also allows odds ratio calculation over percentage steps across the whole predictor distribution range for GAM(M)s. In both cases, confidence intervals are returned additionally. Calculated odds ratios of GAM(M)s can be inserted into the smooth function plot.
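For comparison, the plain base-R approach that this package structures looks roughly like this (a minimal sketch; the logistic model and data are only illustrative):

    # Standard odds ratio calculation for a logistic GLM: exponentiate the
    # coefficients and their confidence intervals (plain numeric output).
    model <- glm(am ~ hp + wt, data = mtcars, family = binomial)
    exp(coef(model))      # odds ratios per one-unit increment
    exp(confint(model))   # corresponding confidence intervals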
Introduces optional types with some() and none, as well as match_with(), a pattern-matching construct inspired by functional languages.
This is a tool to find the optimal rerandomization threshold in non-sequential experiments. We offer three procedures based on assumptions made about the residual distribution: (1) normality assumed, (2) excess kurtosis assumed, (3) entire distribution assumed. Illustrations are included. Also included is a routine to unbiasedly estimate Frobenius norms of variance-covariance matrices. Details of the method can be found in "Optimal Rerandomization via a Criterion that Provides Insurance Against Failed Experiments" by Adam Kapelner, Abba M. Krieger, Michael Sklar and David Azriel (2020) <arXiv:1905.03337>.
Predictive scores must be updated with care, because actions taken on the basis of existing risk scores cause bias in risk estimates from the updated score. A holdout set is a straightforward way to manage this problem: a proportion of the population is held out from computation of the previous risk score. This package provides tools to estimate a size for this holdout set and the associated errors. Comprehensive vignettes are included. Please see Haidar-Wehbe S, Emerson SR, Aslett LJM, Liley J (2022) <doi:10.48550/arXiv.2202.06374> (to appear in Annals of Applied Statistics) for details of the methods.
This package provides functions to perform subspace clustering and classification.
This package provides an R interface to the OMOPHub API for accessing OHDSI ATHENA standardized medical vocabularies. Supports concept search, vocabulary exploration, hierarchy navigation, relationship queries, and concept mappings with automatic pagination and rate limiting.
A growing collection of helper functions for point pattern analysis. Most functions are designed to work with the spatstat (<http://spatstat.org>) package. The focus of most functions is either null models or summary functions for spatial point patterns. For a detailed description of all null models and summary functions, see Wiegand and Moloney (2014, ISBN:9781420082548).
Inference using a class of Hidden Markov models (HMMs) called oHMMed (ordered HMM with emission densities, <doi:10.1186/s12859-024-05751-4>): the oHMMed algorithms identify the number of comparably homogeneous regions within observed sequences with autocorrelation patterns. These are modelled as discrete hidden states; the observed data points are then realisations of continuous probability distributions with state-specific means that enable ordering of these distributions. The observed sequence is labelled according to the hidden states, permitting only neighbouring states that are also neighbours within the ordering of their associated distributions. The parameters that characterise these state-specific distributions are then inferred. This is relevant for application to genomic sequences, time series, or any other sequence data with serial autocorrelation.
Advanced forecasting algorithms for long-term energy demand at the national or regional level. The methodology is based on Grandón et al. (2024) <doi:10.1016/j.apenergy.2023.122249> and Zimmermann & Ziel (2024) <doi:10.1016/j.apenergy.2025.125444>. Real-time data, including power demand, weather conditions, and macroeconomic indicators, are provided through automated API integration with various institutions. The modular approach maintains transparency on the various model selection processes and can be adapted to individual needs. oRaklE aims to facilitate robust decision-making in energy management and planning.
Computes A-, MV-, D- and E-optimal or near-optimal block designs for two-colour cDNA microarray experiments using linear fixed effects and mixed effects models, where the interest is in a comparison of all possible elementary treatment contrasts. The algorithms used in this package are based on the treatment exchange and array exchange algorithms of Debusho, Gemechu and Haines (2018) <doi:10.1080/03610918.2018.1429617>. The package also provides an optional graphical user interface (GUI), built with the tcltk package, to make it user friendly.
Necessary functions for optimized automated evaluation of the number and parameters of Gaussian mixtures in one-dimensional data. Various methods are available for parameter estimation and for determining the number of modes in the mixture. A detailed description of the methods can be found in Lotsch, J., Malkusch, S. and A. Ultsch (2022) <doi:10.1016/j.imu.2022.101113>.
In bulk epigenome/transcriptome experiments, molecular expression is measured in a tissue, which is a mixture of multiple types of cells. This package tests association of a disease/phenotype with a molecular marker for each cell type. The proportion of cell types in each sample needs to be given as input. The package is applicable to epigenome-wide association study (EWAS) and differential gene expression analysis. Takeuchi and Kato (submitted) "omicwas: cell-type-specific epigenome-wide and transcriptome association study".
Conduct sensitivity analysis of omitted variable bias in linear econometric models using the methodology presented in Basu (2025) <doi:10.2139/ssrn.4704246>.
Estimates ordered probit switching regression models - a Heckman type selection model with an ordinal selection and continuous outcomes. Different model specifications are allowed for each treatment/regime. For more details on the method, see Wang & Mokhtarian (2024) <doi:10.1016/j.tra.2024.104072> or Chiburis & Lokshin (2007) <doi:10.1177/1536867X0700700202>.