Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
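For example, here is a minimal sketch of calling this endpoint from Python with the requests library; the base URL is a placeholder and the exact pagination header names are not specified above, so both are assumptions:

import requests

# Query the package search API described above; replace the host with the
# site's actual base URL.
resp = requests.get(
    "https://example.org/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

print(resp.json())          # the matching packages for this page
print(dict(resp.headers))   # pagination details (number of pages, etc.) are in the headers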
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An implementation of the algorithm described in "Efficient Large-Scale Internet Media Selection Optimization for Online Display Advertising" by Paulson, Luo, and James (Journal of Marketing Research 2018; see URL below for journal text/citation and <http://faculty.marshall.usc.edu/gareth-james/Research/ELMSO.pdf> for a full-text version of the paper). The algorithm here is designed to allocate budget across a set of online advertising opportunities using a coordinate-descent approach, but it can be used in any resource-allocation problem with a matrix of visitation (in the case of the paper, website page-views) and channels (in the paper, websites). The package contains allocation functions both in the presence of bidding, when allocation is dependent on channel-specific cost curves, and when advertising costs are fixed at each channel.
This package provides various functions for reading and preparing the Panel Study of Income Dynamics (PSID) for longitudinal analysis, including functions that read the PSID's fixed-width format files directly into R, rename all of the PSID's longitudinal variables so that recurring variables have consistent names across years, simplify assembling longitudinal datasets from cross sections of the PSID Family Files, and export the resulting PSID files into file formats common among other statistical programming languages (SAS, Stata, and SPSS).
Evidential regression analysis for dichotomous and quantitative outcome data. The following references describe the methods in this package: Strug, L. J., Hodge, S. E., Chiang, T., Pal, D. K., Corey, P. N., & Rohde, C. (2010) <doi:10.1038/ejhg.2010.47>. Strug, L. J., & Hodge, S. E. (2006) <doi:10.1159/000094709>. Royall, R. (1997) <ISBN:0-412-04411-0>.
This package provides a non-parametric framework based on the estimation statistics principle. Its main purpose is to infer orders of empirical distributions from different categories, based on the probability of finding a value in one distribution that is greater than the expectation of another distribution. Given a set of ordered pairs of real-category values, the framework is capable of 1) inferring orders of domination of categories and representing orders in the form of a graph; 2) estimating the magnitude of difference between a pair of categories in the form of mean-difference confidence intervals; and 3) visualizing domination orders and magnitudes of difference of categories. The publication of this package is Chainarong Amornbunchornvej, Navaporn Surasvadi, Anon Plangprasopchok, and Suttipong Thajchayapong (2020) <doi:10.1016/j.heliyon.2020.e05435>.
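As a rough illustration of the core quantity, the following sketch estimates the probability that a value from one category exceeds the expectation of another, plus a bootstrap mean-difference interval; the function names and toy data are illustrative and are not this package's API:

import numpy as np

def dominance_prob(x, y):
    # Estimate P(X > E[Y]) from two samples of real values.
    x = np.asarray(x, dtype=float)
    return float(np.mean(x > np.mean(y)))

def mean_diff_ci(x, y, n_boot=2000, alpha=0.05, seed=None):
    # Bootstrap confidence interval for E[X] - E[Y].
    rng = np.random.default_rng(seed)
    diffs = [
        rng.choice(x, size=len(x)).mean() - rng.choice(y, size=len(y)).mean()
        for _ in range(n_boot)
    ]
    return tuple(np.quantile(diffs, [alpha / 2, 1 - alpha / 2]))

a = np.random.default_rng(0).normal(1.0, 1.0, 200)
b = np.random.default_rng(1).normal(0.0, 1.0, 200)
print(dominance_prob(a, b), mean_diff_ci(a, b))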
This package contains methods for the estimation of Shannon's entropy, variants of Renyi's entropy, mutual information, Kullback-Leibler divergence, and generalized Simpson's indices. The estimators used have a bias that decays exponentially fast.
The EXPOS model uses a digital elevation model (DEM) to estimate exposed and protected areas for a given hurricane wind direction and inflection angle. The resulting topographic exposure maps can be combined with output from the HURRECON model to estimate hurricane wind damage across a region. For details on the original version of the EXPOS model written in Borland Pascal, see: Boose, Foster, and Fluet (1994) <doi:10.2307/2937142>, Boose, Chamberlin, and Foster (2001) <doi:10.1890/0012-9615(2001)071[0027:LARIOH]2.0.CO;2>, and Boose, Serrano, and Foster (2004) <doi:10.1890/02-4057>.
The core of this package is the function eDT(), which enhances DT::datatable() so that it can be used to interactively modify data in 'shiny'. Through generic dplyr methods it supports many types of data storage, with relational databases ('dbplyr') being the main use case.
This package implements a novel ensemble-based explainable machine learning model using the Model Confidence Set (MCS) and a two-stage Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) algorithm. The model combines the predictive capabilities of different machine-learning models and integrates the interpretability of explainability methods. To develop the proposed algorithm, a two-stage TOPSIS framework was employed. The package has been developed using the algorithm of Paul et al. (2023) <doi:10.1007/s40009-023-01218-x> and Yeasin and Paul (2024) <doi:10.1007/s11227-023-05542-3>.
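For orientation, here is a minimal sketch of a single generic TOPSIS ranking step, the building block of the two-stage framework mentioned above; the weights, criteria directions, and toy decision matrix are illustrative and this is not the package's implementation:

import numpy as np

def topsis(decision, weights, benefit):
    # Rank alternatives (rows) against criteria (columns).
    decision = np.asarray(decision, dtype=float)
    norm = decision / np.linalg.norm(decision, axis=0)        # vector-normalise columns
    v = norm * np.asarray(weights)                             # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))    # best value per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))     # worst value per criterion
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                        # closeness: higher is better

# Example: rank three candidate models on accuracy (benefit) and error (cost).
scores = topsis([[0.91, 0.12], [0.88, 0.10], [0.93, 0.20]],
                weights=[0.6, 0.4], benefit=[True, False])
print(scores.argsort()[::-1])  # indices of models, best first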
This package provides functions to create simulated time series of environmental exposures (e.g., temperature, air pollution) and health outcomes for use in power analysis and simulation studies in environmental epidemiology. This package also provides functions to evaluate the results of simulation studies based on these simulated time series. This work was supported by a grant from the National Institute of Environmental Health Sciences (R00ES022631) and a fellowship from the Colorado State University Programs for Research and Scholarly Excellence.
This package provides a function for a distribution-free control chart, based on the change-point model, for multivariate statistical process control. The main constituent of the chart is the energy test, which focuses on the discrepancy between the empirical characteristic functions of two random vectors. This control chart stands out in four aspects. Firstly, it is distribution-free, requiring no knowledge of the random processes. Secondly, it can monitor mean and variance simultaneously. Thirdly, it is devised for multivariate time series, which is more practical in real data applications. Fourthly, it is designed for online detection (Phase II), which is central for real-time surveillance of streaming data. For more information please refer to O. Okhrin and Y.F. Xu (2017) <https://github.com/YafeiXu/working_paper/raw/master/CPM102.pdf>.
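A minimal sketch of the two-sample energy statistic that such a chart builds on, i.e. a measure of discrepancy between the empirical distributions of two multivariate samples; this is the generic statistic, not the package's change-point chart itself:

import numpy as np
from scipy.spatial.distance import cdist

def energy_statistic(x, y):
    # Szekely-Rizzo energy distance between samples x (n, d) and y (m, d).
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    between = cdist(x, y).mean()      # average cross-sample distance
    within_x = cdist(x, x).mean()     # average within-sample distance for x
    within_y = cdist(y, y).mean()     # average within-sample distance for y
    return 2.0 * between - within_x - within_y

rng = np.random.default_rng(0)
print(energy_statistic(rng.normal(0, 1, (100, 3)), rng.normal(0.5, 1, (100, 3))))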
This package provides functions for extreme value theory, which may be divided into the following groups: exploratory data analysis, block maxima, peaks over thresholds (univariate and bivariate), point processes, and GEV/GPD distributions.
This package provides a graphical user interface for open source event detection.
This package implements the hybrid framework for event prediction described in Fang & Zheng (2011, <doi:10.1016/j.cct.2011.05.013>). To estimate the survival function the event prediction is based on, a piecewise exponential hazard function is fit to the time-to-event data to infer the potential change points. Prior to the last identified change point, the survival function is estimated using the Kaplan-Meier method, and the tail after the change point is fit using a piecewise exponential model.
This package implements entropy balancing, a data preprocessing procedure described in Hainmueller (2012, <doi:10.1093/pan/mpr025>) that allows users to reweight a dataset such that the covariate distributions in the reweighted data satisfy a set of user-specified moment conditions. This can be useful to create balanced samples in observational studies with a binary treatment, where the control group data can be reweighted to match the covariate moments in the treatment group. Entropy balancing can also be used to reweight a survey sample to known characteristics of a target population.
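As a rough sketch of the idea with uniform base weights, one can solve the dual problem for multipliers so that the reweighted control covariates match the treated-group means; the variable names and toy data below are illustrative and this is not the package's interface:

import numpy as np
from scipy.optimize import minimize

def entropy_balance(x_control, target_means):
    # Return control-unit weights whose weighted covariate means hit target_means.
    z = np.asarray(x_control, dtype=float) - np.asarray(target_means)  # centred covariates
    def dual(lam):
        return np.log(np.exp(z @ lam).sum())        # log-sum-exp dual objective
    res = minimize(dual, x0=np.zeros(z.shape[1]), method="BFGS")
    w = np.exp(z @ res.x)
    return w / w.sum()                              # normalised weights

rng = np.random.default_rng(0)
treated = rng.normal(0.5, 1.0, (200, 2))
control = rng.normal(0.0, 1.0, (500, 2))
w = entropy_balance(control, treated.mean(axis=0))
print(np.average(control, axis=0, weights=w), treated.mean(axis=0))  # should be close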
This package provides a generic function for running the Expectation-Maximization (EM) algorithm within a maximum likelihood framework, based on Dempster, Laird, and Rubin (1977) <doi:10.1111/j.2517-6161.1977.tb01600.x>. It can be applied after fitting a model with R's existing functions and packages.
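To make the E and M steps concrete, here is a minimal sketch of the EM loop on the textbook example of a two-component Gaussian mixture; it only illustrates the algorithm of Dempster, Laird, and Rubin (1977) and is not this package's R interface:

import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=100):
    x = np.asarray(x, dtype=float)
    pi, mu, sigma = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E step: posterior responsibility of component 2 for each observation
        p1 = (1 - pi) * norm.pdf(x, mu[0], sigma[0])
        p2 = pi * norm.pdf(x, mu[1], sigma[1])
        r = p2 / (p1 + p2)
        # M step: re-estimate mixing weight, means, and standard deviations
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sigma = np.array([
            np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
            np.sqrt(np.average((x - mu[1]) ** 2, weights=r)),
        ])
    return pi, mu, sigma

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
print(em_gaussian_mixture(data))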
Two classifiers for open set recognition and novelty detection based on extreme value theory. The first classifier is based on the generalized Pareto distribution (GPD) and the second classifier is based on the generalized extreme value (GEV) distribution. For details, see Vignotto, E., & Engelke, S. (2018) <arXiv:1808.09902>.
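A minimal sketch of the GPD idea behind such classifiers: fit a generalized Pareto distribution to the upper tail of training scores (for example, distances to the known classes) and flag test points whose tail probability is tiny. The threshold choice, cut-off, and toy scores are illustrative assumptions, not the package's method:

import numpy as np
from scipy.stats import genpareto

def fit_tail(train_scores, quantile=0.9):
    u = np.quantile(train_scores, quantile)           # high threshold
    exceed = train_scores[train_scores > u] - u
    shape, _, scale = genpareto.fit(exceed, floc=0)   # fit GPD to exceedances
    return u, shape, scale

def is_novel(score, u, shape, scale, alpha=0.01):
    if score <= u:
        return False                                  # not even in the tail
    p = genpareto.sf(score - u, shape, loc=0, scale=scale)
    return p < alpha                                  # extremely unlikely under the tail model

rng = np.random.default_rng(0)
train = rng.chisquare(3, 5000)                        # toy "distance to known classes" scores
u, shape, scale = fit_tail(train)
print(is_novel(6.0, u, shape, scale), is_novel(40.0, u, shape, scale))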
Integrates methods for epidemiological analysis, modeling, and visualization, including functions for summary statistics, SIR (Susceptible-Infectious-Recovered) modeling, DALY (Disability-Adjusted Life Years) estimation, age standardization, diagnostic test evaluation, NLP (Natural Language Processing) keyword extraction, clinical trial power analysis, survival analysis, SNP (Single Nucleotide Polymorphism) association, and machine learning methods such as logistic regression, k-means clustering, Random Forest, and Support Vector Machine (SVM). Includes datasets for prevalence estimation, SIR modeling, genomic analysis, clinical trials, DALY, diagnostic tests, and survival analysis. Methods are based on Gelman et al. (2013) <doi:10.1201/b16018> and Wickham et al. (2019, ISBN:9781492052040).
This package produces tables for descriptive epidemiological analysis. These tables include attack rates, case fatality ratios, and mortality rates (with appropriate confidence intervals), with additional functionality to calculate Mantel-Haenszel odds, risk, and incidence rate ratios. The methods implemented follow standard epidemiological approaches described in Rothman et al. (2008, ISBN:978-0-19-513554-2). This package is part of the R4EPIs project <https://R4EPI.github.io/sitrep/>.
This package implements event extraction and early classification of events in data streams in R. It has the functionality to generate 2-dimensional data streams with events belonging to 2 classes. These events can be extracted and features computed. The event features extracted from incomplete-events can be classified using a partial-observations-classifier (Kandanaarachchi et al. 2018) <doi:10.1371/journal.pone.0236331>.
Connect to Elasticsearch, a NoSQL database built on the Java Virtual Machine. Interacts with the Elasticsearch HTTP API (<https://www.elastic.co/elasticsearch/>), including functions for setting connection details to Elasticsearch instances, loading bulk data, and searching for documents with both HTTP query variables and JSON-based body requests. In addition, elastic provides functions for interacting with the APIs for indices, documents, nodes, and clusters, an interface to the cat API, and more.
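For context, this is the kind of raw HTTP request the client wraps: a JSON-body search against an Elasticsearch index. The host, index name, and field below are assumptions for illustration, and the sketch uses Python rather than the R package itself:

import requests

resp = requests.post(
    "http://localhost:9200/my-index/_search",         # assumed local instance and index
    json={"query": {"match": {"title": "hello"}}, "size": 5},
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_score"])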
Estimating individual-level covariate-outcome associations using aggregate data ("ecological inference") or a combination of aggregate and individual-level data ("hierarchical related regression").
Empirical likelihood (EL) inference for two-sample problems. The following statistics are included: the difference of two-sample means, smooth Huber estimators, quantile (qdiff) and cumulative distribution functions (ddiff), probability-probability (P-P) and quantile-quantile (Q-Q) plots as well as receiver operating characteristic (ROC) curves. EL calculations are based on J. Valeinis, E. Cers (2011) <http://home.lu.lv/~valeinis/lv/petnieciba/EL_TwoSample_2011.pdf>.
Fit and sample from the ensemble model described in Spence et al. (2018): "A general framework for combining ecosystem models" <doi:10.1111/faf.12310>.
Addresses tasks along the pipeline from raw data to analysis and visualization for eye-tracking data. Offers several popular types of analyses, including linear and growth curve time analyses and onset-contingent reaction time analyses, as well as several non-parametric bootstrapping approaches. For references to the approach, see Mirman, Dixon & Magnuson (2008) <doi:10.1016/j.jml.2007.11.006> and Barr (2008) <doi:10.1016/j.jml.2007.09.002>.