Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
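For example, a minimal R sketch of calling this endpoint (the base URL below is a placeholder; substitute the host serving this page):

    library(httr)

    # Placeholder host; replace with the actual server for this page.
    base_url <- "https://example.org"

    resp <- GET(base_url, path = "api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    # Package matches come back in the body; pagination lives in the headers.
    results <- content(resp, as = "parsed")
    headers(resp)  # inspect pagination headers here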
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Addressing measurement error in covariates and misclassification in binary outcome variables within causal inference, the ATE.ERROR package implements the inverse probability weighted estimation methods proposed by Shu and Yi (2017, <doi:10.1177/0962280217743777>; 2019, <doi:10.1002/sim.8073>). These methods correct for such errors to accurately estimate average treatment effects (ATE). The package includes two main functions: ATE.ERROR.Y() for handling misclassification in the outcome variable and ATE.ERROR.XY() for correcting both outcome misclassification and covariate measurement error. It employs logistic regression for treatment assignment and uses bootstrap sampling to calculate standard errors and confidence intervals, with simulated datasets provided for practical demonstration.
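A minimal usage sketch follows; the argument names below are illustrative assumptions, not the package's documented signature, so consult the package manual for the actual interface:

    library(ATE.ERROR)

    # Hypothetical call correcting outcome misclassification only.
    # 'simdata', 'Y', 'A', and 'nboot' are assumed names for the
    # simulated dataset, outcome, treatment, and bootstrap count.
    fit <- ATE.ERROR.Y(data = simdata, Y = "outcome", A = "treatment",
                       nboot = 200)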
Allows the user to run the Adaptive Correlated Spike and Slab (ACSS) algorithm, the corresponding INdependent Spike and Slab (INSS) algorithm, and the Giannone, Lenza and Primiceri (GLP) algorithm with adaptive burn-in. All three algorithms fit high-dimensional data sets with either a sparse structure or a dense structure with small contributions from all predictors. The state-of-the-art GLP algorithm is described in Giannone, D., Lenza, M., & Primiceri, G. E. (2021, ISBN:978-92-899-4542-4) "Economic predictions with big data: The illusion of sparsity". The two new algorithms, ACSS and INSS, and a discussion of their performance can be found in Yang, Z., Khare, K., & Michailidis, G. (2024, submitted to Journal of Business & Economic Statistics) "Bayesian methodology for adaptive sparsity and shrinkage in regression".
Loss reserving generally focuses on identifying a single model that can generate superior predictive performance. However, different loss reserving models specialise in capturing different aspects of loss data. This is recognised in practice in the sense that results from different models are often considered, and sometimes combined. For instance, actuaries may take a weighted average of the prediction outcomes from various loss reserving models, often based on subjective assessments. This package provides a systematic framework to objectively combine (i.e. ensemble) multiple stochastic loss reserving models such that the strengths offered by different models can be utilised effectively. Our framework is developed in Avanzi et al. (2023). Firstly, our criteria for model combination consider the full distributional properties of the ensemble and not just the central estimate, which is of particular importance in the reserving context. Secondly, our framework is tailored to the features inherent in reserving data. These include, for instance, accident, development, calendar, and claim maturity effects. Crucially, the relative importance and scarcity of data across accident periods render the problem distinct from traditional ensemble techniques in statistical learning. Our framework is illustrated with a complex synthetic dataset. In the results, the optimised ensemble outperforms both (i) traditional model selection strategies and (ii) an equally weighted ensemble. In particular, the improvement occurs not only in central estimates but also in relevant quantiles, such as the 75th percentile of reserves (typically of interest to both insurers and regulators). Reference: Avanzi B, Li Y, Wong B, Xian A (2023) "Ensemble distributional forecasting for insurance loss reserving" <doi:10.48550/arXiv.2206.08541>.
This package provides a routine to partial out factors with many levels during the optimization of the log-likelihood function of the corresponding generalized linear model (GLM). The package is based on the algorithm described in Stammann (2018) <doi:10.48550/arXiv.1707.01815> and is restricted to nonlinear GLMs estimated by maximum likelihood. It also offers an efficient algorithm to recover estimates of the fixed effects in a post-estimation routine and includes robust and multi-way clustered standard errors. Further, the package provides analytical bias corrections for binary choice models derived by Fernandez-Val and Weidner (2016) <doi:10.1016/j.jeconom.2015.12.014> and Hinz, Stammann, and Wanner (2020) <doi:10.48550/arXiv.2004.12655>.
An interactive document on the topic of one-way and two-way analysis of variance, built with the rmarkdown and shiny packages. Runtime examples are provided in the package functions as well as at <https://kartikeyab.shinyapps.io/ANOVAShiny/>.
Make the compiled Java modules of the Amazon Web Services (AWS) SDK available to be used in downstream R packages interacting with AWS. See <https://aws.amazon.com/sdk-for-java> for more information on the AWS SDK for Java.
In order to make Arrow Database Connectivity ('ADBC', <https://arrow.apache.org/adbc/>) accessible from R, an interface compliant with the DBI package is provided, using driver back-ends implemented in the adbcdrivermanager framework. This enables interacting with database systems using the Arrow data format, thereby offering an efficient alternative to ODBC for analytical applications.
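Because the interface is DBI-compliant, usage follows the familiar DBI pattern. A minimal sketch, assuming the package is loaded as adbi and the adbcsqlite driver package is installed (both names are assumptions here):

    library(DBI)

    # Connect through an ADBC driver; adbi::adbi() wrapping the
    # 'adbcsqlite' driver is an assumed entry point for illustration.
    con <- dbConnect(adbi::adbi("adbcsqlite"), uri = ":memory:")

    dbWriteTable(con, "mtcars", mtcars)
    dbGetQuery(con, "SELECT cyl, COUNT(*) AS n FROM mtcars GROUP BY cyl")

    dbDisconnect(con)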
Extremely efficient procedures for fitting the entire group lasso and group elastic net regularization path for GLMs, multinomial models, the Cox model, and multi-task Gaussian models. Similar to the R package glmnet in scope of models and in computational speed. This package provides R bindings to the C++ code underlying the corresponding Python package adelie. These bindings offer a general-purpose group elastic net solver, a wide range of matrix classes that can exploit special structure to allow large-scale inputs, and an assortment of generalized linear model classes for fitting various types of data. The package is an implementation of Yang, J. and Hastie, T. (2024) <doi:10.48550/arXiv.2405.08631>.
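A minimal fitting sketch, assuming the R bindings mirror the Python package's grpnet() entry point and GLM constructors (the function and argument names below are assumptions):

    library(adelie)

    set.seed(1)
    X <- matrix(rnorm(100 * 10), 100, 10)
    y <- rnorm(100)

    # Assumed interface: solve the group elastic net path for a
    # Gaussian GLM over the columns of X.
    fit <- grpnet(X = X, glm = glm.gaussian(y = y))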
Computes maximum likelihood estimates of general, zero-inflated, and zero-altered models for discrete and continuous distributions. It also performs Kolmogorov-Smirnov (KS) tests and likelihood ratio tests for general, zero-inflated, and zero-altered data. Additionally, it obtains the inverse of the Fisher information matrix and confidence intervals for the parameters of general, zero-inflated, and zero-altered models. The package simulates random deviates from zero-inflated or hurdle models to obtain maximum likelihood estimates. Based on the work of Aldirawi et al. (2022) <doi:10.1007/s42519-021-00230-y> and Dousti Mousavi et al. (2023) <doi:10.1080/00949655.2023.2207020>.
Analysis of means (ANOM) as used in technometrical computing. The package takes results from multiple comparisons with the grand mean (obtained with multcomp, SimComp, nparcomp, or MCPAN) or corresponding simultaneous confidence intervals as input and produces ANOM decision charts that illustrate which group means deviate significantly from the grand mean.
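A minimal sketch of the intended workflow, assuming the package's charting function is ANOM() (an assumption; consult the package docs):

    library(multcomp)
    library(ANOM)

    # Compare each group mean with the grand mean...
    mod <- aov(weight ~ group, data = PlantGrowth)
    mc  <- glht(mod, linfct = mcp(group = "GrandMean"))

    # ...then draw the ANOM decision chart from the multcomp result.
    ANOM(mc)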
Allows the user to implement an address search autocompletion menu on shiny text inputs. This is done using the Algolia Places JavaScript library. See <https://community.algolia.com/places/>.
EM algorithm for the estimation of parameters, and other related methods, in quantile regression.
This package provides a very fast and robust interface to ArcGIS Geocoding Services. It provides capabilities for reverse geocoding, finding address candidates, character-by-character search autosuggestion, and batch geocoding. The public ArcGIS World Geocoder is accessible for free use via arcgisgeocode for all services except batch geocoding. arcgisgeocode also integrates with arcgisutils to provide access to custom locators or a private ArcGIS World Geocoder hosted on ArcGIS Enterprise. Learn more in the Geocode service API reference <https://developers.arcgis.com/rest/geocode/api-reference/overview-world-geocoding-service.htm>.
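A minimal reverse geocoding sketch; the exact call form is an assumption, not the package's documented signature:

    library(arcgisgeocode)

    # Assumed call: reverse geocode a single lon/lat pair using the
    # public ArcGIS World Geocoder (argument form is an assumption).
    loc <- reverse_geocode(c(-117.172, 32.71))
    loc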
This package implements wavelet-based approaches for describing population admixture. Principal Components Analysis (PCA) is used to define the population structure and produce a localized admixture signal for each individual. Wavelet summaries of the PCA output describe variation present in the data and can be related to population-level demographic processes. For more details, see J Sanderson, H Sudoyo, TM Karafet, MF Hammer and MP Cox. 2015. Reconstructing past admixture processes from local genomic ancestry using wavelet transformation. Genetics 200:469-481 <doi:10.1534/genetics.115.176842>.
The ArcGIS Places service is a ready-to-use location service that can search for businesses and geographic locations around the world. It allows you to find, locate, and discover detailed information about each place. Query for places near a point, within a bounding box, filter based on categories, or provide search text. arcgisplaces integrates with sf for out-of-the-box compatibility with other spatial libraries. Learn more in the Places service API reference <https://developers.arcgis.com/rest/places/>.
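A minimal near-a-point query sketch; the function and argument names below are illustrative assumptions about the package's interface:

    library(arcgisplaces)

    # Assumed entry point: search for coffee shops near a lon/lat point.
    # near_point(), x, y, and search_text are assumed names.
    res <- near_point(x = -117.172, y = 32.71, search_text = "coffee")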
Computes a tree-level hierarchy, judgment matrix, consistency index and ratio, priority vectors, hierarchic synthesis, and rank. Based on the book "Models, Methods, Concepts and Applications of the Analytic Hierarchy Process" by Saaty and Vargas (2012, ISBN 978-1-4614-3597-6).
It extends the functionality of the logger package. Additional logging metadata can be configured to be collected. Logging messages are displayed on the console and can optionally be sent to an Azure Log Analytics workspace in real time.
It covers various approaches to analysis of variance and includes an assumption-testing section with a decision diagram that allows selecting the most appropriate technique. It provides the classical analysis of variance, the nonparametric Kruskal-Wallis equivalent, and the Bayesian approach. The results are shown in an interactive shiny panel, which allows modifying the arguments of the tests, contains interactive graphics, and presents automatic conclusions depending on the tests in order to aid the interpretation of these analyses. AovBay uses Stan and FactorBayes for the Bayesian analysis and Highcharts for the interactive charts.
Create and evaluate models using tidymodels and h2o <https://h2o.ai/>. The package enables users to specify h2o as an engine for several modeling methods.
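For example, a minimal sketch of the tidymodels workflow with h2o as the engine (h2o_start() is assumed here to be the package's helper for launching the h2o backend):

    library(tidymodels)
    library(agua)

    h2o_start()  # launch the h2o backend (assumed helper)

    # Use h2o as the engine for a standard parsnip model specification.
    spec <- linear_reg() |> set_engine("h2o")
    fit(spec, mpg ~ ., data = mtcars)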
It calculates the Air Pollution Tolerance Index (APTI) of plant species using biochemical parameters such as chlorophyll content, leaf extract pH, relative water content, and ascorbic acid content. It helps in identifying tolerant species for greenbelt development and pollution mitigation studies. It includes a shiny app for interactive APTI calculation and visualisation. For method details see Sahu et al. (2020) <doi:10.1007/s42452-020-3120-6>.
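The index itself is commonly computed with the standard formula from the literature; a minimal R sketch (the package's own function may differ in naming and defaults):

    # Standard APTI formula:
    #   APTI = (A * (T + P) + R) / 10
    # A: ascorbic acid (mg/g), T: total chlorophyll (mg/g),
    # P: leaf extract pH, R: relative water content (%)
    apti <- function(A, T, P, R) (A * (T + P) + R) / 10

    apti(A = 2.1, T = 6.3, P = 6.8, R = 72)  # ~9.95 for this example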
A machine-learning-based package to predict anti-angiogenic peptides using heterogeneous sequence descriptors. AntAngioCOOL exploits five descriptor types of a peptide of interest for prediction: pseudo amino acid composition, k-mer composition, k-mer composition (reduced alphabet), physico-chemical profile, and atomic profile. According to the obtained results, AntAngioCOOL achieved satisfactory performance in anti-angiogenic peptide prediction on a non-redundant independent benchmark test dataset.
This package provides functions to estimate and interpret the alpha-NOMINATE ideal point model developed in Carroll et al. (2013, <doi:10.1111/ajps.12029>). alpha-NOMINATE extends traditional spatial voting frameworks by allowing for a mixture of Gaussian and quadratic utility functions, providing flexibility in modeling political actors' preferences. The package uses Markov Chain Monte Carlo (MCMC) methods for parameter estimation, supporting robust inference about individuals' ideological positions and the shapes of their utility functions. It also contains functions to simulate data from the model and to calculate the probability of a vote passing given the ideal points of the legislators/voters and the estimated locations of the choice alternatives.
Streamline use of the All of Us Researcher Workbench (<https://www.researchallofus.org/data-tools/workbench/>) with tools to extract and manipulate data from the All of Us database. Increase interoperability with the Observational Health Data Science and Informatics (OHDSI) tool stack by decreasing reliance on All of Us tools and allowing for cohort creation via Atlas. Improve reproducible and transparent research using All of Us.
This package provides an htmlwidgets interface to apexcharts.js. Apexcharts is a modern JavaScript charting library for building interactive charts and visualizations with a simple API. Apexcharts examples and documentation are available here: <https://apexcharts.com/>.
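A minimal charting sketch, assuming the package in question is apexcharter and exports an apex() constructor with ggplot2-style aesthetics (both are assumptions):

    library(apexcharter)

    # Assumed constructor: build an interactive column chart from a
    # data frame of cylinder counts.
    cyl_counts <- as.data.frame(table(mtcars$cyl))
    apex(data = cyl_counts, mapping = aes(x = Var1, y = Freq), type = "column")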