Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
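For example, from R (the host below is a placeholder and the exact pagination header names are not documented here, so treat them as assumptions):

library(httr)
# Query the package search API; pagination details arrive in the response headers.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
packages <- content(resp)  # parsed result body
headers(resp)              # inspect the pagination headers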
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Frequentist inference on adaptively generated data. The methods implemented are based on Zhan et al. (2021) <doi:10.48550/arXiv.2106.02029> and Hadad et al. (2021) <doi:10.48550/arXiv.1911.02768>. For illustration, several functions for simulating non-contextual and contextual adaptive experiments using Thompson sampling are also supplied.
This package provides a framework for data manipulation and visualization using a web-based point-and-click user interface in which analysis pipelines are decomposed into reusable, parameterizable blocks.
Fit Bayesian time series models using Stan for full Bayesian inference. A wide range of distributions and models are supported, allowing users to fit seasonal ARIMA, ARIMAX, dynamic harmonic regression, GARCH, Student-t innovation GARCH, asymmetric GARCH, random walk, and stochastic volatility models for univariate time series. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their beliefs. Model fit can easily be assessed and compared with typical visualization methods, information criteria such as the log-likelihood, AIC, BIC, and WAIC, Bayes factors, and leave-one-out cross-validation methods. References: Hyndman (2017) <doi:10.18637/jss.v027.i03>; Carpenter et al. (2017) <doi:10.18637/jss.v076.i01>.
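As a rough usage sketch (auto.sarima() is assumed here to be the package's automatic seasonal ARIMA entry point; treat the function name as an assumption rather than a confirmed interface):

library(bayesforecast)
# Fit a seasonal ARIMA model to a built-in monthly series; Stan performs full Bayesian inference.
fit <- auto.sarima(AirPassengers)
summary(fit)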
Estimation of the association between disease or death counts (e.g. COVID-19) and socio-environmental risk factors using a zero-inflated Bayesian spatiotemporal model. Non-spatiotemporal models and/or models without zero-inflation are also included for comparison, as are functions to produce corresponding maps. See Chakraborty et al. (2022) <doi:10.1007/s13253-022-00487-1> for more details on the method.
Simulation and visualization of depth-dependent integrated visual fields. Visual fields are measured monocularly at a single depth, yet real-life activities involve predominantly binocular vision at multiple depths. The package provides functions to simulate and visualize binocular visual field impairment in a depth-dependent fashion from monocular visual field results, based on Ping Liu, Allison McKendrick, Anna Ma-Wyatt, and Andrew Turpin (2019) <doi:10.1167/tvst.9.3.8>. At each location and depth plane, sensitivities are linearly interpolated from the corresponding locations in the monocular visual fields and returned as the higher of the two values. Its utility is demonstrated by evaluating DD-IVF defects associated with 12 glaucomatous archetypes of the 24-2 visual field pattern in the included Shiny apps.
The Autoregressive Integrated Moving Average (ARIMA) model is a very popular univariate time series model. Its application has been widened by the incorporation of exogenous variables (X), giving the ARIMAX model of Bierens (1987) <doi:10.1016/0304-4076(87)90086-8>. This package estimates the ARIMAX model in a Bayesian framework.
Model-agnostic tool for decomposition of predictions from black boxes. The Break Down table shows the contribution of every variable to the final prediction, and the Break Down plot presents these variable contributions in a concise graphical way. This package works for binary classifiers and general regression models.
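A minimal sketch, assuming the broken() interface from the package documentation (function and argument names are assumptions):

library(breakDown)
# Decompose a single prediction of a linear model into per-variable contributions.
model <- lm(Sepal.Length ~ ., data = iris)
explanation <- broken(model, new_observation = iris[1, ])
explanation        # Break Down table
plot(explanation)  # Break Down plot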
This package provides a class of Bayesian beta regression models for the analysis of continuous data with support restricted to an unknown finite interval. The response variable is modeled using a four-parameter beta distribution with the mean or mode parameter depending linearly on covariates through a link function. When the response support is known to be (0,1), this class of models reduces to traditional (0,1)-supported beta regression models. Model choice is carried out via the logarithm of the pseudo marginal likelihood (LPML), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). See Zhou and Huang (2022) <doi:10.1016/j.csda.2021.107345>.
Collection of tools to make R more convenient. Includes tools to summarize data using statistics not available in base R and to manipulate objects for analyses.
Enables users to evaluate long-term trends using a Generalized Additive Modeling (GAM) approach. The model development includes selecting a GAM structure to describe nonlinear, seasonally varying changes over time, incorporation of hydrologic variability via either river flow or salinity, the use of an intervention to deal with method or laboratory changes suspected to impact data values, and representation of left- and interval-censored data. The approach has been applied to water quality data in the Chesapeake Bay, a major estuary on the east coast of the United States, to provide insights into a range of management- and research-focused questions. Methodology described in Murphy (2019) <doi:10.1016/j.envsoft.2019.03.027>.
Fitting, cross-validating, and predicting with Bayesian Knowledge Tracing (BKT) models. It is designed for analyzing educational datasets to trace student knowledge over time. The package includes functions for fitting BKT models, evaluating their performance using various metrics, and making predictions on new data. It provides similar functionality to the Python package pyBKT authored by Zachary A. Pardos (zp@berkeley.edu) at <https://github.com/CAHLR/pyBKT>.
Enables quick calibration of radiocarbon dates under various calibration curves (including user-generated ones); age-depth modelling as per the algorithm of Haslett and Parnell (2008) <DOI:10.1111/j.1467-9876.2008.00623.x>; relative sea level rate estimation incorporating time uncertainty in polynomial regression models (Parnell and Gehrels 2015) <DOI:10.1002/9781118452547.ch32>; non-parametric phase modelling via Gaussian mixtures as a means to determine the activity of a site (and as an alternative to the OxCal function SUM(); currently unpublished); and reverse calibration of dates from calibrated into 14C years (also unpublished).
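A minimal calibration sketch, assuming the BchronCalibrate() interface from the package documentation (argument names and curve label are assumptions):

library(Bchron)
# Calibrate a single radiocarbon determination against the IntCal20 curve.
cal <- BchronCalibrate(ages = 11553, ageSds = 230, calCurves = "intcal20")
summary(cal)
plot(cal)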
Generates nonparametric bootstrap confidence intervals (Efron and Tibshirani, 1993: <doi:10.1201/9780429246593>) for standardized regression coefficients (beta) and other effect sizes, including multiple correlation, semipartial correlations, improvement in R-squared, squared partial correlations, and differences in standardized regression coefficients, for models fitted by lm().
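To illustrate the underlying idea (this is not the package's own interface), the same kind of interval can be sketched with base R and the boot package:

library(boot)
# Nonparametric bootstrap of one standardized regression coefficient from lm().
std_beta <- function(data, idx) {
  d <- data[idx, ]
  fit <- lm(scale(mpg) ~ scale(wt) + scale(hp), data = d)
  coef(fit)[["scale(wt)"]]
}
set.seed(1)
bs <- boot(mtcars, std_beta, R = 2000)
boot.ci(bs, type = "bca")  # bias-corrected and accelerated interval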
Arrays of structured data types can require large volumes of disk space to store. Blosc is a library that provides a fast and efficient way to compress such data. It is often applied in the storage of n-dimensional arrays, such as in the geospatial Zarr file format. This package can be used to compress and decompress data using Blosc.
This package provides statistical tools for Bayesian estimation of mixture distributions, mainly mixtures of Gamma, Normal, and t-distributions. The package is implemented based on the Bayesian literature on finite mixtures of distributions, including Mohammadi et al. (2013) <doi:10.1007/s00180-012-0323-3> and Mohammadi and Salehi-Rad (2012) <doi:10.1080/03610918.2011.588358>.
Exposes the binary search functions of the C++ standard library (std::lower_bound, std::upper_bound) plus other convenience functions, allowing faster lookups on sorted vectors.
Response surface methods for drug synergy analysis. Available methods include generalized and classical Loewe formulations as well as Highest Single Agent methodology. Response surfaces can be plotted in an interactive 3-D plot, and formal statistical tests for the presence of synergistic effects are available. Implemented methods and tests are described in the article "BIGL: Biochemically Intuitive Generalized Loewe null model for prediction of the expected combined effect compatible with partial agonism and antagonism" by Koen Van der Borght, Annelies Tourny, Rytis Bagdziunas, Olivier Thas, Maxim Nazarov, Heather Turner, Bie Verbist & Hugo Ceulemans (2017) <doi:10.1038/s41598-017-18068-5>.
Geographically referenced data and statistics of nighttime lights from NASA Black Marble <https://blackmarble.gsfc.nasa.gov/>.
Computation of key characteristics and plots for blinded sample size recalculation. Continuous as well as binary endpoints are supported in superiority and non-inferiority trials. See Baumann, Pilz, Kieser (2022) <doi:10.32614/RJ-2022-001> for a detailed description. The implemented methods include the approaches by Lu, K. (2019) <doi:10.1002/pst.1737>, Kieser, M. and Friede, T. (2000) <doi:10.1002/(SICI)1097-0258(20000415)19:7%3C901::AID-SIM405%3E3.0.CO;2-L>, Friede, T. and Kieser, M. (2004) <doi:10.1002/pst.140>, Friede, T., Mitchell, C., Mueller-Veltern, G. (2007) <doi:10.1002/bimj.200610373>, and Friede, T. and Kieser, M. (2011) <doi:10.3414/ME09-01-0063>.
It contains some example datasets used in bibliometrix. The data are bibliographic datasets exported from the SCOPUS (<https://scopus.com>) and Clarivate Analytics Web of Science (<https://www.webofscience.com/>) databases. They can be used to test the different features of the bibliometrix package (<https://bibliometrix.org>).
This package provides a suite of open-source R functions designed to produce standard metrics for forest management and ecology from forest inventory data. The overarching goal is to minimize potential inconsistencies introduced by the algorithms used to compute and summarize core forest metrics. Learn more about the purpose of the package and the specific algorithms used in the package at <https://github.com/kearutherford/BerkeleyForestsAnalytics>.
Bayesian approaches for analyzing multivariate data in ecology. Estimation is performed using Markov Chain Monte Carlo (MCMC) methods via JAGS. Three types of models may be fitted: 1) With explanatory variables only, boral fits independent column Generalized Linear Models (GLMs) to each column of the response matrix; 2) With latent variables only, boral fits a purely latent variable model for model-based unconstrained ordination; 3) With explanatory and latent variables, boral fits correlated column GLMs with latent variables to account for any residual correlation between the columns of the response matrix.
For a balanced design of experiments, this package calculates the sample size required to detect a certain standardized effect size at a given significance level. The package also provides three graphs: detectable standardized effect size vs. power, sample size vs. detectable standardized effect size, and sample size vs. power, which show the mutual relationships among sample size, power, and detectable standardized effect size. The detailed procedure is described in R. V. Lenth (2006-9) <https://homepage.divms.uiowa.edu/~rlenth/Power/>, Y. B. Lim (1998), M. A. Kastenbaum, D. G. Hoel and K. O. Bowman (1970) <doi:10.2307/2334851>, and Douglas C. Montgomery (2013, ISBN: 0849323312).
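Not this package's interface, but the same trade-off can be sketched with base R's power.anova.test() for a balanced one-way design:

# Power as a function of per-group sample size for 4 groups at a 5% significance level.
n_per_group <- 5:30
pw <- sapply(n_per_group, function(n)
  power.anova.test(groups = 4, n = n,
                   between.var = 1, within.var = 9,
                   sig.level = 0.05)$power)
plot(n_per_group, pw, type = "b",
     xlab = "sample size per group", ylab = "power")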
This package provides a lightweight object-oriented system with Python-like syntax which supports multiple inheritance and incorporates a Python-like method resolution order.