Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
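For example, from R with the httr package (the host name below is a placeholder; substitute the instance you are querying):

# Minimal sketch; the base URL is an assumption.
library(httr)
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
headers(resp)   # pagination information (number of pages, etc.)
content(resp)   # the matching packages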
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides methods for estimating the area under the concentration versus time curve (AUC) and its standard error in the presence of Below the Limit of Quantification (BLOQ) observations. Two approaches are implemented: direct estimation using censored maximum likelihood, and a two-step approach that first imputes BLOQ values using various methods and then computes the AUC using the imputed data. Technical details are described in Barnett et al. (2020), "Methods for Non-Compartmental Pharmacokinetic Analysis With Observations Below the Limit of Quantification," Statistics in Biopharmaceutical Research. <doi:10.1080/19466315.2019.1701546>.
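As a rough illustration of the two-step idea only (not this package's API), one can impute BLOQ values at half the limit of quantification, one common simple choice, and apply the trapezoidal rule in base R:

# Hypothetical concentration-time data; NA marks BLOQ observations.
time <- c(0, 1, 2, 4, 8)
conc <- c(NA, 1.2, 0.9, 0.3, NA)
loq  <- 0.1
conc[is.na(conc)] <- loq / 2   # step 1: impute BLOQ values at LOQ/2
auc <- sum(diff(time) * (head(conc, -1) + tail(conc, -1)) / 2)   # step 2: trapezoidal AUC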
This package provides early-termination phase II trial designs with a decreasingly informative prior (DIP) or a regular Bayesian prior chosen by the user. It can determine the minimum planned sample size necessary to achieve the user-specified admissible designs, and it can also perform power and expected sample size calculations for the tests in early-termination phase II trials. See Wang C and Sabo RT (2022) <doi:10.18203/2349-3259.ijct20221110>; Sabo RT (2014) <doi:10.1080/10543406.2014.888441>.
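The underlying Bayesian monitoring idea can be sketched in base R with a conjugate Beta prior on the response rate (the prior, data, and threshold below are illustrative assumptions, not the package's DIP machinery):

a <- 1; b <- 1               # assumed Beta(1, 1) prior
x <- 4; n <- 15; p0 <- 0.2   # hypothetical interim responses, sample size, null rate
post_prob <- pbeta(p0, a + x, b + n - x, lower.tail = FALSE)   # P(p > p0 | data)
stop_early <- post_prob < 0.05   # illustrative futility rule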
This package provides a molecular genetics tool that processes binary data from fragment analysis. It consolidates replicate sample pairs, outputs summary statistics, and produces hierarchical clustering trees and nMDS plots. The package was developed from the publication available here: <doi:10.1016/j.biocontrol.2020.104426>. The GUI version is available on the R Shiny online server at <https://clarkevansteenderen.shinyapps.io/BINMAT/>, or via GitHub by typing shiny::runGitHub("BinMat", "clarkevansteenderen") into the R console. Two real-world datasets accompany the package: an AFLP dataset of Bunias orientalis samples from Tewes et al. (2017) <doi:10.1111/1365-2745.12869>, and an ISSR dataset of Nymphaea specimens from Reid et al. (2021) <doi:10.1016/j.aquabot.2021.103372>. The authors of these publications are thanked for allowing the use of their data.
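The clustering step that the package automates can be sketched in base R on a binary presence/absence matrix (the data and distance choice here are illustrative assumptions):

set.seed(1)
m <- matrix(rbinom(60, 1, 0.5), nrow = 6)   # hypothetical 0/1 fragment matrix, six samples
d <- dist(m, method = "binary")             # Jaccard-type distance for binary data
plot(hclust(d, method = "average"))         # UPGMA hierarchical clustering tree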
This package provides a collection of methods to determine the required sample size for the evaluation of inequality constrained hypotheses by means of a Bayes factor. Alternatively, for a given sample size, the unconditional error probabilities or the expected conditional error probabilities can be determined. Additional material on the methods in this package is available in Klaassen, F., Hoijtink, H. & Gu, X. (2019) <doi:10.31219/osf.io/d5kf3>.
The blocked weighted bootstrap (BBW) is an estimation technique for use with data from two-stage cluster sampled surveys in which either prior weighting (e.g. population-proportional sampling or PPS, as used in Standardized Monitoring and Assessment of Relief and Transitions or SMART surveys) or posterior weighting (e.g. as used in rapid assessment method or RAM and simple spatial sampling method or S3M surveys) is implemented. See Cameron et al. (2008) <doi:10.1162/rest.90.3.414> for the application of the bootstrap to cluster samples. See Aaron et al. (2016) <doi:10.1371/journal.pone.0163176> and Aaron et al. (2016) <doi:10.1371/journal.pone.0162462> for the application of the blocked weighted bootstrap to estimate indicators from two-stage cluster sampled surveys.
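The blocking idea, resampling whole clusters rather than individual respondents, can be sketched in base R (the data layout is an assumption, and the weighting step is omitted):

df <- data.frame(cluster = rep(1:10, each = 5), y = rnorm(50))   # hypothetical survey data
boot_means <- replicate(1000, {
  picked <- sample(unique(df$cluster), replace = TRUE)   # resample clusters, not rows
  mean(unlist(lapply(picked, function(cl) df$y[df$cluster == cl])))
})
sd(boot_means)   # bootstrap standard error of the mean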
This package provides functions to allow you to easily pass command-line arguments into R, and functions to aid in submitting your R code in parallel on a cluster and joining the results afterwards (e.g. multiple parameter values for simulations running in parallel, or splitting up a permutation test in parallel). See parseCommandArgs() for the main example of how to use this package.
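A typical pattern, assuming this is the batch package and that parseCommandArgs() assigns the parsed name-value pairs into the session by default, looks like this (the script and argument names are illustrative):

# In myscript.R; invoked as: R --vanilla --args seed 1 n 100 < myscript.R
library(batch)
parseCommandArgs()   # should create the variables seed and n
set.seed(seed)
x <- rnorm(n)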
This package implements z-test, t-test, and normal moment prior Bayes factors based on summary statistics, along with functionality to perform corresponding power and sample size calculations as described in Pawel and Held (2025) <doi:10.1080/00031305.2025.2467919>.
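The flavour of such a Bayes factor can be sketched in base R for the z-test case with a normal prior on the effect (this is the generic normal-normal calculation, not necessarily this package's parametrisation):

# BF01 comparing H0: theta = 0 against H1: theta ~ N(0, tau^2),
# given an estimate with standard error se; the numbers are hypothetical.
est <- 0.4; se <- 0.15; tau <- 0.5
bf01 <- dnorm(est, 0, se) / dnorm(est, 0, sqrt(se^2 + tau^2))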
These data contain morphological image measurements for dried beans from Koklu and Ozkan (2020) <doi:10.1016/j.compag.2020.105507>.
Finite population bootstrap algorithms to estimate the variance of the Horvitz-Thompson estimator for single-stage sampling. For a survey of bootstrap methods for finite populations, see Mashreghi et al. (2016) <doi:10.1214/16-SS113>.
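The estimator whose variance is being bootstrapped is the Horvitz-Thompson total, sketched here in base R (the sample values and inclusion probabilities are hypothetical):

y   <- c(12, 7, 30, 18)          # sampled values
pik <- c(0.10, 0.05, 0.20, 0.10) # first-order inclusion probabilities
t_ht <- sum(y / pik)             # Horvitz-Thompson estimate of the population total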
Implementation of the BRIk, FABRIk and FDEBRIk algorithms to initialise k-means. These methods are intended for the clustering of multivariate and functional data, respectively. They use the Modified Band Depth and the bootstrap to identify appropriate initial seeds for k-means, which are proven to be better options than many techniques in the literature; see Torrente and Romo (2021) <doi:10.1007/s00357-020-09372-3>. The package makes use of the functions kma and kma.similarity from the archived package fdakma, by Alice Parodi et al.
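However the seeds are chosen, they feed into k-means in the usual way (the seed rows below are an illustrative stand-in for the depth-based seeds these algorithms produce):

set.seed(1)
X <- matrix(rnorm(200), ncol = 2)
seeds <- X[c(1, 50, 100), ]        # placeholder for BRIk-style initial seeds
fit <- kmeans(X, centers = seeds)  # k-means started from the chosen seeds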
This package provides a Bayesian, global planktic foraminifera core top calibration to modern sea-surface temperatures. Includes four calibration models, considering species-specific calibration parameters and seasonality.
R/C++ implementation of the model proposed by Primiceri ("Time Varying Structural Vector Autoregressions and Monetary Policy", Review of Economic Studies, 2005), with functionality for computing posterior predictive distributions and impulse responses.
The backtest package provides facilities for exploring portfolio-based conjectures about financial instruments (stocks, bonds, swaps, options, et cetera).
This package provides functions pre-configured to use Bootstrap 5 classes and HTML structures, making it quick and easy to create Bootstrap-styled HTML. It includes functions for creating common Bootstrap elements such as containers, rows, cols, and navbars. Intended to be used with the html5 package. Learn more at <https://getbootstrap.com/>.
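The general shape of such a helper (the function below is hypothetical, not this package's API) is plain string assembly around Bootstrap class names:

bs_container <- function(...) {   # hypothetical helper, for illustration only
  paste0('<div class="container">', paste0(..., collapse = ""), "</div>")
}
bs_container("<p>Hello</p>")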
The network autocorrelation model (NAM) can be used for studying the degree of social influence regarding an outcome variable based on one or more known networks. The degree of social influence is quantified via the network autocorrelation parameters. In the case of a single network, the Bayesian methods of Dittrich, Leenders, and Mulder (2017) <DOI:10.1016/j.socnet.2016.09.002> and Dittrich, Leenders, and Mulder (2019) <DOI:10.1177/0049124117729712> are implemented using a normal, flat, or independence Jeffreys prior for the network autocorrelation. In the case of multiple networks, the Bayesian methods of Dittrich, Leenders, and Mulder (2020) <DOI:10.1177/0081175020913899> are implemented using a multivariate normal prior for the network autocorrelation parameters. Flat priors are implemented for estimating the coefficients. For Bayesian testing of equality and order-constrained hypotheses, the default Bayes factor of Gu, Mulder, and Hoijtink (2018) <DOI:10.1111/bmsp.12110> is used with the posterior mean and posterior covariance matrix of the NAM parameters based on flat priors as input.
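The model being estimated is the standard network autocorrelation model, y = rho*W y + X beta + e, which can be simulated in base R as follows (the dimensions and the row-standardised W are illustrative assumptions):

n <- 20
W <- matrix(runif(n^2), n); diag(W) <- 0; W <- W / rowSums(W)  # row-standardised network
X <- cbind(1, rnorm(n)); beta <- c(1, 0.5); rho <- 0.3          # hypothetical parameters
y <- solve(diag(n) - rho * W, X %*% beta + rnorm(n))            # y = rho*W*y + X*beta + e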
Bayesian inference on the (discrete) power spectrum of time series.
Bisulfite-treated RNA non-conversion in a set of samples is analysed as follows: each sample's non-conversion distribution is fitted to a Poisson distribution. P-values adjusted for multiple testing are calculated for each sample. Combined non-conversion p-values and standard errors are calculated on the intersection of the set of samples. For further details, see C Legrand, F Tuorto, M Hartmann, R Liebers, D Jakob, M Helm and F Lyko (2017) <doi:10.1101/gr.210666.116>.
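The per-sample step can be sketched in base R (the counts and Poisson rate are hypothetical):

counts <- c(0, 2, 5, 1, 9)   # non-converted read counts at five sites
lambda <- 1.3                # fitted Poisson mean for this sample
pvals  <- ppois(counts - 1, lambda, lower.tail = FALSE)   # P(X >= count)
p.adjust(pvals, method = "BH")                            # multiple-testing adjustment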
Estimation and interpretation of Bayesian distributed lag interaction models (BDLIMs). A BDLIM regresses a scalar outcome on repeated measures of exposure and allows for modification by a categorical variable under four specific patterns of modification. The main function is bdlim(). There are also summary and plotting functions. Details on the methodology are described in Wilson et al. (2017) <doi:10.1093/biostatistics/kxx002>.
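The regression structure, a scalar outcome on repeated exposure measurements, can be sketched with an unconstrained distributed lag model in base R (not BDLIM itself; the data are hypothetical):

n <- 100; L <- 10
expos <- matrix(rnorm(n * L), n, L)                        # repeated exposure measurements
y <- drop(expos %*% seq(1, 0, length.out = L)) + rnorm(n)  # lag effects that decay to zero
fit <- lm(y ~ expos)                                       # unconstrained distributed lag fit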
Implements the Block Coordinate Ascent with One-Step Generalized Rosen (BCA1SG) algorithm for semiparametric models for panel count data, interval-censored survival data, and degradation data. A comprehensive description of the BCA1SG algorithm can be found in Wang et al. (2020) <https://github.com/yudongstat/BCA1SG/blob/master/BCA1SG.pdf>. For details of the semiparametric models for panel count data, interval-censored survival data, and degradation data, see Wellner and Zhang (2007) <doi:10.1214/009053607000000181>, Huang and Wellner (1997) <ISBN:978-0-387-94992-5>, and Wang and Xu (2010) <doi:10.1198/TECH.2009.08197>, respectively.
This package provides tools to read, write, parse, and analyze forest fire history data (e.g. FHX). Described in Malevich et al. (2018) <doi:10.1016/j.dendro.2018.02.005>.
This package provides a wrapper to allow users to download Bus Open Data Service (BODS) transport information from the API (<https://www.bus-data.dft.gov.uk/>). This includes timetable and fare metadata (including links to full datasets), timetable data at the line level, and real-time location data.
Posterior sampling and inference for Bayesian Poisson regression models. The model specification makes use of Gaussian (or conditionally Gaussian) prior distributions on the regression coefficients. Details on the algorithm are found in D'Angelo and Canale (2023) <doi:10.1080/10618600.2022.2123337>.
Call the data wrappers for Bursa Metropolitan Municipality's Open Data Portal <https://acikyesil.bursa.bel.tr/>. These return all datasets, which are stored in a variety of formats.
Interact with the Brandwatch API <https://developers.brandwatch.com/docs>. Allows you to authenticate to the API and obtain data for projects, queries, query groups, tags, and categories. Also allows you to directly obtain mentions and aggregate data for a specified query or query group.