Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
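For illustration, here is a minimal Python sketch of calling this endpoint; the base URL is a placeholder for this site's address, and the response body is assumed to be JSON:

    # Minimal sketch of querying the package search API.
    # BASE_URL is a placeholder -- substitute the address of this site.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "https://example.org"  # placeholder, not the real host

    params = {"search": "gcc@10", "page": 1, "limit": 20}
    url = f"{BASE_URL}/api/packages?{urllib.parse.urlencode(params)}"

    with urllib.request.urlopen(url) as resp:
        # Pagination information is returned in the response headers.
        print(dict(resp.headers.items()))
        # Assumes the endpoint returns a JSON body with the matching packages.
        print(json.loads(resp.read()))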
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Collection of tools to work with European basketball data. Functions available are related to friendly web scraping, data management and visualization. Data were obtained from <https://www.euroleaguebasketball.net/euroleague/>, <https://www.euroleaguebasketball.net/eurocup/> and <https://www.acb.com/>, following the instructions of their respective robots.txt files, when available. Box score data are available for the three leagues. Play-by-play and spatial shooting data are also available for the Spanish league. Methods for analysis include a population pyramid, 2D plots, circular plots of players' percentiles, plots of players' monthly/yearly stats, team heatmaps, team shooting plots, team four factors plots, cross-tables with the results of regular season games, maps of nationalities, combinations of lineups, possessions-related variables, timeouts, performance by periods, personal fouls, offensive rebounds and different types of shooting charts. Please see Vinue (2020) <doi:10.1089/big.2018.0124> and Vinue (2024) <doi:10.1089/big.2023.0177>.
Bayesian synthetic likelihood (BSL, Price et al. (2018) <doi:10.1080/10618600.2017.1302882>) is an alternative to standard, non-parametric approximate Bayesian computation (ABC). BSL assumes a multivariate normal distribution for the summary statistic likelihood, and it is suitable when the distribution of the model summary statistics is sufficiently regular. This package provides a Metropolis-Hastings Markov chain Monte Carlo implementation of four methods (BSL, uBSL, semiBSL and BSLmisspec) and two shrinkage estimators (graphical lasso and Warton's estimator). uBSL (Price et al. (2018) <doi:10.1080/10618600.2017.1302882>) uses an unbiased estimator of the normal density. A semi-parametric version of BSL (semiBSL, An et al. (2018) <arXiv:1809.05800>) is more robust to non-normal summary statistics. BSLmisspec (Frazier et al. 2019 <arXiv:1904.04551>) estimates the Gaussian synthetic likelihood whilst acknowledging that there may be incompatibility between the model and the observed summary statistic. Shrinkage estimation can help to decrease the number of model simulations when the dimension of the summary statistic is high (e.g., BSLasso, An et al. (2019) <doi:10.1080/10618600.2018.1537928>). Extensions to this package are planned. For a journal article describing how to use this package, see An et al. (2022) <doi:10.18637/jss.v101.i11>.
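As a rough illustration of the core idea behind BSL (and not of this package's interface), the following sketch runs a random-walk Metropolis-Hastings chain in which the log-likelihood of the observed summary statistic is replaced by a multivariate normal density fitted to summaries of model simulations; the toy model, summary statistics and tuning constants are assumptions chosen only for the example.

    # Conceptual sketch of Bayesian synthetic likelihood, not the BSL package API.
    # Toy model: data are N(theta, 1); summaries are (mean, sd) of a simulated sample.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)
    y_obs = rng.normal(2.0, 1.0, size=50)
    s_obs = np.array([y_obs.mean(), y_obs.std(ddof=1)])

    def synthetic_loglik(theta, n_sim=200):
        # Simulate summaries under theta and fit a multivariate normal to them.
        sims = rng.normal(theta, 1.0, size=(n_sim, 50))
        S = np.column_stack([sims.mean(axis=1), sims.std(axis=1, ddof=1)])
        mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
        return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)

    # Random-walk Metropolis-Hastings on theta with a flat prior.
    theta, ll = 0.0, synthetic_loglik(0.0)
    chain = []
    for _ in range(2000):
        prop = theta + rng.normal(0.0, 0.3)
        ll_prop = synthetic_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)

    print("posterior mean of theta:", np.mean(chain[500:]))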
The main purpose of this package is to propose a transparent methodological framework to compare bioregionalisation methods based on hierarchical and non-hierarchical clustering algorithms (Kreft & Jetz (2010) <doi:10.1111/j.1365-2699.2010.02375.x>) and network algorithms (Lenormand et al. (2019) <doi:10.1002/ece3.4718> and Leroy et al. (2019) <doi:10.1111/jbi.13674>).
This package provides a Bayesian framework to estimate the Student's t-distribution's degrees of freedom. Markov chain Monte Carlo sampling routines, developed as in <doi:10.3390/axioms11090462>, sample from the posterior distribution of the degrees of freedom. A random walk Metropolis algorithm is used for sampling when Jeffreys and Gamma priors are endowed upon the degrees of freedom. In addition, the Metropolis-adjusted Langevin algorithm is used for sampling under the Jeffreys prior specification. The log-normal prior over the degrees of freedom is posed as a viable choice with comparable performance, in simulations and a real-data application, against other prior choices, where an elliptical slice sampler is used to sample from the concerned posterior.
This package provides a unified syntax for the simulation-based comparison of different single-stage basket trial designs with a binary endpoint and equal sample sizes in all baskets. Methods include the designs by Baumann et al. (2024) <doi:10.48550/arXiv.2309.06988>, Fujikawa et al. (2020) <doi:10.1002/bimj.201800404>, Berry et al. (2020) <doi:10.1177/1740774513497539>, Neuenschwander et al. (2016) <doi:10.1002/pst.1730> and Psioda et al. (2021) <doi:10.1093/biostatistics/kxz014>. For the latter three designs, the functions are mostly wrappers for functions provided by the packages bhmbasket and bmabasket.
Estimates VAR and VARX models with Structured Penalties.
Investigating and visualising Bayesian Additive Regression Tree (BART) (Chipman, H. A., George, E. I., & McCulloch, R. E. 2010) <doi:10.1214/09-AOAS285> model fits. We construct conventional plots to analyze a model's performance and stability as well as create new tree-based plots to analyze variable importance, interaction, and tree structure. We employ Value Suppressing Uncertainty Palettes (VSUP) to construct heatmaps that display variable importance and interactions jointly, using a colour scale to represent posterior uncertainty. Our visualisations are designed to work with the most popular BART R packages available, namely BART (Rodney Sparapani, Charles Spanbauer and Robert McCulloch 2021) <doi:10.18637/jss.v097.i01>, dbarts (Vincent Dorie 2023) <https://CRAN.R-project.org/package=dbarts>, and bartMachine (Adam Kapelner and Justin Bleich 2016) <doi:10.18637/jss.v070.i04>.
This package provides a hodgepodge of hopefully helpful functions. Two of these perform shrinkage estimation: one using a simple weighted method where the user can specify the degree of shrinkage required, and one using James-Stein shrinkage estimation for the case of unequal variances.
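As an illustration of the shrinkage idea (not this package's own functions), here is a minimal sketch of positive-part James-Stein shrinkage toward the grand mean for the simpler equal-variance case:

    # Generic James-Stein shrinkage toward the grand mean with equal, known variance,
    # shown only to illustrate shrinkage estimation; this is not the package's own
    # function, which also covers the unequal-variance case.
    import numpy as np

    def james_stein(x, sigma2):
        x = np.asarray(x, dtype=float)
        p = x.size
        xbar = x.mean()
        ss = np.sum((x - xbar) ** 2)
        # Shrinkage factor, truncated at zero (the "positive-part" estimator).
        c = max(0.0, 1.0 - (p - 3) * sigma2 / ss)
        return xbar + c * (x - xbar)

    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 1.0, size=10)        # true means
    x = theta + rng.normal(0.0, 1.0, size=10)    # one noisy observation per mean
    print(james_stein(x, sigma2=1.0))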
Resurrects the standard plot for shapes established by the base and graphics packages. This is suited to workflows that require plotting spatially coincident data using those established and traditional idioms, where such plotting belongs. This package depends on sf and only replaces the plot method.
Estimation of latent variable models using Bayesian methods. Currently estimates the loglinear cognitive diagnosis model of Henson, Templin, and Willse (2009) <doi:10.1007/s11336-008-9089-5>.
Multicenter randomized trials involve the collection and analysis of data from numerous study participants across multiple sites. Outliers may be present. To identify outliers, this package examines data at the individual level (univariate and multivariate) and the site level (with and without covariate adjustment). Methods are outlined in further detail in Rigdon et al. (to appear).
Network meta-analyses using a Bayesian framework following Dias et al. (2013) <DOI:10.1177/0272989X12458724>. Based on the data input, creates prior, model file, and initial values needed to run models in rjags. Able to handle binomial, normal and multinomial arm-level data. Can handle multi-arm trials and includes methods to incorporate covariate and baseline risk effects. Includes standard diagnostics and visualization tools to evaluate the results.
Maleknia et al. (2020) <doi:10.1101/2020.01.13.905448>. A novel pathway enrichment analysis package based on Bayesian networks to investigate the topology features of the pathways. First, 187 Kyoto Encyclopedia of Genes and Genomes (KEGG) human non-metabolic pathways, whose cycles were eliminated by a biological approach, enter the analysis as Bayesian network structures. The constructed Bayesian networks were optimized by the Least Absolute Shrinkage and Selection Operator (lasso) and the parameters were learned based on gene expression data. Finally, the impacted pathways were enriched by Fisher's exact test on significant parameters.
Implementation of the Generalized Pairwise Comparisons (GPC) as defined in Buyse (2010) <doi:10.1002/sim.3923> for complete observations, and extended in Peron (2018) <doi:10.1177/0962280216658320> to deal with right-censoring. GPC compares two groups of observations (intervention vs. control group) regarding several prioritized endpoints to estimate the probability that a random observation drawn from one group performs better/worse/equivalently than a random observation drawn from the other group. Summary statistics such as the net treatment benefit, win ratio, or win odds are then deduced from these probabilities. Confidence intervals and p-values are obtained based on asymptotic results (Ozenne 2021 <doi:10.1177/09622802211037067>), non-parametric bootstrap, or permutations. The software enables the use of thresholds of minimal importance difference, stratification, and non-prioritized endpoints (O'Brien test), and can handle right-censoring and competing risks.
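To make the pairwise logic concrete, here is a simplified sketch for a single uncensored continuous endpoint, ignoring thresholds, stratification and censoring, and not using this package's interface: every intervention observation is compared with every control observation, and the summary statistics are derived from the resulting win/loss/tie probabilities.

    # Simplified generalized pairwise comparison for one uncensored continuous
    # endpoint where larger values are better; illustration only, not the
    # package's API (which also handles censoring, thresholds, strata, etc.).
    import numpy as np

    def gpc(treatment, control):
        t = np.asarray(treatment, dtype=float)[:, None]
        c = np.asarray(control, dtype=float)[None, :]
        p_win = np.mean(t > c)    # intervention observation does better
        p_loss = np.mean(t < c)   # intervention observation does worse
        p_tie = 1.0 - p_win - p_loss
        return {
            "net treatment benefit": p_win - p_loss,
            "win ratio": p_win / p_loss,
            "win odds": (p_win + 0.5 * p_tie) / (p_loss + 0.5 * p_tie),
        }

    rng = np.random.default_rng(2)
    print(gpc(rng.normal(0.5, 1, 100), rng.normal(0.0, 1, 100)))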
Generates different posterior distributions of adjusted odds ratio under different priors of sensitivity and specificity, and plots the models for comparison. It also provides estimations for the specifications of the models using diagnostics of exposure status with a non-linear mixed effects model. It implements the methods first proposed in <doi:10.1016/j.annepidem.2006.04.001> and <doi:10.1177/0272989X09353452>.
An implementation of functions to generate and plot postestimation quantities after estimating Bayesian regression models using Markov chain Monte Carlo (MCMC). Functionality includes the estimation of the Precision-Recall curves (see Beger, 2016 <doi:10.2139/ssrn.2765419>), the implementation of the observed values method of calculating predicted probabilities by Hanmer and Kalkan (2013) <doi:10.1111/j.1540-5907.2012.00602.x>, the implementation of the average value method of calculating predicted probabilities (see King, Tomz, and Wittenberg, 2000 <doi:10.2307/2669316>), and the generation and plotting of first differences to summarize typical effects across covariates (see Long 1997, ISBN:9780803973749; King, Tomz, and Wittenberg, 2000 <doi:10.2307/2669316>). This package can be used with MCMC output generated by any Bayesian estimation tool including JAGS, BUGS, MCMCpack, and Stan.
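As a rough sketch of the observed values approach (not this package's functions): for each posterior draw of the coefficients, the covariate of interest is fixed at a chosen value while all other covariates keep their observed values, the predicted probabilities are averaged over the sample, and a first difference is obtained by contrasting two such values.

    # Rough sketch of the "observed values" approach to predicted probabilities
    # for a Bayesian logit model; illustration only, not the package's functions.
    # `draws` stands in for posterior coefficient draws (n_draws x n_coef) from
    # any MCMC tool, and `X` for the observed design matrix (first column = intercept).
    import numpy as np

    def observed_values_prob(draws, X, col, value):
        X_set = X.copy()
        X_set[:, col] = value                     # fix the covariate of interest
        linpred = draws @ X_set.T                 # n_draws x n_obs
        probs = 1.0 / (1.0 + np.exp(-linpred))    # inverse logit
        return probs.mean(axis=1)                 # average over observed cases,
                                                  # one value per posterior draw

    # Toy inputs standing in for real MCMC output and data.
    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(200), rng.normal(size=200), rng.binomial(1, 0.4, 200)])
    draws = rng.normal([0.2, 0.8, -0.5], 0.1, size=(1000, 3))
    p1 = observed_values_prob(draws, X, col=2, value=1)
    p0 = observed_values_prob(draws, X, col=2, value=0)
    print("first difference (posterior mean):", (p1 - p0).mean())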
The binomialRF is a new feature selection technique for decision trees that aims at providing an alternative approach to identify significant feature subsets using binomial distributional assumptions (Rachid Zaim, S., et al. (2019)) <doi:10.1101/681973>. Treating each splitting variable selection as a set of exchangeable correlated Bernoulli trials, binomialRF then tests whether a feature is selected more often than by random chance.
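A simplified sketch of the underlying test (the actual method adjusts for correlation between trees; the counts and probabilities below are made-up illustration values): a feature's selection count across trees is compared against a binomial reference that reflects selection by random chance.

    # Simplified sketch of the binomialRF idea: test whether a feature is chosen
    # as a splitting variable more often than random chance would predict.
    # The real method models correlated Bernoulli trials; here a plain one-sided
    # binomial test with success probability 1/p is used as a stand-in.
    from scipy.stats import binomtest

    n_trees = 500        # number of trees in the forest
    p_features = 100     # number of candidate features
    times_selected = 18  # how often the feature of interest was selected

    result = binomtest(times_selected, n=n_trees, p=1.0 / p_features,
                       alternative="greater")
    print("one-sided p-value:", result.pvalue)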
Provides a tool to easily build customized data flows to pre-process large volumes of information from different sources. To this end, bdpar allows users to (i) easily use and create new functionalities and (ii) develop new data source extractors according to their needs. Additionally, the package provides by default a predefined data flow to extract and pre-process the most relevant information (tokens, dates, ...) from some textual sources (SMS, Email, YouTube comments).
Interface with the Brickset API <https://brickset.com/article/52664/api-version-3-documentation> for getting data about LEGO sets. Data sets that can be used for teaching and learning without the need of a Brickset account and API key are also included. Includes all LEGO sets through the end of 2023.
Bayesian Additive Regression Trees (BART) provide flexible nonparametric modeling of covariates for continuous, binary, categorical and time-to-event outcomes. For more information see Sparapani, Spanbauer and McCulloch <doi:10.18637/jss.v097.i01>.
Flags and checks occurrence data that are in Darwin Core format. The package includes generic functions and data as well as some that are specific to bees. This package is meant to build upon and be complementary to other excellent occurrence cleaning packages, including bdc and CoordinateCleaner. This package uses datasets from several sources, particularly from the Discover Life website, created by Ascher and Pickering (2020). For further information, please see the original publication and package website. Publication - Dorey et al. (2023) <doi:10.1101/2023.06.30.547152> and package website - Dorey et al. (2023) <https://github.com/jbdorey/BeeBDC>.
We use a Bayesian approach to run individual patient data meta-analysis and network meta-analysis using JAGS. The methods incorporate shrinkage estimation and calculate patient-specific treatment effects as described in Seo et al. (2021) <DOI:10.1002/sim.8859>. This package also includes user-friendly functions that impute missing data in individual patient data using mice-related packages.
This package provides methods to estimate optimal dynamic treatment regimes using a Bayesian likelihood-based regression approach as described in Yu, W., & Bondell, H. D. (2023) <doi:10.1093/jrsssb/qkad016>. Uses backward induction and dynamic programming theory for computing expected values. Offers options for future parallel computing.
This package provides an alternative approach to aoristic analyses for archaeological datasets by fitting Bayesian parametric growth models and non-parametric random-walk Intrinsic Conditional Autoregressive (ICAR) models on time frequency data (Crema (2024) <doi:10.1111/arcm.12984>). It handles event typo-chronology based timespans defined by start/end dates as well as more complex user-provided vectors of probabilities.