Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
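For example, a minimal sketch of calling this endpoint from R (the base URL below is a placeholder and the httr package is an assumption; since the pagination header names are not listed here, all response headers are simply printed):

    library(httr)

    # Placeholder base URL -- substitute the address of this site.
    base_url <- "https://example.org"

    resp <- GET(
      paste0(base_url, "/api/packages"),
      query = list(search = "hello", page = 1, limit = 20)
    )

    str(content(resp))   # the matching packages
    headers(resp)        # pagination information is returned here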
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Bayesian analysis of item-level hierarchical twin data using an integrated item response theory model. Analyses are based on Schwabe & van den Berg (2014) <doi:10.1007/s10519-014-9649-7>, Molenaar & Dolan (2014) <doi:10.1007/s10519-014-9647-9>, Schwabe, Jonker & van den Berg (2016) <doi:10.1007/s10519-015-9768-9> and Schwabe, Boomsma & van den Berg (2016) <doi:10.1016/j.lindif.2017.01.018>.
Anonymised Bay Area bike share trip data for the year 2014. Also contains additional metadata on stations and weather.
This package provides a chemical speciation and toxicity prediction model for metal toxicity to aquatic organisms. The Biotic Ligand Model (BLM) engine was originally programmed in PowerBasic by Robert Santore and others. The main way the BLM can be used is to predict the toxicity of a metal to an organism with a known sensitivity (i.e., it is known how much of that metal must accumulate on that organism's biotic ligand to cause a physiological effect in a certain percentage of the population, such as a 20% loss in reproduction or a 50% mortality rate). The second way the BLM can be used is to estimate the chemical speciation of the metal and other constituents in water, including estimating the amount of metal accumulated on an organism's biotic ligand during a toxicity test. In the first application of the BLM, the amount of metal associated with a toxicity endpoint or regulatory limit is predicted, while in the second application, the amount of metal is known and the portions of that metal that exist in various forms are determined. This version of the engine has been restructured to perform the calculations in a way that makes it more efficient in R, while also making it more flexible and easier to maintain. Because of this, it does not currently match the desktop model exactly, but we hope to improve this comparability in the future.
The mixed model for repeated measures (MMRM) is a popular model for longitudinal clinical trial data with continuous endpoints, and brms is a powerful and versatile package for fitting Bayesian regression models. The brms.mmrm R package leverages brms to run MMRMs, and it supports a simplified interface to reduce difficulty and align with the best practices of the life sciences. References: Bürkner (2017) <doi:10.18637/jss.v080.i01>, Mallinckrodt (2008) <doi:10.1177/009286150804200402>.
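As a rough sketch of the kind of model involved, an MMRM can also be written directly in brms, bypassing the brms.mmrm interface; the simulated data set and its column names below are assumptions for illustration only:

    library(brms)

    # Toy data (assumed structure): one row per patient-visit.
    dat <- expand.grid(patient = factor(1:40), visit = factor(1:3))
    dat$arm  <- factor(ifelse(as.integer(dat$patient) <= 20, "placebo", "drug"))
    dat$base <- rnorm(40)[as.integer(dat$patient)]   # baseline value per patient
    dat$chg  <- rnorm(nrow(dat))                     # change from baseline

    fit <- brm(
      bf(chg ~ base + arm * visit + unstr(time = visit, gr = patient),
         sigma ~ 0 + visit),      # visit-specific residual standard deviations
      data = dat, family = gaussian()
    )
    summary(fit)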
Facilitates scalable spatiotemporally varying coefficient modelling with Bayesian kernelized tensor regression. The important features of this package are: (a) Enabling local temporal and spatial modeling of the relationship between the response variable and covariates. (b) Implementing the model described by Lei et al. (2023) <doi:10.48550/arXiv.2109.00046>. (c) Using a Bayesian Markov Chain Monte Carlo (MCMC) algorithm to sample from the posterior distribution of the model parameters. (d) Employing a tensor decomposition to reduce the number of estimated parameters. (e) Accelerating tensor operations and enabling graphics processing unit (GPU) acceleration with the torch package.
Bayes factors represent the ratio of probabilities assigned to data by competing scientific hypotheses. However, one drawback of Bayes factors is their dependence on prior specifications that define null and alternative hypotheses. Additionally, there are challenges in their computation. To address these issues, we define Bayes factor functions (BFFs) directly from common test statistics. BFFs express Bayes factors as a function of the prior densities used to define the alternative hypotheses. These prior densities are centered on standardized effects, which serve as indices for the BFF. Therefore, BFFs offer a summary of evidence in favor of alternative hypotheses that correspond to a range of scientifically interesting effect sizes. Such summaries remove the need for arbitrary thresholds to determine "statistical significance." BFFs are available in closed form and can be easily computed from z, t, chi-squared, and F statistics. They depend on hyperparameters "r" and "tau^2", which determine the shape and scale of the prior distributions defining the alternative hypotheses. Plots of BFFs versus effect size provide informative summaries of hypothesis tests that can be easily aggregated across studies.
This package provides a client for retrieving data and metadata from major central bank APIs. It supports access to the Bundesbank SDMX Web Service API (<https://www.bundesbank.de/en/statistics/time-series-databases/help-for-sdmx-web-service/web-service-interface-data>), the Swiss National Bank Data Portal (<https://data.snb.ch/en>), the European Central Bank Data Portal API (<https://data.ecb.europa.eu/help/api/overview>), the Bank of England Interactive Statistical Database (<https://www.bankofengland.co.uk/boeapps/database>), the Banco de España API (<https://www.bde.es/webbe/en/estadisticas/recursos/api-estadisticas-bde.html>), the Banque de France Web Service (<https://webstat.banque-france.fr/en/pages/guide-migration-api/>), and the Bank of Canada Valet API (<https://www.bankofcanada.ca/valet/docs>).
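As an illustration of what such clients wrap, the Bank of Canada Valet API, for instance, answers plain HTTP requests with JSON; a minimal sketch in R (the series name FXUSDCAD and the use of jsonlite are assumptions for illustration):

    library(jsonlite)

    # Five most recent observations of the USD/CAD exchange rate series.
    url <- "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json?recent=5"
    res <- fromJSON(url)
    res$observations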
Stock, Options and Futures Trading Strategies for Traders and Investors with Bearish Outlook. The indicators, strategies, calculations, functions and all other discussions are for academic, research, and educational purposes only, should not be construed as investment advice, and come with absolutely no liability. Guy Cohen ("The Bible of Options Strategies (2nd ed.)", 2015, ISBN: 9780133964028). Juan A. Serur ("151 Trading Strategies", 2018, ISBN: 9783030027919). Chartered Financial Analyst Institute ("Chartered Financial Analyst Program Curriculum 2020 Level I Volumes 1-6 (Vol. 5, pp. 385-453)", 2019, ISBN: 9781119593577). John C. Hull ("Options, Futures, and Other Derivatives (11th ed.)", 2022, ISBN: 9780136939979).
Defines operating characteristics of Bayesian adaptive trials with a generalised linear model response via Monte Carlo simulations of Bayesian GLMs fitted using integrated nested Laplace approximations (INLA).
We implemented a Bayesian-statistics approach for subtraction of incoherent scattering from neutron total-scattering data. In this approach, the estimated background signal associated with incoherent scattering maximizes the posterior probability, which combines the likelihood of this signal in reciprocal and real spaces with a prior that favors smooth lines. A description of the corresponding approach can be found in Gagin and Levin (2014) <doi:10.1107/S1600576714023796>.
This package provides a framework to infer causality on binary data using techniques from frequent pattern mining and estimation statistics. Given a set of individual vectors S = {x}, where x(i) is the realized value of binary variable i, the framework infers empirical causal relations among the binary variables from S in the form of a causal graph G = (V, E), where V is a set of nodes representing binary variables and there is an edge from i to j in E if variable i causes j. The framework determines dependency among variables as well as analyzing confounding factors before deciding whether i causes j. The publication for this package is Chainarong Amornbunchornvej, Navaporn Surasvadi, Anon Plangprasopchok, and Suttipong Thajchayapong (2023) <doi:10.1016/j.heliyon.2023.e15947>.
Tool to perform Bayesian inference of carcass processing/transport strategy and bone attrition from archaeofaunal skeletal profiles characterized by percentages of MAU (Minimum Anatomical Units). The approach is based on a generative model for skeletal profiles that replicates the two phases of formation of any faunal assemblage: initial accumulation as a function of human transport strategies, and subsequent attrition. Two parameters define this model: 1) the transport preference (alpha), which can take any value between -1 (mostly axial contribution) and 1 (mostly appendicular contribution), following strategies constructed as a function of the butchering efficiency of different anatomical elements and the results of ethnographic studies, and 2) the degree of attrition (beta), which can vary between 0 (no attrition) and 10 (maximum attrition) and relates the survivorship of bone elements to their maximum bone density. Starting from uniform prior probability distributions of alpha and beta, Markov chain Monte Carlo sampling based on a random-walk Metropolis-Hastings algorithm is used to derive the posterior probability distributions, which are then available for interpretation. During this process, the likelihood of obtaining the observed percentages of MAU given a pair of parameter values is estimated by the inverse of the chi-squared statistic, multiplied by the proportion of elements within 1 percent of the observed value. See Ana B. Marin-Arroyo and David Ocio (2018) <doi:10.1080/08912963.2017.1336620>.
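The sampler itself is a standard random-walk Metropolis-Hastings scheme over (alpha, beta) with the uniform priors above; a generic sketch in R (the log-likelihood below is a toy placeholder, not the model's chi-squared-based likelihood):

    set.seed(1)

    # Placeholder: in the real model this would score the fit between the
    # observed and simulated %MAU profiles (inverse chi-squared, etc.).
    log_lik <- function(alpha, beta) -((alpha - 0.3)^2 + (beta - 4)^2)

    n_iter <- 5000
    draws  <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("alpha", "beta")))
    cur    <- c(alpha = 0, beta = 5)              # start inside the uniform priors

    for (i in seq_len(n_iter)) {
      prop <- cur + rnorm(2, sd = c(0.1, 0.5))    # random-walk proposal
      ok   <- prop[1] >= -1 && prop[1] <= 1 && prop[2] >= 0 && prop[2] <= 10
      if (ok && log(runif(1)) < log_lik(prop[1], prop[2]) - log_lik(cur[1], cur[2]))
        cur <- prop                               # accept; otherwise keep current values
      draws[i, ] <- cur
    }
    apply(draws, 2, quantile, probs = c(0.025, 0.5, 0.975))  # posterior summaries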
The Biodem package provides a number of functions for biodemographic analysis.
This package implements Bayesian hybrid designs that incorporate historical control data into a current clinical trial. The package uses a dynamic power prior method to determine the degree of borrowing from the historical data, creating a hybrid control arm. This approach is primarily designed for studies with a binary primary endpoint, such as the overall response rate (ORR). Functions are provided for design calibration, sample size calculation, power evaluation, and final analysis. Additionally, it includes functions adapted from the SAMprior package (v1.1.1) by Yang et al. (2023) <https://academic.oup.com/biometrics/article/79/4/2857/7587575> to support the Self-Adapting Mixture (SAM) prior framework for comparison.
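Schematically, a power prior discounts the historical-data likelihood by a borrowing weight delta (here chosen dynamically from the agreement between historical and current data); in standard notation:

    \pi(\theta \mid D_0, \delta) \propto L(\theta \mid D_0)^{\delta}\, \pi_0(\theta), \qquad 0 \le \delta \le 1.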
This package provides a system of functions and data aimed at applying quantitative analyses to forest ecology, silviculture and decision-making. In addition, the package helps carry out data management, exploratory analysis, and model assessment.
Suite of tools that facilitate exposure-response analysis using Bayesian methods. The package provides a streamlined workflow for fitting the types of models commonly used in exposure-response analysis - linear and Emax for continuous endpoints, logistic linear and logistic Emax for binary endpoints - as well as for performing simulation and visualization. Learn more about the workflow at <https://genentech.github.io/BayesERbook/>.
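For reference, the standard (hyperbolic) Emax model for a continuous endpoint, with C denoting the exposure metric, has the form:

    E(y \mid C) = E_0 + \frac{E_{\max}\, C}{EC_{50} + C}.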
Provides early-termination phase II trial designs with a decreasingly informative prior (DIP) or a regular Bayesian prior chosen by the user. The program can determine the minimum planned sample size necessary to achieve the user-specified admissible designs. The program can also perform power and expected sample size calculations for the tests in early-termination phase II trials. See Wang C and Sabo RT (2022) <doi:10.18203/2349-3259.ijct20221110>; Sabo RT (2014) <doi:10.1080/10543406.2014.888441>.
Perform seasonal adjustment and forecasting of weekly data. The package provides a user-friendly interface for computing seasonally adjusted estimates and forecasts of weekly time series and includes functions for the construction of country-specific prior adjustment variables, as well as diagnostic tools to assess the quality of the adjustments. The methodology is described in more detail in Ginker (2024) <doi:10.13140/RG.2.2.12221.44000>.
This package implements a bootstrap-based heterogeneity test for standardized mean differences (d), Fisher-transformed Pearson's correlations (r), and natural-logarithm-transformed odds ratios (or) in meta-analysis studies. Depending on the presence of moderators, this Monte Carlo based test can be implemented in the random- or mixed-effects model. This package uses the rma() function from the R package metafor to obtain parameter estimates and likelihoods, so the metafor package must be installed. This approach refers to the studies of Anscombe (1956) <doi:10.2307/2332926>, Haldane (1940) <doi:10.2307/2332614>, Hedges (1981) <doi:10.3102/10769986006002107>, Hedges & Olkin (1985, ISBN:978-0123363800), Silagy, Lancaster, Stead, Mant, & Fowler (2004) <doi:10.1002/14651858.CD000146.pub2>, Viechtbauer (2010) <doi:10.18637/jss.v036.i03>, and Zuckerman (1994, ISBN:978-0521432009).
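A minimal sketch of the general parametric-bootstrap idea built on metafor (illustrative only, not this package's exact procedure):

    library(metafor)

    # yi: observed effect sizes (e.g., Fisher-z correlations); vi: their sampling variances.
    boot_Q <- function(yi, vi, B = 1000) {
      fit0   <- rma(yi, vi, method = "FE")   # null model: homogeneous effects
      Q_obs  <- fit0$QE                      # observed Q statistic
      mu_hat <- c(fit0$beta)
      Q_boot <- replicate(B, {
        yi_star <- rnorm(length(yi), mean = mu_hat, sd = sqrt(vi))
        rma(yi_star, vi, method = "FE")$QE   # Q under simulated homogeneity
      })
      mean(Q_boot >= Q_obs)                  # Monte Carlo p-value
    }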
Waffle plots are rectangular pie charts that represent quantities or abundances using colored squares or other symbols. This makes them better at conveying information, as a discrete number of squares is easier to read than the circular area of a pie chart. While the original waffle charts were rectangular with 10 rows and columns, with a single square representing 1%, they are nowadays popular in various infographics for visualizing any proportional ratios.
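A minimal sketch of the idea using plain ggplot2 (not this package's interface; the counts are made up):

    library(ggplot2)

    counts <- c(A = 37, B = 23, C = 40)           # assumed example counts
    cells  <- data.frame(
      part = rep(names(counts), counts),
      x    = (seq_len(sum(counts)) - 1) %%  10,   # 10 columns
      y    = (seq_len(sum(counts)) - 1) %/% 10
    )

    ggplot(cells, aes(x, y, fill = part)) +
      geom_tile(colour = "white") +
      coord_equal() +
      theme_void()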
These are miscellaneous functions for working with panel data, quantiles, and printing results. For panel data, the package includes functions for making a panel dataset balanced (that is, dropping individuals that have missing observations in any time period), for converting id numbers to row numbers, and for treating repeated cross sections as panel data under the assumption of rank invariance. For quantiles, there are functions to make distribution functions from a set of data points (this is particularly useful when a distribution function is created in several steps), to combine distribution functions based on some external weights, and to invert distribution functions. Finally, there are several other miscellaneous functions for obtaining weighted means, weighted distribution functions, and weighted quantiles; for generating summary statistics and their differences for two groups; and for adding or dropping covariates from formulas.
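For instance, building a distribution function from data points, inverting it, and taking weighted means can all be illustrated with base R alone (a generic sketch, not this package's functions):

    x <- c(2.1, 3.5, 3.5, 5.0, 7.2)
    w <- c(1, 1, 2, 1, 1) / 6              # external weights summing to one

    cdf <- ecdf(x)                         # step distribution function from the data points
    cdf(5)                                 # P(X <= 5)
    quantile(x, probs = 0.5, type = 1)     # inverse of the empirical distribution function
    weighted.mean(x, w)                    # weighted mean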
The Bayesian adaptive trial algorithm implements multiple-stage interim analyses. The package includes a data-generating function and a Bayesian hypothesis-testing function.
For a series of binary responses, create a stopping boundary with exact results after stopping, allowing updating for missing assessments.
This R package offers block Gibbs samplers for the Bayesian (adaptive) graphical lasso, ridge, and naive elastic net priors. These samplers facilitate the simulation of the posterior distribution of precision matrices for Gaussian-distributed data and were originally proposed by Wang (2012) <doi:10.1214/12-BA729>, Smith et al. (2022) <doi:10.48550/arXiv.2210.16290>, and Smith et al. (2023) <doi:10.48550/arXiv.2306.14199>, respectively.