Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
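For example, the API can be queried from R with the curl package. This is a minimal sketch: the host below is a placeholder for wherever this service is running, and a JSON response body is an assumption about the format.

    library(curl)
    # Placeholder host -- substitute the address serving this page.
    res <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")
    cat(parse_headers(res$headers), sep = "\n")  # pagination details arrive in the headers
    body <- rawToChar(res$content)               # response body, e.g. jsonlite::fromJSON(body)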
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Supervised learning using Boltzmann Bayes model inference, which extends the naive Bayes model to include interactions. Enables classification of data into multiple response groups based on a large number of discrete predictors that can take factor values of heterogeneous levels. Either pseudo-likelihood or mean-field inference can be used, with L2 regularization, cross-validation, and prediction on new data. <doi:10.18637/jss.v101.i05>.
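A minimal fitting sketch, assuming this is the bbl package described in the cited JSS article; the .^2 formula shorthand for pairwise interactions and the method argument are assumptions to check against that article's usage:

    library(bbl)
    # Fit a Boltzmann Bayes model with pairwise interactions among all predictors.
    fit <- bbl(y ~ .^2, data = train, method = "pseudo")  # "pseudo" vs. "mf" inference (assumption)
    pred <- predict(fit, newdata = test)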
We implemented a Bayesian-statistics approach for subtraction of incoherent scattering from neutron total-scattering data. In this approach, the estimated background signal associated with incoherent scattering maximizes the posterior probability, which combines the likelihood of this signal in reciprocal and real spaces with a prior that favors smooth lines. A description of the corresponding approach can be found in Gagin and Levin (2014) <DOI:10.1107/S1600576714023796>.
Simplify bivariate and regression analyses by automating result generation, including summary tables, statistical tests, and customizable graphs. It supports tests for continuous and dichotomous data, as well as stepwise regression for linear, logistic, and Firth penalized logistic models. While not a substitute for tailored analysis, BiVariAn accelerates workflows and is expanding features such as multilingual interpretation of results. The methods for selecting significant statistical tests, as well as the predictor selection in prediction functions, can be referenced in the works of Marc Kery (2003) <doi:10.1890/0012-9623(2003)84[92:NORDIG]2.0.CO;2> and Rainer Puhr (2017) <doi:10.1002/sim.7273>.
Facilitates retrieval, transformation and analysis of data from the Barcode of Life Data Systems (BOLD) database <https://boldsystems.org/>. This package allows both public and private user data to be easily downloaded into the R environment using a variety of inputs, such as IDs (processid, sampleid), BINs, dataset codes, project codes, taxonomy, and geography. It provides frictionless data conversion into formats compatible with other R packages and third-party tools, as well as functions for sequence alignment and clustering, biodiversity analysis, and spatial mapping.
It is designed to calculate connections between (or among) brain regions and plot connection lines. A summary function is also included to summarize group-level connectivity networks. Kang, Jian (2016) <doi:10.1016/j.neuroimage.2016.06.042>.
Bit-level reading and writing are necessary when dealing with many file formats, e.g. compressed data and binary files. Currently, R connections are manipulated at the byte level. This package wraps existing connections and raw vectors so that it is possible to read bits, bit sequences, unaligned bytes, and low-bit representations of integers.
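The groundwork the package builds on can be illustrated with base R alone (this is not the package's API, just the underlying idea of bit-level access to a raw vector):

    # Unpack a raw vector into bits (least-significant bit first within each byte).
    x <- as.raw(c(0xB7, 0x01))
    bits <- rawToBits(x)
    # Read an unaligned 3-bit unsigned integer starting at the first bit.
    sum(as.integer(bits[1:3]) * 2^(0:2))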
Tutorials for statistics, aimed at biological scientists. Subjects range from basic descriptive statistics through to complex linear modelling. The tutorials include text, videos, interactive coding exercises and multiple choice quizzes. The package also includes 19 datasets which are used in the tutorials.
This package implements functions to update Bayesian predictive power computations after not stopping a clinical trial at an interim analysis. Such an interim analysis can be either blinded or unblinded. Code is provided for normally distributed endpoints with known variance, with a prominent example being the hazard ratio.
This package implements Bayesian inference to detect a signal from a blinded clinical trial when the total number of adverse events of special concern and the total risk exposure from all patients are available in the study. For more details see the article by Mukhopadhyay et al. (2018), 'Bayesian Detection of Potential Risk Using Inference on Blinded Safety Data', in Pharmaceutical Statistics (to appear).
Allows the user to carry out GLM fitting on very large data sets. Data can be created using the data_frame() function and appended to the object with object$append(data); data_frame and data_matrix objects are available that allow the user to store large data on disk. The data is stored as doubles in binary format, and any character columns are transformed to factors and then stored as numeric (binary) data, while a look-up table is stored in a separate .meta_data file in the same folder. The data is stored in blocks, and the GLM regression algorithm is modified to carry out a MapReduce-like algorithm to fit the model. The functions bglm(), summary(), and bglm_predict() are available for creating and post-processing models. The library requires Armadillo to be installed on your system. It may not function on Windows, since multi-core processing is done using mclapply(), which forks R on Unix/Linux-type operating systems.
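A sketch of the workflow built from the functions named above; only the function names come from this description, so the argument structure shown is an assumption:

    # Build the on-disk data_frame block by block, then fit and post-process.
    df <- data_frame(first_chunk)         # create the on-disk object (assumption: takes a data.frame)
    df$append(next_chunk)                 # append further blocks
    fit <- bglm(y ~ x1 + x2, data = df)   # MapReduce-like GLM fit (arguments are assumptions)
    summary(fit)                          # post-process the fitted model
    pred <- bglm_predict(fit, new_chunk)  # predict on new data (arguments are assumptions)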
Complex machine learning models are often difficult to interpret. Shapley values serve as a powerful tool to understand and explain why a model makes a particular prediction. This package computes variable contributions using permutation-based Shapley values for Bayesian Additive Regression Trees (BART) and its extension with Post-Stratification (BARP). The permutation-based SHAP method proposed by Štrumbelj and Kononenko (2014) <doi:10.1007/s10115-013-0679-x> is grounded in data obtained via MCMC sampling. Similar to the BART model introduced by Chipman, George, and McCulloch (2010) <doi:10.1214/09-AOAS285>, this package leverages Bayesian posterior samples generated during model estimation, allowing variable contributions to be computed without requiring additional sampling. The BART model is designed to work with the following R packages: BART <doi:10.18637/jss.v097.i01>, bartMachine <doi:10.18637/jss.v070.i04>, and dbarts <https://CRAN.R-project.org/package=dbarts>. For XGBoost and baseline adjustments, the approach by Lundberg et al. (2020) <doi:10.1038/s42256-019-0138-9> is also considered. The BARP model proposed by Bisbee (2019) <doi:10.1017/S0003055419000480> was implemented with reference to <https://github.com/jbisbee1/BARP> and is designed to work with modified functions based on that implementation. BARP extends post-stratification by computing variable contributions within each stratum defined by stratifying variables. The resulting Shapley values are visualized through both global and local explanation methods.
Bayesian optimal interval based on both efficacy and toxicity outcomes (BOIN-ET) design is a model-assisted oncology phase I/II trial design, aiming to establish an optimal biological dose accounting for efficacy and toxicity in the framework of dose-finding. Some extensions of BOIN-ET design are also available to allow for time-to-event efficacy and toxicity outcomes based on cumulative and pending data (time-to-event BOIN-ET: TITE-BOIN-ET), ordinal graded efficacy and toxicity outcomes (generalized BOIN-ET: gBOIN-ET), and their combination (TITE-gBOIN-ET). boinet is a package to implement the BOIN-ET design family and supports the conduct of simulation studies to assess operating characteristics of BOIN-ET, TITE-BOIN-ET, gBOIN-ET, and TITE-gBOIN-ET, where users can choose design parameters in flexible and straightforward ways depending on their own application.
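A hedged simulation sketch; the TITE-BOIN-ET function name follows the design names above, but the argument names are assumptions based on typical dose-finding simulators and should be checked against the package documentation:

    library(boinet)
    # Simulate operating characteristics of a TITE-BOIN-ET design (arguments are assumptions).
    res <- tite.boinet(
      n.dose = 5, start.dose = 1, size.cohort = 3, n.cohort = 12,
      toxprob = c(0.01, 0.03, 0.06, 0.12, 0.18),  # assumed true toxicity probabilities per dose
      effprob = c(0.10, 0.20, 0.40, 0.55, 0.65),  # assumed true efficacy probabilities per dose
      phi = 0.30, delta = 0.60,                   # target toxicity / minimum efficacy (assumptions)
      n.sim = 1000
    )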
Generation of samples from a mix of binary, ordinal and continuous random variables with a pre-specified correlation matrix and marginal distributions. The details of the method are explained in Demirtas et al. (2012) <DOI:10.1002/sim.5362>.
Companion package, with functions, data sets, and examples, for the book Patrice Bertail and Anna Dudek (2025), Bootstrap for Dependent Data, with an R package (by Bernard Desgraupes and Karolina Marek), submitted. References: Kreiss, J.-P. and Paparoditis, E. (2003) <doi:10.1214/aos/1074290332>; Politis, D.N. and White, H. (2004) <doi:10.1081/ETC-120028836>; Patton, A., Politis, D.N. and White, H. (2009) <doi:10.1080/07474930802459016>; Tsybakov, A.B. (2018) <doi:10.1007/b13794>; Bickel, P. and Sakov, A. (2008) <doi:10.1214/18-AOS1803>; Götze, F. and Račkauskas, A. (2001) <doi:10.1214/lnms/1215090074>; Politis, D.N., Romano, J.P. and Wolf, M. (1999, ISBN:978-0-387-98854-2); Carlstein, E. (1986) <doi:10.1214/aos/1176350057>; Künsch, H. (1989) <doi:10.1214/aos/1176347265>; Liu, R. and Singh, K. (1992) <https://www.stat.purdue.edu/docs/research/tech-reports/1991/tr91-07.pdf>; Politis, D.N. and Romano, J.P. (1994) <doi:10.1080/01621459.1994.10476870>; Politis, D.N. and Romano, J.P. (1992) <https://www.stat.purdue.edu/docs/research/tech-reports/1991/tr91-07.pdf>; Bertail, P. and Dudek, A.E. (2022) <doi:10.3150/23-BEJ1683>; Dudek, A.E., Leśkow, J., Paparoditis, E. and Politis, D. (2014a) <https://ideas.repec.org/a/bla/jtsera/v35y2014i2p89-114.html>; Beran, R. (1997) <doi:10.1023/A:1003114420352>; Efron, B. and Tibshirani, R. (1993, ISBN:9780429246593); Bickel, P.J., Götze, F. and van Zwet, W.R. (1997) <doi:10.1007/978-1-4614-1314-1_17>; Davison, A.C. and Hinkley, D. (1997) <doi:10.2307/1271471>; Falk, M. and Reiss, R.D. (1989) <doi:10.1007/BF00354758>; Lahiri, S.N. (2003) <doi:10.1007/978-1-4757-3803-2>; Shimizu, K. (2017) <doi:10.1007/978-3-8348-9778-7>; Park, J.Y. (2003) <doi:10.1111/1468-0262.00471>; Kirch, C. and Politis, D.N. (2011) <doi:10.48550/arXiv.1211.4732>; Bertail, P. and Dudek, A.E. (2024) <doi:10.3150/23-BEJ1683>; Dudek, A.E. (2015) <doi:10.1007/s00184-014-0505-9>; Dudek, A.E. (2018) <doi:10.1080/10485252.2017.1404060>; Bertail, P. and Clémençon, S. (2006a) <https://ideas.repec.org/p/crs/wpaper/2004-47.html>; Bertail, P. and Clémençon, S. (2006, ISBN:978-0-387-36062-1); Radulović, D. (2006) <doi:10.1007/BF02603005>; Bertail, P., Politis, D.N. and Rhomari, N. (2000) <doi:10.1080/02331880008802701>; Nordman, D.J. and Lahiri, S.N. (2004) <doi:10.1214/009053604000000779>; Politis, D.N. and Romano, J.P. (1993) <doi:10.1006/jmva.1993.1085>; Hurvich, C.M. and Zeger, S.L. (1987, ISBN:978-1-4612-0099-4); Bertail, P. and Dudek, A. (2021) <doi:10.1214/20-EJS1787>; Bertail, P., Clémençon, S. and Tressou, J. (2015) <doi:10.1111/jtsa.12105>; Asmussen, S. (1987) <doi:10.1007/978-3-662-11657-9>; Efron, B. (1979) <doi:10.1214/aos/1176344552>; Gray, H., Schucany, W. and Watkins, T. (1972) <doi:10.2307/2335521>; Quenouille, M.H. (1949) <doi:10.1111/j.2517-6161.1949.tb00023.x>; Quenouille, M.H. (1956) <doi:10.2307/2332914>; Prakasa Rao, B.L.S. and Kulperger, R.J. (1989) <https://www.jstor.org/stable/25050735>; Rajarshi, M.B. (1990) <doi:10.1007/BF00050835>; Dudek, A.E., Maiz, S. and Elbadaoui, M. (2014) <doi:10.1016/j.sigpro.2014.04.022>; Beran, R. (1986) <doi:10.1214/aos/1176349847>; Maritz, J.S. and Jarrett, R.G. (1978) <doi:10.2307/2286545>; Bertail, P., Politis, D. and Romano, J. (1999) <doi:10.2307/2670177>; Bertail, P. and Clémençon, S. (2006b) <doi:10.1007/0-387-36062-X_1>; Radulović, D. (2004) <doi:10.1007/BF02603005>; Hurd, H.L. and Miamee, A.G. (2007) <doi:10.1002/9780470182833>; Bühlmann, P. (1997) <doi:10.2307/3318584>; Choi, E. and Hall, P. (2000) <doi:10.1111/1467-9868.00244>; Bertail, P., Clémençon, S. and Tressou, J. (2009) <doi:10.1007/s10687-009-0081-y>; Bertail, P., Medina-Garay, A., De Lima-Medina, F. and Jales, I. (2024) <doi:10.1080/02331888.2024.2344670>.
This package provides a collection of functions for downloading and processing automatic weather station (AWS) data from INMET (Brazil's National Institute of Meteorology), designed to support the estimation of reference evapotranspiration (ETo). The package facilitates streamlined access to meteorological data and aims to simplify analyses in agricultural and environmental contexts.
Extends blockr.core with interactive blocks for visual data wrangling using dplyr and tidyr operations. Users can build data transformation pipelines through a graphical interface without writing code directly. Includes blocks for filtering, selecting, mutating, summarizing, joining, and arranging data, with support for complex expressions, grouping operations, and real-time validation.
Implementation of No-Effect-Concentration estimation that uses brms (see Bürkner (2017) <doi:10.18637/jss.v080.i01>; Bürkner (2018) <doi:10.32614/RJ-2018-017>; Carpenter et al. (2017) <doi:10.18637/jss.v076.i01>) to fit concentration (dose)-response data using Bayesian methods for the purpose of estimating ECx values, but more particularly NEC (see Fox (2010) <doi:10.1016/j.ecoenv.2009.09.012>), NSEC (see Fisher and Fox (2023) <doi:10.1002/etc.5610>), and N(S)EC (see Fisher et al. (2023) <doi:10.1002/ieam.4809>). A full description of this package can be found in Fisher et al. (2024) <doi:10.18637/jss.v110.i05>. This package expands and supersedes an original version implemented in R2jags (see Su and Yajima (2020) <https://CRAN.R-project.org/package=R2jags>; Fisher et al. (2020) <doi:10.5281/ZENODO.3966864>).
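A hedged fitting sketch: bnec() is the main entry point per the JSS article cited above, but the exact formula syntax and arguments shown here are assumptions to check against the documentation:

    library(bayesnec)
    # Fit a three-parameter NEC model to concentration-response data
    # (dat is assumed to have a concentration column x and a response column y).
    fit <- bnec(y ~ crf(x, model = "nec3param"), data = dat)
    summary(fit)  # posterior summaries, including the NEC estimate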
The main function generateDataset() processes a user-supplied .R file that contains metadata parameters in order to generate actual data. The metadata parameters have to be structured in the form of metadata objects, the format of which is outlined in the package vignette. This approach makes it possible to generate artificial data in a transparent and reproducible manner.
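A minimal usage sketch; the file name is a placeholder, and passing it as the first argument is an assumption about the interface:

    # metadata.R contains metadata objects in the vignette's format (placeholder name).
    dat <- generateDataset("metadata.R")  # returns the generated data set (assumption)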
Query information and generate badges for use in READMEs and on GitHub Pages.
Fitting, cross-validating, and predicting with Bayesian Knowledge Tracing (BKT) models. It is designed for analyzing educational datasets to trace student knowledge over time. The package includes functions for fitting BKT models, evaluating their performance using various metrics, and making predictions on new data. It provides similar functionality to the Python package pyBKT, authored by Zachary A. Pardos (zp@berkeley.edu), at <https://github.com/CAHLR/pyBKT>.
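A sketch of the fit/evaluate/predict cycle described above; the function names below are illustrative placeholders, not this package's documented API:

    # Hypothetical function names, for illustration only.
    model <- bkt_fit(train_logs)           # fit BKT parameters (prior, learn, guess, slip)
    bkt_evaluate(model, test_logs)         # performance metrics on held-out data
    preds <- bkt_predict(model, new_logs)  # trace knowledge over time for new students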
Statistical classification and regression have been popular among various fields and have stayed in the limelight for scientists in those fields. Examples of the fields include clinical trials, where the statistical classification of patients is indispensable to predict the clinical courses of diseases. Considering the negative impact of diseases on performing daily tasks, correctly classifying patients based on clinical information is vital in that we need to identify patients in the high-risk group of developing a severe state and arrange medical treatment for them at an opportune moment. Deep learning - a part of artificial intelligence - has gained much attention, and research on it has burgeoned during the past decades: see, e.g., Kazemi and Mirroshandel (2018) <DOI:10.1016/j.artmed.2017.12.001>. It is a powerful technique originally designed for classification, and hence the Buddle package can provide solutions to various challenging classification and regression problems encountered in clinical trials. The Buddle package is based on the back-propagation algorithm - together with various powerful techniques such as batch normalization and dropout - which trains a multi-layer feed-forward neural network: see Krizhevsky et al. (2017) <DOI:10.1145/3065386>, Schmidhuber (2015) <DOI:10.1016/j.neunet.2014.09.003> and LeCun et al. (1998) <DOI:10.1109/5.726791> for more details. This package contains two main functions: TrainBuddle() and FetchBuddle(). TrainBuddle() builds a feed-forward neural network model and trains the model. FetchBuddle() recalls the trained model which is the output of TrainBuddle(), classifies or regresses given data, and makes a final prediction for the data.
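A two-step workflow sketch; only TrainBuddle() and FetchBuddle() come from the description, and the argument structure is a guess to check against the documentation:

    library(Buddle)
    model <- TrainBuddle(y_train, x_train)  # build and train the network (arguments are assumptions)
    pred  <- FetchBuddle(x_new, model)      # recall the model and predict (arguments are assumptions)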
This package provides functions to compute the joint probability mass function (pmf), cumulative distribution function (cdf), and survival function (sf) of the Basu-Dhar bivariate geometric distribution. Additional functionalities include the calculation of the correlation coefficient, covariance, and cross-factorial moments, as well as the generation of random variates. The package also implements parameter estimation based on the method of moments.
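The joint structure can be sketched in base R under the standard Basu-Dhar parameterization, with joint survival S(x, y) = t1^x * t2^y * t3^max(x, y) for x, y = 1, 2, ...; treating this as the package's parameterization is an assumption:

    # Joint survival function of the Basu-Dhar bivariate geometric.
    sf_bd <- function(x, y, t1, t2, t3) t1^x * t2^y * t3^pmax(x, y)
    # The joint pmf follows by double differencing of the survival function:
    # P(X = x, Y = y) = S(x, y) - S(x+1, y) - S(x, y+1) + S(x+1, y+1).
    pmf_bd <- function(x, y, t1, t2, t3) {
      sf_bd(x, y, t1, t2, t3) - sf_bd(x + 1, y, t1, t2, t3) -
        sf_bd(x, y + 1, t1, t2, t3) + sf_bd(x + 1, y + 1, t1, t2, t3)
    }
    pmf_bd(1, 2, 0.5, 0.6, 0.9)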
Bayesian Model Averaging for linear models with a wide choice of (customizable) priors. Built-in priors include coefficient priors (fixed, hyper-g, and empirical priors) and five kinds of model priors; model sampling is by enumeration or various MCMC approaches. Post-processing functions allow for inferring posterior inclusion and model probabilities, various moments, and coefficient and predictive densities. Plotting functions are available for posterior model size, MCMC convergence, predictive and coefficient densities, best-models representation, and BMA comparison. Also includes a Bayesian normal-conjugate linear model with Zellner's g prior, and assorted methods.
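A hedged sketch of a typical call; bms() is this package's main function, while the specific arguments shown are assumptions based on common usage:

    library(BMS)
    # By convention the first column of the data holds the dependent variable (assumption).
    fit <- bms(mydata, burn = 10000, iter = 50000, mprior = "uniform")
    coef(fit)    # posterior inclusion probabilities and coefficient moments
    image(fit)   # best-models representation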
Verification of continually updating time series data where we expect new values but want to ensure previous data remains unchanged. Previously recorded data could change for a number of reasons, such as the discovery of an error in model code, a change in methodology, or instrument recalibration. Monitoring data sources for these changes is not always possible. Other unnoticed changes could include a jump in time or measurement frequency due to instrument failure or software updates. Functionality is provided to check for and flag changes to previous data, so that such changes do not go unnoticed, as well as to detect unexpected jumps in time.
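The core check can be illustrated in base R (a generic sketch of the idea, not this package's API):

    # Compare a fresh pull against the stored snapshot over their common timestamps.
    old <- data.frame(time = 1:5, value = c(1, 2, 3, 4, 5))
    new <- data.frame(time = 1:6, value = c(1, 2, 3.5, 4, 5, 6))
    both <- merge(old, new, by = "time", suffixes = c(".old", ".new"))
    both[both$value.old != both$value.new, ]  # flag silently revised values
    diff(new$time)                            # gaps larger than the sampling interval reveal time jumps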