Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
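As an illustration, the endpoint can also be called from a script. Below is a minimal sketch in Python using the requests library; the base URL is a placeholder, and since the exact pagination header names and the shape of the response body are not documented above, the code simply prints whatever headers and body come back.

import requests

# Placeholder: replace with the address of this site.
BASE_URL = "https://example.org"

# Search for packages matching "hello", requesting the first page of 20 results.
response = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
response.raise_for_status()

# Pagination information is returned in the response headers; print them all,
# since the exact header names depend on the server.
for name, value in response.headers.items():
    print(f"{name}: {value}")

# The matching packages are in the response body (assumed here to be JSON).
print(response.json())

The same @ syntax used in the search form works here too, e.g. params={"search": "gcc@10", ...} to look for a specific version.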
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Co-clustering of the rows and columns of a contingency or binary matrix, or of double binary matrices, and model selection for the number of row and column clusters. Three models are considered: the Poisson latent block model for contingency matrices, the binary latent block model for binary matrices, and a new model we develop: the multiple latent block model for double binary matrices. A new procedure named bikm1 is implemented to investigate the grid of numbers of clusters more efficiently. The model selection criteria studied are the integrated completed likelihood (ICL) and the Bayesian integrated likelihood (BIC). Finally, the co-clustering adjusted Rand index (CARI), which measures agreement between co-clustering partitions, is implemented. Robert Valerie, Vasseur Yann, Brault Vincent (2021) <doi:10.1007/s00357-020-09379-w>.
Assigns standardized diagnoses using the Banff Classification (Category 1 to 6 diagnoses, including Acute and Chronic active T-cell mediated rejection as well as Active, Chronic active, and Chronic antibody mediated rejection). The main function considers a minimal dataset containing biopsy information in a specific format (described by a data dictionary), verifies its content and format (based on the data dictionary), assigns diagnoses, and creates a summary report. The package is based on the reference guide to the Banff classification of renal allograft pathology: Roufosse C, Simmonds N, Clahsen-van Groningen M, et al. (2018) <doi:10.1097/TP.0000000000002366>. The full description of the Banff classification is available at <https://banfffoundation.org/>.
This package provides tools designed to make it easier for beginner and intermediate users to build and validate binary logistic regression models. Includes bivariate analysis, comprehensive regression output, model fit statistics, variable selection procedures, model validation techniques and a shiny app for interactive model building.
R/C++ implementation of the model proposed by Primiceri ("Time Varying Structural Vector Autoregressions and Monetary Policy", Review of Economic Studies, 2005), with functionality for computing posterior predictive distributions and impulse responses.
This package provides tools for Bayesian copula generalized linear models (GLMs). The sampling scheme is based on Pitt, Chan, and Kohn (2006) <doi:10.1093/biomet/93.3.537>. Regression parameters (including coefficients and dispersion parameters) are estimated via the adaptive random walk Metropolis approach developed by Haario, Saksman, and Tamminen (1999) <doi:10.1007/s001800050022>. The prior for the correlation matrix is based on Hoff (2007) <doi:10.1214/07-AOAS107>.
Generates confidence intervals for standardized regression coefficients using delta method standard errors for models fitted by lm() as described in Yuan and Chan (2011) <doi:10.1007/s11336-011-9224-6> and Jones and Waller (2015) <doi:10.1007/s11336-013-9380-y>. The package can also be used to generate confidence intervals for differences of standardized regression coefficients and as a general approach to performing the delta method. A description of the package and code examples are presented in Pesigan, Sun, and Cheung (2023) <doi:10.1080/00273171.2023.2201277>.
Simultaneously clusters periodontal disease (PD) patients and their tooth sites after taking patient- and site-level covariates into consideration. BAREB uses a determinantal point process (DPP) prior to induce diversity among different biclusters, facilitating parsimony and interpretability. Essentially, BAREB is a cluster-wise linear model, based on Yuliang (2020) <doi:10.1002/sim.8536>.
This package provides functions for Bayesian data analysis, with datasets from the book "Bayesian Data Analysis (second edition)" by Gelman, Carlin, Stern and Rubin. Not all datasets are included yet; hopefully the collection will be completed soon.
Preprocessing tools and biodiversity measures (species abundance, species richness, population heterogeneity and sensitivity) for analysing marine benthic data. See Van Loon et al. (2015) <doi:10.1016/j.seares.2015.05.002> for an application of these tools.
Bisulfite-treated RNA non-conversion in a set of samples is analysed as follows: each sample's non-conversion distribution is identified with a Poisson distribution. P-values adjusted for multiple testing are calculated in each sample. Combined non-conversion P-values and standard errors are calculated on the intersection of the set of samples. For further details, see C Legrand, F Tuorto, M Hartmann, R Liebers, D Jakob, M Helm and F Lyko (2017) <doi:10.1101/gr.210666.116>.
Temporal Exponential Random Graph Models (TERGM) estimated by maximum pseudolikelihood with bootstrapped confidence intervals or Markov Chain Monte Carlo maximum likelihood. Goodness of fit assessment for ERGMs, TERGMs, and SAOMs. Micro-level interpretation of ERGMs and TERGMs. The methods are described in Leifeld, Cranmer and Desmarais (2018), JStatSoft <doi:10.18637/jss.v083.i06>.
We implemented a Bayesian-statistics approach for subtraction of incoherent scattering from neutron total-scattering data. In this approach, the estimated background signal associated with incoherent scattering maximizes the posterior probability, which combines the likelihood of this signal in reciprocal and real spaces with a prior that favors smooth lines. A description of the corresponding approach can be found in Gagin and Levin (2014) <DOI:10.1107/S1600576714023796>.
Included are two main interfaces, bentcable.ar() and bentcable.dev.plot(), for fitting and diagnosing bent-cable regressions for autoregressive time-series data (Chiu and Lockhart 2010, <doi:10.1002/cjs.10070>) or independent data (time series or otherwise; Chiu, Lockhart and Routledge 2006, <doi:10.1198/016214505000001177>). Some components in the package can also be used as stand-alone functions. The bent cable (linear-quadratic-linear) generalizes the broken stick (linear-linear), which is also handled by this package. Version 0.2 corrected a glitch in the computation of confidence intervals for the CTP. References that were updated from Versions 0.2.1 and 0.2.2 appear in Version 0.2.3 and up. Version 0.3.0 improved robustness of the error-message-producing mechanism. Version 0.3.1 improved the NAMESPACE file of the package. It is the author's intention to distribute any future updates via GitHub.
This package provides an integrated data management solution for assets installed via the Biobricks.ai platform. Streamlines the process of loading and interacting with diverse datasets in a consistent manner. A list of bricks is available at <https://status.biobricks.ai>. Documentation for Biobricks.ai is available at <https://docs.biobricks.ai>.
Interact with the Brandwatch API <https://developers.brandwatch.com/docs>. Allows you to authenticate to the API and obtain data for projects, queries, query groups, tags and categories. Also allows you to directly obtain mentions and aggregate data for a specified query or query group.
This package provides a client for the Base Adresses Nationale ('BAN') API, which allows you to (batch) geocode and reverse-geocode French addresses. For more information about the BAN and its API, please see <https://adresse.data.gouv.fr/outils/api-doc/adresse>.
Decomposition for differences-in-differences with variation in treatment timing from Goodman-Bacon (2018) <doi:10.3386/w25018>.
Computes Blyth-Still-Casella exact binomial confidence intervals based on a refining procedure proposed by George Casella (1986) <doi:10.2307/3314658>.
Calculates the necessary quantities to perform Bayesian multigroup equivalence testing. Currently the package includes the Bayesian models and equivalence criteria outlined in Pourmohamad and Lee (2023) <doi:10.1002/sta4.645>, but more models and equivalence testing features may be added over time.
Extends lasso and elastic-net model fitting for large data sets that cannot be loaded into memory. Designed to be more memory- and computation-efficient than existing lasso-fitting packages like 'glmnet' and 'ncvreg', thus allowing the user to analyze big data with limited RAM <doi:10.32614/RJ-2021-001>.
Enables the user to infer potential synthetic lethal relationships by analysing relationships between bimodally distributed gene pairs in big gene expression datasets. Enables the user to visualise these candidate synthetic lethal relationships.
Israeli baby names provided by Israel's Central Bureau of Statistics. The package contains only names used for at least 5 children in at least one gender and sector ("Jewish", "Muslim", "Christian", "Druze" and "Other"). Data was downloaded from: <https://www.cbs.gov.il/he/publications/LochutTlushim/2020/%D7%A9%D7%9E%D7%95%D7%AA-%D7%A4%D7%A8%D7%98%D7%99%D7%99%D7%9D.xlsx>.
Some elementary matrix algebra tools are implemented to manage block matrices or partitioned matrices, i.e. "matrices of matrices" (http://en.wikipedia.org/wiki/Block_matrix). The block matrix is defined here as a new S3 object. In this package, some methods for the "matrix" object are rewritten for the "blockmatrix" object, and new methods are implemented. This package was created to solve equation systems with block matrices for the analysis of environmental vector time series. Bugs/comments/questions/collaboration of any kind are warmly welcomed.
Fits, validates and compares a number of Bayesian models for spatial and space-time point-referenced and areal unit data. Model fitting is done using several packages: 'rstan', 'INLA', 'spBayes', 'spTimer', 'spTDyn', 'CARBayes' and 'CARBayesST'. Model comparison is performed using the DIC and WAIC, and K-fold cross-validation where the user is free to select their own subset of data rows for validation. Sahu (2022) <doi:10.1201/9780429318443> describes the methods in detail.