Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
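For example, using R with the httr package (the base URL below is a placeholder for wherever this service is hosted):

library(httr)
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)  # the matching packages
headers(resp)  # pagination information is returned here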
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Model-agnostic tool for decomposing predictions from black-box models. The Break Down Table shows the contribution of every variable to the final prediction, and the Break Down Plot presents those variable contributions in a concise graphical way. This package works for binary classifiers and general regression models.
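A hedged usage sketch, assuming this is the breakDown package with broken() as its entry point; the lm model and the built-in cars dataset are only for illustration:

library(breakDown)
model <- lm(dist ~ speed, data = cars)  # toy regression model
explanation <- broken(model, new_observation = cars[1, ])
explanation        # Break Down table: per-variable contributions
plot(explanation)  # Break Down plot of the same contributions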
This package provides tools for sampling from the PolyaGamma distribution based on Polson, Scott, and Windle (2013) <doi:10.1080/01621459.2013.829001>. Useful for logistic regression.
This package performs an economic evaluation from samples of suitable cost and effectiveness/utility variables for two or more interventions, e.g. from a Bayesian model in the form of MCMC simulations. It identifies the most cost-effective alternative and produces graphical summaries and probabilistic sensitivity analyses; see Baio et al. (2017) <doi:10.1007/978-3-319-55718-2>.
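A hedged sketch, assuming this is the BCEA package accompanying Baio et al. (2017) with bcea() as the main entry point; the eff and cost matrices of simulated draws below are invented for illustration (rows are MCMC simulations, columns are interventions):

library(BCEA)
n_sim <- 1000
eff  <- cbind(rnorm(n_sim, 0.50, 0.05), rnorm(n_sim, 0.60, 0.05))
cost <- cbind(rnorm(n_sim, 500, 50), rnorm(n_sim, 900, 80))
m <- bcea(eff, cost, ref = 2, interventions = c("standard", "new"))
summary(m)  # cost-effectiveness summary for the two interventions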
This package creates plots showing scored HR experiments, as well as plots of the distribution of means of ranks of HR scores obtained from bootstrapping. See Authors (2019) <doi:10.5281/zenodo.3374507>.
This package provides a client for retrieving data and metadata from major central bank APIs. It supports access to the Bundesbank SDMX Web Service API (<https://www.bundesbank.de/en/statistics/time-series-databases/help-for-sdmx-web-service/web-service-interface-data>), the Swiss National Bank Data Portal (<https://data.snb.ch/en>), the European Central Bank Data Portal API (<https://data.ecb.europa.eu/help/api/overview>), the Bank of England Interactive Statistical Database (<https://www.bankofengland.co.uk/boeapps/database>), the Banco de España API (<https://www.bde.es/webbe/en/estadisticas/recursos/api-estadisticas-bde.html>), the Banque de France Web Service (<https://webstat.banque-france.fr/en/pages/guide-migration-api/>), and the Bank of Canada Valet API (<https://www.bankofcanada.ca/valet/docs>).
This package provides a tuneable and interpretable method for relaxing the instrumental variables (IV) assumptions to infer treatment effects in the presence of unobserved confounding. For a treatment-associated covariate to be a valid IV, it must be (a) unconfounded with the outcome and (b) have a causal effect on the outcome that is exclusively mediated by the exposure. There is no general test of the validity of these IV assumptions for any particular pre-treatment covariate. However, if different pre-treatment covariates give differing causal effect estimates when treated as IVs, then we know at least some of the covariates violate these assumptions. budgetIVr exploits this fact by taking as input a minimum budget of pre-treatment covariates assumed to be valid IVs and identifying the set of causal effects that are consistent with the user's data and budget assumption. The following generalizations of this principle can be used in this package: (1) a vector of multiple budgets can be assigned alongside corresponding thresholds that model degrees of IV invalidity; (2) budgets and thresholds can be chosen using specialist knowledge or varied in a principled sensitivity analysis; (3) treatment effects can be nonlinear and/or depend on multiple exposures (at a computational cost). The methods in this package require only summary statistics. Confidence sets are constructed under the "no measurement error" (NOME) assumption from the Mendelian randomization literature. For further methodological details, please refer to Penn et al. (2024) <doi:10.48550/arXiv.2411.06913>.
Waffle plots are rectangular pie charts that represent a quantity or abundance using colored squares or other symbols. This makes them better at conveying information, as a discrete number of squares is easier to read than the circular area of a pie chart. While the original waffle charts were rectangular with 10 rows and columns, with a single square representing 1%, they are nowadays popular in various infographics for visualizing any proportional ratios.
binomialRF is a feature selection technique for decision trees that provides an alternative approach to identifying significant feature subsets using binomial distributional assumptions (Rachid Zaim, S., et al. (2019) <doi:10.1101/681973>). Treating each splitting-variable selection as a set of exchangeable, correlated Bernoulli trials, binomialRF then tests whether a feature is selected more often than would be expected by random chance.
Render SVG as interactive figures to display contextual information, with selectable and clickable user interface elements. These figures can be seamlessly integrated into rmarkdown and Quarto documents, as well as shiny applications, allowing manipulation of elements and reporting of actions performed on them. Additional features include pan and zoom in/out functionality, and the ability to export the figures in SVG or PNG format.
This package provides a practical tool for estimating the burden of common communicable diseases in settlements of displaced populations. An online version of the tool can be found at <http://who-refugee-bod.ecdf.ed.ac.uk/shiny/app/>. Estimates of the burden of disease aim to synthesize data on cause-specific morbidity and mortality through a systematic approach that enables evidence-based decisions and comparisons across settings. The tool focuses on four acute communicable diseases and syndromes: acute respiratory infections, acute diarrheal diseases, acute jaundice syndrome, and acute febrile illnesses.
This package provides a build system based on GNU make that creates and maintains makefiles in an R session in a simple manner, and provides GUI debugging support through Microsoft Visual Studio Code.
C++ Standard Template Library containers are used to implement an efficient binary segmentation algorithm, which is log-linear on average and quadratic in the worst case.
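To illustrate the algorithm itself (not this package's optimized C++ implementation), here is a deliberately naive R sketch of binary segmentation on the mean: at each step, the split that most reduces the within-segment sum of squares is chosen. The real implementation achieves its stated complexity via efficient containers; this sketch is much slower.

binseg <- function(x, n_splits) {
  sse <- function(v) sum((v - mean(v))^2)  # within-segment cost
  segments <- list(c(1L, length(x)))       # segments as (start, end) pairs
  splits <- integer(0)
  for (k in seq_len(n_splits)) {
    best <- NULL
    for (i in seq_along(segments)) {
      lo <- segments[[i]][1]; hi <- segments[[i]][2]
      if (hi <= lo) next                   # cannot split a length-1 segment
      for (t in lo:(hi - 1L)) {            # try every split point
        gain <- sse(x[lo:hi]) - sse(x[lo:t]) - sse(x[(t + 1L):hi])
        if (is.null(best) || gain > best$gain)
          best <- list(gain = gain, i = i, t = t)
      }
    }
    if (is.null(best)) break
    s <- segments[[best$i]]
    segments[[best$i]] <- c(s[1], best$t)                      # left half
    segments[[length(segments) + 1L]] <- c(best$t + 1L, s[2])  # right half
    splits <- c(splits, best$t)
  }
  sort(splits)
}
binseg(c(rnorm(50, mean = 0), rnorm(50, mean = 5)), n_splits = 1)  # ~50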
Bayesian analysis for exponential random graph models using advanced computational algorithms. More information can be found at: <https://acaimo.github.io/Bergm/>.
Nuclear magnetic resonance (NMR) is a highly versatile analytical technique for studying molecular configuration, conformation, and dynamics, especially those of biomacromolecules such as proteins. The Biological Magnetic Resonance Data Bank ('BMRB') is a repository for data from NMR spectroscopy on proteins, peptides, nucleic acids, and other biomolecules. BMRB already offers an R package, RBMRB, to fetch data; however, it does not easily support downloading individual data files and storing them in a local directory. When using RBMRB, the data are stored as an R object, which hinders NMR researchers from accessing the rich information in the raw data, for example the metadata. The BMRBr File Downloader ('BMRBr') offers a more fundamental, low-level downloader that fetches the original deposited .str format file. This type of file contains information such as the entry title, authors, citation, protein sequences, and so on. Many factors affect NMR experiment outputs, such as temperature and resonance sensitivity; approximately 40% of the entries in the BMRB have chemical shift accuracy problems [1,2]. Unfortunately, current reference correction methods depend heavily on the availability of assigned protein chemical shifts or a protein structure. This is the problem my current research project aims to solve, and the solution will be included in a future release of the package. The current version of the package is sufficient and robust enough for downloading individual BMRB data files from the BMRB database <http://www.bmrb.wisc.edu>. The functionality of this package includes, but is not limited to: * simplifying NMR research by combining data downloading and results analysis; * allowing NMR data to reach a broader audience that can utilize more than just chemical shifts, including the metadata; * offering reference-corrected data for entries without assignment or structure information (future release). References: [1] E.L. Ulrich, H. Akutsu, J.F. Doreleijers, Y. Harano, Y.E. Ioannidis, J. Lin, et al., BioMagResBank, Nucl. Acids Res. 36 (2008) D402–8. <doi:10.1093/nar/gkm957>. [2] L. Wang, H.R. Eghbalnia, A. Bahrami, J.L. Markley, Linear analysis of carbon-13 chemical shift differences and its application to the detection and correction of errors in referencing and spin system identifications, J. Biomol. NMR 32 (2005) 13–22. <doi:10.1007/s10858-005-1717-0>.
Analysis workflow for finding geographic boundaries of ecological or landscape traits and comparing the placement of the geographic boundaries of two traits. If the data are trait values, they are transformed to boundary intensities based on approximate first derivatives across latitude and longitude. The package includes functions to create custom null models based on the input data. The boundary statistics are described in Fortin, Drapeau, and Jacquez (1996) <doi:10.2307/3545584>.
Different adjustment methods for batch effects in biomarker data, such as from tissue microarrays. Some methods attempt to retain differences between batches that may be due to between-batch differences in "biological" factors that influence biomarker values.
Allows the user to easily manage R package removal and installation. It offers many functions to display installed packages according to specific dates and to remove them if needed. The user is always prompted when running the removal functions, in order to confirm the required action. It also provides functions that will install the user's GitHub-starred R packages, whether they are available on CRAN or not.
Bone Profiler is a scientific method and software used to model bone sections for paleontological and ecological studies. See Girondot and Laurin (2003) <https://www.researchgate.net/publication/280021178_Bone_profiler_A_tool_to_quantify_model_and_statistically_compare_bone-section_compactness_profiles> and Gônet, Laurin and Girondot (2022) <https://palaeo-electronica.org/content/2022/3590-bone-section-compactness-model>.
The main function generateDataset() processes a user-supplied .R file that contains metadata parameters in order to generate actual data. The metadata parameters have to be structured in the form of metadata objects, the format of which is outlined in the package vignette. This approach makes it possible to generate artificial data in a transparent and reproducible manner.
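A purely hypothetical usage sketch; the actual argument names are documented in the package vignette, and here generateDataset() is assumed to take the path of the metadata-bearing .R file:

## "dataset_metadata.R" is a hypothetical file defining the metadata
## objects described in the package vignette.
generateDataset("dataset_metadata.R")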
This package provides the bayesGARCH() function, which performs Bayesian estimation of the GARCH(1,1) model with Student's t innovations as described in Ardia (2008) <doi:10.1007/978-3-540-78657-3>.
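A minimal sketch following the bayesGARCH package documentation; the bundled dem2gbp exchange-rate return data and the control settings are assumptions taken from the package's examples:

library(bayesGARCH)
data(dem2gbp)      # DEM/GBP daily log returns shipped with the package
y <- dem2gbp[1:750]
## two short MCMC chains for the GARCH(1,1)-t parameters (alpha0, alpha1, beta, nu)
chains <- bayesGARCH(y, control = list(n.chain = 2, l.chain = 200))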
Includes algorithms to assess alpha and beta diversity in all their dimensions (taxonomic, phylogenetic, and functional). It enables a number of analyses based on species identities/abundances, phylogenetic/functional distances, trees, convex hulls, or kernel density n-dimensional hypervolumes depicting species relationships. See Cardoso et al. (2015) <doi:10.1111/2041-210X.12310>.
Computes the sample size according to Bethel's procedure.
Reproducible and automated analysis of multiplex bead assays such as CBA (Morgan et al. 2004; <doi:10.1016/j.clim.2003.11.017>), LEGENDplex (Yu et al. 2015; <doi:10.1084/jem.20142318>), and MACSPlex (Miltenyi Biotec 2014; application note: Data acquisition and analysis without the MACSQuant analyzer; <https://www.miltenyibiotec.com/upload/assets/IM0021608.PDF>). The package provides functions for streamlined reading of FCS files and for identification of bead clusters and analyte expression. It eases the calculation of standard curves and the subsequent calculation of analyte concentrations.
Shows statistics about the bytes contained in a file as a circle graph of deviations from the mean, in sigma increments. The function can be useful for statistically analyzing the content of files at a glance: text files are shown as a green centered crown, compressed and encrypted files should be shown as equally distributed variations with a very low CV (sigma/mean), and other types of files can be classified between these two categories depending on their text vs. binary content, which can be useful to quickly determine how information is stored inside them (databases, multimedia files, etc.).