Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
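For example, here is a minimal R sketch of calling the endpoint with the httr package; the base URL is a placeholder, the JSON body format is assumed, and the search value takes the same query text as the form above (e.g. gcc@10):

library(httr)

base_url <- "https://example.org"                 # placeholder, substitute the site's real address
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

stop_for_status(resp)                             # fail on a non-2xx response
packages <- content(resp, as = "parsed")          # response body (assumed to be JSON)
print(headers(resp))                              # pagination information is returned in these headers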
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Prediction limits for the Poisson distribution are produced from both frequentist and Bayesian viewpoints. Limiting results are provided in a Bayesian setting with uniform, Jeffreys and gamma as prior distributions. More details on the methodology are discussed in Bejleri and Nandram (2018) <doi:10.1080/03610926.2017.1373814> and Bejleri, Sartore and Nandram (2021) <doi:10.1007/s42952-021-00157-x>.
Functional claims reserving methods based on aggregated chain-ladder data, also known as a run-off triangle, implemented in three nonparametric algorithms (PARALLAX, REACT, and MACRAME) proposed in Maciak, Mizera, and Pešta (2022) <doi:10.1017/asb.2022.4>. Additional methods including permutation bootstrap for completed run-off triangles are also provided.
The first goal of this package is to provide a multitude of tree models, i.e., functions that generate rooted binary trees with a given number of leaves. Second, the package allows for an easy evaluation and comparison of tree shape statistics by estimating their power to differentiate between different tree models. Please note that this R package was developed alongside the manuscript "Tree balance in phylogenetic models" by S. J. Kersting, K. Wicke, and M. Fischer (2024) <doi:10.48550/arXiv.2406.05185>, which provides further background and the respective mathematical definitions. This project was supported by the project ArtIGROW, which is a part of the WIR!-Alliance ArtIFARM – Artificial Intelligence in Farming funded by the German Federal Ministry of Education and Research (No. 03WIR4805).
Uses provenance collected post-execution to help the user understand and debug their script, providing functions to examine intermediate steps and data values, to trace their forward and backward lineage, and to understand the steps leading up to warning and error messages. provDebugR uses provenance produced by rdtLite (available on CRAN), stored in PROV-JSON format.
This package provides functions primarily for systems of ordinary differential equations, difference equations, and eigenanalysis and projection of demographic matrices; datasets are included for examples.
Perform flexible and quick calculations for Demand and Supply Planning, such as projected inventories and coverage, as well as replenishment plans, for any time bucket (daily, weekly, or monthly) and any granularity level (product or group of products).
Allows specification and fitting of some parameter estimation examples inspired by time-resolved spectroscopy via a Shiny GUI.
This package provides an implementation of a rare variant association test that utilizes protein tertiary structure to increase signal and to identify likely causal variants. Performs structure-guided collapsing, which leads to local tests that borrow information from neighboring variants on a protein and that provide association information on a variant-specific level. For details of the implemented method see West, R. M., Lu, W., Rotroff, D. M., Kuenemann, M., Chang, S-M., Wagner M. J., Buse, J. B., Motsinger-Reif, A., Fourches, D., and Tzeng, J-Y. (2019) <doi:10.1371/journal.pcbi.1006722>.
Handle data from evolve-and-resequence experiments. Measured allele frequencies (e.g., from variants called from high-throughput sequencing data) are compared using an update of the PsiSeq algorithm (Earley and Jones (2011) <doi:10.1534/genetics.111.129445>). Functions for saving and loading important files are also included, as well as functions for basic data visualization.
Estimates the corrected Procrustean correlation between matrices, removing the overfitting effect. Coissac and Gonindard-Melodelima (2019) <doi:10.1101/842070>.
This package provides a set of Study Data Tabulation Model (SDTM) datasets constructed by modifying the pharmaversesdtm package to meet J&J Innovative Medicine's standard data structure for Clinical and Statistical Programming.
Population genetic analyses for hierarchical analysis of partially clonal populations built upon the architecture of the adegenet package. Originally described in Kamvar, Tabima, and Grünwald (2014) <doi:10.7717/peerj.281> with version 2.0 described in Kamvar, Brooks, and Grünwald (2015) <doi:10.3389/fgene.2015.00208>.
Interactive Shiny application for working with probability distributions. Calculations and graphs are provided.
The permubiome R package was created to perform a permutation-based non-parametric analysis on microbiome data for biomarker discovery. The test executes thousands of comparisons in a pairwise manner, after randomly shuffling the data into the different groups of study, with a prior selection of the microbiome features showing the largest variation among groups. Before the permutation test itself, data can be normalized according to different methods proposed for microbiome data (proportions or Anders). The median-based differences between groups resulting from the multiple simulations are fitted to a normal distribution in order to calculate their significance. A multiple testing correction based on the Benjamini-Hochberg method (fdr) is finally applied to extract the differentially presented features between the groups of your dataset. LATEST UPDATES: v1.1 onwards incorporates a function to parse COLUMN format; v1.2 onwards incorporates the -optimize- function to maximize evaluation of the features with the largest inter-class variation; v1.3 onwards includes the -size.effect- function to perform estimation statistics using the bootstrap-coupled approach implemented in the dabestr (>=0.3.0) R package. The current v1.3.2 fixes a bug with "Class" recognition and updates the dabestr functions.
Computes the minimum sample size required for the development of a new multivariable prediction model using the criteria proposed by Riley et al. (2018) <doi:10.1002/sim.7992>. pmsampsize can be used to calculate the minimum sample size for the development of models with continuous, binary, or survival (time-to-event) outcomes. Riley et al. (2018) <doi:10.1002/sim.7992> lay out a series of criteria the sample size should meet. These aim to minimise overfitting and to ensure precise estimation of key parameters in the prediction model.
This package provides a comprehensive and easy to use R implementation of confirmatory phylogenetic path analysis as described by Von Hardenberg and Gonzalez-Voyer (2012) <doi:10.1111/j.1558-5646.2012.01790.x>.
Sample size calculations for practical equivalence trial design with a time to event endpoint.
Puzzle game that can be played in the R console. Restore the pixel art by shifting rows.
XKCD described a supposedly "bad" colormap that it called a "Painbow" (see <https://xkcd.com/2537/>). But simple tests demonstrate that, under some circumstances, the colormap can perform very well, and people can find information that is difficult to detect with the ggplot2 default and even with supposedly "good" colormaps like viridis. This library lets you use the Painbow in your own ggplot graphs.
This package performs genomic prediction of hybrid performance using eight GS methods: GBLUP, BayesB, RKHS, PLS, LASSO, Elastic net, XGBoost, and LightGBM. GBLUP: genomic best linear unbiased prediction; RKHS: reproducing kernel Hilbert space; PLS: partial least squares regression; LASSO: least absolute shrinkage and selection operator; XGBoost: extreme gradient boosting; LightGBM: light gradient boosting machine. It also provides fast cross-validation and a mating design scheme for the training population (Xu S et al. (2016) <doi:10.1111/tpj.13242>; Xu S (2017) <doi:10.1534/g3.116.038059>). A complete manual for this package is provided in the manual folder of the package installation directory. You can locate the manual by running the following command in R: system.file("manual", package = "predhy.GUI").
This package provides PepMapViz, a versatile R visualization package that empowers researchers with comprehensive tools for seamlessly mapping peptides to protein sequences, identifying distinct domains and regions of interest, accentuating mutations, and highlighting post-translational modifications, all while enabling comparisons across diverse experimental conditions. Potential applications of PepMapViz include visualizing cross-software mass spectrometry results at the peptide level for specific protein and domain details in a linearized format, as well as post-translational modification coverage across different experimental conditions, unraveling insights into disease mechanisms. It also enables visualization of Major Histocompatibility Complex-presented peptide clusters in different antibody regions, predicting immunogenicity in antibody drug development.
Wrangle and annotate different types of political texts. It also introduces Urgency Analysis, a new method for the analysis of urgency in political texts.
Conduct penalized meta-analysis, see Van Lissa, Van Erp, & Clapper (2023) <doi:10.31234/osf.io/6phs5>. In meta-analysis, there are often between-study differences. These can be coded as moderator variables, and controlled for using meta-regression. However, if the number of moderators is large relative to the number of studies, such an analysis may be overfit. Penalized meta-regression is useful in these cases, because it shrinks the regression slopes of irrelevant moderators towards zero.
An implementation of a non-parametric statistical model using a parallelised Monte Carlo sampling scheme. The method implemented in this package allows non-parametric inference to be regularized for small sample sizes, while also being more accurate than approximations such as variational Bayes. The concentration parameter is an effective sample size parameter, determining the faith we have in the model versus the data. When the concentration is low, the samples are close to the exact Bayesian logistic regression method; when the concentration is high, the samples are close to the simplified variational Bayes logistic regression. The method is described in full in the paper Lyddon, Walker, and Holmes (2018), "Nonparametric learning from Bayesian models with randomized objective functions" <arXiv:1806.11544>.
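As a rough illustration of the idea (this is not the package's own interface, which is not shown here): in the low-concentration limit the scheme behaves like a weighted likelihood bootstrap, which can be sketched for logistic regression in a few lines of base R:

set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(1 + 2 * x))                # simulated data for the sketch

B <- 500                                            # number of posterior samples
draws <- t(replicate(B, {
  w <- rexp(n)                                      # Dirichlet(1, ..., 1) weights, up to scaling
  w <- w / mean(w)
  fit <- glm(y ~ x, family = quasibinomial(), weights = w)
  coef(fit)                                         # maximiser of the randomly weighted log-likelihood
}))
colMeans(draws)                                     # approximate posterior means
apply(draws, 2, quantile, probs = c(0.025, 0.975))  # approximate 95% credible intervals

As the description above suggests, a positive concentration would additionally mix in pseudo-observations drawn from the centering model (variational Bayes logistic regression), with weight growing with the concentration, pulling the samples toward that model.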