Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
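For example, a minimal sketch of calling this endpoint from R with the httr package; the host name below is a placeholder, so substitute the actual server address:

    library(httr)
    # Query the package search API; pagination details arrive in the headers.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information (number of pages, etc.)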
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements a novel topology-based pathway enrichment analysis that integrates the global position of nodes and the topological properties of pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. It also provides functions to retrieve the latest pathway information, so that enrichment analysis can be run against up-to-date data.
Theme 'ggplot2', 'lattice', and base graphics based on a few choices, including foreground color, background color, accent color, and font family. Fonts that aren't available on the system, but are available via download on 'Google Fonts', can be automatically downloaded, cached, and registered for use with the 'showtext' and 'ragg' packages.
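This description matches the 'thematic' package; a minimal sketch assuming its thematic_on() entry point, with arbitrary example colors:

    library(thematic)
    library(ggplot2)
    # One call restyles subsequent ggplot2, lattice, and base plots.
    thematic_on(bg = "#222222", fg = "white", accent = "#0CE3AC", font = "Roboto")
    ggplot(mtcars, aes(wt, mpg)) + geom_point()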
This package provides functions for prior specification in divergence time estimation using fossils as well as other kinds of data. It provides tools for interacting with the input and output of Bayesian platforms in evolutionary biology such as 'BEAST2', 'MrBayes', 'RevBayes', or 'MCMCTree'. It implements a simple measure of similarity between probability density functions for comparing prior and posterior Bayesian densities, as well as code for calculating the combination of distributions using the conflation of Hill (2008). Functions for estimating the origination time in collections of distributions using the x-intercept (e.g., Draper and Smith, 1998) and stratigraphic intervals (Marshall 2010) are also available. Hill, T. 2008. "Conflations of probability distributions". Transactions of the American Mathematical Society, 363:3351-3372. <doi:10.48550/arXiv.0808.1808>. Draper, N. R. and Smith, H. 1998. "Applied Regression Analysis". 1--706. Wiley Interscience, New York. <doi:10.1002/9781118625590>. Marshall, C. R. 2010. "Using confidence intervals to quantify the uncertainty in the end-points of stratigraphic ranges". Quantitative Methods in Paleobiology, 291--316. <doi:10.1017/S1089332600001911>.
Data frame class for storing collective movement data (e.g. fish schools, ungulate herds, baboon troops) collected from GPS trackers or computer vision tracking software.
This package provides a coherent interface for evaluating models fit with the trending package. This package is part of the RECON (<https://www.repidemicsconsortium.org/>) toolkit for outbreak analysis.
This package provides a crawler for programmatically navigating THREDDS Data Server (<https://www.unidata.ucar.edu/software/tds/>) catalogs and accessing dataset metadata and resources.
Allows computing and visualising convective parameters commonly used in the operational prediction of severe convective storms. The core algorithm is based on highly optimized C++ code linked into R via 'Rcpp'. The highly efficient engine makes it possible to derive thermodynamic and kinematic parameters from large numerical datasets, such as reanalyses or operational Numerical Weather Prediction models, in a reasonable amount of time. The package has been developed since 2017 by research meteorologists specializing in severe thunderstorms. The most relevant methods used in the package are based on the following publications: Stipanuk (1973) <https://apps.dtic.mil/sti/pdfs/AD0769739.pdf>, McCann et al. (1994) <doi:10.1175/1520-0434(1994)009%3C0532:WNIFFM%3E2.0.CO;2>, Bunkers et al. (2000) <doi:10.1175/1520-0434(2000)015%3C0061:PSMUAN%3E2.0.CO;2>, Corfidi et al. (2003) <doi:10.1175/1520-0434(2003)018%3C0997:CPAMPF%3E2.0.CO;2>, Showalter (1953) <doi:10.1175/1520-0477-34.6.250>, Coffer et al. (2019) <doi:10.1175/WAF-D-19-0115.1>, Gropp and Davenport (2019) <doi:10.1175/WAF-D-17-0150.1>, Czernecki et al. (2019) <doi:10.1016/j.atmosres.2019.05.010>, Taszarek et al. (2020) <doi:10.1175/JCLI-D-20-0346.1>, Sherburn and Parker (2014) <doi:10.1175/WAF-D-13-00041.1>, Romanic et al. (2022) <doi:10.1016/j.wace.2022.100474>.
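A hedged sketch, assuming this is the 'thunder' (thundeR) package with a sounding_compute()-style entry point taking profile vectors; the function name, the bundled sounding_vienna example data, and the argument order are all assumptions here:

    library(thunder)
    data("sounding_vienna")   # example atmospheric sounding (assumed to ship with the package)
    params <- sounding_compute(sounding_vienna$pressure, sounding_vienna$altitude,
                               sounding_vienna$temp, sounding_vienna$dpt,
                               sounding_vienna$wd, sounding_vienna$ws,
                               accuracy = 2)
    head(params)   # named vector of convective parameters (CAPE, CIN, ...)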
This package provides tools that stem and lemmatize text. Stemming reduces words to a common root by removing endings such as suffixes; lemmatization groups inflected forms together under a single base form (the lemma).
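A minimal sketch, assuming this description belongs to the 'textstem' package:

    library(textstem)
    words <- c("driver", "drive", "drove", "driving")
    stem_words(words)        # strips endings, e.g. "driving" -> "drive"
    lemmatize_words(words)   # maps inflected forms to the lemma, e.g. "drove" -> "drive"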
Data generated by independent, consecutive 'GillespieSSA' runs for a generic biochemical network are formatted as rows, each row constituting one observation. The first column of each row is the computed timestep for that run; subsequent columns hold the number of molecules of each participating molecular species, or "metabolite", of the network. In this way, 'TemporalGSSA' is a wrapper for the R package 'GillespieSSA'. At least 30 observations are required for the results to be statistically meaningful. 'TemporalGSSA' transforms this raw data into a simulation time-dependent, metabolite-specific trial. Each trial is defined as a set of linear models (n >= 30) between a timestep and the number of molecules of a metabolite, and each linear model is characterized by coefficients such as the slope and an arbitrary constant. The user must enter an integer from 1 to 4, specifying the statistical modality used to compute a representative timestep (mean, median, random, all). These arguments are mandatory and are checked: the numeric indicator "0" indicates suitability, while "1" prompts the user to revise and re-enter their data. An optional logical argument controls console output; the default "TRUE" gives curtailed output, while "FALSE" gives verbose output. The coefficients of each linear model are averaged (mean slope, mean constant) and incorporated into a metabolite-specific linear regression model as the dependent variable, with the previously chosen representative timestep as the independent variable. The generated data is the imputed molecule number for an in silico experiment with n >= 30 observations. These steps can be replicated with multiple sets of observations, and the resulting "technical replicates" can be statistically evaluated (mean, standard deviation) to yield simulation time-dependent molecule numbers for each metabolite. For SSA-generated datasets with varying simulation times, 'TemporalGSSA' will generate a simulation time-dependent trajectory for each metabolite of the biochemical network under study. The relevant publication with the mathematical derivation of the algorithm is (2022, Journal of Bioinformatics and Computational Biology) <doi:10.1142/S0219720022500184>. The algorithm has been deployed in the following publications: (2021, Heliyon) <doi:10.1016/j.heliyon.2021.e07466> and (2016, Journal of Theoretical Biology) <doi:10.1016/j.jtbi.2016.07.002>.
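A generic base-R sketch of the workflow described above (not the package's own API): fit a linear model per run, average the coefficients, and impute a molecule number at a representative timestep:

    set.seed(1)
    # 30 simulated runs, each pairing timesteps with molecule counts for one metabolite.
    runs <- lapply(1:30, function(i) {
      t <- cumsum(rexp(50))
      data.frame(time = t, molecules = 100 + 5 * t + rnorm(50))
    })
    fits <- lapply(runs, function(d) coef(lm(molecules ~ time, data = d)))
    mean_slope    <- mean(sapply(fits, `[[`, "time"))
    mean_constant <- mean(sapply(fits, `[[`, "(Intercept)"))
    t_rep   <- mean(sapply(runs, function(d) max(d$time)))  # "mean" modality
    imputed <- mean_constant + mean_slope * t_rep           # imputed molecule number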
This package provides a toolset that allows you to easily import and tidy data sheets retrieved from Gapminder data web tools. It therefore helps reduce the time spent cleaning Gapminder indicator data sheets, which tend to be very messy.
Easily carry out latent profile analysis ("LPA"), determine the correct number of classes based on best practices, and tabulate and plot the results. Provides functionality to estimate commonly specified models with free means, variances, and covariances for each profile. Follows a tidy approach, in that output is in the form of a data frame that can subsequently be computed on. Models can be estimated using the free, open-source R packages 'Mclust' and 'OpenMx', or using the commercial program 'Mplus' via the 'MplusAutomation' package.
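A minimal sketch, assuming this description belongs to the 'tidyLPA' package and its estimate_profiles() interface:

    library(tidyLPA)
    library(dplyr)
    # Fit a three-profile model on the numeric iris columns and tabulate estimates.
    iris %>%
      select(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) %>%
      estimate_profiles(3) %>%
      get_estimates()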
This package provides a version of the Titanic survival data tailored for people analytics demonstrations and practice. While another package, 'titanic', reproduces the Kaggle competition files with minimal preprocessing, tidytitanic combines the train and test datasets into a single dataset, 'passengers', for exploration and summary across all passengers. It also extracts personal identifiers, such as first names, last names, and titles, from the raw name field, enabling demographic analysis. The passengers data does not cover the crew, but this package also provides the more bare-bones, crew-containing datasets tidy_titanic and flat_titanic, based on the Titanic data set from the 'datasets' package, for further exploration. This human-centered data package is designed to support exploratory data analysis, feature engineering, and pedagogical use cases.
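A minimal sketch using the dataset names given above; the column structure is left to inspection, since it is not documented here:

    library(tidytitanic)
    head(passengers)    # combined train + test records with extracted name fields
    str(tidy_titanic)   # bare-bones, crew-containing version
    str(flat_titanic)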
A common way of validating a biological assay for precision is through a procedure where m levels of an analyte are measured with n replicates at each level; if all m estimates of the coefficient of variation (CV) are less than some prespecified level, the assay is declared validated for precision within the range of the m analyte levels. Two limitations of this procedure are that there is no clear statistical statement of precision upon passing, and that it is unclear how to modify the procedure for assays with constant standard deviation. We provide tools to convert such a procedure into a set of m hypothesis tests. This reframing motivates the m:n:q procedure, which upon completion delivers a 100q% upper confidence limit on the CV. Additionally, for a post-validation assay output of y, the method gives an "effective standard deviation interval" of log(y) plus or minus r, which is a 68% confidence interval on log(mu), where mu is the expected value of the assay output for that sample. Further, the m:n:q procedure can be straightforwardly applied to constant standard deviation assays. We illustrate these tools by applying them to a growth inhibition assay. This is an implementation of the methods described in Fay, Sachs, and Miura (2018) <doi:10.1002/sim.7528>.
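A generic base-R illustration of the classic procedure described above (not this package's API): m levels, n replicates per level, pass if every estimated CV is below a prespecified bound:

    set.seed(1)
    m <- 3; n <- 5; cv_bound <- 0.20
    analyte_levels <- c(10, 100, 1000)
    cvs <- sapply(analyte_levels, function(mu) {
      y <- rlnorm(n, meanlog = log(mu), sdlog = 0.1)  # simulated assay readings
      sd(y) / mean(y)                                 # estimated CV at this level
    })
    all(cvs < cv_bound)  # TRUE -> "validated for precision" in the classic sense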
This package provides a wrapper for the TexTra API <https://mt-auto-minhon-mlt.ucri.jgn-x.jp/>, a web service for translating texts between different languages. A TexTra API account is required to use the service.
An implementation of hypothesis testing in an extended Rasch modeling framework, including sample size planning procedures and power computations. Provides four statistical tests, i.e., the gradient test (GR), likelihood ratio test (LR), Rao score or Lagrange multiplier test (RS), and Wald test, for testing a number of hypotheses referring to the Rasch model (RM), linear logistic test model (LLTM), rating scale model (RSM), and partial credit model (PCM). Three types of functions for power and sample size computations are provided. Firstly, functions to compute the sample size given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha, and the power of the test. Secondly, functions to evaluate the power of the tests given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha of the test, and the sample size. Thirdly, functions to evaluate the so-called post hoc power of the tests, that is, the power of the tests given the observed deviation of the data from the hypothesis to be tested and a user-specified level alpha of the test. Power and sample size computations are based on a Monte Carlo simulation approach, which is computationally very efficient. The variance of the random error in computing power and sample size arising from the simulation approach is analytically derived using the delta method. Additionally, functions to compute the power of the tests as a function of an effect measure interpreted as explained variance are provided. Draxler, C., & Alexandrowicz, R. W. (2015), <doi:10.1007/s11336-015-9472-y>.
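A generic illustration of the Monte Carlo approach to power, using a one-sample t test as a stand-in for the Rasch-model tests (this is not the package's own API): the power estimate is the rejection proportion, and the error introduced by the simulation itself has a simple binomial form:

    set.seed(1)
    alpha <- 0.05; nsim <- 10000; n <- 100; effect <- 0.3
    rejects <- replicate(nsim, t.test(rnorm(n, mean = effect))$p.value < alpha)
    power_hat <- mean(rejects)                          # simulated power
    se_hat <- sqrt(power_hat * (1 - power_hat) / nsim)  # simulation standard error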
This package provides a framework for working with decision rules. Rules can be extracted from supported models, augmented with (custom) metrics using validation data, manipulated using standard dataframe operations, reordered and pruned based on a metric, and used to predict on unseen (test) data. Utilities include creating a rulelist manually, exporting a rulelist as a SQL CASE statement, and so on. The package offers two dataframe-based classes: rulelist and ruleset.
This package implements measures of tree similarity, including information-based generalized Robinson-Foulds distances (Phylogenetic Information Distance, Clustering Information Distance, Matching Split Information Distance; Smith 2020) <doi:10.1093/bioinformatics/btaa614>; Jaccard-Robinson-Foulds distances (Bocker et al. 2013) <doi:10.1007/978-3-642-40453-5_13>, including the Nye et al. (2006) metric <doi:10.1093/bioinformatics/bti720>; the Matching Split Distance (Bogdanowicz & Giaro 2012) <doi:10.1109/TCBB.2011.48>; the Hierarchical Mutual Information (Perotti et al. 2015) <doi:10.1103/PhysRevE.92.062825>; Maximum Agreement Subtree distances; the Kendall-Colijn (2016) distance <doi:10.1093/molbev/msw124>, and the Nearest Neighbour Interchange (NNI) distance, approximated per Li et al. (1996) <doi:10.1007/3-540-61332-3_168>. Includes tools for visualizing mappings of tree space (Smith 2022) <doi:10.1093/sysbio/syab100>, for identifying islands of trees (Silva and Wilkinson 2021) <doi:10.1093/sysbio/syab015>, for calculating the median of sets of trees, and for computing the information content of trees and splits.
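A minimal sketch, assuming this description belongs to the 'TreeDist' package:

    library(TreeDist)
    library(ape)
    set.seed(1)
    tree1 <- rtree(8)  # random trees; rtree() labels tips t1..t8, so leaf sets match
    tree2 <- rtree(8)
    TreeDistance(tree1, tree2)           # information-based generalized RF distance
    MatchingSplitDistance(tree1, tree2)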
The TEQR package contains software to calculate the operating characteristics for the TEQR and the ACT designs. The TEQR (toxicity equivalence range) design is a toxicity-based cumulative cohort design with added safety rules. The ACT (activity constrained for toxicity) design is also a cumulative cohort design with additional safety rules. The unique feature of this design is that dose is escalated based on lack of activity rather than lack of toxicity, and is de-escalated only if an unacceptable level of toxicity is experienced.
Utilities for basic statistical experiments that can be used for teaching introductory statistics. Each experiment generates a tibble. Dice rolls and coin flips are simulated using sample(). The properties of the dice, such as the number of sides, can be changed. A coin flip is simulated as a roll of a two-sided die. Experiments can be combined with the pipe operator.
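The simulation idea in base R, as the description notes:

    sample(1:6, size = 10, replace = TRUE)                  # ten rolls of a fair six-sided die
    sample(c("heads", "tails"), size = 5, replace = TRUE)   # five coin flips (a two-sided die)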
Class definitions and constructors for pseudo-vectors containing all permutations, combinations and subsets of objects taken from a vector. Simplifies working with structures commonly encountered in combinatorics.
Adds some functions to help with your coding etiquette. tinycodet primarily focuses on four aspects: 1) safer decimal (in)equality testing, standard-evaluated alternatives to with() and aes(), and other functions for safer coding; 2) a new package import system that attempts to combine the benefits of using a package without attaching it with the benefits of attaching a package; 3) extending the string manipulation capabilities of the 'stringi' R package; 4) reducing repetitive code. Besides linking to 'Rcpp', tinycodet has only one other dependency, namely 'stringi'.
This package provides functions for managing cashflows and interest rate curves.
Creates, manipulates, queries and repairs vectors of parameter terms. Parameter terms are the labels used to reference values in vectors, matrices and arrays. They represent the names in coefficient tables and the column names in mcmc and mcmc.list objects.
The tcplfit2 R package performs basic concentration-response curve fitting. The original tcplFit() function in the tcpl R package performed basic concentration-response curve fitting to three models. With tcplfit2, the core tcpl concentration-response functionality has been expanded to process diverse high-throughput screening (HTS) data generated at the US Environmental Protection Agency, including targeted ToxCast, high-throughput transcriptomics (HTTr), and high-throughput phenotypic profiling (HTPP). tcplfit2 can be used independently to support analysis for diverse chemical screening efforts.
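A hedged sketch, assuming tcplfit2's tcplfit2_core() entry point; the concentrations, responses, and cutoff below are made-up illustration values:

    library(tcplfit2)
    conc <- c(0.03, 0.1, 0.3, 1, 3, 10, 30, 100)        # test concentrations
    resp <- c(0.0, 0.1, 0.2, 0.4, 0.7, 0.9, 1.1, 1.2)   # fabricated responses
    out <- tcplfit2_core(conc, resp, cutoff = 0.3)      # fits the suite of models
    str(out, max.level = 1)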