Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
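For example, a minimal sketch in R using the httr package; the base URL below is a placeholder, so substitute the host that actually serves the API:

library(httr)

# Placeholder base URL -- replace with the real host.
base_url <- "https://example.org"

resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

headers(resp)   # pagination information (number of pages etc.) is in the headers
content(resp)   # the matching packages are in the response body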
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Algorithms for automatically finding appropriate thresholds for numerical data, with special functions for thresholding images. Provides the ImageJ Auto Threshold plugin functionality to R users. See <https://imagej.net/plugins/auto-threshold> and Landini et al. (2017) <DOI:10.1111/jmi.12474>.
This package provides the conditional Nelson-Aalen and Aalen-Johansen estimators. The methods are based on Bladt & Furrer (2023), in preparation.
In mathematics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the Acceptance-Rejection method or Accept-Reject algorithm and is a type of Monte Carlo method. The Acceptance-Rejection method is based on the observation that, to sample a random variable, one can sample uniformly from the 2D Cartesian graph and keep only the samples in the region under the graph of its density function. Package AR is able to generate/simulate random data from a probability density function by the Acceptance-Rejection method. Moreover, this package is a useful teaching resource for the graphical presentation of the Acceptance-Rejection method. From a practical point of view, the user needs to calculate a constant for the Acceptance-Rejection method; package AR is able to compute this constant using optimization tools. Several numerical examples are provided to illustrate the graphical presentation of the Acceptance-Rejection method.
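As a rough illustration of the general technique (a generic sketch only, not the AR package's own interface), the following R code samples from a Beta(2, 2) density using a Uniform(0, 1) proposal:

# Generic acceptance-rejection sampler for a Beta(2, 2) target with a
# Uniform(0, 1) proposal. The constant c bounds f(x)/g(x); here the
# maximum of dbeta(x, 2, 2) is 1.5 at x = 0.5.
target  <- function(x) dbeta(x, 2, 2)
c_const <- 1.5

accept_reject <- function(n) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- runif(1)                   # draw from the proposal
    u <- runif(1)                   # uniform height under c * g(x)
    if (u <= target(x) / c_const)   # keep points under the density
      out <- c(out, x)
  }
  out
}

samples <- accept_reject(1000)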
Three Shiny apps are provided that introduce Harvest Control Rules (HCR) for fisheries management. 'Introduction to HCRs' provides a simple overview of how HCRs work. Users are able to select their own HCR and step through its performance, year by year. Biological variability and estimation uncertainty are introduced. 'Measuring performance' builds on the previous app and introduces the idea of using performance indicators to measure HCR performance. 'Comparing performance' allows multiple HCRs to be created and tested, and their performance compared so that the preferred HCR can be selected.
Plots simulation results of clinical trials. Its main feature is allowing users to simultaneously investigate the impact of several simulation input dimensions through dynamic filtering of the simulation results. A more detailed description of the app can be found in Meyer et al. <DOI:10.1016/j.softx.2023.101347> or the vignettes on GitHub.
Targeted differential and global enrichment analysis of taxonomic ranks by shared ASVs (Amplicon Sequence Variants), for high-throughput eDNA sequencing of fungi, bacteria, and metazoans. The package works in two steps: (I) targeted differential analysis from QIIME2 data and (II) global analysis by Taxon Mann-Whitney U test based on the targeted analysis (I). Step (I) estimates variance-mean dependence in count/abundance ASV data from high-throughput sequencing assays and tests for differentially represented ASVs based on a model using the negative binomial distribution. Step (II), NCBITaxon_MWU, uses a continuous measure of significance (such as fold-change or -log(p-value)) to identify NCBITaxon categories that are significantly enriched with either up- or down-represented ASVs. If the measure is binary (0 or 1), the script performs a typical NCBITaxon enrichment analysis based on Fisher's exact test: it shows NCBITaxon categories over-represented among the ASVs that have 1 as their measure. On the plot, different fonts are used to indicate significance, and color indicates enrichment with either up- (red) or down- (blue) regulated ASVs. No colors are shown for binary measure analysis. The tree on the plot is a hierarchical clustering of NCBITaxon categories based on shared ASVs. Categories with no branch length between them are subsets of each other. The fraction next to the category name indicates the fraction of good ASVs in it; good ASVs are the ones exceeding the arbitrary absValue cutoff (an option in taxon_mwuPlot()). For the Fisher-based test, specify absValue=0.5; this value does not affect statistics and is used for plotting only. The original idea comes from differential gene expression analysis in Wright et al. (2015) <doi:10.1186/s12864-015-1540-2>, adapted here for taxonomic analysis. The Anaconda package makes it possible to carry out these analyses by automatically creating several graphs and tables and storing them in dedicated subfolders. You will need your QIIME2 pipeline output for each kingdom (e.g., Fungi and/or Bacteria and/or Metazoa): i) taxonomy.tsv, ii) taxonomy_RepSeq.tsv, iii) ASV.tsv and iv) SampleSheet_comparison.txt (the latter being created by you).
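As a rough sketch of the underlying idea for the binary case (plain base R with made-up counts, not this package's interface), over-representation of a single taxon among ASVs with measure 1 can be tested with a 2x2 contingency table and Fisher's exact test:

# Hypothetical counts: ASVs inside/outside one taxon, split by the binary measure.
tab <- matrix(c(12, 8,     # in taxon:   measure 1, measure 0
                30, 150),  # other ASVs: measure 1, measure 0
              nrow = 2, byrow = TRUE,
              dimnames = list(c("in_taxon", "other"),
                              c("measure_1", "measure_0")))
fisher.test(tab, alternative = "greater")  # test for over-representation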
Analysis of dyadic network and relational data using additive and multiplicative effects (AME) models. The basic model includes regression terms, the covariance structure of the social relations model (Warner, Kenny and Stoto (1979) <DOI:10.1037/0022-3514.37.10.1742>, Wong (1982) <DOI:10.2307/2287296>), and multiplicative factor models (Hoff (2009) <DOI:10.1007/s10588-008-9040-4>). Several different link functions accommodate different relational data structures, including binary/network data, normal relational data, zero-inflated positive outcomes using a tobit model, ordinal relational data and data from fixed-rank nomination schemes. Several of these link functions are discussed in Hoff, Fosdick, Volfovsky and Stovel (2013) <DOI:10.1017/nws.2013.17>. Development of this software was supported in part by NIH grant R01HD067509.
Graphical functionalities for the representation of multivariate data. It is a complete re-implementation of the functions available in the ade4 package.
Functions for drawing some special plots: bagplot() plots a bagplot, faces() plots Chernoff faces, iconplot() plots a representation of a frequency table or a data matrix, plothulls() plots hulls of a bivariate data set, plotsummary() plots a graphical summary of a data set, puticon() adds icons to a plot, skyline.hist() combines several histograms of a one-dimensional data set in one plot, the slider functions support interactive graphics, spin3R() helps with the inspection of a 3-dimensional point cloud, stem.leaf() plots a stem-and-leaf plot, and stem.leaf.backback() plots back-to-back stem-and-leaf plots.
Survival analysis is employed to model the time it takes for events to occur. A survival model examines the relationship between survival and one or more predictors, usually termed covariates in the survival-analysis literature. To this end, the Cox proportional hazards (Cox-PH) model, introduced in a seminal paper by Cox (1972) <doi:10.1111/j.2517-6161.1972.tb00899.x>, is a broadly applicable and the most widely used method of survival analysis. This package can be used to estimate the effect of fixed and time-dependent covariates and also to compute the survival probabilities of the lactation of dairy animals. This package has been developed using the algorithm of Klein and Moeschberger (2003) <doi:10.1007/b97377>.
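For context, here is a minimal Cox-PH fit using the standard survival package, shown only to illustrate the general model (the lung dataset and formula are illustrative, not this package's own functions for lactation data):

library(survival)

# Cox proportional hazards fit: survival time and censoring status
# regressed on two fixed covariates.
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)   # hazard ratios for the covariates
survfit(fit)   # estimated survival probabilities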
This package provides a collection of functions related to density estimation by using Chen's (2000) idea. Mean Squared Errors (MSE) are calculated for estimated curves. For this purpose, R functions allow the distribution to be Gamma, Exponential or Weibull. For details see Chen (2000), Scaillet (2004) <doi:10.1080/10485250310001624819> and Khan and Akbar.
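As a rough sketch in the spirit of Chen's (2000) gamma-kernel idea (a generic illustration assuming the simple shape parameter x/b + 1, not this package's interface):

# Each data point contributes a gamma density with shape x/b + 1 and
# scale b, which keeps all mass on the non-negative half line.
gamma_kde <- function(x, data, b = 0.1) {
  sapply(x, function(xi) mean(dgamma(data, shape = xi / b + 1, scale = b)))
}

set.seed(1)
obs  <- rgamma(200, shape = 2, rate = 1)    # simulated positive data
grid <- seq(0.01, 8, length.out = 200)
plot(grid, gamma_kde(grid, obs), type = "l",
     xlab = "x", ylab = "density estimate")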
This package provides assessment tools for regression models with discrete and semicontinuous outcomes proposed in Yang (2023) <doi:10.48550/arXiv.2308.15596>. It calculates the double probability integral transform (DPIT) residuals, constructs QQ plots of residuals and the ordered curve for assessing mean structures.
Anytime-valid sequential estimation of the p-value of a test calibrated by Monte-Carlo simulation, as described in Stoepker & Castro (2024) <doi:10.48550/arXiv.2409.18908>.
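For background only, the basic (non-sequential) Monte Carlo p-value that such procedures estimate is usually computed as (1 + number of simulated statistics at least as extreme as the observed one) / (B + 1); a generic sketch with a toy statistic:

# Plain Monte Carlo p-value: simulate the test statistic B times under
# the null and compare with the observed value.
mc_pvalue <- function(t_obs, simulate_stat, B = 999) {
  t_sim <- replicate(B, simulate_stat())
  (1 + sum(t_sim >= t_obs)) / (B + 1)
}

set.seed(42)
x <- rnorm(30, mean = 0.4)                       # toy data
mc_pvalue(mean(x), function() mean(rnorm(30)))   # p-value for the mean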
High performance variant of apply() for a fixed set of functions. The considerable speedup of this implementation is a trade-off for universality: user-defined functions cannot be used with this package. However, about 20 of the most commonly employed functions are available. They can be divided into three types: reducing functions (like mean(), sum() etc., giving a scalar when applied to a vector), mapping functions (like normalise(), cumsum() etc., giving a vector of the same length as the input vector) and, finally, vector-reducing functions (like diff(), which produces a result vector of a length different from that of the input vector). Optional or mandatory additional arguments required by some functions (e.g. the norm type for norm()) can be passed as named arguments in '...'.
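The three categories can be illustrated with plain base R (this shows only the concept, not the package's interface):

m <- matrix(rnorm(20), nrow = 4)

# Reducing functions: one scalar per row.
apply(m, 1, mean)

# Mapping functions: each row maps to a vector of the same length
# (apply() returns these as the columns of a matrix).
apply(m, 1, cumsum)

# Vector-reducing functions: the result length differs from the input length.
apply(m, 1, diff)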
Set of functions to analyse and estimate Artificial Counterfactual models from Carvalho, Masini and Medeiros (2016) <DOI:10.2139/ssrn.2823687>.
Edit an Antares simulation before running it: create new areas, links, thermal clusters or binding constraints, or edit existing ones. Update Antares general & optimization settings. Antares is an open source power system generator; more information is available here: <https://antares-simulator.org/>.
This package contains data from an observational study concerning possible effects of light daily alcohol consumption on survival and on HDL cholesterol. It also replicates various simple analyses in Rosenbaum (2025a) <doi:10.1080/09332480.2025.2473291>. Finally, it includes new R code in wgtRankCef() that implements and replicates a new method for constructing evidence factors in observational block designs.
Enables translation of a tiny subset of R to C++. The user has to define an R function, which gets translated. For a full list of possible functions, check the documentation. After translation, an R function is returned that is a shallow wrapper around the C++ code. Alternatively, an external pointer to the C++ function is returned to the user. The intention of the package is to generate fast functions which can be used as an ODE system or during optimization.
This package provides WHO Child Growth Standards (z-scores) with confidence intervals and standard errors around the prevalence estimates, taking into account complex sample designs. More information on the methods is available online: <https://www.who.int/tools/child-growth-standards>.
This package provides a simple interface to the Microsoft Graph API <https://learn.microsoft.com/en-us/graph/overview>. Graph is a comprehensive framework for accessing data in various online Microsoft services. This package was originally intended to provide an R interface only to the Azure Active Directory part, with a view to supporting interoperability of R and Azure: users, groups, registered apps and service principals. However, it has since been expanded into a more general tool for interacting with Graph. Part of the AzureR family of packages.
Lightweight validation tool for checking function arguments and validating data analysis scripts. This is an alternative to stopifnot() from the base package and to assert_that() from the assertthat package. It provides more informative error messages and facilitates debugging.
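For comparison, base R's stopifnot() (since R 3.5) already accepts named conditions whose names become the error messages; a validation package of this kind aims to give richer diagnostics than this minimal base-R pattern:

# Base-R baseline: named conditions turn into error messages.
check_input <- function(x) {
  stopifnot(
    "x must be numeric"   = is.numeric(x),
    "x must be non-empty" = length(x) > 0
  )
  mean(x)
}
check_input("a")   # stops with: x must be numeric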
This package provides functions for displaying multiple images or scatterplots with a color scale, i.e., heat maps, possibly with projected coordinates. The package relies on the base graphics system, so graphics are rendered rapidly.
Convenience functions for aggregating a data frame or data table. Currently mean, sum and variance are supported. For Date variables, the recency and duration are supported. There is also support for dummy variables in predictive contexts. Code has been completely re-written in data.table for computational speed.
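A generic data.table aggregation of the kind described (toy data and column names, not this package's own interface):

library(data.table)

# Toy per-customer aggregation: mean and sum of spend, plus recency of a
# Date column relative to a reference date.
dt <- data.table(id    = c(1, 1, 2, 2, 2),
                 spend = c(10, 20, 5, 15, 25),
                 date  = as.Date(c("2024-01-05", "2024-02-01",
                                   "2024-01-10", "2024-03-01", "2024-03-20")))

dt[, .(mean_spend   = mean(spend),
       total_spend  = sum(spend),
       recency_days = as.numeric(as.Date("2024-04-01") - max(date))),
   by = id]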
Collect your data on digital marketing campaigns from Amazon Sp using the Windsor.ai API <https://windsor.ai/api-fields/>.