Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
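For illustration, here is one way to call this endpoint from Python with the requests library (a minimal sketch; the host below is a placeholder, not the service's actual address):

```python
# Hedged example of querying the search API; base URL is a placeholder.
import requests

resp = requests.get(
    "https://example.org/api/packages",            # placeholder host
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.headers)  # pagination info (e.g. total number of pages) lives here
print(resp.json())   # the matching packages themselves
```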
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Bayesian inference using the no-U-turn sampler (NUTS) of Hoffman and Gelman (2014) <https://www.jmlr.org/papers/v15/hoffman14a.html>. Designed for AD Model Builder ('ADMB') models, or when R functions for the log-density and log-density gradient are available, such as Template Model Builder models and other special cases. Functionality is similar to Stan, and the rstan and shinystan packages are used for diagnostics and inference.
The ACEP library contains functions specifically for computational analysis of protest events. It also contains databases with collections of notes on protests, and dictionaries of conflict-related words: a collection of dictionaries that brings together dictionaries from different sources.
This package provides a customisable set of tools for assessing and grading R or R-markdown scripts from students. It allows for checking the correctness of code output, collecting runtime statistics, and performing static code analysis; the latter is made possible by representing R expressions as a tree structure.
Statistical procedures to perform stability analysis in plant breeding and to identify stable genotypes under diverse environments. It is possible to calculate the coefficient of homeostaticity by Khangildin et al. (1979), variance of specific adaptive ability by Kilchevsky & Khotyleva (1989), weighted homeostaticity index by Martynov (1990), steadiness of stability index by Udachin (1990), superiority measure by Lin & Binns (1988) <doi:10.4141/cjps88-018>, regression on environmental index by Eberhart & Russell (1966) <doi:10.2135/cropsci1966.0011183X000600010011x>, Tai's (1971) stability parameters <doi:10.2135/cropsci1971.0011183X001100020006x>, stability variance by Shukla (1972) <doi:10.1038/hdy.1972.87>, ecovalence by Wricke (1962), and nonparametric stability parameters by Nassar & Huehn (1987) <doi:10.2307/2531947>, as well as Francis & Kannenberg's parameters of stability (1978) <doi:10.4141/cjps78-157>.
You can use this package to create custom pipeline badges in the standard SVG format. This is useful for a company to use internally, where it may not be possible to create badges through external providers. This project was inspired by the anybadge library in Python.
Estimate the diagnostic performance (sensitivity, specificity, positive predictive value, negative predictive value) of a diagnostic test when the gold standard cannot be measured directly but can be estimated using the attributable fraction.
Experience studies are used by actuaries to explore historical experience across blocks of business and to inform assumption setting activities. This package provides functions for preparing data, creating studies, visualizing results, and beginning assumption development. Experience study methods, including exposure calculations, are described in: Atkinson & McGarry (2016) "Experience Study Calculations" <https://www.soa.org/49378a/globalassets/assets/files/research/experience-study-calculations.pdf>. The limited fluctuation credibility method used by the exp_stats() function is described in: Herzog (1999, ISBN:1-56698-374-6) "Introduction to Credibility Theory".
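As a generic illustration of the limited fluctuation idea mentioned above (a sketch under standard actuarial assumptions, not the exp_stats() implementation itself), the credibility factor is the square-root rule Z = min(1, sqrt(n / n_full)), where the full-credibility standard n_full is chosen by the actuary:

```python
# Sketch of limited fluctuation credibility; n_full = 1082 corresponds to
# the common 90%-confidence / 5%-margin standard for claim counts
# ((1.645 / 0.05)^2), an assumption for illustration only.
import math

def credibility_z(n_claims, n_full=1082):
    """Z = min(1, sqrt(n / n_full))."""
    return min(1.0, math.sqrt(n_claims / n_full))

observed_rate, prior_rate, n_claims = 0.012, 0.010, 400
z = credibility_z(n_claims)
blended = z * observed_rate + (1 - z) * prior_rate
print(f"Z = {z:.3f}, credibility-weighted rate = {blended:.4f}")
```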
Build and control interactive 2D and 3D maps with R/Shiny. A lean set of powerful commands wraps native calls to AMap <https://lbs.amap.com/api/jsapi-v2/summary/>, delivering rich mapping functionality with minimal overhead.
In mathematics, rejection sampling is a basic technique for generating observations from a distribution. Also commonly called the acceptance-rejection method or accept-reject algorithm, it is a type of Monte Carlo method. It is based on the observation that to sample a random variable, one can sample uniformly from the 2D Cartesian graph and keep the samples in the region under the graph of its density function. Package AR can generate/simulate random data from a probability density function by the acceptance-rejection method, and it is also a useful teaching resource for the graphical presentation of the method. In practice, the user needs to calculate a constant for the acceptance-rejection method, which package AR computes using optimization tools. Several numerical examples are provided to illustrate the graphical presentation of the method.
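A minimal sketch of the acceptance-rejection idea described above (a generic illustration in Python, not the AR package's API): sample points uniformly in the box [lo, hi] x [0, fmax] and keep those falling under the density curve.

```python
import numpy as np

def rejection_sample(f, n, lo, hi, fmax, seed=0):
    """fmax must bound f on [lo, hi]; this is the constant the text says
    must be calculated (here assumed given rather than optimized)."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        u = rng.uniform(0, fmax)
        if u <= f(x):          # point lies under the graph of f: accept
            out.append(x)
    return np.array(out)

# Example: triangular density f(x) = 2x on [0, 1], bounded by fmax = 2.
samples = rejection_sample(lambda x: 2 * x, 1000, 0.0, 1.0, 2.0)
print(samples.mean())  # should be near E[X] = 2/3
```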
Considering an (n x m) data matrix X, this package is based on the method proposed by Gower, Groenen, and Van de Velden (2010) <doi:10.1198/jcgs.2010.07134>, and utilizes the resulting matrices from the extended version of the NIPALS decomposition to determine n triangles whose areas are used to visually estimate the elements of a specific column of X. After a 90-degree rotation of the sample points, the triangles are drawn using the following points: 1. the origin of the axes; 2. the sample points; 3. the endpoint of the vector representing a given variable.
This package provides a framework for automated machine learning. Concretely, the focus is on the optimisation of bagging workflows. A bagging workflow is composed of three phases: (i) generation: deciding which and how many predictive models to learn; (ii) pruning: after learning a set of models, cutting the worst ones from the ensemble; and (iii) integration: combining the models to predict a new observation. autoBagging optimises these processes by combining metalearning and a learning-to-rank approach to learn from metadata. It automatically ranks 63 bagging workflows by exploiting past performance and dataset characterization. A complete description of the method can be found in: Pinto, F., Cerqueira, V., Soares, C., Mendes-Moreira, J. (2017): "autoBagging: Learning to Rank Bagging Workflows with Metalearning", arXiv preprint arXiv:1706.09367.
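The three phases can be sketched in a few lines of Python with scikit-learn (an assumption for illustration only; autoBagging itself is an R package and additionally learns to rank whole workflows):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# (i) Generation: learn many models on bootstrap resamples.
rng = np.random.default_rng(0)
models = []
for _ in range(50):
    idx = rng.integers(0, len(X_train), len(X_train))
    models.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))

# (ii) Pruning: keep only the best-scoring half on a validation split.
models.sort(key=lambda m: m.score(X_val, y_val), reverse=True)
models = models[:25]

# (iii) Integration: combine the survivors by majority vote.
votes = np.stack([m.predict(X_val) for m in models])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_val).mean())
```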
Fit various smoothing spline models. Includes an ssr() function for smoothing spline regression, an nnr() function for nonparametric nonlinear regression, an snr() function for semiparametric nonlinear regression, an slm() function for semiparametric linear mixed-effects models, and an snm() function for semiparametric nonlinear mixed-effects models. See Wang (2011) <doi:10.1201/b10954> for an overview.
This package provides a new robust optimization toolkit with two major contributions. The first is the assessment of the adequacy of probabilistic models through a combination of several statistics that measure the relative quality of statistical models for a given data set. The second is a general-purpose optimization method based on meta-heuristic functions for maximizing or minimizing an arbitrary objective function.
This package provides ANOCVA (ANalysis Of Cluster VAriability), a non-parametric statistical test to compare clustering structures, with applications in functional magnetic resonance imaging (fMRI) data. ANOCVA allows us to compare the clustering structure of multiple groups simultaneously and also to identify features that contribute to the differential clustering.
Implementation of a hybrid MCDM method built from the AHP (Analytic Hierarchy Process) and TOPSIS-2N (Technique for Order of Preference by Similarity to Ideal Solution, with two normalizations). The method is described in Souza et al. (2018) <doi:10.1142/S0219622018500207>.
Airport problems, introduced by Littlechild and Owen (1973) <https://www.jstor.org/stable/2629727>, are cost allocation problems where agents share the cost of a facility (or service) based on their ordered needs. Valid allocations must satisfy no-subsidy constraints, meaning that no group of agents contributes more than the highest cost of its members (i.e., no agent is allowed to subsidize another). A rule is a mechanism that selects an allocation vector for a given problem. This package computes several rules proposed in the literature, including both standard rules and their variants, such as weighted versions, rules for clones, and rules based on the agents' hierarchy order. These rules can be applied to various problems of interest, including the allocation of liabilities and the maintenance of irrigation systems, among others. Moreover, the package provides functions for graphical representation, enabling users to visually compare the outcomes produced by each rule, or to display the no-subsidy set. In addition, it includes four datasets illustrating different applications and examples of airport problems. For a more detailed explanation of all concepts, see Thomson (2024) <doi:10.1016/j.mathsocsci.2024.03.007>.
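As a hedged illustration of one standard rule from this literature (a Python sketch, not the package's own API): under the sequential equal contributions rule, each cost increment is split equally among the agents who need it, which coincides with the Shapley value of the airport game.

```python
# Sketch of the sequential equal contributions rule for airport problems.
def sequential_contributions(costs):
    c = sorted(costs)                     # order agents by their cost needs
    n = len(c)
    alloc, prev, share = [], 0.0, 0.0
    for j, cj in enumerate(c):
        share += (cj - prev) / (n - j)    # split increment among remaining agents
        alloc.append(share)
        prev = cj
    return alloc

# Example: three agents with needs 10, 30, 60 pay about
# [3.33, 13.33, 43.33], which sums to the largest cost, 60,
# and never charges an agent more than their own need (no subsidy).
print(sequential_contributions([10, 30, 60]))
```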
For emulating multifidelity computer models. The major methods include univariate autoregressive cokriging and multivariate autoregressive cokriging. The autoregressive cokriging methods are implemented for both hierarchically nested and non-nested designs. For hierarchically nested designs, the model parameters are estimated via standard optimization algorithms; for non-nested designs, the model parameters are estimated via Monte Carlo expectation-maximization (MCEM) algorithms. In both cases, the priors are chosen such that the posterior distributions are proper. Note that uniform priors on range parameters in the correlation function lead to improper posteriors, which should be avoided when Bayesian analysis is adopted. The development of objective priors for autoregressive cokriging models can be found in Pulong Ma (2020) <doi:10.1137/19M1289893>. The development of the multivariate autoregressive cokriging models with possibly non-nested designs can be found in Pulong Ma, Georgios Karagiannis, Bledar A. Konomi, Taylor G. Asher, Gabriel R. Toro, and Andrew T. Cox (2019) <arXiv:1909.01836>.
Programming vaccine-specific Clinical Data Interchange Standards Consortium (CDISC) compliant Analysis Data Model (ADaM) datasets in R. A flat model is followed, as per Center for Biologics Evaluation and Research (CBER) guidelines, for creating vaccine-specific domains. ADaM datasets are a mandatory part of any New Drug or Biologics License Application submitted to the United States Food and Drug Administration (FDA). Analysis derivations are implemented in accordance with the "Analysis Data Model Implementation Guide" (CDISC Analysis Data Model Team (2021), <https://www.cdisc.org/standards/foundational/adam/adamig-v1-3-release-package>). The package is an extension of the admiral package.
Parse Autonomous Recording Unit (ARU) data and sub-sample recordings. Extract metadata from your recordings, select a subset of recordings for interpretation, and prepare files for processing on the WildTrax <https://wildtrax.ca/> platform. Read and process metadata from recordings collected using the SongMeter and BAR-LT types of ARUs.
Use Monte Carlo and K-fold cross-validation coupled with machine-learning classification algorithms to perform population assignment, with functionality for evaluating the discriminatory power of independent training samples, identifying informative loci, reducing data dimensionality for genomic data, integrating genetic and non-genetic data, and visualizing results.
Assess whether and how a specific continuous or categorical exposure affects the outcome of interest through one- or multi-dimensional mediators, using an adaptive bootstrap (AB) approach. The AB method allows inference for composite null hypotheses of no mediation effect, providing valid type I error control and thus optimizing statistical power. For more technical details, refer to He, Song and Xu (2024) <doi:10.1093/jrsssb/qkad129>.
This package implements wavelet-based approaches for describing population admixture. Principal Components Analysis (PCA) is used to define the population structure and produce a localized admixture signal for each individual. Wavelet summaries of the PCA output describe variation present in the data and can be related to population-level demographic processes. For more details, see J Sanderson, H Sudoyo, TM Karafet, MF Hammer and MP Cox. 2015. Reconstructing past admixture processes from local genomic ancestry using wavelet transformation. Genetics 200:469-481 <doi:10.1534/genetics.115.176842>.
Trigger animation effects on scroll for any HTML element in shiny and rmarkdown, such as any text or plot, thanks to the AOS (Animate On Scroll) jQuery library.
This package performs approximate unconditional and permutation testing for 2x2 contingency tables. Motivated by testing for disease association with rare genetic variants in case-control studies. When variants are extremely rare, these tests give better control of Type I error than standard tests.
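A minimal sketch of a permutation test for a 2x2 case-control table (a generic Python illustration, not the package's own routine): shuffle the case/control labels and compare the observed carrier count among cases to its permutation distribution.

```python
import numpy as np

def perm_test_2x2(a, b, c, d, n_perm=10000, seed=0):
    """a = carriers among cases, b = non-carriers among cases,
    c = carriers among controls, d = non-carriers among controls."""
    rng = np.random.default_rng(seed)
    carriers = np.array([1] * (a + c) + [0] * (b + d))
    n_cases = a + b
    more_extreme = 0
    for _ in range(n_perm):
        rng.shuffle(carriers)
        # count carriers assigned to the relabelled case group
        if carriers[:n_cases].sum() >= a:
            more_extreme += 1
    return more_extreme / n_perm  # one-sided permutation p-value

# Example: 5 of 100 cases carry a rare variant vs. 1 of 100 controls.
print(perm_test_2x2(5, 95, 1, 99))
```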