Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items per page. Pagination information (such as the number of pages) is returned in response headers.
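For example, the endpoint can be queried from R with the httr and jsonlite packages. The base URL below is a placeholder for this site's address, and the exact pagination header names depend on the server, so treat this as a sketch.

library(httr)
library(jsonlite)

base_url <- "https://example.org"   # placeholder: use this site's address
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)   # pagination details (e.g. the number of pages) are exposed here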
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The discrete Laplace exponential family for use in fitting generalized linear models.
This package contains functions to perform copula estimation by the non-parametric Bayesian method, Dirichlet-based Polya Tree. See Ning (2018) <doi:10.1080/00949655.2017.1421194>.
This package contains the function used to create the Dandelion Plot, a visualization method for R-mode Exploratory Factor Analysis.
This package provides software for using DEXi models. DEXi models are hierarchical qualitative multi-criteria decision models developed according to the method DEX (Decision EXpert, <https://dex.ijs.si/documentation/DEX_Method/DEX_Method.html>), using the program DEXi (<https://kt.ijs.si/MarkoBohanec/dexi.html>) or DEXiWin (<https://dex.ijs.si/dexisuite/dexiwin.html>). A typical workflow with DEXiR consists of: (1) reading a .dxi file, previously made using the DEXi software (function read_dexi()), (2) making a data frame containing input values of one or more decision alternatives, (3) evaluating those alternatives (function evaluate()), (4) analyzing alternatives (selective_explanation(), plus_minus(), compare_alternatives()), (5) drawing charts. DEXiR is restricted to using models produced externally by the DEXi software and does not provide functionality for creating and/or editing DEXi models directly in R.
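Steps (1)-(3) of that workflow map onto a few calls. A minimal sketch using the function names given above (read_dexi(), evaluate()); the file name, attribute names and data-frame layout are hypothetical placeholders, and the exact argument conventions should be checked against the DEXiR documentation.

library(DEXiR)

model <- read_dexi("Car.dxi")              # (1) read a model previously built in DEXi

alternatives <- data.frame(                # (2) input values for two alternatives;
  name  = c("Car1", "Car2"),               #     attribute names depend on the model
  PRICE = c("low", "medium"),
  TECH  = c("good", "exc")
)

results <- evaluate(model, alternatives)   # (3) evaluate the alternatives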
Allows for export of DiagrammeR Graphviz objects to SVG.
Time-varying coefficient models for interval censored and right censored survival data including 1) Bayesian Cox model with time-independent, time-varying or dynamic coefficients for right censored and interval censored data studied by Sinha et al. (1999) <doi:10.1111/j.0006-341X.1999.00585.x> and Wang et al. (2013) <doi:10.1007/s10985-013-9246-8>, 2) Spline based time-varying coefficient Cox model for right censored data proposed by Perperoglou et al. (2006) <doi:10.1016/j.cmpb.2005.11.006>, and 3) Transformation model with time-varying coefficients for right censored data using estimating equations proposed by Peng and Huang (2007) <doi:10.1093/biomet/asm058>.
This package provides constrained triangulation of polygons. Ear cutting (or ear clipping) applies constrained triangulation by successively cutting triangles from a polygon defined by one or more paths. Holes are supported by introducing a bridge segment between polygon paths. This package wraps the header-only library earcut.hpp <https://github.com/mapbox/earcut.hpp> which includes a reference to the method used by Held, M. (2001) <doi:10.1007/s00453-001-0028-4>.
This package provides methods to apply decomposition-based relative importance analysis to R functions. It supports the application of decomposition methods by providing lapply()- or Map()-like meta-functions that compute dominance analysis (Azen, R., & Budescu, D. V. (2003) <doi:10.1037/1082-989X.8.2.129>; Grömping, U. (2007) <doi:10.1198/000313007X188252>), an extension of Shapley value regression (Lipovetsky, S., & Conklin, M. (2001) <doi:10.1002/asmb.446>), based on the values returned from other functions.
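For intuition: the general dominance statistic for a predictor is its average incremental R-squared across subsets of the other predictors, i.e. a Shapley-value decomposition of the model R-squared. A self-contained sketch of that computation with plain lm(), independent of this package's own interface:

# General dominance: average incremental R^2 of each predictor over all subsets.
general_dominance <- function(data, outcome, predictors) {
  r2 <- function(vars) {
    rhs <- if (length(vars) == 0) "1" else paste(vars, collapse = " + ")
    summary(lm(as.formula(paste(outcome, "~", rhs)), data = data))$r.squared
  }
  sapply(predictors, function(p) {
    others <- setdiff(predictors, p)
    # average marginal contribution of p at each subset size, then across sizes
    by_size <- sapply(0:length(others), function(k) {
      subs <- if (k == 0) list(character(0)) else combn(others, k, simplify = FALSE)
      mean(sapply(subs, function(s) r2(c(s, p)) - r2(s)))
    })
    mean(by_size)
  })
}

general_dominance(mtcars, "mpg", c("wt", "hp", "qsec"))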
Robust distance-based methods applied to matrices and data frames, producing distance matrices that can be used as input for various visualization techniques such as graphs, heatmaps, or multidimensional scaling configurations. See Boj and Grané (2024) <doi:10.1016/j.seps.2024.101992>.
This package provides a collection of widely used univariate data sets from various applied domains for applications of distribution theory. The functions allow researchers and practitioners to quickly, easily, and efficiently access and use these data sets. The data come from applied domains such as biomedicine, survival analysis, medicine, reliability analysis, hydrology, actuarial science, operational research, meteorology, extreme values, quality control, engineering, finance, sports and economics. In total, 100 data sets are documented along with associated references for further details and uses.
All datasets and functions required for the examples and exercises of the book "Data Science for Psychologists" (by Hansjoerg Neth, Konstanz University, 2025, <doi:10.5281/zenodo.7229812>), freely available at <https://bookdown.org/hneth/ds4psy/>. The book and corresponding courses introduce principles and methods of data science to students of psychology and other biological or social sciences. The ds4psy package primarily provides datasets, but also functions for data generation and manipulation (e.g., of text and time data) and graphics that are used in the book and its exercises. All functions included in ds4psy are designed to be explicit and instructive, rather than efficient or elegant.
The DoseFinding package provides functions for the design and analysis of dose-finding experiments (with focus on pharmaceutical Phase II clinical trials). It provides functions for: multiple contrast tests, fitting non-linear dose-response models (using Bayesian and non-Bayesian estimation), calculating optimal designs and an implementation of the MCPMod methodology (Pinheiro et al. (2014) <doi:10.1002/sim.6052>).
This package provides methods for evaluating the probability mass function, cumulative distribution function, and generating random samples from discrete tempered stable distributions. For more details see Grabchak (2021) <doi:10.1007/s11009-021-09904-3>.
A set of tools for processing meteorological data, converting hourly recorded data to daily, monthly and annual data.
This package provides a function to generate the parameters of input block designs as well as their incidence matrices. It is a general function that can be used to investigate the characterization properties of any block design.
Using the Theory of Belief Functions for evidence calculus. Basic probability assignments, or mass functions, can be defined on the subsets of a set of possible values and combined. A mass function can be extended to a larger frame. Marginalization, i.e. reduction to a smaller frame, can also be done. These features can be combined to analyze small belief networks and take into account situations where information cannot be satisfactorily described by probability distributions.
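As a concrete illustration of combining two mass functions, Dempster's rule multiplies the masses of intersecting focal sets and renormalizes by the conflict (the mass that falls on the empty set). A generic sketch over the frame {a, b, c}, independent of this package's own interface:

`%||%` <- function(x, y) if (is.null(x)) y else x   # default-if-NULL helper

# Focal sets are character vectors; a mass function is a list(sets = ..., mass = ...).
dempster_combine <- function(m1, m2) {
  out <- list()
  conflict <- 0
  for (i in seq_along(m1$sets)) {
    for (j in seq_along(m2$sets)) {
      inter <- intersect(m1$sets[[i]], m2$sets[[j]])
      w <- m1$mass[i] * m2$mass[j]
      if (length(inter) == 0) {
        conflict <- conflict + w                 # product lands on the empty set
      } else {
        key <- paste(sort(inter), collapse = ",")
        out[[key]] <- (out[[key]] %||% 0) + w
      }
    }
  }
  lapply(out, function(w) w / (1 - conflict))    # renormalize by 1 - conflict
}

m1 <- list(sets = list("a", c("a", "b", "c")), mass = c(0.6, 0.4))
m2 <- list(sets = list("b", c("a", "b", "c")), mass = c(0.3, 0.7))
dempster_combine(m1, m2)   # combined mass on {a}, {b} and {a,b,c}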
This package implements survival proximity score matching in multi-state survival models. It includes tools for simulating survival data and estimating transition-specific coxph models with frailty terms. The primary methodological work on multistate censored data modeling using propensity score matching has been published by Bhattacharjee et al. (2024) <doi:10.1038/s41598-024-54149-y>.
Computes a new measure, DNSL betweenness, via the creation of a new graph from an existing one in which nodes with self-loops are duplicated, so that this betweenness centrality does not drop that essential information. Implements Merelo & Molinari (2024) <doi:10.1007/s42001-023-00245-4>.
Efficient covariate-adjusted estimators of quantities that are useful for establishing the effects of treatments on ordinal outcomes.
Item focussed recursive partitioning for simultaneous selection of items and variables that induce Differential Item Functioning (DIF) in dichotomous or polytomous items.
What is funnier than a dad joke? A dad joke in R! This package utilizes the API for <https://icanhazdadjoke.com> and returns dad jokes from several API endpoints.
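For the curious, the public API behind this can also be queried directly; a minimal sketch with httr (the package's own wrapper functions differ and cover several endpoints):

library(httr)
resp <- GET("https://icanhazdadjoke.com/", add_headers(Accept = "application/json"))
content(resp)$joke   # the joke text; the JSON response also carries an id and status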
The data consist of a set of variables measured on several groups of individuals. An estimated probability density function is associated with each group. The package provides tools to create or manage such data, and functional methods (principal component analysis, multidimensional scaling, cluster analysis, discriminant analysis...) for such probability densities.
Perform nonparametric Bayesian analysis using Dirichlet processes without the need to program the inference algorithms. Utilise included pre-built models or specify custom models and allow the dirichletprocess package to handle the Markov chain Monte Carlo sampling. Our Dirichlet process objects can act as building blocks for a variety of statistical models including, but not limited to, density estimation, clustering and prior distributions in hierarchical models. See Teh, Y. W. (2011) <https://www.stats.ox.ac.uk/~teh/research/npbayes/Teh2010a.pdf>, among many other sources.
This package provides a simple syntax to change the default values for function arguments, whether they are in packages or defined locally.
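The base-R mechanism underneath this idea looks roughly as follows; the package's own syntax is more convenient and may differ, so this is only an illustration:

f <- function(x, digits = 2) round(x, digits)
formals(f)$digits <- 4   # change the default value of `digits`
f(pi)                    # now returns 3.1416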