Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in response headers.
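For example, a minimal sketch in R using the httr and jsonlite packages (the host name below is a placeholder; substitute this site's actual URL):

    library(httr)     # HTTP client
    library(jsonlite) # JSON parsing

    # Placeholder host; replace with the actual address of this site.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    pkgs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    headers(resp)     # pagination information is in the response headers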
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Model adsorption behavior using classical isotherms, including Langmuir, Freundlich, Brunauer-Emmett-Teller (BET), and Temkin models. The package supports parameter estimation through both linearized and non-linear fitting techniques and generates high-quality plots for model diagnostics. It is intended for environmental scientists, chemists, and researchers working on adsorption phenomena in soils, water treatment, and material sciences. Functions are compatible with base R and ggplot2 for visualization.
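As an illustration of the non-linear route, here is a generic sketch of fitting the standard Langmuir isotherm q = qmax*KL*C/(1 + KL*C) with base R's nls(); the data and starting values are made up for the example, and nothing here is the package's own interface:

    Ce <- c(0.5, 1, 2, 4, 8, 16)            # equilibrium concentrations (illustrative)
    qe <- c(0.8, 1.4, 2.2, 3.0, 3.6, 3.9)   # amounts adsorbed (illustrative)
    fit <- nls(qe ~ qmax * KL * Ce / (1 + KL * Ce),
               start = list(qmax = 4, KL = 0.5))
    coef(fit)   # estimated monolayer capacity qmax and Langmuir constant KL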
Estimate the linear and nonlinear autoregressive distributed lag (ARDL & NARDL) models and the corresponding error correction models, and test for long-run and short-run asymmetry. The general-to-specific approach is also available for estimating the ARDL and NARDL models. The Pesaran, Shin & Smith (2001) (<doi:10.1002/jae.616>) bounds test for level relationships is also provided. The ardl.nardl package also imposes the short-run and long-run symmetry restrictions described in Shin et al. (2014) <doi:10.1007/978-1-4899-8008-3_9> and performs the corresponding tests.
Fits Modern Analogue Technique and Weighted Averaging transfer function models for prediction of environmental data from species data, and related methods used in palaeoecology.
This package provides functions to perform the fitting of an adaptive mixture of Student-t distributions to a target density through its kernel function as described in Ardia et al. (2009) <doi:10.18637/jss.v029.i03>. The mixture approximation can then be used as the importance density in importance sampling or as the candidate density in the Metropolis-Hastings algorithm to obtain quantities of interest for the target density itself.
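As a reminder of how the fitted mixture is used, here is a generic importance sampling sketch in base R, with a toy target and a plain Student-t candidate standing in for the adaptive mixture:

    log_p <- function(x) dnorm(x, mean = 2, sd = 1, log = TRUE)   # toy target kernel
    theta <- rt(1e5, df = 3)                                      # draws from candidate q
    w <- exp(log_p(theta) - dt(theta, df = 3, log = TRUE))        # importance weights p/q
    sum(w * theta) / sum(w)   # self-normalized estimate of E[theta], close to 2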
Empirical likelihood-based approximate Bayesian computation. Approximates the required posterior using empirical likelihood and estimated differential entropy. This is achieved without requiring any specification of the likelihood or estimating equations that connect the observations with the underlying parameters. The procedure is known to be posterior consistent. More details can be found in Chaudhuri, Ghosh, and Kim (2024) <doi:10.1002/SAM.11711>.
Power and associated functions useful in prospective planning and monitoring of a clinical trial when a recurrent event endpoint is to be assessed by the robust Andersen-Gill model; see Lin, Wei, Yang, and Ying (2000) <doi:10.1111/1467-9868.00259>. The equations developed in Ingel and Jahn-Eimermacher (2014) <doi:10.1002/bimj.201300090> and their consequences are employed.
Generates data for challenging machine learning models in Arena <https://arena.drwhy.ai>, an interactive web application. You can start a live server that generates XAI (Explainable Artificial Intelligence) plots on demand, or precalculate the data and auto-upload it alongside a shareable Arena URL.
Estimate and plot confounder-adjusted survival curves using Direct Adjustment, Direct Adjustment with Pseudo-Values, various forms of Inverse Probability of Treatment Weighting, two forms of Augmented Inverse Probability of Treatment Weighting, Empirical Likelihood Estimation, or Targeted Maximum Likelihood Estimation. Also includes a significance test for the difference between two adjusted survival curves and the calculation of adjusted restricted mean survival times. Additionally enables the user to estimate and plot cause-specific confounder-adjusted cumulative incidence functions in the competing risks setting using the same methods (with some exceptions). For details, see Denz et al. (2023) <doi:10.1002/sim.9681>.
An iterative process that optimizes a function by alternately performing restricted optimization over parameter subsets. Instead of joint optimization, it breaks the optimization problem down into simpler sub-problems. This approach can make optimization feasible when joint optimization is too difficult.
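A minimal base R sketch of the idea on a toy two-parameter objective (the split into coordinates and the use of optimize() are purely illustrative):

    # Minimize f(x, y) by alternating one-dimensional optimizations.
    f <- function(x, y) (x - 1)^2 + (y + 2)^2 + 0.5 * x * y
    x <- 0; y <- 0
    for (i in 1:20) {
      x <- optimize(function(x) f(x, y), interval = c(-10, 10))$minimum
      y <- optimize(function(y) f(x, y), interval = c(-10, 10))$minimum
    }
    c(x, y)   # approaches the joint minimizer (1.6, -2.4)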
This package provides adaptive direct sparse regression for high-dimensional multimodal data with heterogeneous missing patterns and measurement errors. AdapDISCOM extends the DISCOM framework with modality-specific adaptive weighting to handle varying data structures and error magnitudes across blocks. The method supports flexible block configurations (any K blocks) and includes robust variants for heavy-tailed distributions ('AdapDISCOM'-Huber) and fast implementations for large-scale applications (Fast-'AdapDISCOM'). Designed for realistic multimodal scenarios where different data sources exhibit distinct missing data patterns and contamination levels. Diakité et al. (2025) <doi:10.48550/arXiv.2508.00120>.
Scraping content from archived web pages stored in the Internet Archive (<https://archive.org>) using a systematic workflow. Get an overview of the mementos available for the respective homepage, retrieve the URLs and links of the page, and finally scrape the content. The final output is stored in tibbles, which can then easily be used for further analysis.
Efficient Markov chain Monte Carlo estimation of stochastic volatility models with and without leverage (asymmetric and symmetric stochastic volatility models). Further, it computes the logarithm of the likelihood for given parameters using particle filters.
The empirical cumulative average deviation function introduced by the author is used to develop both the Ad- and Ud-plots. The Ad-plot can identify symmetry, skewness, and outliers of the data distribution, including anomalies. The Ud-plot, created by slightly modifying the Ad-plot, is exceptional at assessing normality, outperforming the normal QQ-plot, the normal PP-plot, and their derivations. The d-value, which quantifies the degree of proximity between the Ud-plot and the graph of the estimated normal density function, helps guide decisions on confirming normality. A full description of this methodology can be found in Wijesuriya (2025) <doi:10.1080/03610926.2024.2440583>.
This package provides methods to construct frequentist confidence sets with valid marginal coverage for identifying the population-level argmin or argmax based on IID data. For instance, given an n by p loss matrix (where n is the sample size and p is the number of models), the CS.argmin() method produces a discrete confidence set that contains the model with the minimal (best) expected risk with the desired probability. The argmin.HT() method helps check whether a specific model should be included in such a confidence set. The main implemented method is proposed in Tianyu Zhang, Hao Lee and Jing Lei (2024), "Winners with confidence: Discrete argmin inference with an application to model selection".
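A hypothetical usage sketch: the function names CS.argmin() and argmin.HT() come from the description above, but their signatures and return values are assumptions, so the calls are left commented out; consult the package documentation:

    set.seed(1)
    # n = 100 observations of losses for p = 5 models; column 1 has the lowest mean.
    loss <- matrix(rnorm(100 * 5, mean = rep(1:5, each = 100)), ncol = 5)
    # cs <- CS.argmin(loss)   # discrete confidence set for the argmin (assumed signature)
    # argmin.HT(loss, 3)      # test whether model 3 belongs to the set (assumed signature)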
Build and train a variational autoencoder (VAE) for mixed-type tabular data (continuous, binary, categorical). Models are implemented using TensorFlow and Keras via the reticulate interface, enabling reproducible VAE training for heterogeneous tabular datasets.
Uses the accelerated line search algorithm to simultaneously diagonalize a set of symmetric positive definite matrices.
Understanding morphological variation is an important task in many applications. Recent studies in computational biology have focused on developing computational tools for the task of sub-image selection, which aims at identifying structural features that best describe the variation between classes of shapes. A major part of assessing the utility of these approaches is demonstrating their performance on both simulated and real datasets. However, when creating a model for shape statistics, real data can be difficult to access, and sample sizes are often small because the data are expensive to collect. Meanwhile, the landscape of current shape simulation methods has been mostly limited to approaches that use black-box inference, making it difficult to systematically assess the power and calibration of sub-image models. This R package introduces the alpha-shape sampler: a probabilistic framework for simulating realistic 2D and 3D shapes based on probability distributions which can be learned from real data or explicitly stated by the user. The ashapesampler package supports two mechanisms for sampling shapes in two and three dimensions. The first, empirical sampling based on an existing data set, was highlighted in the original main text of the paper. The second, probabilistic sampling from a known distribution, is the computational implementation of the theory derived in that paper. Work based on Winn-Nunez et al. (2024) <doi:10.1101/2024.01.09.574919>.
This package provides a function to calculate the concentration of un-ionized ammonia in the total ammonia in aqueous solution using the pH and temperature values.
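A generic sketch of that calculation (not the package's own function), using the widely used Emerson et al. (1975) approximation for the temperature-dependent pKa of ammonium in freshwater:

    # Fraction of total ammonia present as un-ionized NH3.
    nh3_fraction <- function(pH, temp_C) {
      pKa <- 0.09018 + 2729.92 / (273.15 + temp_C)   # Emerson et al. (1975)
      1 / (1 + 10^(pKa - pH))
    }
    nh3_fraction(pH = 8.0, temp_C = 25)   # roughly 0.05, i.e. about 5% un-ionized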
Gives some hypothesis test functions (sign test, median and other quantile tests, Wilcoxon signed rank test, coefficient of variation test, test of normal variance, test on weighted sums of Poisson [see Fay and Kim <doi:10.1002/bimj.201600111>], sample size for t-tests with different variances and non-equal n per arm, Behrens-Fisher test, nonparametric ABC intervals, Wilcoxon-Mann-Whitney test [with effect estimates and confidence intervals, see Fay and Malinovsky <doi:10.1002/sim.7890>], two-sample melding tests [see Fay, Proschan, and Brittain <doi:10.1111/biom.12231>], one-way ANOVA allowing var.equal=FALSE [see Brown and Forsythe, 1974, Biometrics], prevalence confidence intervals that adjust for sensitivity and specificity [see Lang and Reiczigel, 2014 <doi:10.1016/j.prevetmed.2013.09.015> or Bayer, Fay, and Graubard, 2023 <doi:10.48550/arXiv.2205.13494>]). The focus is on hypothesis tests that have compatible confidence intervals, but some functions only have confidence intervals (e.g., prevSeSp).
This package provides tools to perform model selection alongside estimation under linear, logistic, negative binomial, quantile, and skew-normal regression. Under the spike-and-slab method, a probability for each possible model is estimated, along with the posterior mean, credibility interval, and standard deviation of the coefficients and parameters under the most probable model.
Some functions for drawing special plots: bagplot plots a bagplot, faces plots Chernoff faces, iconplot plots a representation of a frequency table or a data matrix, plothulls plots hulls of a bivariate data set, plotsummary plots a graphical summary of a data set, puticon adds icons to a plot, skyline.hist combines several histograms of a one-dimensional data set in one plot, the slider functions support some interactive graphics, spin3R helps with the inspection of a 3-dimensional point cloud, stem.leaf plots a stem-and-leaf display, and stem.leaf.backback plots back-to-back stem-and-leaf displays.
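This function list matches the aplpack package; assuming that is the package being described, a minimal usage sketch:

    library(aplpack)          # assumption: the description above is aplpack's
    x <- rnorm(100); y <- rnorm(100)
    bagplot(x, y)             # bivariate bagplot
    stem.leaf(round(x, 1))    # stem-and-leaf display printed to the console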
PCA is done by eigenvalue decomposition of the data correlation matrix. The number of factors is determined automatically as the number of eigenvalues greater than 1, and uncorrelated variables are selected based on the rotated component scores, such that within each principal component the variable with the highest variance is selected. This is useful for non-statisticians when selecting variables. For more information, see <http://www.ijcem.org/papers032013/ijcem_032013_06.pdf>.
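The eigenvalue-greater-than-one rule is easy to reproduce in base R; the sketch below is generic and not this package's interface:

    X <- scale(mtcars)   # built-in example data, standardized
    e <- eigen(cor(X))   # PCA via the correlation matrix
    sum(e$values > 1)    # number of components retained by the rule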
This package provides simulations-first sample size determination, aiming to make sample size formulae obsolete for most easily computable statistical experiments; the main envisioned use case is clinical trials. The user writes the proposed clinical trial as a function that takes a sample size as its argument and returns a boolean (whether or not the trial is a success). The adsasi functions then use it to find the correct sample size empirically. The unavoidable mis-specification is addressed by trying sample size values close to the right one, the latter being understood as the value that attains the probability of success the user wants (usually 80% or 90% in biostatistics, corresponding to 20% or 10% type II error).
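The user-supplied trial function contract can be illustrated generically; the sketch below estimates power by brute-force simulation and scans candidate sample sizes, and is not the adsasi interface itself:

    # One simulated trial: TRUE (success) if a two-sample t-test detects
    # a true difference of 0.5 SD at the 5% level with n subjects per arm.
    trial <- function(n) t.test(rnorm(n, mean = 0.5), rnorm(n))$p.value < 0.05
    power_at <- function(n, reps = 2000) mean(replicate(reps, trial(n)))
    sapply(c(50, 60, 70, 80), power_at)   # ~80% power lands near n = 64 per arm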
Assesses the transmission between two price time series. It contains several functions for linear and nonlinear threshold co-integration, as well as for symmetric and asymmetric error correction models.