Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
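For example, the endpoint can be queried from R (a minimal sketch; the host below is a placeholder for wherever this site is served, and the curl package is assumed to be available):

    # Minimal sketch: query the package search API and inspect the pagination
    # headers.  Replace the placeholder host with this site's actual address.
    library(curl)
    h <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")
    cat(rawToChar(h$content))      # JSON body with the matching packages
    parse_headers(h$headers)       # pagination info (e.g. number of pages)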
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
ACE (Advanced Cohort Engine) is a powerful tool for constructing cohorts of patients quickly and efficiently. This package interfaces directly with an instance of the ACE search engine and facilitates API queries and data dumps. A good knowledge of the temporal language is a prerequisite for constructing queries efficiently. More information is available at <https://shahlab.stanford.edu/start>.
Package to incorporate change point analysis in ARIMA forecasting.
Estimation and inference methods for bounding average treatment effects (on the treated) that are valid under an unconfoundedness assumption. The bounds are designed to be robust in challenging situations, for example, when the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations. For more details, see the paper by Lee and Weidner (2021), "Bounding Treatment Effects by Pooling Limited Information across Observations," <arXiv:2111.05243>.
Computes asymmetric LD measures (ALD) for multi-allelic genetic data. These measures are identical to the correlation measure (r) for bi-allelic data.
This package contains various functions for optimal scaling. One function performs optimal scaling by maximizing an aspect (i.e., a target function such as the sum of eigenvalues, sum of squared correlations, squared multiple correlations, etc.) of the corresponding correlation matrix. Another function implements the LINEALS approach to optimal scaling by minimization of an aspect based on pairwise correlations and correlation ratios. The resulting correlation matrix and category scores can be used for further multivariate methods such as structural equation models.
Visualisation of multidimensional data through different Andrews curves: Andrews, D. F. (1972) Plots of High-Dimensional Data. Biometrics, 28(1), 125-136. <doi:10.2307/2528964>.
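A minimal sketch of how such a plot might be produced (assuming the package's main entry point is an andrews() function whose first argument is a data frame and whose clr argument selects the colouring column; both names are assumptions here):

    # Hedged example: Andrews curves for the iris measurements, coloured by
    # species.  andrews() and its clr argument are assumed, not documented here.
    library(andrews)
    andrews(iris, clr = 5)   # 5th column (Species) used for colouring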
This package provides a collection of tools that support data splitting, predictive modeling, and model evaluation. A typical use is to split a dataset into a training set and a test set and then compare the data distributions of the two. The package also supports the development of predictive models and the comparison of the performance of several predictive models, helping to select the best model.
Gets and parses geocoding, reverse-geocoding, and administrative-region data from the AutoNavi Maps web service API <https://lbs.amap.com/api/webservice/summary>.
An interactive document on the topic of one-way and two-way analysis of variance, using the rmarkdown and shiny packages. Runtime examples are provided in the package function as well as at <https://tinyurl.com/ANOVAStatsTool>.
Formatter functions in the apa package take the return value of a statistical test function, e.g., a call to chisq.test(), and return a string formatted according to the guidelines of the APA (American Psychological Association).
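For instance (a minimal sketch; chisq_apa() is assumed to be the formatter paired with chisq.test(), following the package's naming pattern):

    # Hedged example: report a chi-squared test in APA style.
    # chisq_apa() as the formatter for chisq.test() output is an assumption.
    library(apa)
    tab <- table(mtcars$am, mtcars$vs)
    chisq_apa(chisq.test(tab))   # e.g. "chi^2(1) = ..., p = ..."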
This package provides methods for fitting the additive hazards model. It performs maximum likelihood estimation as well as traditional Aalen's method for estimating the additive hazards model. For details see Chengyuan Lu (2021) <arXiv:2004.06156>.
This package provides functions to accompany the book "Applied Statistical Modeling for Ecologists" by Marc Kéry and Kenneth F. Kellner (2024, ISBN: 9780443137150). Included are functions for simulating and customizing the datasets used for the example models in each chapter, summarizing output from model fitting engines, and running custom Markov Chain Monte Carlo.
Processes noble gas mass spectrometer data to determine the isotopic composition of argon (comprising Ar36, Ar37, Ar38, Ar39 and Ar40) released from neutron-irradiated potassium-bearing minerals. It then uses these compositions to calculate precise and accurate geochronological ages for multiple samples as well as the covariances between them. Error propagation is done in matrix form, which jointly treats all samples and all isotopes simultaneously at every step of the data reduction process. Includes methods for regression of the time-resolved mass spectrometer signals to t=0 ('time zero') for both single- and multi-collector instruments, blank correction, mass fractionation correction, detector intercalibration, decay corrections, interference corrections, interpolation of the irradiation parameter between neutron fluence monitors, and (weighted mean) age calculation. All operations are performed on the logs of the ratios between the different argon isotopes so as to properly treat them as 'compositional data', sensu Aitchison [1986, The Statistics of Compositional Data, Chapman and Hall].
An ASCII ruler is for measuring text and is especially useful for sequence analysis. Included in this package are methods to create ASCII rulers and associated GenBank sequence blocks: multi-column text displays that make it easy for viewers to locate nucleotides by position.
Parse R code in a given directory for R packages and attempt to install them from CRAN or GitHub. Optionally use a dependencies file for tighter control over which package versions to install.
This package performs the analysis of completely randomized experimental designs (CRD), randomized blocks (RBD) and Latin square (LSD), experiments in double and triple factorial scheme (in CRD and RBD), experiments in subdivided plot scheme (in CRD and RBD), subdivided and joint analysis of experiments in CRD and RBD, linear regression analysis, test for two samples. The package performs analysis of variance, ANOVA assumptions and multiple comparison test of means or regression, according to Pimentel-Gomes (2009, ISBN: 978-85-7133-055-9), nonparametric test (Conover, 1999, ISBN: 0471160687), test for two samples, joint analysis of experiments according to Ferreira (2018, ISBN: 978-85-7269-566-4) and generalized linear model (glm) for binomial and Poisson family in CRD and RBD (Carvalho, FJ (2019), <doi:10.14393/ufu.te.2019.1244>). It can also be used to obtain descriptive measures and graphics, in addition to correlations and creative graphics used in agricultural sciences (Agronomy, Zootechnics, Food Science and related areas). Shimizu, G. D., Marubayashi, R. Y. P., Goncalves, L. S. A. (2025) <doi:10.4025/actasciagron.v47i1.73889>.
The centralized empirical cumulative average deviation function is utilized to develop both the Ada-plot and the Uda-plot as alternatives to the Ad-plot and Ud-plot introduced by the author. Analogous to the Ad-plot, the Ada-plot can identify symmetry, skewness, and outliers of the data distribution. The Uda-plot is as exceptional as the Ud-plot in assessing normality. The d-value, which quantifies the degree of proximity between the Uda-plot and the graph of the estimated normal density function, helps guide decisions on confirming normality. Extreme values in the data can be eliminated using the 1.5IQR rule to create a robust version if desired. A full description of the methodology can be found in the article by Wijesuriya (2025a) <doi:10.1080/03610926.2025.2558108>. Further, the development of the Ad-plot and Ud-plot is contained in both the article and the adplots R package by Wijesuriya (2025b & 2025c) <doi:10.1080/03610926.2024.2440583> and <doi:10.32614/CRAN.package.adplots>.
This function takes a vector or matrix of data and smooths the data with an improved Savitzky-Golay transform. The Savitzky-Golay method for data smoothing and differentiation calculates convolution weights using Gram polynomials that exactly reproduce the results of least-squares polynomial regression. Use of the Savitzky-Golay method requires specification of both filter length and polynomial degree to calculate convolution weights. For maximum smoothing of statistical noise in data, polynomials with low degrees are desirable, while a high polynomial degree is necessary for accurate reproduction of peaks in the data. Extension of the least-squares regression formalism with statistical testing of additional terms of polynomial degree to a heuristically chosen minimum for each data window leads to an adaptive-degree polynomial filter (ADPF). Based on noise reduction for data that consist of pure noise and on signal reproduction for data that is purely signal, ADPF performed nearly as well as the optimally chosen fixed-degree Savitzky-Golay filter and outperformed sub-optimally chosen Savitzky-Golay filters. For synthetic data consisting of noise and signal, ADPF outperformed both optimally chosen and sub-optimally chosen fixed-degree Savitzky-Golay filters. See Barak, P. (1995) <doi:10.1021/ac00113a006> for more information.
Data sets used in Cayuela and De la Cruz (2022, ISBN:978-84-8476-833-3).
Estimates and plots effect estimates from models with all possible combinations of a list of variables. It can be used for assessing treatment effects in clinical trials or risk factors in bio-medical and epidemiological research. Like the Stata command confall (Wang Z (2007) <doi:10.1177/1536867X0700700203>), allestimates calculates and stores all effect estimates, and plots them against p values or Akaike information criterion (AIC) values. It currently has functions for linear regression: all_lm(), logistic and Poisson regression: all_glm(), and Cox proportional hazards regression: all_cox().
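A rough sketch of the workflow (all_glm() is named above; the argument names crude, xlist, data, and family, as well as the synthetic example data, are assumptions for illustration only):

    # Hedged sketch: fit logistic models with every combination of the
    # candidate adjustment variables and plot the resulting effect estimates.
    # The argument names below are assumptions about the interface.
    library(allestimates)
    set.seed(1)
    mydata <- data.frame(outcome  = rbinom(200, 1, 0.3),
                         exposure = rbinom(200, 1, 0.5),
                         age      = rnorm(200, 50, 10),
                         sex      = rbinom(200, 1, 0.5),
                         smoking  = rbinom(200, 1, 0.3),
                         bmi      = rnorm(200, 25, 4))
    vars <- c("age", "sex", "smoking", "bmi")
    fit  <- all_glm(crude = "outcome ~ exposure", xlist = vars,
                    data = mydata, family = "binomial")
    plot(fit)   # estimates plotted against p values or AIC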
Adaptive Gauss-Hermite quadrature for Bayesian inference. Implements the AGHQ method for normalizing posterior distributions and making Bayesian inferences based on them. Functions are provided for doing quadrature and marginal Laplace approximations, and summary methods are provided for making inferences based on the results. See Stringer (2021), "Implementing Adaptive Quadrature for Bayesian Inference: the aghq Package" <arXiv:2101.04468>.
This package contains data and functions that can be used to make actuarial life tables. Each function adds a column to the input dataset for each intermediate calculation between mortality rate and life expectancy. Users can run any of the functions to complete the life table up to that step, or run lifetable() to output a full life table that can be customized to remove optional columns. Methods for creating life tables are as described in Zedstatistics (2021) <https://www.youtube.com/watch?v=Dfe59glNXAQ>.
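As an illustrative sketch only (lifetable() is named above; the input column names, x for age and qx for the mortality rate, are assumptions about the expected format):

    # Hedged sketch: build a full life table from age-specific mortality rates.
    # Load the package first; column names x and qx are assumptions.
    dat <- data.frame(x = 0:4, qx = c(0.005, 0.001, 0.001, 0.002, 0.003))
    lt  <- lifetable(dat)   # adds each intermediate column through life expectancy
    lt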
Static code compilation of a shiny app given an R function (into ui.R and server.R files or into a shiny app object). See examples at <https://github.com/alekrutkowski/autoshiny>.
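A minimal sketch of the idea (makeApp() and makeFiles() are assumptions about the exported interface; see the linked GitHub examples for the documented calls):

    # Hedged sketch: turn a plain R function into a Shiny app.
    # makeApp()/makeFiles() are assumed names; consult the package docs.
    library(autoshiny)
    greet <- function(Name = "world", Times = 3)
      paste(rep(paste("Hello,", Name), Times), collapse = " ")
    app <- makeApp(greet)   # in-memory shiny app object
    # makeFiles(greet)      # or write ui.R and server.R to disk
    shiny::runApp(app)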
Multidimensional scaling models and methods for the visualization and analysis of asymmetric proximity data. An asymmetric data matrix has the same number of rows and columns, and these rows and columns refer to the same set of objects. At least some elements in the upper triangle are different from the corresponding elements in the lower triangle. An example of an asymmetric matrix is a student migration table, where the rows correspond to the countries of origin of the students and the columns to the destination countries. This package provides algorithms for three multidimensional scaling models: the slide-vector model, a scaling model with unique dimensions, and the asymscal model. Furthermore, some other procedures, such as a heat map for skew-symmetric data and the decomposition of asymmetry, are also provided for the exploratory analysis of asymmetric tables.