Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
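For example, the endpoint can be queried from Python with the requests library (a minimal sketch; the base URL is a placeholder and a JSON response body is assumed):

    import requests

    BASE_URL = "https://example.org"  # placeholder; substitute the actual host
    resp = requests.get(f"{BASE_URL}/api/packages",
                        params={"search": "hello", "page": 1, "limit": 20})
    resp.raise_for_status()
    print(resp.json())       # matching packages (assuming a JSON body)
    print(resp.headers)      # pagination details are returned in the response headers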
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The detection of worrying approximate collinearity in a multiple linear regression model is a problem addressed by all existing statistical packages. However, we have detected deficits regarding the incorrect treatment of qualitative independent variables and the role of the intercept of the model. The objective of this package is to correct these deficits. The package provides both traditionally used and recently developed detection and treatment techniques.
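As a generic illustration of one traditional detection technique (the variance inflation factor; this is not the package's interface), VIFs for the quantitative columns of a design matrix can be computed directly with numpy:

    import numpy as np

    def vif(X):
        """Variance inflation factors for the columns of a numeric design matrix X
        (illustrative only; one column per quantitative regressor, no intercept column)."""
        X = np.asarray(X, dtype=float)
        out = []
        for j in range(X.shape[1]):
            y = X[:, j]
            Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            r2 = 1.0 - resid.var() / y.var()   # R^2 of regressing column j on the others
            out.append(1.0 / (1.0 - r2))       # VIF_j = 1 / (1 - R^2_j)
        return np.array(out)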
Multiply robust estimation for population mean (Han and Wang 2013) <doi:10.1093/biomet/ass087>, regression analysis (Han 2014) <doi:10.1080/01621459.2014.880058>, (Han 2016) <doi:10.1111/sjos.12177>, and quantile regression (Han et al. 2019) <doi:10.1111/rssb.12309>.
This package implements operations for Riemannian manifolds, such as geodesic distance, the Riemannian metric, and exponential and logarithm maps. It also incorporates a random object generator on the manifolds. See Dai, Lin, and Müller (2021) <doi:10.1111/biom.13385>.
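As a concrete illustration of these operations (not the package's API), here are the geodesic distance and the exponential and logarithm maps on the unit sphere, written with numpy:

    import numpy as np

    def geodesic_dist(p, q):
        """Great-circle distance between unit vectors p and q on the sphere."""
        return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

    def exp_map(p, v):
        """Exponential map at p applied to a tangent vector v (v orthogonal to p)."""
        nv = np.linalg.norm(v)
        if nv == 0:
            return p
        return np.cos(nv) * p + np.sin(nv) * (v / nv)

    def log_map(p, q):
        """Logarithm map at p of a point q on the sphere (inverse of exp_map)."""
        theta = geodesic_dist(p, q)
        u = q - np.dot(p, q) * p            # project q onto the tangent space at p
        nu = np.linalg.norm(u)
        return np.zeros_like(p) if nu == 0 else theta * u / nu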
This package provides the ability to perform "Marginal Mediation", that is, mediation wherein the indirect and direct effects are expressed in terms of average marginal effects (Bartus, 2005, <https://EconPapers.repec.org/RePEc:tsj:stataj:v:5:y:2005:i:3:p:309-329>). The style of the average marginal effects stems from Thomas Leeper's work on the "margins" package. This framework allows the use of categorical mediators and outcomes with little change in interpretation from continuous mediators/outcomes. See <doi:10.13140/RG.2.2.18465.92001> for more details on the method.
Difference scaling is a method for scaling perceived supra-threshold differences. The package contains functions that allow the user to design and run a difference scaling experiment, to fit the resulting data by maximum likelihood, and to test the internal validity of the estimated scale.
Tests for comparing two or more survival curves. Allows comparison of more than two survival curves whether or not the proportional hazards hypothesis holds.
Generates Raven-like matrices according to different rules, along with the response list associated with each matrix. The package can generate matrices composed of 4 or 9 cells, along with a response list of 11 elements (the correct response plus 10 incorrect responses). The matrices can be generated according to both logical rules (i.e., the relationships between the elements in the matrix are manipulated to create the matrix) and visual-spatial rules (i.e., the visual or spatial characteristics of the elements are manipulated to generate the matrix). The graphical elements of this package are based on the DescTools package. This package has been developed within the PRIN2020 Project (Prot. 20209WKCLL) titled "Computerized, Adaptive and Personalized Assessment of Executive Functions and Fluid Intelligence" and funded by the Italian Ministry of Education and Research.
Perform multi-trait rare-variant association tests using summary statistics and adjust for possible sample overlap. The package is based on "Multi-Trait Analysis of Rare-Variant Association Summary Statistics using MTAR" by Luo, L., Shen, J., Zhang, H., Chhibber, A., Mehrotra, D.V., Tang, Z., 2019 (submitted).
It implements a new procedure for variable selection in the context of redundancy between explanatory variables, which remains applicable to high-dimensional data (Grimonprez et al. (2023) <doi:10.18637/jss.v106.i03>).
This package implements the main multiple-comparison procedures in the literature, Scott-Knott (1974) <http://www.jstor.org/stable/2529204> and Batista (2016) <http://repositorio.ufla.br/jspui/handle/1/11466>, including graphical representations and export of the results to different formats. The package also evaluates the performance of the tests (Type I error per experiment and power), which assists the user in choosing a test.
Uses multiple AUCs to select a combination of predictors when the outcome has multiple (ordered) levels and the focus is discriminating one particular level from the others. This method is most naturally applied to settings where the outcome has three levels. (Meisner, A, Parikh, CR, and Kerr, KF (2017) <http://biostats.bepress.com/uwbiostat/paper423/>.).
Automatically segments a 3D array of voxels into mutually exclusive morphological elements. This package extends existing work for segmenting 2D binary raster data. A paper documenting this approach has been accepted for publication in the journal Landscape Ecology; detailed references will be added here once they are available.
Estimate diagnostic classification models (also called cognitive diagnostic models) with 'Stan'. Diagnostic classification models are confirmatory latent class models, as described by Rupp et al. (2010, ISBN: 978-1-60623-527-0). Automatically generate 'Stan' code for the general loglinear cognitive diagnostic model proposed by Henson et al. (2009) <doi:10.1007/s11336-008-9089-5> and other subtypes that introduce additional model constraints. Using the generated 'Stan' code, estimate the model and evaluate its performance using model fit indices, information criteria, and reliability metrics.
An R interface to the MinIO Client. The MinIO Client ('mc') provides a modern alternative to UNIX commands like 'ls', 'cat', 'cp', 'mirror', 'diff', 'find', etc. It supports filesystems and Amazon S3-compatible cloud storage services (AWS Signature v2 and v4). This package provides convenience functions for installing the MinIO Client and running any of its operations, as described in the official documentation, <https://min.io/docs/minio/linux/reference/minio-mc.html?ref=docs-redirect>. This package provides a flexible and high-performance alternative to 'aws.s3'.
Fit flexible (excess) hazard regression models with the possibility of including non-proportional effects of covariates and of adding a random effect at the cluster level (corresponding to a shared frailty). A detailed description of the package functionalities is provided in Charvat and Belot (2021) <doi:10.18637/jss.v098.i14>.
This package provides a collection of functions for the analysis of archaeological mortality data (on the topic see e.g. Chamberlain 2006 <https://books.google.de/books?id=nG5FoO_becAC&lpg=PA27&ots=LG0b_xrx6O&dq=life%20table%20archaeology&pg=PA27#v=onepage&q&f=false>). It takes demographic data in different formats, displays the result in a standard life table, and plots the relevant indices (percentage of deaths, survivorship, probability of death, life expectancy, percentage of population). It also checks for possible biases in the age structure and applies corrections to life tables.
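For orientation (this is not the package's interface), the core life-table quantities can be sketched in Python from a vector of deaths per age class, assuming classes of equal width:

    import numpy as np

    def life_table(deaths, width=1.0):
        """Basic life table from counts of deaths per age class (illustrative sketch)."""
        dx = np.asarray(deaths, dtype=float)
        dx = dx / dx.sum()                                        # proportion of deaths per class
        lx = np.concatenate([[1.0], 1.0 - np.cumsum(dx)])[:-1]    # survivorship at class start
        qx = np.where(lx > 0, dx / lx, 0.0)                       # probability of death in the class
        Lx = width * (lx + np.append(lx[1:], 0.0)) / 2.0          # person-years lived per class
        Tx = np.cumsum(Lx[::-1])[::-1]
        ex = np.where(lx > 0, Tx / lx, 0.0)                       # life expectancy at class start
        return {"dx": dx, "lx": lx, "qx": qx, "ex": ex}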
This package provides ensemble samplers for affine-invariant Markov chain Monte Carlo, which allow faster convergence for badly scaled estimation problems. Two samplers are provided: the differential evolution sampler of ter Braak and Vrugt (2008) <doi:10.1007/s11222-008-9104-9> and the stretch sampler of Goodman and Weare (2010) <doi:10.2140/camcos.2010.5.65>.
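For reference, the stretch move of Goodman and Weare (2010) is simple to sketch in Python (illustrative only, not this package's implementation): each walker is moved along the line towards another randomly chosen walker by a factor z drawn from g(z) proportional to 1/sqrt(z) on [1/a, a].

    import numpy as np

    def stretch_step(walkers, log_prob, a=2.0, rng=np.random.default_rng()):
        """One pass of the Goodman-Weare stretch move over an ensemble of walkers
        (array of shape n_walkers x dim); log_prob maps a point to its log density."""
        n, d = walkers.shape
        for k in range(n):
            j = rng.integers(n - 1)
            j = j if j < k else j + 1                       # pick a different walker
            z = (1.0 + (a - 1.0) * rng.random())**2 / a     # z ~ g(z) on [1/a, a]
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(walkers[k])
            if np.log(rng.random()) < log_accept:
                walkers[k] = proposal
        return walkers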
Compute correlation and other association matrices from small to high-dimensional datasets with relatively simple functions and sensible defaults. Includes options for shrinkage and robustness to improve results in noisy or high-dimensional settings (p >= n), plus convenient print/plot methods for inspection. Implemented with optimised C++ backends using BLAS/OpenMP and memory-aware symmetric updates. Works with base matrices and data frames, returning standard R objects via a consistent S3 interface. Useful across genomics, agriculture, and machine-learning workflows. Supports Pearson, Spearman, Kendall, distance correlation, partial correlation, and robust biweight mid-correlation; Bland-Altman analyses and Lin's concordance correlation coefficient (including repeated-measures extensions). Methods based on Ledoit and Wolf (2004) <doi:10.1016/S0047-259X(03)00096-4>; Schäfer and Strimmer (2005) <doi:10.2202/1544-6115.1175>; Lin (1989) <doi:10.2307/2532051>.
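As an illustration of shrinkage in the p >= n setting (using scikit-learn's Ledoit-Wolf estimator, not this package), a shrunk covariance matrix can be converted to a correlation matrix:

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 100))         # n = 30 samples, p = 100 variables (p >= n)

    cov = LedoitWolf().fit(X).covariance_      # shrinkage towards a scaled identity
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)                # shrunk correlation matrix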
The grey model is commonly used in time series forecasting when statistical assumptions are violated and only a limited number of data points is available. The minimum number of data points required to fit a grey model is four observations. This package fits the Grey Model of First Order and One Variable, i.e., GM(1,1), to multivariate time series data and returns the parameters of the model, model evaluation criteria, and h-step-ahead forecast values for each of the time series variables. For method details see Akay, D. and Atak, M. (2007) <DOI:10.1016/j.energy.2006.11.014> and Hsu, L. and Wang, C. (2007) <DOI:10.1016/j.techfore.2006.02.005>.
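For intuition, a univariate GM(1,1) fit and forecast can be sketched as follows (a generic illustration, not this package's function signature):

    import numpy as np

    def gm11_forecast(x0, h=1):
        """Fit GM(1,1) to a series x0 (length >= 4) and return h-step-ahead forecasts."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                   # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # whitened equation dx1/dt + a*x1 = b
        k = np.arange(len(x0) + h)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted accumulated series
        x0_hat = np.diff(x1_hat)                             # inverse accumulation
        return x0_hat[-h:]                                   # last h values are the forecasts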
Extract cross sections from long bone meshes at specified intervals along the diaphysis. Calculate two- and three-dimensional morphometric maps, cross-sectional geometric parameters, and semilandmarks on the periosteal and endosteal contours of each cross section.
An implementation of the cross-validated difference in means (CVDM) test by Desmarais and Harden (2014) <doi:10.1007/s11135-013-9884-7> (see also Harden and Desmarais, 2011 <doi:10.1177/1532440011408929>) and the cross-validated median fit (CVMF) test by Desmarais and Harden (2012) <doi:10.1093/pan/mpr042>. These tests use leave-one-out cross-validated log-likelihoods to assist in selecting among model estimations. Data from Golder (2010) <doi:10.1177/0010414009341714> and Joshi & Mason (2008) <doi:10.1177/0022343308096155> are also included to facilitate examples from real-world analysis.
Read a table of fixed-width formatted data of different types into a data.frame for each type.
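An analogous operation in Python (for illustration only; this uses pandas, not the package itself) reads fixed-width columns into a data frame, with column types inferred per column:

    import pandas as pd

    # Column widths and names are hypothetical; adjust them to the file's actual layout.
    df = pd.read_fwf("records.txt", widths=[8, 12, 4], names=["id", "name", "year"])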
This package provides sampling and density functions for matrix variate normal, t, and inverted t distributions; ML estimation for matrix variate normal and t distributions using the EM algorithm, including some restrictions on the parameters; and classification by linear and quadratic discriminant analysis for matrix variate normal and t distributions described in Thompson et al. (2019) <doi:10.1080/10618600.2019.1696208>. Performs clustering with matrix variate normal and t mixture models.
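For example, sampling from a matrix variate normal distribution MN(M, U, V) reduces to transforming an i.i.d. standard normal matrix (a generic sketch in Python, not this package's API):

    import numpy as np

    def rmatnorm(M, U, V, rng=np.random.default_rng()):
        """Draw one n x p matrix from MN(M, U, V), with row covariance U (n x n)
        and column covariance V (p x p)."""
        n, p = M.shape
        A = np.linalg.cholesky(U)              # U = A A^T
        B = np.linalg.cholesky(V)              # V = B B^T
        Z = rng.standard_normal((n, p))        # i.i.d. N(0, 1) entries
        return M + A @ Z @ B.T                 # X = M + A Z B^T  ~  MN(M, U, V)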
Recursively calculates mass properties (mass, center of mass, moments and products of inertia, and optionally, their uncertainties) for arbitrary decomposition trees. See R. L. Zimmerman and J. H. Nakai (2005) <https://www.sawe.org/product/paper-3360/>.
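The recursion for mass and center of mass is straightforward to sketch (a simplified illustration; moments and products of inertia would additionally require the parallel-axis theorem, and uncertainties standard error propagation):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        mass: float = 0.0                        # leaf mass (ignored if children are present)
        com: tuple = (0.0, 0.0, 0.0)             # leaf centre of mass
        children: list = field(default_factory=list)

    def mass_properties(node):
        """Recursively roll up mass and centre of mass over a decomposition tree."""
        if not node.children:
            return node.mass, node.com
        parts = [mass_properties(c) for c in node.children]
        total = sum(m for m, _ in parts)
        com = tuple(sum(m * c[i] for m, c in parts) / total for i in range(3))
        return total, com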