Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
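For example, the request above can be reproduced from R with the httr package; a minimal sketch, assuming a placeholder host (substitute the site's actual address):

    library(httr)

    resp <- GET(
      "https://example.org/api/packages",  # placeholder host
      query = list(search = "hello", page = 1, limit = 20)
    )
    stop_for_status(resp)

    headers(resp)             # pagination details arrive in these headers
    results <- content(resp)  # parsed list of matching packages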
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Hierarchical multistate models for the analysis of independent or clustered semi-competing risks data. The package lets users choose the specification of each model component from a range of options, giving substantial flexibility: accelerated failure time or proportional hazards regression models; parametric or non-parametric specifications for the baseline survival functions and the cluster-specific random-effects distribution; and a Markov or semi-Markov specification for the terminal event following a non-terminal event. While estimation is mainly performed within the Bayesian paradigm, maximum likelihood estimation is also available for several parametric models. The package also includes functions for univariate survival analysis as complementary analysis tools.
This package provides a facility to generate sliced (orthogonal) Latin hypercube designs with four and five slices. For details about sliced and orthogonal Latin hypercube designs, see Yang, J. F., Lin, C. D., Qian, P. Z., and Lin, D. K. (2013). "Construction of sliced orthogonal Latin hypercube designs". Statistica Sinica, 1117-1130, <doi:10.5705/ss.2012.037>.
This package provides a pilot matching design to automatically stratify and match large data sets. The manual_stratify() function lets users stratify a data set by hand based on categorical variables of interest, while the auto_stratify() function does so automatically: it sets aside a held-out (pilot) data set, fits a prognostic score (see Hansen (2008) <doi:10.1093/biomet/asn004>) on the pilot set, and stratifies the data set based on prognostic score quantiles. The strata_match() function then performs optimal matching of the data set in parallel within strata.
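The intended workflow might look like the following; a hedged sketch assuming the package loads as stratamatch, with a hypothetical data frame my_data, hypothetical column names (treat, outcome, age, sex), and argument names that are assumptions rather than confirmed signatures:

    library(stratamatch)  # assumed package name

    # Automatic stratification: a pilot set is held aside, a prognostic
    # score is fit on it, and strata are formed from score quantiles.
    a_strat <- auto_stratify(my_data, treat = "treat",
                             prognosis = outcome ~ age + sex,
                             size = 400)

    # Optimal matching within each stratum.
    m <- strata_match(a_strat, model = treat ~ age + sex)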
Proposes an original instrument for measuring stakeholder influence on the development of an infrastructure project carried out by a municipality, drawing on stakeholder classifications (Mitchell, Agle, & Wood, 1997 <doi:10.2307/259247>) and input-output modelling (Hester & Adams, 2013 <doi:10.1016/j.procs.2013.09.282>).
Implementation of the wavelet-based spatial verification method of Buschow and Friederichs, "SAD: Verifying the Scale, Anisotropy and Direction of precipitation forecasts" (2020, submitted to QJRMS). Forecasts and observations are transformed by a decimated or redundant dual-tree complex wavelet transform to analyze the spatial scale, degree of anisotropy, and preferred direction in each field. These structural attributes are compared by a series of scores. An experimental algorithm for correcting these errors is included as well.
Basic functions for dealing with wav files and sound samples.
Easily integrate and control Lottie animations within shiny applications, without the need for idiosyncratic expression or use of JavaScript. This includes utilities for generating animation instances, controlling playback, manipulating animation properties, and more. For more information on Lottie, see: <https://airbnb.io/lottie/#/>. Additionally, see the official Lottie GitHub repository at <https://github.com/airbnb/lottie>.
This package provides plotting utilities supporting packages in the easystats ecosystem (<https://github.com/easystats/easystats>) and some extra themes, geoms, and scales for ggplot2. Color scales are based on <https://materialui.co/>. References: Lüdecke et al. (2021) <doi:10.21105/joss.03393>.
Implements self-normalization (SN) based algorithms for change-point estimation in time series data. These comprise nested local-window algorithms for detecting changes in both univariate and multivariate time series, developed in Zhao, Jiang and Shao (2022) <doi:10.1111/rssb.12552>.
This package provides functions that calculate appropriate sample sizes for one-sample t-tests, two-sample t-tests, and F-tests for microarray experiments, based on desired power while controlling the false discovery rate. For all tests, the standard deviations (variances) among genes can be assumed fixed or random; the same holds for effect sizes among genes in one-sample and two-sample experiments. The functions also output a chart of power versus sample size, a table of power at different sample sizes, and a table of critical test values at different sample sizes.
Calculates sample size requirements using three different Bayesian criteria in the context of designing an experiment to estimate a normal mean or the difference between two normal means. Functions are provided to compute the required sample sizes for the Average Length Criterion, the Average Coverage Criterion, and the Worst Outcome Criterion, for both the fully Bayesian and the mixed Bayesian/likelihood approaches. For reference see Joseph L. and Bélisle P. (1997) <https://www.jstor.org/stable/2988525>.
Maximum likelihood estimation for stochastic frontier analysis (SFA) of production (profit) and cost functions. The package includes the basic stochastic frontier for cross-sectional or pooled data with several distributions for the one-sided error term (i.e., Rayleigh, gamma, Weibull, lognormal, uniform, generalized exponential, and truncated skewed Laplace); the latent class stochastic frontier model (LCM) for cross-sectional and pooled data, as described in Dakpo et al. (2021) <doi:10.1111/1477-9552.12422>; and the sample selection model described in Greene (2010) <doi:10.1007/s11123-009-0159-1> and applied in Dakpo et al. (2021) <doi:10.1111/agec.12683>. Several optimization algorithms are available.
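As a hedged illustration of a basic cross-sectional frontier fit; the package name, the function name sfacross(), its udist argument, and the data set are assumptions about this package's interface, not confirmed by the description above:

    library(sfaR)  # assumed package name

    # Stochastic production frontier with a Rayleigh one-sided error term;
    # the formula and data frame are hypothetical.
    fit <- sfacross(log(y) ~ log(labor) + log(capital),
                    udist = "rayleigh", data = firms)
    summary(fit)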
Takes as input a stable oxygen isotope (d18O) profile measured in the growth direction (D) through a shell, plus uncertainties in both variables (d18O_err & D_err). It then models the seasonality in the d18O record by fitting a combination of a growth and a temperature sine wave to year-length chunks of the data (see Judd et al. (2018) <doi:10.1016/j.palaeo.2017.09.034>). This modeling is carried out along a sliding window through the data and yields estimates of the day of the year (Julian day) and local growth rate for each data point. Uncertainties in both the modeling routine and the data itself are propagated and pooled to obtain a confidence envelope around the age of each data point in the shell. The end result is a shell chronology consisting of estimated ages of shell formation relative to the annual cycle, with their uncertainties. All formulae in the package serve this purpose, but the user can customize the model (e.g., the number of days in a year and the mineralogy of the shell carbonate) through input parameters.
The stress addition approach is an alternative to the traditional concentration addition or effect addition models. It allows the modelling of tri-phasic concentration-response relationships either as single toxicant experiments, in combination with an environmental stressor or as mixtures of two toxicants. See Liess et al. (2019) <doi:10.1038/s41598-019-51645-4> and Liess et al. (2020) <doi:10.1186/s12302-020-00394-7>.
Various tools for semantic vector spaces, such as correspondence analysis (simple, multiple and discriminant), latent semantic analysis, probabilistic latent semantic analysis, non-negative matrix factorization, latent class analysis, EM clustering, logratio analysis and log-multiplicative (association) analysis. Furthermore, there are specialized distance measures, plotting functions and some helper functions.
We visualize the standard deviation of a data set as the size of a prism whose volume equals the total volume of several prisms made from the Empirical Cumulative Distribution Function.
Analysis of risk through liability matrices. Contains a Gibbs sampler for network reconstruction, where only the row and column sums of the liabilities matrix, as well as some other fixed entries, are observed, following the methodology of Gandy & Veraart (2016) <doi:10.1287/mnsc.2016.2546>. It also incorporates models that use a power-law degree distribution.
This package implements several functions that automate the cleaning and spell-checking of text data. They also converge, finalize, remove plurals and continuous strings, and put text data in binary format for semantic network analysis. The SemNetDictionaries package is used to make the cleaning process more accurate, efficient, and reproducible.
This package provides a framework for performing simulations such as those common in methodological statistics papers. The design principles of this package are described in greater depth in Bien, J. (2016) "The simulator: An Engine to Streamline Simulations," which is available at <arXiv:1607.00021>.
This package provides functions for the Skellam distribution, including the density (pmf), CDF, quantiles, and regression.
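A minimal sketch, assuming the package loads as skellam and follows the conventional d/p/q naming for distribution functions (dskellam(), pskellam(), qskellam()) with rate parameters lambda1 and lambda2:

    library(skellam)  # assumed package name

    # X = N1 - N2, the difference of two independent Poisson counts.
    dskellam(0,   lambda1 = 2, lambda2 = 1)  # P(X = 0)
    pskellam(1,   lambda1 = 2, lambda2 = 1)  # P(X <= 1)
    qskellam(0.5, lambda1 = 2, lambda2 = 1)  # median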
Allows users to produce diagnostic procedures and graphical tools for the evaluation of small area estimators.
This package provides a comprehensive suite of portfolio spanning tests for asset pricing, such as Huberman and Kandel (1987) <doi:10.1111/j.1540-6261.1987.tb03917.x>, Gibbons et al. (1989) <doi:10.2307/1913625>, Kempf and Memmel (2006) <doi:10.1007/BF03396737>, Pesaran and Yamagata (2024) <doi:10.1093/jjfinec/nbad002>, and Gungor and Luger (2016) <doi:10.1080/07350015.2015.1019510>.
Modeling spatial dependencies in dependent variables, extending traditional spatial regression approaches. It allows for the joint modeling of both the mean and the variance of the dependent variable, incorporating semiparametric effects in both models. Based on generalized additive models (GAM), the package enables the inclusion of non-parametric terms while maintaining the classical theoretical framework of spatial regression. Additionally, it implements the Generalized Spatial Autoregression (GSAR) model, which extends classical methods such as the logistic, probit, and Poisson spatial autoregressive (SAR) models, offering greater flexibility in modeling spatial dependencies and significantly improving computational efficiency and the statistical properties of the estimators. Related work includes: a) J.D. Toloza-Delgado, Melo O.O., Cruz N.A. (2024). "Joint spatial modeling of mean and non-homogeneous variance combining semiparametric SAR and GAMLSS models for hedonic prices". <doi:10.1016/j.spasta.2024.100864>. b) Cruz, N.A., Toloza-Delgado, J.D., Melo, O.O. (2024). "Generalized spatial autoregressive model". <doi:10.48550/arXiv.2412.00945>.