Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
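For example, here is a minimal sketch of calling this endpoint from R; the host name below is a placeholder, so substitute this instance's address:

library(curl)      ## HTTP client
library(jsonlite)  ## JSON parsing

res <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")

## Pagination details arrive in the response headers, not the body.
parse_headers(res$headers)

## The body holds the matching packages as JSON.
packages <- fromJSON(rawToChar(res$content))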
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Given the omnipresence of the assumption of elliptical symmetry, it is essential to be able to test whether that assumption actually holds for the data at hand. This package provides several statistical tests for elliptical symmetry that are described in Babic et al. (2021) <arXiv:2011.12560v2>.
Economic analysis is a routine activity in civil infrastructure and ecosystem restoration projects. This package contains standard cost engineering and engineering economics methods that are applied to convert between present, future, and annualized costs. Newnan D. (2020) <ISBN 9780190931919> "Engineering Economic Analysis".
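For reference, the standard conversions alluded to here (with P the present cost, F the future cost, A the equivalent annual cost, i the interest rate per period, and n the number of periods) are:

F = P (1 + i)^n
A = P * i (1 + i)^n / ((1 + i)^n - 1)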
Interact with the FRED API, <https://fred.stlouisfed.org/docs/api/fred/>, to fetch observations across economic series; find information about different economic sources, releases, series, etc.; conduct searches by series name, attributes, or tags; and determine the latest updates. Includes functions for creating panels of related variables with minimal effort and datasets containing data sources, releases, and popular FRED tags.
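For orientation, here is a sketch of the underlying FRED REST call that such functions wrap; the series id is illustrative and YOUR_FRED_KEY is a placeholder for a key obtained from FRED:

library(jsonlite)

url <- paste0(
  "https://api.stlouisfed.org/fred/series/observations",
  "?series_id=UNRATE&api_key=YOUR_FRED_KEY&file_type=json"
)
obs <- fromJSON(url)$observations  ## one row per date/value pair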
This package performs hypothesis testing for general block designs with empirical likelihood. The core computational routines are implemented using the Eigen C++ library and RcppEigen interface, with OpenMP for parallel computation. Details of the methods are given in Kim, MacEachern, and Peruggia (2023) <doi:10.1080/10485252.2023.2206919>. This work was supported by the U.S. National Science Foundation under Grants No. SES-1921523 and DMS-2015552.
Enables the automation of actions across the eye-tracking pipeline, from initial steps such as transforming binocular data and repairing gaps, to event-based processing such as detecting fixations, saccades, and entries/durations in Areas of Interest (AOIs). It also offers visualisation of eye movements and AOI entries. These tools take relatively raw data (in trial, time, x, and y form) and return fixations, saccades, AOI entries, and time spent in AOIs. Because the tools rely only on this basic data format, the functions work with data from any eye-tracking device. Implements fixation and saccade detection using methods proposed by Salvucci and Goldberg (2000) <doi:10.1145/355017.355028>.
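For illustration, the minimal raw format described above would look like this in R; the column names are illustrative, so consult the package documentation for the exact ones expected:

raw_gaze <- data.frame(
  trial = c(1, 1, 1, 2, 2),
  time  = c(0, 4, 8, 0, 4),            ## e.g. ms from trial onset
  x     = c(512, 514, 630, 500, 498),  ## horizontal gaze position (px)
  y     = c(384, 386, 390, 380, 381)   ## vertical gaze position (px)
)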
Estimation of high conditional quantiles based on quantile regression.
Descriptive statistics are essential for publishing articles. This package performs descriptive statistics according to data type. If a variable is continuous, the mean and standard deviation or the median and quartiles are output automatically; if it is categorical, the count and percentage are output automatically. In addition, if you supply two variables, both are described and the relationship between them is tested automatically according to their data types. For example, if one of the two variables is categorical, the other is described hierarchically within its levels and the differences between groups are compared using appropriate statistical methods; for more than two groups, post hoc tests are applied. For more information on the methods used, please see the following references: Libiseller, C. and Grimvall, A. (2002) <doi:10.1002/env.507>, Patefield, W. M. (1981) <doi:10.2307/2346669>, Hope, A. C. A. (1968) <doi:10.1111/J.2517-6161.1968.TB00759.X>, Mehta, C. R. and Patel, N. R. (1983) <doi:10.1080/01621459.1983.10477989>, Mehta, C. R. and Patel, N. R. (1986) <doi:10.1145/6497.214326>, Clarkson, D. B., Fan, Y. and Joe, H. (1993) <doi:10.1145/168173.168412>, Cochran, W. G. (1954) <doi:10.2307/3001616>, Armitage, P. (1955) <doi:10.2307/3001775>, Szabo, A. (2016) <doi:10.1080/00031305.2017.1407823>, David, F. B. (1972) <doi:10.1080/01621459.1972.10481279>, Joanes, D. N. and Gill, C. A. (1998) <doi:10.1111/1467-9884.00122>, Dunn, O. J. (1964) <doi:10.1080/00401706.1964.10490181>, Copenhaver, M. D. and Holland, B. S. (1988) <doi:10.1080/00949658808811082>, Chambers, J. M., Freeny, A. and Heiberger, R. M. (1992) <doi:10.1201/9780203738535-5>, Shaffer, J. P. (1995) <doi:10.1146/annurev.ps.46.020195.003021>, Myles, H. and Douglas, A. W. (1973) <doi:10.2307/2063815>, Rahman, M. and Tiwari, R. (2012) <doi:10.4236/health.2012.410139>, Thode, H. J. (2002) <doi:10.1201/9780203910894>, Jonckheere, A. R. (1954) <doi:10.2307/2333011>, Terpstra, T. J. (1952) <doi:10.1016/S1385-7258(52)50043-X>.
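As a base-R illustration of the dispatch rule just described; this sketch shows the logic only and is not the package's own function:

describe_var <- function(x) {
  if (is.numeric(x)) {
    ## continuous variable: mean (SD)
    sprintf("%.2f (%.2f)", mean(x, na.rm = TRUE), sd(x, na.rm = TRUE))
  } else {
    ## categorical variable: count (percentage) per level
    tab <- table(x)
    paste0(names(tab), ": ", tab, " (",
           round(100 * prop.table(tab), 1), "%)")
  }
}

describe_var(c(1.2, 3.4, 2.2))          ## "2.27 (1.10)"
describe_var(factor(c("a", "a", "b")))  ## "a: 2 (66.7%)" "b: 1 (33.3%)"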
Datasets from the most recent CCIIO DIY entry in a tidy format. These support the Centers for Medicare and Medicaid Services (CMS) risk adjustment Do-It-Yourself (DIY) process, which allows health insurance issuers to calculate member risk profiles under the Health and Human Services-Hierarchical Condition Categories (HHS-HCC) regression model. This regression model is used to calculate risk adjustment transfers. Risk adjustment is a selection mitigation program implemented under the Patient Protection and Affordable Care Act (ACA or Obamacare) in the USA. Under the ACA, health insurance issuers submit claims data to CMS in order for CMS to calculate a risk score under the HHS-HCC regression model. However, CMS does not inform issuers of their average risk score until after the data submission deadline. These datasets can be used by issuers to calculate their average risk score mid-year. More information about risk adjustment and the HHS-HCC model can be found here: <https://www.cms.gov/mmrr/Articles/A2014/MMRR2014_004_03_a03.html>.
This package performs likelihood-based extreme value inferences with adjustment for the presence of missing values, based on Simpson and Northrop (2026) <doi:10.1002/env.70075>. A Generalised Extreme Value (GEV) distribution is fitted to block maxima using maximum likelihood estimation, with the location and scale parameters reflecting the number of non-missing raw values in each block. A Bayesian version is also provided. For comparison, there are options to make no adjustment for missing values or to discard any block maximum for which more than a given percentage of the underlying raw values are missing. Example datasets containing missing values are provided.
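For context, the GEV distribution function G being fitted is the standard one; how the location μ and scale σ are adjusted per block for missingness follows the cited paper:

G(z) = exp{ -[1 + ξ (z - μ) / σ]^(-1/ξ) },  defined where 1 + ξ (z - μ) / σ > 0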
Errors in data can be located and removed using validation rules from the package 'validate'. See also Van der Loo and De Jonge (2018) <doi:10.1002/9781118897126>, chapter 7.
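Assuming this entry is the errorlocate package (whose description this matches), a minimal sketch; the rule set and data are illustrative:

library(validate)
library(errorlocate)

rules <- validator(age >= 0, age <= 120, height > 0)

## Locate the fields that violate the rules, then replace them with NA.
clean <- replace_errors(dirty_data, rules)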
The main functions are 'emmreml' and 'emmremlMultiKernel'. emmreml solves a mixed model with known covariance structure using the EMMA algorithm. emmremlMultiKernel is a wrapper for emmreml that handles multiple random components with known covariance structures. The function emmremlMultivariate solves a multivariate Gaussian mixed model with known covariance structure using the ECM algorithm.
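A hedged sketch of a single-kernel fit; the argument names (y, X, Z, K) and the returned component Vu follow the interface as best recalled here and should be verified against the package manual:

library(EMMREML)

n <- 50
y <- rnorm(n)         ## response
X <- matrix(1, n, 1)  ## fixed effects: intercept only
Z <- diag(n)          ## random-effects design matrix
K <- diag(n)          ## known covariance of the random effects

fit <- emmreml(y = y, X = X, Z = Z, K = K)
fit$Vu                ## estimated random-effect variance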
Rolling and expanding window approaches to assessing abundance-based early warning signals, non-equilibrium resilience measures, and machine learning. See Dakos et al. (2012) <doi:10.1371/journal.pone.0041010>, Deb et al. (2022) <doi:10.1098/rsos.211475>, Drake and Griffen (2010) <doi:10.1038/nature09389>, Ushio et al. (2018) <doi:10.1038/nature25504> and Weinans et al. (2021) <doi:10.1038/s41598-021-87839-y> for methodological details. Graphical presentation of the outputs is also provided for clear, publishable figures. Visit the EWSmethods website for more information and tutorials.
This package provides a consistent set of functions for enriching and analyzing sovereign-level economic data. Economists, data scientists, and financial professionals can use the package to add standardized identifiers, demographic and macroeconomic indicators, and derived metrics such as gross domestic product per capita or government expenditure shares.
Ensemble Model Output Statistics to create probabilistic forecasts from ensemble forecasts and weather observations.
Bayesian Model Averaging to create probabilistic forecasts from ensemble forecasts and weather observations <https://stat.uw.edu/sites/default/files/files/reports/2007/tr516.pdf>.
Estimates RxC (R by C) vote transfer matrices (ecological contingency tables) from aggregate data using the model described in Forcina et al. (2012), an extension of the model proposed in Brown and Payne (1986). Allows incorporation of covariates. References: Brown, P. and Payne, C. (1986). "Aggregate data, ecological regression and voting transitions". Journal of the American Statistical Association, 81, 453–460. <DOI:10.1080/01621459.1986.10478290>. Forcina, A., Gnaldi, M. and Bracalente, B. (2012). "A revised Brown and Payne model of voting behaviour applied to the 2009 elections in Italy". Statistical Methods & Applications, 21, 109–119. <DOI:10.1007/s10260-011-0184-x>.
This package provides functions to perform exploratory factor analysis (EFA) procedures and compare their solutions. The goal is to provide state-of-the-art factor retention methods and a high degree of flexibility in the EFA procedures. This way, for example, implementations from the R package 'psych' and from SPSS can be compared. Moreover, functions for Schmid-Leiman transformation and the computation of omegas are provided. To speed up the analyses, some of the iterative procedures, like principal axis factoring (PAF), are implemented in C++.
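For instance, here is a principal axis factoring run in 'psych', one of the implementations the description says can be compared; my_items stands in for your raw item data or correlation matrix, and the factor count is illustrative:

library(psych)
fit <- fa(my_items, nfactors = 5, fm = "pa", rotate = "none")  ## PAF, unrotated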
Analysis of items and persons in polytomous item-response data. Identifies and removes person misfit using either Mokken scaling or a graded response model (GRM, via 'mirt'). Provides automatic thresholds, visual diagnostics (2D/3D), and export utilities. Methods build on Mokken scaling as in Mokken (1971, ISBN:9789027968821) and on the graded response model of Samejima (1969) <doi:10.1007/BF03372160>.
Current layout algorithms such as Kamada-Kawai do not take disjoint clusters in a network into consideration, often producing a high overlap among the clusters and a visual "hairball" that is frequently uninterpretable. The ExplodeLayout algorithm takes as input (1) an edge list of a unipartite or bipartite network, (2) node layout coordinates (x, y) generated by a layout algorithm such as Kamada-Kawai, (3) node cluster membership generated by a clustering algorithm such as modularity maximization, and (4) a radius that enables the node clusters to be "exploded" to reduce their overlap. The algorithm uses these inputs to generate new layout coordinates which "explode" the clusters apart, such that the edge lengths within clusters are preserved while the edge lengths between clusters are recalculated. The modified network layout with nodes and edges is displayed in two dimensions. The user can experiment with different explode radii to generate a layout that has sufficient separation of clusters while reducing the overall layout size of the network. This package is a basic version of an earlier package called epl (<https://github.com/UTMB-DIVA-Lab/epl>) that searched for an optimal explode radius and offered multiple ways to separate clusters in a network (Bhavnani et al. (2017) <https://pmc.ncbi.nlm.nih.gov/articles/PMC5543384/>). The example dataset is for a bipartite network, but the algorithm also works for unipartite networks.
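A short sketch of the core "explode" step as described above, moving each cluster rigidly outward from the overall centroid so that within-cluster edge lengths are preserved; this is illustrative only, not the package's exported interface:

explode_coords <- function(xy, cluster, radius) {
  ## xy: n x 2 matrix of node coordinates; cluster: length-n membership vector
  center <- colMeans(xy)
  for (cl in unique(cluster)) {
    idx <- cluster == cl
    cl_center <- colMeans(xy[idx, , drop = FALSE])
    dir <- cl_center - center            ## direction away from the centroid
    len <- sqrt(sum(dir^2))
    if (len > 0)
      xy[idx, ] <- sweep(xy[idx, , drop = FALSE], 2, radius * dir / len, "+")
  }
  xy
}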
Fit, plot and compare several (extreme value) distribution functions. Compute (truncated) distribution quantile estimates and plot return periods on a linear scale. On the fitting method, see Asquith (2011): Distributional Analysis with L-moment Statistics [...] ISBN 1463508417.
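For reference, a return period plotted on a linear scale relates to a fitted distribution function F in the standard way:

T(x) = 1 / (1 - F(x))

so the quantile with return period T is the (1 - 1/T) quantile of F.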
Makes available in R the complete set of programs accompanying S. Wellek's (2010) monograph "Testing Statistical Hypotheses of Equivalence and Noninferiority", Second Edition (Chapman & Hall/CRC).
Fit and plot some nonlinear models.
Estimate the effective reproduction number from wastewater and clinical data sources.
This package implements the conditional estimation procedure of Lee, Sun, Sun and Taylor (2016) <doi:10.1214/15-AOS1371>. This procedure allows hypothesis testing on the mean of a normal random vector subject to linear constraints.