Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
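For example, here is a minimal sketch of this call in Python using the requests library. The base URL below is only a placeholder for this site's address, and since the exact pagination header names are not documented here, the sketch simply prints all response headers:

import requests

BASE_URL = "https://example.org"  # placeholder: replace with this site's address

# Request the first page of results for the query "hello", 20 items per page.
params = {"search": "hello", "page": 1, "limit": 20}
resp = requests.get(BASE_URL + "/api/packages", params=params)
resp.raise_for_status()

# Pagination information (such as the number of pages) is returned in the
# response headers; print them all to see which ones the service sets.
for name, value in resp.headers.items():
    print(name, value)

# Response body with the matching packages (format not specified above).
print(resp.text)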
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An end-to-end framework that enables users to implement various descriptive studies for a given set of target and outcome cohorts for data mapped to the Observational Medical Outcomes Partnership Common Data Model.
This package provides functions to calculate weights, estimates of changes and corresponding variance estimates for panel data with non-response. Partially overlapping samples are handled. Initially, weights are calculated by linear calibration. By default, the survey package is used for this purpose. It is also possible to use ReGenesees, which can be installed from <https://github.com/DiegoZardetto/ReGenesees>. Variances of linear combinations (changes and averages) and ratios are calculated from a covariance matrix based on residuals according to the calibration model. The methodology was presented at the conference, The Use of R in Official Statistics, and is described in Langsrud (2016) <http://www.revistadestatistica.ro/wp-content/uploads/2016/06/RRS2_2016_A021.pdf>.
Computes conditional multivariate t probabilities, random deviates, and densities. It can also be used to create missing values at random in a dataset, resulting in a missing at random (MAR) mechanism. Inbuilt in the package are the Expectation-Maximization (EM), Monte Carlo EM, and Stochastic EM algorithms for imputation of missing values in datasets assuming the multivariate t distribution. See Kinyanjui, Tamba, Orawo, and Okenye (2020) <doi:10.3233/mas-200493>, and Kinyanjui, Tamba, and Okenye (2021) <http://www.ceser.in/ceserp/index.php/ijamas/article/view/6726/0> for more details.
Statistical downscaling and bias correction (model output statistics) method based on cumulative distribution function (CDF) transformation. See Michelangeli, Vrac, Loukos (2009) Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36, L11708, <doi:10.1029/2009GL038401>; and Vrac, Drobinski, Merlo, Herrmann, Lavaysse, Li, Somot (2012) Dynamical and statistical downscaling of the French Mediterranean climate: uncertainty assessment. Nat. Hazards Earth Syst. Sci., 12, 2769-2784, www.nat-hazards-earth-syst-sci.net/12/2769/2012/, <doi:10.5194/nhess-12-2769-2012>.
This package contains generic functions for performing cross validation and for computing diagnostic errors.
Computes a range of scatterplot diagnostics (scagnostics) on pairs of numerical variables in a data set. These include the graph- and association-based scagnostics described by Leland Wilkinson and Graham Wills (2008) <doi:10.1198/106186008X320465> and the association-based scagnostics described by Katrin Grimm (2016, ISBN:978-3-8439-3092-5). Summary and plotting functions are provided.
This package implements the cross-validation methodology from Pein and Shah (2021) <arXiv:2112.03220>. It can be customised by providing different cross-validation criteria, estimators for the change-point locations and local parameters, and freely chosen folds. Pre-implemented estimators and criteria are available. It also includes our own implementation of the COPPS procedure <doi:10.1214/19-AOS1814>.
Designed for web usage data analysis, it implements tools to process web sequences and identify web browsing profiles through sequential classification. Sequence clusters are identified using a model-based approach, specifically a mixture of discrete-time first-order Markov models for categorical web sequences. A Bayesian approach is used to estimate the model parameters and to identify the sequence classification, as proposed by Fruehwirth-Schnatter and Pamminger (2010) <doi:10.1214/10-BA606>.
This package performs multiple comparison procedures on curve observations among different treatment groups. The methods are applicable in a variety of situations (such as independent groups with equal or unequal sample sizes, or repeated measures) by using a parametric bootstrap. References for these procedures can be found in Konietschke, Gel, and Brunner (2014) <doi:10.1090/conm/622/12431> and Westfall (2011) <doi:10.1080/10543406.2011.607751>.
Color palettes for EPL, MLB, NBA, NHL, and NFL teams.
Estimation of 2-level factor copula-based regression models for clustered data where the response variable can be either discrete or continuous.
Compute price indices using various Hedonic and multilateral methods, including Laspeyres, Paasche, Fisher, and HMTS (Hedonic Multilateral Time series re-estimation with splicing). The central function calculate_price_index() offers a unified interface for running these methods on structured datasets. This package is designed to support index construction workflows for real estate and other domains where quality-adjusted price comparisons over time are essential. The development of this package was funded by Eurostat and Statistics Netherlands (CBS), and carried out by Statistics Netherlands. The HMTS method implemented here is described in Ishaak, Ouwehand and Remøy (2024) <doi:10.1177/0282423X241246617>. For broader methodological context, see Eurostat (2013, ISBN:978-92-79-25984-5, <doi:10.2785/34007>).
This package provides a uniform statistical inferential tool for making individualized treatment decisions, implementing the methods of Ma et al. (2017) <DOI:10.1177/0962280214541724> and Guo et al. (2021) <DOI:10.1080/01621459.2020.1865167>. It uses a flexible semiparametric modeling strategy for heterogeneous treatment effect estimation in high-dimensional settings and can give valid confidence bands. Based on these, one can find the subgroups of patients that benefit from each treatment, thereby enabling individualized treatment selection.
An interface for creating new condition generator objects. Generators are special functions that can be saved in registries and linked to other functions. Utilities for documenting your generators and new conditions are provided for package development.
Integrated, convenient, and uniform access to Canadian Census data and geography retrieved using the CensusMapper API. This package produces analysis-ready tidy data frames and spatial data in multiple formats, as well as convenience functions for working with Census variables, variable hierarchies, and region selection. API keys are available with free registration at <https://censusmapper.ca/api>. Census data and boundary geometries are reproduced and distributed on an "as is" basis with the permission of Statistics Canada (Statistics Canada 1996; 2001; 2006; 2011; 2016; 2021).
This package serves diverse purposes such as biomarker confirmation, novel biomarker discovery, constructing predictive models, model-based prediction, and validation. It handles binary, continuous, and time-to-event outcomes at the sample or patient level.
- Biomarker confirmation utilizes established functions like glm() from 'stats', coxph() from 'survival', and surv_fit() and ggsurvplot() from 'survminer'.
- Biomarker discovery and variable selection are facilitated by three LASSO-related functions, LASSO2(), LASSO_plus(), and LASSO2plus(), leveraging the 'glmnet' R package with additional steps.
- Eight versatile modeling functions are offered, each designed for predictive models across various outcomes and data types. 1) LASSO2(), LASSO_plus(), LASSO2plus(), and LASSO2_reg() perform variable selection using LASSO methods and construct predictive models based on the selected variables. 2) XGBtraining() employs XGBoost for model building and is the only function not involving variable selection. 3) LASSO2_XGBtraining(), LASSOplus_XGBtraining(), and LASSO2plus_XGBtraining() combine LASSO-related variable selection with XGBoost for model construction.
- All models support prediction and validation, requiring a testing dataset comparable to the training dataset. Additionally, the package introduces XGpred() for risk prediction based on survival data, with the XGpred_predict() function available for predicting risk groups in new datasets.
The methodology is based on our new algorithms and various references: Hastie et al. (1992, ISBN 0-534-16765-9), Therneau et al. (2000, ISBN 0-387-98784-3), Kassambara et al. (2021) <https://CRAN.R-project.org/package=survminer>, Friedman et al. (2010) <doi:10.18637/jss.v033.i01>, Simon et al. (2011) <doi:10.18637/jss.v039.i05>, Harrell (2023) <https://CRAN.R-project.org/package=rms>, Harrell (2023) <https://CRAN.R-project.org/package=Hmisc>, Chen and Guestrin (2016) <doi:10.48550/arXiv.1603.02754>, and Aoki et al. (2023) <doi:10.1200/JCO.23.01115>.
This package provides tools for extracting occurrences, assessing potential driving factors, predicting occurrences, and quantifying impacts of compound events in hydrology and climatology. Please see Hao Zengchao et al. (2019) <doi:10.1088/1748-9326/ab4df5>.
Splits data into Gaussian type clusters using the Cross-Entropy Clustering ('CEC') method. This method allows for the simultaneous use of various types of Gaussian mixture models, for performing the reduction of unnecessary clusters, and for discovering new clusters by splitting them. CEC is based on the work of Spurek, P. and Tabor, J. (2014) <doi:10.1016/j.patcog.2014.03.006>.
Computes a structural similarity metric (after the style of MS-SSIM for images) for binary and categorical 2D and 3D images. It can be based on accuracy (simple matching), Cohen's kappa, Rand index, adjusted Rand index, Jaccard index, Dice index, normalized mutual information, or adjusted mutual information. In addition, it provides fast computation of Cohen's kappa, the Rand indices, and the two mutual informations. Implements the methods of Thompson and Maitra (2020) <doi:10.48550/arXiv.2004.09073>.
Ceteris Paribus Profiles (What-If Plots) are designed to present model responses around selected points in a feature space, for example around a single prediction for an interesting observation. The plots are designed to work in a model-agnostic fashion: they work with any predictive machine learning model and allow for model comparisons. Ceteris Paribus Plots supplement the Break Down Plots from the breakDown package.
This package implements the expectation-maximization (EM) algorithm as described in Fiksel et al. (2022) <doi:10.1111/biom.13465> for transformation-free linear regression for compositional outcomes and predictors.
This package implements the cointegration/co-trending rank selection algorithm of Guo and Shintani (2013), "Consistent co-trending rank selection when both stochastic and nonlinear deterministic trends are present", The Econometrics Journal 16: 473-483 <doi:10.1111/j.1368-423X.2012.00392.x>. Numbered examples correspond to the February 2011 preprint <http://www.fas.nus.edu.sg/ecs/events/seminar/seminar-papers/05Apr11.pdf>.
Processes survey data and displays estimation results, along with the relative standard error and the number of samples, in a table. It also uses a t-distribution approach to compute confidence intervals, similar to SPSS (Statistical Package for the Social Sciences) software.
This package provides functions for calculating Conley (1999) <doi:10.1016/S0304-4076(98)00084-0> standard errors. The package started by merging and extending multiple packages and other published scripts on this econometric technique. It strongly emphasizes computational optimization. Details are available in the function documentation and in the vignette.