Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
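For example, a minimal sketch in R using the curl and jsonlite packages; the base URL below is a placeholder, substitute the host serving this page:

    library(curl)
    library(jsonlite)

    base_url <- "https://example.org"  # hypothetical host; use this site's address
    res <- curl_fetch_memory(paste0(base_url,
      "/api/packages?search=hello&page=1&limit=20"))

    parse_headers(res$headers)        # pagination details arrive as headers
    fromJSON(rawToChar(res$content))  # the matching packages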
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The key function get_vintage_data() returns a dataframe and is the window into the Census Bureau API, requiring just a dataset name, vintage (year), and vector of variable names for survey estimates/percentages. Other functions assist in searching for available datasets, geographies, and group/variable concepts of interest. Also provided are functions to access and layer (via standard piping) displayable geometries for the US, states, counties, blocks/tracts, roads, landmarks, places, and bodies of water. Joining survey data with many of the geometry functions is built in to produce choropleth maps.
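A minimal sketch of a call, assuming the signature implied above; the argument names and values here are illustrative, not confirmed:

    df <- get_vintage_data(
      dataset = "acs/acs5",        # a Census Bureau dataset name (assumed)
      vintage = 2020,              # survey year
      vars = c("B01001_001E")      # variable names for estimates (assumed)
    )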
The visualization tool supports a more nuanced reading of regression results than the traditional per-unit interpretation of continuous variables versus categorical ones. It highlights the impact of unit changes as well as larger shifts such as interquartile changes, respecting the distribution of the empirical data. It also generates visualizations of changes in odds ratios for predictors across their minimum, first-quartile, median, third-quartile, and maximum values, which aids in understanding predictor-outcome relationships within the empirical data distribution, particularly in logistic regression.
Generate random positions (latitude/longitude), Well-known text ('WKT') points or polygons, or GeoJSON points or polygons.
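The underlying idea is simple; a base-R illustration of a random position and its WKT form (this is not the package's own API):

    set.seed(42)
    lon <- runif(1, -180, 180)
    lat <- runif(1, -90, 90)
    c(lat = lat, lon = lon)                   # a random position
    sprintf("POINT (%.6f %.6f)", lon, lat)    # WKT stores longitude first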
Interface for loading data from the 'Google Ads API'; see <https://developers.google.com/google-ads/api/docs/start>. The package provides functions for authorization and loading reports.
Fits measurement error models using Monte Carlo Expectation Maximization (MCEM). For specific details on the methodology, see: Greg C. G. Wei & Martin A. Tanner (1990) A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms, Journal of the American Statistical Association, 85:411, 699-704 <doi:10.1080/01621459.1990.10474930>. For more examples of measurement error modelling using MCEM, see the R Markdown vignette "refitME R-package tutorial".
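A hedged sketch of the typical workflow: fit a naive model that ignores the measurement error, then refit it with MCEM. The argument name sigma.sq.u (the measurement error variance) is an assumption drawn from the package's documentation:

    library(refitME)

    set.seed(1)
    n <- 200
    x <- rnorm(n)                    # true covariate
    dat <- data.frame(
      y = rbinom(n, 1, plogis(x)),
      w = x + rnorm(n, sd = 0.5)     # error-contaminated version of x
    )

    fit_naive <- glm(y ~ w, family = binomial, data = dat)
    fit_mcem <- refitME(fit_naive, sigma.sq.u = 0.25)   # assumed signature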
An R package for multiple imputation using chained random forests. The implemented methods can handle missing data in mixed types of variables using prediction-based or node-based conditional distributions constructed from random forests. For prediction-based imputation of continuous variables, two methods are provided: one based on the empirical distribution of out-of-bag prediction errors of random forests, and one based on a normality assumption for those prediction errors. A method based on predicted probabilities is provided for imputing categorical variables. For node-based imputation, a method based on the conditional distribution formed by the predicting nodes of random forests and a method based on proximity measures of random forests are provided. More details of the statistical methods can be found in Hong et al. (2020) <arXiv:2004.14823>.
Reproducible research tools that automate the creation of an analysis directory structure and workflow. R Markdown skeletons encapsulate typical analytic workflow steps, and functions create modules that can pass data from one step to the next.
Diagnostics and data preparation for random effects within estimator, random effects within-idiosyncratic estimator, between-within-idiosyncratic model, and cross-classified between model. Mundlak, Yair (1978) <doi:10.2307/1913646>. Hausman, Jeffrey (1978) <doi:10.2307/1913827>. Allison, Paul (2009) <doi:10.4135/9781412993869>. Neuhaus, J.M., and J. D. Kalbfleisch (1998) <doi:10.2307/3109770>.
Convert REDCap exports into tidy tables for easy handling of REDCap repeat instruments and event arms.
This package provides a set of tools for creation, manipulation, and modeling of tensors with an arbitrary number of modes. A tensor in the context of data analysis is a multidimensional array. rTensor provides an S4 class, Tensor, that wraps around the base array class, and implements common tensor operations as methods, including matrix unfolding, summing/averaging across modes, calculating the Frobenius norm, and taking the inner product between two tensors. Familiar array operations are overloaded, such as index subsetting via [ and element-wise operations. rTensor also implements various tensor decompositions, including CP, GLRAM, MPCA, PVD, and Tucker. For tensors with 3 modes, rTensor also implements the transpose, t-product, and t-SVD, as defined in Kilmer et al. (2013). Auxiliary functions include the Khatri-Rao, Kronecker, and Hadamard products for a list of matrices.
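A short sketch of these operations (function names as exported by rTensor; the dimensions are arbitrary):

    library(rTensor)

    arr <- array(rnorm(3 * 4 * 5), dim = c(3, 4, 5))
    tnsr <- as.tensor(arr)        # wrap a base array in the S4 Tensor class

    fnorm(tnsr)                   # Frobenius norm
    innerProd(tnsr, tnsr)         # inner product of two tensors
    tnsr[1, , ]                   # familiar index subsetting

    tk <- tucker(tnsr, ranks = c(2, 2, 2))   # Tucker decomposition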
This package provides tools for downloading and analyzing CDC NHANES data, with a focus on analytical laboratory data.
This package provides functions for (1) computing diagnostic test statistics (sensitivity, specificity, etc.) from confusion matrices with adjustment for various base rates or known prevalence based on McCaffrey et al (2003) <doi:10.1007/978-1-4615-0079-7_1>, (2) computing optimal cut-off scores with different criteria including maximizing sensitivity, maximizing specificity, and maximizing the Youden Index from Youden (1950) <doi:10.1002/1097-0142(1950)3:1%3C32::AID-CNCR2820030106%3E3.0.CO;2-3>, and (3) displaying and comparing classification statistics and area under the receiver operating characteristic (ROC) curves or area under the curves (AUC) across consecutive categories for ordinal variables.
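The statistics in (1) and the Youden Index in (2) reduce to simple arithmetic on a 2x2 confusion matrix; a base-R illustration of the definitions (not this package's own interface):

    tp <- 80; fn <- 20; tn <- 90; fp <- 10   # cells of a confusion matrix

    sensitivity <- tp / (tp + fn)            # 0.8
    specificity <- tn / (tn + fp)            # 0.9
    youden <- sensitivity + specificity - 1  # Youden Index, 0.7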
Get information (boards, pins and users) from the Pinterest <http://www.pinterest.com> API.
Recursive partitioning for least absolute deviation regression trees: another algorithm from Breiman, Friedman, Olshen and Stone (1984, ISBN:9780412048418), complementing the rpart package.
Connection to the 'Redis' (or 'Valkey') key/value store using the C-language client library 'hiredis' (included as a fallback), with 'MsgPack' encoding provided via 'RcppMsgPack' headers. It now also includes the pub/sub functions from the rredis package.
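A minimal sketch, assuming a Redis (or Valkey) server listening on localhost and the Rcpp module interface the package exposes; treat the method names as assumptions:

    library(RcppRedis)

    redis <- new(Redis)       # connects to localhost:6379 by default (assumed)
    redis$set("answer", 42)   # store a value
    redis$get("answer")       # retrieve it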
This package provides a collection of randomization tests, data sets and examples. The current version focuses on five testing problems and their implementation in empirical work. First, it enables the empirical researcher to test particular hypotheses, such as comparisons of means, medians, and variances from k populations, using robust permutation tests whose asymptotic validity holds under very weak assumptions, while retaining the exact rejection probability in finite samples when the underlying distributions are identical. Second, it describes and implements a permutation test for the continuity assumption of the baseline covariates in the sharp regression discontinuity design (RDD), as in Canay and Kamat (2018) <https://goo.gl/UZFqt7>. More specifically, it allows the user to select a set of covariates and test the aforementioned hypothesis using a permutation test based on the Cramér-von Mises test statistic. Graphical inspection of the empirical CDF and histograms for the variables of interest is also supported. Third, it provides the practitioner with an effortless implementation of a permutation test based on the martingale decomposition of the empirical process for testing for heterogeneous treatment effects in the presence of an estimated nuisance parameter, as in Chung and Olivares (2021) <doi:10.1016/j.jeconom.2020.09.015>. Fourth, this version considers the two-sample goodness-of-fit testing problem under covariate-adaptive randomization and implements a permutation test based on a prepivoted Kolmogorov-Smirnov test statistic. Lastly, it implements an asymptotically valid permutation test based on the quantile process for the hypothesis of constant quantile treatment effects in the presence of an estimated nuisance parameter.
Computes confidence intervals for binomial or Poisson rates and their differences or ratios. These include the rate (or risk) difference ('RD') and rate ratio (or relative risk, 'RR') for binomial proportions or Poisson rates, and the odds ratio ('OR', binomial only). Also provided are confidence intervals for RD, RR or OR for paired binomial data, and estimation of a proportion from clustered binomial data. Includes skewness-corrected asymptotic score ('SCAS') methods, developed in Laud (2017) <doi:10.1002/pst.1813> from Miettinen and Nurminen (1985) <doi:10.1002/sim.4780040211> and Gart and Nam (1988) <doi:10.2307/2531848>, and in Laud (2025, under review) for paired proportions. The same score produces hypothesis tests that are improved versions of the non-inferiority test for binomial RD and RR by Farrington and Manning (1990) <doi:10.1002/sim.4780091208>, or a generalisation of the McNemar test for paired data. The package also includes MOVER methods (Method Of Variance Estimates Recovery) for all contrasts, derived from the Newcombe method but with options to use equal-tailed intervals in place of the Wilson score method, and generalised for Bayesian applications incorporating prior information. So-called exact methods for strictly conservative coverage are approximated using continuity adjustments, and the amount of adjustment can be selected to avoid over-conservative coverage. Also includes methods for stratified calculations (e.g. meta-analysis), either with a fixed effect assumption (matching the CMH test) or incorporating stratum heterogeneity.
Functions in this package import filtered variant call format (VCF) files of SNP data and generate data sets to detect copy number variants, visualize them, and perform downstream analyses with copy number variants (e.g. environmental association analyses).
Calculates tide heights based on tide station harmonics. It includes the harmonics data for 637 US stations. The harmonics data was converted from <https://github.com/poissonconsulting/rtide/blob/main/data-raw/harmonics-dwf-20151227-free.tar.bz2>, NOAA web site data processed by David Flater for 'XTide'. The code to calculate tide heights from the harmonics is based on 'XTide'.
This package performs joint selection in Generalized Linear Mixed Models (GLMMs) using penalized likelihood methods. Specifically, the Penalized Quasi-Likelihood (PQL) is used as a loss function, and penalties are added to perform simultaneous fixed and random effects selection. Regularized PQL avoids the need for integration (or approximations such as Laplace's method) during estimation, so the full solution path for model selection can be constructed relatively quickly.
This package provides a simple rounding function. The default round() function in R follows the IEC 60559 standard (round half to even), and therefore rounds 0.5 to 0 and -1.5 to -2. The roundx() function accounts for this and helps to round 0.5 up to 1.
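The difference in behaviour (the base-R results are standard; the roundx() result is per the description above):

    round(0.5)    # 0  -- round half to even per IEC 60559
    round(1.5)    # 2
    round(-1.5)   # -2

    roundx(0.5)   # 1  -- rounds the half case up, per the description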
Convert a string of text characters to Elder Futhark Runes <https://en.wikipedia.org/wiki/Elder_Futhark>.
Generation of univariate and multivariate data that follow the generalized Poisson distribution. The details of the univariate part are explained in Demirtas (2017) <doi:10.1080/03610918.2014.968725>, and the multivariate part is an extension of the correlated Poisson data generation routine that was introduced in Yahav and Shmueli (2012) <doi:10.1002/asmb.901>.
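For reference, a sketch of one common parameterization of the generalized Poisson probability mass function (the Consul-Jain form) that such generators target; this is a textbook formula written in base R, not this package's API:

    # P(X = x) = theta * (theta + lambda*x)^(x-1) * exp(-theta - lambda*x) / x!
    dgpois <- function(x, theta, lambda) {
      theta * (theta + lambda * x)^(x - 1) *
        exp(-theta - lambda * x) / factorial(x)
    }
    dgpois(0:5, theta = 2, lambda = 0.3)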
Simulation of random orthonormal matrices from linear and quadratic exponential family distributions on the Stiefel manifold. The most general type of distribution covered is the matrix-variate Bingham-von Mises-Fisher distribution. Most of the simulation methods are presented in Hoff (2009) "Simulation of the Matrix Bingham-von Mises-Fisher Distribution, With Applications to Multivariate and Relational Data" <doi:10.1198/jcgs.2009.07177>. The package also includes functions for optimization on the Stiefel manifold based on algorithms described in Wen and Yin (2013) "A feasible method for optimization with orthogonality constraints" <doi:10.1007/s10107-012-0584-1>.
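A brief sketch using two of the package's samplers, rustiefel() for uniform draws and rmf.matrix() for the matrix von Mises-Fisher distribution; treat the exact signatures as assumptions:

    library(rstiefel)

    X <- rustiefel(10, 3)      # uniform draw: a 10 x 3 orthonormal matrix
    round(crossprod(X), 10)    # ~ identity, confirming orthonormal columns

    M <- matrix(rnorm(30), 10, 3)
    Y <- rmf.matrix(M)         # draw from the matrix von Mises-Fisher distribution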