Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
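For example, a minimal Python sketch of a paginated query (the base URL and the pagination header names below are assumptions for illustration; check the live service for the exact names):

    import requests

    # Hypothetical base URL; substitute the real host of this service.
    BASE = "https://example.org/api/packages"

    resp = requests.get(BASE, params={"search": "hello", "page": 1, "limit": 20})
    resp.raise_for_status()

    # The body is assumed here to be a JSON list of matching packages.
    for pkg in resp.json():
        print(pkg)

    # Pagination details are reported in the response headers; these header
    # names are assumptions, not documented ones.
    print(resp.headers.get("X-Total-Pages"))
    print(resp.headers.get("X-Total-Count"))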
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Code generator for robust, dependency-free Shiny applications in the form of packages. It includes numerous convenience functions to create modules, utility functions to create common Bootstrap elements, helpers to set up a project from the ground up, and much more.
Estimates the mean of a Gaussian vector by choosing among a large collection of estimators, following the method developed by Y. Baraud, C. Giraud and S. Huet (2014) <doi:10.1214/13-AIHP539>. In particular, it solves the problem of variable selection by choosing the best predictor among predictors produced by different methods such as lasso, elastic-net, adaptive lasso, pls, and randomForest. Moreover, it can be applied to choose the tuning parameter in a Gauss-lasso procedure.
My PhD supervisor once told me that everyone doing newspaper analysis starts by writing code to read in files from the LexisNexis newspaper archive (retrieved e.g., from <https://www.lexisnexis.com/> or any of the partner sites). However, while this is a nice exercise that I do recommend, not everyone has the time. This package takes files downloaded from the LexisNexis newspaper archive, reads them into R, and offers functions for further processing.
This package provides a suite of tools for estimating, assessing model fit, simulating from, and visualizing location-dependent marked point processes characterized by regularity in the pattern. You provide a reference marked point process, a set of raster images containing location-specific covariates, and select the estimation algorithm and type of mark model. ldmppr estimates the process and mark models and allows you to check the appropriateness of the model using a variety of diagnostic tools. Once a satisfactory model fit is obtained, you can simulate from the model and visualize the results. Documentation for the package ldmppr is available in the form of a vignette.
This package provides functions for fitting a functional principal components logit regression model in four different situations: ordinary and filtered functional principal components of functional predictors, included in the model according to their variability explanation power, and according to their prediction ability by stepwise methods. The proposed methods were developed in Escabias et al. (2004) <doi:10.1080/10485250310001624738> and Escabias et al. (2005) <doi:10.1016/j.csda.2005.03.011>.
This package implements a local likelihood estimator for the dependence parameter in bivariate conditional copula models. Copula family and local likelihood bandwidth parameters are selected by leave-one-out cross-validation. The models are implemented in TMB, meaning that the local score function is efficiently calculated via automatic differentiation (AD), so that quasi-Newton algorithms may be used for parameter estimation.
Adjusted odds ratios conditional on potential confounders can be obtained directly from logistic regression. However, those adjusted odds ratios are widely and incorrectly interpreted as relative risks. As the relative risk is often of interest in public health, we provide simple code to return adjusted relative risks from a logistic regression model under potential confounders.
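For intuition, here is a hedged Python sketch of one common way to obtain an adjusted relative risk from a fitted logistic model, marginal standardization over the confounders; this only illustrates the odds-ratio/relative-risk distinction and is not this package's own implementation:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: binary outcome y, binary exposure x, confounder z.
    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)
    x = rng.binomial(1, 1 / (1 + np.exp(-z)))
    y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.7 * x + 0.5 * z))))
    df = pd.DataFrame({"y": y, "x": x, "z": z})

    fit = smf.logit("y ~ x + z", data=df).fit(disp=0)

    # exp(coefficient) is the adjusted odds ratio, often misread as a relative risk.
    odds_ratio = np.exp(fit.params["x"])

    # Marginal standardization: predict every subject as exposed and as unexposed,
    # then take the ratio of the average predicted risks.
    p1 = fit.predict(df.assign(x=1)).mean()
    p0 = fit.predict(df.assign(x=0)).mean()
    relative_risk = p1 / p0

    print(f"adjusted OR = {odds_ratio:.2f}, adjusted RR = {relative_risk:.2f}")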
This package provides the method for computing the local partial autocorrelation function for locally stationary wavelet time series from Killick, Knight, Nason, Eckley (2020) <doi:10.1214/20-EJS1748>.
Estimation of the local false discovery rate using the method of moments.
Persistent reproducible reporting by containerization of R Markdown documents.
Calculates and plots the handedness index (HI), absolute HI, mean HI, and z-score, which are commonly used indices for the study of hand preference (laterality) in non-human primates.
Alternate font rendering is useful when rendering text to novel graphics outputs where modern font rendering is not available or where bespoke text positioning is required. Bitmap and vector fonts allow for custom layout and rendering using pixel coordinates and line drawing. Formatted text is created as a data.frame of pixel coordinates (for bitmap fonts) or stroke coordinates (for vector fonts). All text can be easily previewed as a matrix or raster image. A selection of fonts is included with this package.
Provides access to the Greek New Testament (27 books) and the Old Testament (39 books) and allows users to do textual analysis on the data. The New and Old Testaments are provided in their original languages, Greek and Hebrew, respectively. Additionally, the Revised American Standard Bible is also provided for users who'd rather use a word-for-word modern English translation.
This package implements code to identify lexical competitors in a given list of words. We include many of the standard competitor types used in spoken word recognition research, such as functions to find cohorts, neighbors, and rhymes, amongst many others. The package includes documentation for using a variety of lexicon files, including those with form codes made up of multiple letters (i.e., phoneme codes) and also basic orthographies. Importantly, the code makes use of multiple CPU cores and vectorization when possible, making it extremely fast and able to handle large lexicons. Additionally, the package contains documentation for users to easily write new functions, allowing researchers to examine other relationships within a lexicon. Preprint: <https://osf.io/preprints/psyarxiv/8dyru/>. Open access: <doi:10.3758/s13428-021-01667-6>. Citation: Li, Z., Crinnion, A.M. & Magnuson, J.S. (2021). <doi:10.3758/s13428-021-01667-6>.
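As a rough, package-agnostic sketch of two of these competitor types (this is not the package's API, and the definitions below, a shared two-segment onset for cohorts and an edit distance of one for neighbors, are simplified assumptions):

    from itertools import product

    def edit_distance(a, b):
        """Levenshtein distance between two sequences of segments."""
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i, j in product(range(1, len(a) + 1), range(1, len(b) + 1)):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[len(a)][len(b)]

    def cohorts(target, lexicon, overlap=2):
        """Words sharing the first `overlap` segments with the target."""
        return [w for w in lexicon if w != target and w[:overlap] == target[:overlap]]

    def neighbors(target, lexicon):
        """Words exactly one substitution, addition, or deletion away."""
        return [w for w in lexicon if w != target and edit_distance(target, w) == 1]

    lexicon = ["cat", "cap", "can", "cot", "scat", "dog"]
    print(cohorts("cat", lexicon))    # ['cap', 'can']
    print(neighbors("cat", lexicon))  # ['cap', 'can', 'cot', 'scat']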
This package provides a collection of colour palettes inspired by some of our dearest butterfly species. It includes continuous and categorical palettes, with some colour-blind-friendly options.
The first stand-alone R package for computation of latent correlation that takes into account all variable types (continuous/binary/ordinal/zero-inflated), comes with an optimized memory footprint, and is computationally efficient, essentially making latent correlation estimation almost as fast as rank-based correlation estimation. The estimation is based on latent copula Gaussian models. For continuous/binary types, see Fan, J., Liu, H., Ning, Y., and Zou, H. (2017). For ternary type, see Quan X., Booth J.G. and Wells M.T. (2018) <doi:10.48550/arXiv.1809.06255>. For truncated type or zero-inflated type, see Yoon G., Carroll R.J. and Gaynanova I. (2020) <doi:10.1093/biomet/asaa007>. For approximation method of computation, see Yoon G., Müller C.L. and Gaynanova I. (2021) <doi:10.1080/10618600.2021.1882468>. The latter method uses multi-linear interpolation originally implemented in the R package chebpol (<https://cran.r-project.org/package=chebpol>).
The programs were developed for parameter estimation and for testing the exponential versus the Pareto distribution during our work on hydrologic extremes. See Kozubowski, T.J., A.K. Panorska, F. Qeadan, and A. Gershunov (2007) <doi:10.1080/03610910802439121>, and Panorska, A.K., A. Gershunov, and T.J. Kozubowski (2007) <doi:10.1007/978-0-387-34918-3_26>.
Interpretable nonparametric modeling of longitudinal data using additive Gaussian process regression. Contains functionality for inferring covariate effects and assessing covariate relevances. Models are specified using a convenient formula syntax, and can include shared, group-specific, non-stationary, heterogeneous and temporally uncertain effects. Bayesian inference for model parameters is performed using Stan. The modeling approach and methods are described in detail in Timonen et al. (2021) <doi:10.1093/bioinformatics/btab021>.
This package provides drill down functionality for leaflet choropleths in shiny apps.
Latent Class Analysis of phenotypic measurements in pedigrees and model selection based on one of two methods: likelihood-based cross-validation and the Bayesian Information Criterion. Computation of individual and triplet child-parents weights in a pedigree is performed using an upward-downward algorithm. The model takes into account the familial dependence defined by the pedigree structure by considering that a child's class depends on its parents' classes via triplet-transition probabilities of the classes. The package handles the case where measurements are available on all subjects and the case where measurements are available only on symptomatic (i.e. affected) subjects. Distributions for discrete (or ordinal) and continuous data are currently implemented. The package can deal with missing data.
LP nonparametric high-dimensional K-sample comparison method that includes (i) confirmatory test, (ii) exploratory analysis, and (iii) options to output a data-driven LP-transformed matrix for classification. The primary reference is Mukhopadhyay, S. and Wang, K. (2020, Biometrika); <arXiv:1810.01724>.
This package provides a set of tools designed to enhance transparency and understanding of date-time manipulation functions from the lubridate package. It provides detailed feedback about the operations performed by lubridate functions, allowing users to better comprehend and debug their code. These insights serve as both a learning tool for newcomers and a debugging aid for programmers working with date-time data.
Extends the functionality of the tourr package with an interactive graphical user interface. The interactivity allows users to effortlessly refine their tourr results by manual intervention, enabling the integration of expert knowledge and aiding the interpretation of results. For more information on tourr see Wickham et al. (2011) <doi:10.18637/jss.v040.i02> or <https://github.com/ggobi/tourr>.
Computes log-transformed kernel density estimates for positive data using a variety of kernels. It follows the methods described in Jones, Nguyen and McLachlan (2018) <doi:10.21105/joss.00870>.
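As a rough illustration of the log-transformation idea behind such estimators (a sketch with a Gaussian kernel via scipy, not this package's implementation or kernel choices): estimate the density of log(X) and back-transform it, f(x) = g(log x) / x.

    import numpy as np
    from scipy.stats import gaussian_kde

    # Simulated positive data, e.g. lognormal.
    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

    # Fit a standard KDE on the log scale, then back-transform the density:
    # if Y = log(X) has density g, then X has density f(x) = g(log x) / x.
    kde_log = gaussian_kde(np.log(x))

    def log_kde(grid):
        grid = np.asarray(grid, dtype=float)
        return kde_log(np.log(grid)) / grid

    grid = np.linspace(0.1, 5.0, 50)
    print(log_kde(grid)[:5])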