Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
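For example, the endpoint can be called from a script. The Python sketch below assumes only what is stated above (the /api/packages path and its search, page, and limit parameters); the base URL is a placeholder, and since the exact pagination header names are not listed here, it simply prints all response headers.

import json
from urllib.request import urlopen

# Placeholder base URL; replace with the address of this search service.
BASE_URL = "https://example.org"

with urlopen(f"{BASE_URL}/api/packages?search=hello&page=1&limit=20") as response:
    # Pagination information (number of pages, etc.) arrives in the response headers.
    for name, value in response.headers.items():
        print(name, value)
    results = json.load(response)

print(results)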
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analysis of corneal data obtained from a Placido disk corneal topographer with calculation of irregularity indices. This package performs analyses of corneal data obtained from a Placido disk corneal topographer, with the calculation of the Placido irregularity indices and their subsequent analysis. The package is intended to be easy to use by a practitioner, providing a simple interface and yielding easily interpretable results. A corneal topographer is an ophthalmic clinical device that obtains measurements of the cornea (the anterior part of the eye). A Placido disk corneal topographer makes use of the Placido disk [Rowsey et al. (1981)]<doi:10.1001/archopht.1981.03930011093022>, which produces a circular pattern of measurement nodes. The raw information measured by such a topographer is used by practitioners to analyze curvatures, to study optical aberrations, or to diagnose specific conditions of the eye (e.g. keratoconus, an important corneal disease). The rPACI package allows the calculation of the corneal irregularity indices described in [Castro-Luna et al. (2020)]<doi:10.1016/j.clae.2019.12.006>, [Ramos-Lopez et al. (2013)]<doi:10.1097/OPX.0b013e3182843f2a>, and [Ramos-Lopez et al. (2011)]<doi:10.1097/opx.0b013e3182279ff8>. It provides a simple interface to read corneal topography data files as exported by a typical Placido disk topographer, to compute the irregularity indices mentioned before, and to display summary plots that are easy to interpret for a clinician.
This package contains basic tools for visualizing, interpreting, and building regression models. It has been designed for use with the book Introduction to Regression and Modeling with R by Adam Petrie, Cognella Publishers, ISBN: 978-1-63189-250-9.
This package provides a tidy implementation of grouping sets, rollup, and cube: extensions of the group_by clause that allow computing multiple group_by clauses in a single statement. For more detailed information on these functions, please refer to "Enhanced Aggregation, Cube, Grouping and Rollup" <https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup>.
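As a rough illustration of the idea (not this package's API), rollup over a grouping list can be emulated as the union of ordinary group_by aggregations computed at each prefix of that list; here is a hypothetical pandas sketch with made-up column names.

import pandas as pd

df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "product": ["a", "b", "a", "b"],
    "sales": [10, 20, 30, 40],
})

# rollup(region, product): aggregate at (region, product), (region), and the grand total
parts = []
for cols in [["region", "product"], ["region"], []]:
    if cols:
        parts.append(df.groupby(cols, as_index=False)["sales"].sum())
    else:
        parts.append(pd.DataFrame({"sales": [df["sales"].sum()]}))
rollup = pd.concat(parts, ignore_index=True)  # missing grouping columns appear as NaN (the "all" level)
print(rollup)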
HydroBudget is a spatially distributed groundwater recharge model that computes a superficial water budget on grid cells with outputs aggregated into monthly time steps. It was developed as an accessible and computationally affordable model to simulate groundwater recharge over large areas (thousands of km2, regional-scale watersheds) and for long time periods (decades), in cold and humid climates. Model algorithms are based on the research of Dubois, E. et al. (2021a) <doi:10.5683/SP3/EUDV3H> and Dubois, E. et al. (2021b) <doi:10.5194/hess-25-6567-2021>.
Principal Component Analysis (PCA) is a statistical technique used to reduce the dimensionality of a dataset while preserving as much variability as possible. By transforming the original variables into a new set of uncorrelated variables called principal components, PCA helps in identifying patterns and simplifying the complexity of high-dimensional data. The RankPCA package provides a streamlined workflow for performing PCA on datasets containing both categorical and continuous variables. It facilitates data preprocessing, encoding of categorical variables, and computes PCA to determine the optimal number of principal components based on a specified variance threshold. The package also computes composite indices for ranking observations, which can be useful for various analytical purposes. Garai, S., & Paul, R. K. (2023) <doi:10.1016/j.iswa.2023.200202>.
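As a generic illustration of this kind of workflow (not RankPCA's actual interface), the following Python sketch standardises continuous data, fits a PCA, and keeps the smallest number of components whose cumulative explained variance reaches a chosen threshold (here 80%, an arbitrary value); categorical variables would first need a numeric encoding.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))  # toy continuous data

X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumulative, 0.80) + 1)  # smallest count reaching the threshold
scores = pca.transform(X_std)[:, :n_components]
print(n_components, scores.shape)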
This package provides a friendly, object-oriented API for creating PowerPoint slide decks in R.
Invoke a BUGS model in OpenBUGS or WinBUGS, a class "bugs" for BUGS results, and functions to work with that class. The function write.model() allows a BUGS model file to be written. The class and auxiliary functions could be used with other MCMC programs, including JAGS. The suggested package BRugs (only needed for the function openbugs()) is only available from the CRAN archives; see <https://cran.r-project.org/package=BRugs>.
This package implements an objective Bayes intrinsic conditional autoregressive prior. This model provides an objective Bayesian approach for modeling spatially correlated areal data using an intrinsic conditional autoregressive prior on a vector of spatial random effects.
Collection of methods for rating matrix completion, which is a statistical framework for recommender systems. Another relevant application is the imputation of rating-scale survey data in the social and behavioral sciences. Note that matrix completion and imputation are synonymous terms used in different streams of the literature. The main functionality implements robust matrix completion for discrete rating-scale data with a low-rank constraint on a latent continuous matrix (Archimbaud, Alfons, and Wilms (2025) <doi:10.48550/arXiv.2412.20802>). In addition, the package provides wrapper functions for softImpute (Mazumder, Hastie, and Tibshirani, 2010, <https://www.jmlr.org/papers/v11/mazumder10a.html>; Hastie, Mazumder, Lee, Zadeh, 2015, <https://www.jmlr.org/papers/v16/hastie15a.html>) for easy tuning of the regularization parameter, as well as benchmark methods such as median imputation and mode imputation.
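For intuition about the low-rank completion idea (a plain soft-impute style iteration, not the robust method implemented by this package and not the softImpute API), a minimal numpy sketch might look like this; the regularization value is arbitrary and would normally be tuned.

import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(8, 5)).astype(float)  # toy rating matrix
X[rng.random(X.shape) < 0.3] = np.nan               # punch out some "missing" ratings
observed = ~np.isnan(X)

Z = np.where(observed, X, np.nanmean(X))  # initial fill
lam = 1.0                                 # arbitrary regularization parameter
for _ in range(100):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    low_rank = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold the singular values
    Z = np.where(observed, X, low_rank)              # keep observed entries, impute the rest

print(np.round(low_rank, 2))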
Developed to assist researchers with planning analyses prior to obtaining data from Trusted Research Environments (TREs), also known as safe havens. It provides functionality to export and import marginal distributions, as well as to synthesise data from these marginal distributions, both with and without correlations, using a multivariate cumulative distribution (copula). Additionally, the International Stroke Trial (IST) is included as an example dataset under the ODC-By licence: Sandercock et al. (2011) <doi:10.7488/ds/104>, Sandercock et al. (2011) <doi:10.1186/1745-6215-12-101>.
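As a sketch of the copula idea described above (generic Python, not this package's interface), correlated draws from arbitrary marginals can be produced by sampling correlated normals, mapping them to uniforms through the normal CDF, and pushing the uniforms through the inverse marginal CDFs; the marginals and the 0.6 correlation below are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
marginals = [stats.expon(scale=2.0), stats.norm(loc=50, scale=10)]  # hypothetical marginals
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])  # desired correlation on the latent Gaussian scale

z = rng.multivariate_normal(mean=np.zeros(2), cov=corr, size=1000)  # correlated normals
u = stats.norm.cdf(z)                                               # Gaussian copula: map to uniforms
synthetic = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
print(synthetic[:3])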
Implementation of jQuery <https://jquery.com> and CSS styles to allow the display of fireworks in a document. A toolkit to easily incorporate celebratory splashes in R Markdown documents and shiny apps.
Fit and deploy rotation forest models (Rodriguez, J.J., Kuncheva, L.I., 2006. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 28, 1619-1630 <doi:10.1109/TPAMI.2006.211>) for binary classification. Rotation forest is an ensemble method where each base classifier (tree) is fit on the principal components of the variables of random partitions of the feature set.
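To make the ensemble idea concrete, here is a simplified Python sketch of that scheme (it omits the class and bootstrap subsampling used in the published algorithm, and it is not the R package's interface): features are randomly partitioned, a block rotation matrix is built from per-partition PCA loadings, and each tree is trained on the rotated data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def fit_rotation_forest(X, y, n_trees=10, n_partitions=3, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    ensemble = []
    for _ in range(n_trees):
        # randomly partition the feature indices into disjoint subsets
        subsets = np.array_split(rng.permutation(n_features), n_partitions)
        # block rotation matrix assembled from per-subset PCA loadings
        rotation = np.zeros((n_features, n_features))
        for subset in subsets:
            pca = PCA().fit(X[:, subset])
            rotation[np.ix_(subset, subset)] = pca.components_.T
        tree = DecisionTreeClassifier().fit(X @ rotation, y)
        ensemble.append((rotation, tree))
    return ensemble

def predict_rotation_forest(ensemble, X):
    # majority vote over the rotated trees (binary 0/1 labels assumed)
    votes = np.stack([tree.predict(X @ R) for R, tree in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
forest = fit_rotation_forest(X, y)
print(predict_rotation_forest(forest, X)[:10])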
Algorithms for robustly estimating the parameters of a Gaussian, Student, or Laplace Mixture Model.
Data in multidimensional systems is obtained from operational systems and is transformed to adapt it to the new structure. Frequently, the operations to be performed aim to transform a flat table into a ROLAP (Relational On-Line Analytical Processing) star database. The main objective of the package is to allow the definition of these transformations easily. The implementation of the multidimensional database obtained can be exported to work with multidimensional analysis tools on spreadsheets or relational databases.
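As a toy illustration of that kind of transformation (generic pandas, not this package's API; the column names are made up), a flat table can be split into dimension tables with surrogate keys plus a fact table holding the measures.

import pandas as pd

flat = pd.DataFrame({
    "city": ["Seville", "Seville", "Granada"],
    "product": ["pen", "pad", "pen"],
    "amount": [10, 5, 7],
})

# one dimension table per descriptive attribute, each with a surrogate key
dim_city = flat[["city"]].drop_duplicates().reset_index(drop=True)
dim_city["city_key"] = dim_city.index
dim_product = flat[["product"]].drop_duplicates().reset_index(drop=True)
dim_product["product_key"] = dim_product.index

# fact table: foreign keys into the dimensions plus the measure
fact = (flat.merge(dim_city, on="city")
            .merge(dim_product, on="product")[["city_key", "product_key", "amount"]])
print(dim_city, dim_product, fact, sep="\n\n")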
R interface to the CSDP semidefinite programming library. Installs version 6.1.1 of CSDP from the COIN-OR website if required. An existing installation of CSDP may be used by passing the proper configure arguments to the installation command. See the INSTALL file for further details.
Toolbox with shiny applications for widely used psychometric methods. These methods include the following analyses: item analysis, item response theory calibration, principal component analysis, confirmatory factor analysis / structural equation modeling, and simulated data generation. References: Chalmers (2012, <doi:10.18637/jss.v048.i06>); Revelle (2022, <https://CRAN.R-project.org/package=psych>, version 2.2.9); Rosseel (2012, <doi:10.18637/jss.v048.i02>); Magis & Raiche (2012, <doi:10.18637/jss.v048.i08>); Magis & Barrada (2017, <doi:10.18637/jss.v076.c01>).
The Radiant Multivariate menu includes interfaces for perceptual mapping, factor analysis, cluster analysis, and conjoint analysis. The application extends the functionality in radiant.data.
Makes documents containing plots and tables from a table of R code. Can make HTML, pdf (LaTeX), docx (MS Word), and pptx (MS PowerPoint) documents with or without R code. The package also provides modularized shiny app code; these modules are intended for reuse across applications.
Exploration of pharmacometrics data involves both general tools (transformation and plotting) and specific techniques (non-compartmental analysis). This kind of exploration is generally accomplished by utilizing different packages. The purpose of ruminate is to create a shiny interface to make these tools more broadly available while creating reproducible results.
Export Rcmdr output to LaTeX or HTML code. The plug-in was originally intended to facilitate exporting Rcmdr output to formats other than ASCII text and to provide R novices with an easy-to-use, easy-to-access reference on exporting R objects to formats suited for printed output. The package documentation contains several pointers on creating reports, either by using conventional word processors or LaTeX/LyX.
The header-only C++ template library FastAD for automatic differentiation <https://github.com/JamesYang007/FastAD> is provided by this package, along with a few illustrative examples that can all be called from R.
Collection of tools to develop options strategies, value option contracts using the Black-Scholes-Merton option pricing model, and calculate the option Greeks. Hull, John C. "Options, Futures, and Other Derivatives" (1997, ISBN:0-13-601589-1). Fischer Black, Myron Scholes (1973) "The Pricing of Options and Corporate Liabilities" <doi:10.1086/260062>.
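For reference, the Black-Scholes price of a European call is C = S·N(d1) − K·e^(−rT)·N(d2), with d1 = (ln(S/K) + (r + σ²/2)T) / (σ√T) and d2 = d1 − σ√T. A minimal Python sketch of this formula (a generic illustration, not this package's functions):

from math import erf, exp, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    # European call: C = S*N(d1) - K*exp(-r*T)*N(d2)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# spot 100, strike 100, one year, 5% rate, 20% volatility -> roughly 10.45
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 2))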
We provide several avenues to predict and account for user-based mortality and tag loss during mark-recapture studies. When planning a study on a target species, the retentionmort_generation() function can be used to produce multiple synthetic mark-recapture datasets to anticipate the error associated with a planned field study and to guide method development that reduces error. Similarly, if field data have already been collected, the retentionmort() function can be used to predict the error in the collected data and adjust for user-based mortality and tag loss. The test_dataset_retentionmort() function provides an example dataset showing how data should be formatted for the functions to run properly. Lastly, the retentionmort_figure() function can be used on any dataset generated from either model function to produce an rmarkdown printout of preliminary analysis associated with the model, including summary statistics and figures. Methods and results pertaining to the formation of this package can be found in McCutcheon et al. (in review, "Predicting tagging-related mortality and tag loss during mark-recapture studies").
Exchange rates for the Kenya Shilling against other currencies: US Dollar, Euro, Sterling Pound, Tanzania Shilling, and Uganda Shilling.