Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
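For example, here is a minimal Python sketch of a paginated request. The base URL is a placeholder, and the names of the pagination headers are an assumption, since the documentation above only says they are returned in the response headers.

import requests

BASE_URL = "https://example.org"  # placeholder; use the site's actual host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
resp.raise_for_status()

print(dict(resp.headers))  # pagination details (e.g. total pages) arrive here
print(resp.json())         # assumes the matching packages are returned as JSON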
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Gene annotation of rice (Oryza sativa L. ssp. japonica). The package is based on the annotation file from <http://plants.ensembl.org/Oryza_sativa/Info/Index>. Given a gene name, it returns information including the start position, the end position, the position type, and the chromosome number.
Clinical care data from 130 U.S. hospitals in the years 1999-2008, adapted from the study by Strack et al. (2014) <doi:10.1155/2014/781670>. Each row describes an "encounter" with a patient with diabetes, including variables on demographics, medications, patient history, diagnostics, payment, and readmission.
An implementation that computes an optimal dose escalation rule using deep reinforcement learning in phase I oncology trials (Matsuura et al. (2023) <doi:10.1080/10543406.2023.2170402>). The dose escalation rule can directly optimize the percentage of correct selection (PCS) of the maximum tolerated dose (MTD).
Includes the sysdata.rda file for packages of the RobASt family of packages; it is currently used only by the RobExtremes package.
Allows the user to view an image in full screen when clicking on it in R Markdown documents and Shiny applications. The package relies on the JavaScript library intense-images. See <https://tholman.com/intense-images/> for more information.
Floating Percentile Model with additional functions for optimizing inputs and evaluating outputs and assumptions.
This package performs goodness-of-fit tests for capture-recapture models as described by Gimenez et al. (2018) <doi:10.1111/2041-210X.13014>. It also contains several functions to process capture-recapture data.
Allows access to both public and private endpoints of the Coinbase Pro (formerly GDAX) cryptocurrency exchange. For the authenticated flow, users must have a valid API key, secret, and passphrase to be able to connect.
Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic. With its state-of-the-art Smart Expression Template implementation, Blaze combines the elegance and ease of use of a domain-specific language with HPC-grade performance, making it one of the most intuitive and fastest C++ math libraries available. The RcppBlaze package includes the header files from the Blaze library, disabling some functionality that links to the thread and system libraries, which makes RcppBlaze a header-only library. Therefore, users do not need to install Blaze.
This package provides a straightforward, easy-to-use and robust parsing package which aims to digest history files from the popular messenger service WhatsApp in all locales and from all devices.
This package provides streamlined functions for summarising and visualising regression models fitted with the rms package, in the preferred format for medical journals. The modelsummary_rms() function produces concise summaries for linear, logistic, and Cox regression models, including automatic handling of models containing restricted cubic spline (RCS) terms. The resulting summary dataframe can be easily converted into publication-ready documents using the flextable and officer packages. The ggrmsMD() function creates clear and customizable plots (ggplot2 objects) to visualise RCS terms.
Allows working with the MyTarget Statistics API v2 <https://target.my.com/adv/api-marketing/doc/stat-v2> and the MyTarget Statistics API v3 <https://target.my.com/adv/api-marketing/doc/stat-v2#statisticsv3>: load data by ads, campaigns, and agency clients, and get statistics from your ads account.
Generation of Box-Cox based ROC curves and several aspects of inference and hypothesis testing. Can be used when inferences for one biomarker (Bantis LE, Nakas CT, Reiser B. (2018) <doi:10.1002/bimj.201700107>) are of interest or when comparisons of two correlated biomarkers (Bantis LE, Nakas CT, Reiser B. (2021) <doi:10.1002/bimj.202000128>) are of interest. Provides inferences and comparisons for the AUC, the Youden index, the sensitivity at a given specificity level (and vice versa), the optimal operating point of the ROC curve (in the Youden sense), and the Youden-based cutoff.
Validating sub-national statistical typologies, re-coding across standard typologies of sub-national statistics, and making valid aggregate level imputation, re-aggregation, re-weighting and projection down to lower hierarchical levels to create meaningful data panels and time series.
Proper L2-penalized maximum likelihood estimators for precision matrices and supporting functions to employ these estimators in a graphical modeling setting. For details, see Peeters, Bilgrau, & van Wieringen (2022) <doi:10.18637/jss.v102.i04> and associated publications.
Access to Boost Date_Time functionality for dates, durations (both for days and datetime objects), time zones, and posix time ('ptime') is provided by using Rcpp modules. The posix time implementation can support high resolution of up to nanosecond precision by using 96 bits (instead of 64 in R) to represent a ptime object (but this requires recompilation with a #define set).
Rho is used to test the generalization of inter-rater reliability (IRR) statistics. Calculating rho starts by generating a large number of simulated, fully-coded data sets: a sizable collection of hypothetical populations, all of which have a kappa value below a given threshold, which indicates unacceptable agreement. Kappa is then calculated on a sample from each of those sets in the collection to see if it is equal to or higher than the kappa in the real sample. If less than five percent of the distribution of samples from the simulated data sets is greater than the actual observed kappa, the null hypothesis is rejected and one can conclude that if the two raters had coded the rest of the data, they would have acceptable agreement (kappa above the threshold).
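Schematically, the test logic described above can be sketched as follows. This is not the package's implementation; simulate_null_sample and compute_kappa are hypothetical, caller-supplied routines standing in for the simulation and kappa calculations.

import numpy as np

def rho_test(observed_kappa, simulate_null_sample, compute_kappa,
             n_sims=10000, alpha=0.05):
    # simulate_null_sample() draws a coded sample from a simulated population
    # whose true kappa lies below the acceptability threshold (hypothetical helper)
    null_kappas = np.array([compute_kappa(simulate_null_sample())
                            for _ in range(n_sims)])
    # fraction of simulated-sample kappas greater than the observed kappa
    p_value = np.mean(null_kappas > observed_kappa)
    # rejecting the null supports acceptable agreement on the full data set
    return p_value, p_value < alpha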
Computes confidence intervals for nonlinear functions of model parameters (e.g., product of k coefficients) in single-level and multilevel structural equation models. Methods include the distribution of the product, Monte Carlo simulation, and bootstrap methods. It also performs the Model-Based Constrained Optimization (MBCO) procedure for hypothesis testing of indirect effects. References: Tofighi, D., and MacKinnon, D. P. (2011). RMediation: An R package for mediation analysis confidence intervals. Behavior Research Methods, 43, 692-700. <doi:10.3758/s13428-011-0076-x>; Tofighi, D., and Kelley, K. (2020). Improved inference in mediation analysis: Introducing the model-based constrained optimization procedure. Psychological Methods, 25(4), 496-515. <doi:10.1037/met0000259>; Tofighi, D. (2020). Bootstrap Model-Based Constrained Optimization Tests of Indirect Effects. Frontiers in Psychology, 10, 2989. <doi:10.3389/fpsyg.2019.02989>.
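As a generic illustration of the Monte Carlo approach for the product of two coefficients (not RMediation's implementation): the estimates and standard errors below are made up, and the covariance between the two paths is ignored for simplicity.

import numpy as np

rng = np.random.default_rng(1)
a_hat, se_a = 0.40, 0.10   # hypothetical path-a estimate and standard error
b_hat, se_b = 0.25, 0.08   # hypothetical path-b estimate and standard error

# draw the two coefficients independently and form the product a*b
draws = rng.normal(a_hat, se_a, 100_000) * rng.normal(b_hat, se_b, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% Monte Carlo CI for a*b: [{lo:.3f}, {hi:.3f}]")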
This package implements the algorithm by Pourahmadi and Wang (2015) <doi:10.1016/j.spl.2015.06.015> for generating a random p x p correlation matrix. Briefly, the idea is to represent the correlation matrix using its Cholesky factorization and p(p-1)/2 hyperspherical coordinates (i.e., angles), sample the angles from a particular distribution, and then convert them to the standard correlation matrix form. The angles are sampled from a distribution with pdf proportional to sin^k(theta) (0 < theta < pi, k >= 1) using the efficient sampling algorithm described by Makalic and Schmidt (2018) <arXiv:1809.05212>.
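A rough numerical sketch of the angle-based construction (not the package's implementation): given the p(p-1)/2 angles, build a lower-triangular factor B with unit-norm rows and form R = B B'. How the exponent k is chosen per angle, and the efficient sampler for sin^k(theta), are left to the paper; uniform angles are used below purely for illustration.

import numpy as np

def angles_to_corr(theta):
    # theta: (p, p) array whose strictly lower triangle holds the angles
    p = theta.shape[0]
    B = np.zeros((p, p))
    B[0, 0] = 1.0
    for i in range(1, p):
        B[i, 0] = np.cos(theta[i, 0])
        sin_prod = np.sin(theta[i, 0])        # running product of sines in row i
        for j in range(1, i):
            B[i, j] = np.cos(theta[i, j]) * sin_prod
            sin_prod *= np.sin(theta[i, j])
        B[i, i] = sin_prod                    # each row of B has unit norm
    return B @ B.T                            # unit diagonal by construction

p = 4
theta = np.zeros((p, p))
rows, cols = np.tril_indices(p, -1)
theta[rows, cols] = np.random.default_rng(0).uniform(0, np.pi, size=rows.size)
R = angles_to_corr(theta)                     # a valid correlation matrix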
An interface to iDigBio's search API that allows downloading specimen records. Searches are returned as a data.frame. Other functions, such as the metadata endpoints, return lists of information. iDigBio is a US project focused on digitizing and serving museum specimen collections on the web. See <https://www.idigbio.org> for information on iDigBio.
This package implements the pseudo-R2D2 prior for ordinal regression from the paper "Pseudo-R2D2 prior for high-dimensional ordinal regression" by Yanchenko (2025) <doi:10.1007/s11222-025-10667-x>. In particular, it provides code to evaluate the probability distribution function for the cut-points, compute the log-likelihood, calculate the hyper-parameters for the global variance parameter, find the distribution of McFadden's coefficient of determination, and fit the model in rstan. Please cite the paper if you use this code.
Calculates evaluation metrics for implicit-feedback recommender systems that are based on low-rank matrix factorization models, given the fitted model matrices and data, thus allowing comparison of models from a variety of libraries. Metrics include P@K (precision at k, for top-K recommendations), R@K (recall at k), AP@K (average precision at k), NDCG@K (normalized discounted cumulative gain at k), Hit@K (from which the hit rate is calculated), RR@K (reciprocal rank at k, from which the MRR or mean reciprocal rank is calculated), ROC-AUC (area under the receiver operating characteristic curve), and PR-AUC (area under the precision-recall curve). These are calculated on a per-user basis according to the ranking of items induced by the model, using efficient multi-threaded routines. Also provides functions for creating train-test splits for model fitting and evaluation.
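As a toy illustration of one of these metrics (not the package's API): precision at k for a single user is the fraction of the top-K recommended items that appear in that user's held-out test items.

import numpy as np

def precision_at_k(ranked_items, test_items, k):
    # fraction of the top-k ranked items that the user actually interacted with
    return np.isin(ranked_items[:k], list(test_items)).mean()

# the model ranks items [7, 3, 9, 1, 5]; the user's held-out items are {3, 5, 8}
print(precision_at_k(np.array([7, 3, 9, 1, 5]), {3, 5, 8}, k=5))  # 0.4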
Facilitates the creation of reproducible statistical report templates. Once created, rapport templates can be exported to various external formats (HTML, LaTeX, PDF, ODT, etc.) with pandoc as the converter backend.
This package provides a carbon-water coupled model (TRIPLEX-CW-Flux) based on two well-established models, the TRIPLEX-Flux model and the Penman-Monteith model. It integrates soil water and water vapor pressure deficits into the stomatal conductance submodule to estimate net ecosystem production and evapotranspiration in forest ecosystems. See <https://github.com/ShulanSun/rTRIPLEX_CW_Flux>.