Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
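For example, the endpoint above can be queried from R with the httr package. This is only a sketch: the host is a placeholder for this site's address, and the specific header names are not documented here.

library(httr)
# Query the package search API; replace the host with this site's address.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
results <- content(resp)   # parsed response body (assumed to be JSON)
headers(resp)              # pagination information is returned in the response headers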
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Rare variant association tests, including burden tests (Bocher et al. 2019 <doi:10.1002/gepi.22210>) and the Sequence Kernel Association Test (Bocher et al. 2021 <doi:10.1038/s41431-020-00792-8>) across the whole genome, as well as genetic simulations.
An R Commander "plug-in" extending the functionality of linear models and providing an interface to Partial Least Squares Regression and Linear and Quadratic Discriminant Analysis. Several statistical summaries are extended, predictions are offered for additional types of analyses, and extra plots, tests, and mixed models are available.
This package provides a comprehensive collection of practical and easy-to-use tools for regression analysis of recurrent events, with or without the presence of a (possibly) informative terminal event, as described in Chiou et al. (2023) <doi:10.18637/jss.v105.i05>. The modeling framework is based on a joint frailty scale-change model that includes the models described in Wang et al. (2001) <doi:10.1198/016214501753209031>, Huang and Wang (2004) <doi:10.1198/016214504000001033>, Xu et al. (2017) <doi:10.1080/01621459.2016.1173557>, and Xu et al. (2019) <doi:10.5705/SS.202018.0224> as special cases. The implemented estimating procedure does not require any parametric assumption on the frailty distribution. The package also allows users to specify different model forms for both the recurrent event process and the terminal event.
Analyzing the performance of artificial intelligence (AI) systems/algorithms characterized by a search-and-report strategy. Historically, observer performance has dealt with measuring radiologists' performance in search tasks, e.g., searching for lesions in medical images and reporting them, but the implicit location information has been ignored. The implemented methods apply to analyzing the absolute and relative performances of AI systems, comparing AI performance to a group of human readers, or optimizing the reporting threshold of an AI system. In addition to performing conventional receiver operating characteristic (ROC) analysis (localization information ignored), the software also performs free-response receiver operating characteristic (FROC) analysis, where lesion localization information is used. A book using the software has been published: Chakraborty DP: Observer Performance Methods for Diagnostic Imaging - Foundations, Modeling, and Applications with R-Based Examples, Taylor-Francis LLC; 2017: <https://www.routledge.com/Observer-Performance-Methods-for-Diagnostic-Imaging-Foundations-Modeling/Chakraborty/p/book/9781482214840>. Online updates to this book, which use the software, are at <https://dpc10ster.github.io/RJafrocQuickStart/>, <https://dpc10ster.github.io/RJafrocRocBook/> and at <https://dpc10ster.github.io/RJafrocFrocBook/>. Supported data collection paradigms are the ROC, FROC and the location ROC (LROC). ROC data consists of a single rating per image, where a rating is the perceived confidence level that the image is that of a diseased patient. An ROC curve is a plot of true positive fraction vs. false positive fraction. FROC data consists of a variable number (zero or more) of mark-rating pairs per image, where a mark is the location of a reported suspicious region and the rating is the confidence level that it is a real lesion. LROC data consists of a rating and a location of the most suspicious region for every image. Four models of observer performance, and curve-fitting software, are implemented: the binormal model (BM), the contaminated binormal model (CBM), the correlated contaminated binormal model (CORCBM), and the radiological search model (RSM). Unlike the binormal model, CBM, CORCBM and RSM predict proper ROC curves that do not inappropriately cross the chance diagonal. Additionally, RSM parameters are related to search performance (not measured in conventional ROC analysis) and classification performance. Search performance refers to finding lesions, i.e., true positives, while simultaneously not finding false positive locations. Classification performance measures the ability to distinguish between true and false positive locations. Knowing these separate performances allows principled optimization of reader or AI system performance. This package supersedes Windows JAFROC (jackknife alternative FROC) software V4.2.1, <https://github.com/dpc10ster/WindowsJafroc>. Package functions are organized as follows. Data file related function names are preceded by 'Df', curve fitting functions by 'Fit', included data sets by 'dataset', plotting functions by 'Plot', significance testing functions by 'St', sample size related functions by 'Ss', data simulation functions by 'Simulate' and utility functions by 'Util'. Implemented are figures of merit (FOMs) for quantifying performance and functions for visualizing empirical or fitted operating characteristics: e.g., ROC, FROC, alternative FROC (AFROC) and weighted AFROC (wAFROC) curves.
For fully crossed study designs, significance testing of reader-averaged FOM differences between modalities is implemented via either the Dorfman-Berbaum-Metz or the Obuchowski-Rockette method. Also implemented is single-treatment analysis, which allows comparison of the performance of a group of radiologists to a specified value, or comparison of AI to a group of radiologists interpreting the same cases. Crossed-modality analysis is implemented, wherein there are two crossed treatment factors and the aim is to determine performance in each treatment factor averaged over all levels of the second factor. Sample size estimation tools are provided for ROC and FROC studies; these use estimates of the relevant variances from a pilot study to predict the required numbers of readers and cases in a pivotal study to achieve the desired power. Utility and data file manipulation functions allow data to be read in any of the currently used input formats, including Excel, and the results of the analysis can be viewed in text or Excel output files. The methods are illustrated with several included datasets from the author's collaborations. This update includes improvements to the code, some resulting from user-reported bugs and new feature requests, and others discovered during ongoing testing and code simplification.
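Purely to illustrate the ROC definition above (plain base R, not this package's interface), empirical operating points can be computed directly from per-image ratings; the ratings below are made-up values.

# Made-up single ratings per image; higher rating = more suspicious.
ratings_nondiseased <- c(1, 2, 2, 3, 1, 4, 2)
ratings_diseased    <- c(3, 4, 5, 2, 5, 4, 3)
thresholds <- sort(unique(c(ratings_nondiseased, ratings_diseased)), decreasing = TRUE)
fpf <- sapply(thresholds, function(t) mean(ratings_nondiseased >= t))  # false positive fraction
tpf <- sapply(thresholds, function(t) mean(ratings_diseased    >= t))  # true positive fraction
plot(c(0, fpf, 1), c(0, tpf, 1), type = "b",
     xlab = "False positive fraction", ylab = "True positive fraction")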
R access to hundreds of millions of data series from the DBnomics API (<https://db.nomics.world/>).
Robust tail dependence estimation for bivariate models. This package is based on two papers by the authors: 'Robust and bias-corrected estimation of the coefficient of tail dependence' and 'Robust and bias-corrected estimation of probabilities of extreme failure sets'. This work was supported by a research grant (VKR023480) from VILLUM FONDEN and an international project for scientific cooperation (PICS-6416).
Features the multiple polynomial quadratic sieve (MPQS) algorithm for factoring large integers and a vectorized factoring function that returns the complete factorization of an integer. The MPQS is based on the seminal work of Carl Pomerance (1984) <doi:10.1007/3-540-39757-4_17> along with the modification of multiple polynomials introduced by Peter Montgomery and J. Davis as outlined by Robert D. Silverman (1987) <doi:10.1090/S0025-5718-1987-0866119-8>. Utilizes the C library GMP (GNU Multiple Precision Arithmetic). For smaller integers, a simple elliptic curve algorithm is attempted, followed by a constrained version of Pollard's rho algorithm. The Pollard's rho algorithm is the same algorithm used by the factorize function in the gmp package.
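As a minimal base-R sketch of the classic Pollard's rho idea (illustrative only; the package's GMP-based routine differs in detail and handles arbitrarily large integers):

gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
pollard_rho <- function(n, c = 1) {
  # Floyd cycle detection on x -> x^2 + c (mod n); only reliable for moderate n
  # here because base-R doubles lose exactness beyond 2^53.
  if (n %% 2 == 0) return(2)
  g <- function(x) (x * x + c) %% n
  x <- 2; y <- 2; d <- 1
  while (d == 1) {
    x <- g(x)
    y <- g(g(y))
    d <- gcd(abs(x - y), n)
  }
  if (d == n) NA else d   # NA: no factor found with this c, retry with another
}
pollard_rho(8051)         # 97 (8051 = 83 * 97)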
This package provides four Boolean matrix factorization (BMF) methods. BMF has many applications, such as data mining and categorical data analysis. BMF is also known as Boolean matrix decomposition (BMD) and has been shown to be an NP-hard problem. Currently implemented methods are Asso (Miettinen, Pauli and others (2008) <doi:10.1109/TKDE.2008.53>), GreConD (R. Belohlavek, V. Vychodil (2010) <doi:10.1016/j.jcss.2009.05.002>), GreConDPlus (R. Belohlavek, V. Vychodil (2010) <doi:10.1016/j.jcss.2009.05.002>), and topFiberM (A. Desouki, M. Roeder, A. Ngonga (2019) <arXiv:1903.10326>).
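To make the notion concrete, here is a tiny base-R check of what a Boolean factorization asserts (illustrative only, not this package's interface): the Boolean product ORs the ANDs of the factor entries, i.e. it is the 0/1 matrix product thresholded at zero.

U <- matrix(c(1, 0,
              1, 1,
              0, 1), nrow = 3, byrow = TRUE)   # left Boolean factor
V <- matrix(c(1, 0, 1,
              0, 1, 1), nrow = 2, byrow = TRUE) # right Boolean factor
A <- (U %*% V) > 0          # Boolean product of the two factors
storage.mode(A) <- "integer"
A                           # the matrix the factorization reconstructs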
Risk-related information (like the prevalence of conditions, the sensitivity and specificity of diagnostic tests, or the effectiveness of interventions or treatments) can be expressed in terms of frequencies or probabilities. By providing a toolbox of corresponding metrics and representations, riskyr computes, translates, and visualizes risk-related information in a variety of ways. Adopting multiple complementary perspectives provides insights into the interplay between key parameters and renders teaching and training programs on risk literacy more transparent (see <doi:10.3389/fpsyg.2020.567817> for details).
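As a plain base-R illustration of the kind of translation involved (formulas only; this is not riskyr's interface), predictive values follow from prevalence, sensitivity, and specificity via Bayes' rule; the numbers below are made up.

prev <- 0.01; sens <- 0.80; spec <- 0.95   # illustrative values
ppv <- (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))       # positive predictive value
npv <- (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev) # negative predictive value
round(c(PPV = ppv, NPV = npv), 3)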
This package provides functions to manipulate rational functions, including basic arithmetic operators, derivatives, and integrals with EXPLICIT forms.
An R interface for libeemd (Luukko, Helske, Räsänen, 2016) <doi:10.1007/s00180-015-0603-9>, a C library of highly efficient parallelizable functions for performing the ensemble empirical mode decomposition (EEMD), its complete variant (CEEMDAN), the regular empirical mode decomposition (EMD), and bivariate EMD (BEMD). Due to possible portability issues, the CRAN version no longer supports OpenMP; an OpenMP-enabled version can be installed from GitHub: <https://github.com/helske/Rlibeemd/>.
An interface to the BaM (Bayesian Modeling) engine, a Fortran-based executable aimed at estimating a model with a Bayesian approach and using it for prediction, with a particular focus on uncertainty quantification. Classes are defined for the various building blocks of BaM inference (model, data, error models, Markov Chain Monte Carlo (MCMC) samplers, predictions). The typical usage is as follows: (1) specify the model to be estimated; (2) specify the inference setting (dataset, parameters, error models...); (3) perform Bayesian-MCMC inference; (4) read, analyse and use MCMC samples; (5) perform prediction experiments. Technical details are available (in French) in Renard (2017) <https://hal.science/hal-02606929v1>. Examples of applications include Mansanarez et al. (2019) <doi:10.1029/2018WR023389>, Le Coz et al. (2021) <doi:10.1002/hyp.14169>, Perret et al. (2021) <doi:10.1029/2020WR027745>, Darienzo et al. (2021) <doi:10.1029/2020WR028607> and Perret et al. (2023) <doi:10.1061/JHEND8.HYENG-13101>.
Weave and tangle drivers for Sweave extending the standard drivers. RweaveExtraLatex and RtangleExtra provide options to completely ignore code chunks on weaving, tangling, or both. Chunks ignored on weaving are not parsed, yet are written out verbatim on tangling. Chunks ignored on tangling may be evaluated as usual on weaving, but are completely left out of the tangled scripts. The driver RtangleExtra also provides options to control the separation between code chunks in the tangled script, and to specify the extension of the file name (or remove it entirely) when splitting is selected.
This package contains functions for analysing relative survival data, including nonparametric estimators of net (marginal relative) survival, the relative survival ratio and crude mortality; methods for fitting and checking additive and multiplicative regression models; a transformation approach; and methods for working with population mortality tables. The methods are described in Pohar Perme and Pavlic (2018) <doi:10.18637/jss.v087.i08>.
Conduct simulations of the Response Adaptive Block Randomization (RABR) design to evaluate its type I error rate, power and operating characteristics for binary and continuous endpoints. For more details of the proposed method, please refer to Zhan et al. (2021) <doi:10.1002/sim.9104>.
Assists in statistical model building to find optimal and semi-optimal higher-order interactions and best subsets. Uses lm(), glm(), and other R functions to fit models generated by a feasible solution algorithm. Discussed in 'Subset Selection in Regression', A. Miller (2002), and applied and explained for least median of squares in Hawkins (1993) <doi:10.1016/0167-9473(93)90246-P>. The feasible solution algorithm produces model forms of a specified type that can include fixed variables, higher-order interactions, and their lower-order terms.
The R equivalent of nodemon. Watches specified directories for file changes and reruns a designated R script when changes are detected. It's designed to automate the process of reloading your R applications during development, similar to nodemon for Node.js.
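A minimal generic sketch of the watch-and-rerun idea in base R (this is not the package's API): poll file modification times and re-source the script when anything under the watched directory changes.

watch_and_run <- function(dir, script, interval = 1) {
  snapshot <- function() file.mtime(list.files(dir, recursive = TRUE, full.names = TRUE))
  last <- snapshot()
  repeat {
    Sys.sleep(interval)
    now <- snapshot()
    if (!identical(now, last)) {
      last <- now
      message("Change detected, rerunning ", script)
      try(source(script))   # keep watching even if the script errors
    }
  }
}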
This package provides a programmatic interface to the Web Service methods provided by the Global Biodiversity Information Facility (GBIF; <https://www.gbif.org/developer/summary>). GBIF is a database of species occurrence records from sources all over the globe. rgbif includes functions for searching for taxonomic names, retrieving information on data providers, getting species occurrence records, getting counts of occurrence records, and using the GBIF tile map service to make rasters summarizing huge amounts of data.
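A hedged usage sketch (assuming the occ_search() entry point; argument names are from memory and should be checked against the package documentation):

library(rgbif)
# Retrieve a small batch of occurrence records for a species name.
res <- occ_search(scientificName = "Ursus americanus", limit = 20)
res$data   # occurrence records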
Seamless extraction of river networks from digital elevation model data. The package allows analysis of digital elevation models that can be either externally provided or downloaded from open-source repositories (thus interfacing with the elevatr package). Extraction is performed via the D8 flow direction algorithm of TauDEM (Terrain Analysis Using Digital Elevation Models), thus interfacing with the traudem package. Resulting river networks are compatible with functions from the OCNet package. See Carraro (2023) <doi:10.5194/hess-27-3733-2023> for a presentation of the package.
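Purely as an illustration of the D8 idea (not TauDEM's or this package's implementation): each cell drains to whichever of its eight neighbours gives the steepest descent.

d8_direction <- function(elev) {
  nr <- nrow(elev); nc <- ncol(elev)
  dir <- matrix(NA_integer_, nr, nc)
  offsets <- expand.grid(di = -1:1, dj = -1:1)
  offsets <- offsets[!(offsets$di == 0 & offsets$dj == 0), ]  # the 8 neighbours
  for (i in seq_len(nr)) for (j in seq_len(nc)) {
    best <- 0; best_k <- NA_integer_
    for (k in seq_len(nrow(offsets))) {
      ni <- i + offsets$di[k]; nj <- j + offsets$dj[k]
      if (ni < 1 || ni > nr || nj < 1 || nj > nc) next
      dist  <- sqrt(offsets$di[k]^2 + offsets$dj[k]^2)
      slope <- (elev[i, j] - elev[ni, nj]) / dist
      if (slope > best) { best <- slope; best_k <- k }
    }
    dir[i, j] <- best_k   # index of the steepest-descent neighbour (NA = pit or flat)
  }
  dir
}
elev <- matrix(c(9, 8, 7,
                 8, 6, 5,
                 7, 5, 3), nrow = 3, byrow = TRUE)  # toy elevation grid
d8_direction(elev)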
Supports concordances in R Markdown documents. This currently allows the original source location in the .Rmd file of errors detected by HTML tidy to be found more easily, and potentially allows forward and reverse search in HTML and LaTeX documents produced from R Markdown. The LaTeX support has been included in the most recent development version of the patchDVI package.
Functions to read and write the Stata file format.
The quantitative measurement and detection of molecules by HPLC should be carried out via an accurate description of chromatographic peaks. This package implements non-linear fitting of a modified Gaussian model with a parabolic variance (PVMG) to obtain the retention time and height at the peak maximum. It also includes the traditional Van Deemter approach and two alternative approaches for characterizing chromatographic columns.
Bayes estimation of probit choice models in cross-sectional and panel settings. The package can analyze binary, multivariate, ordered, and ranked choices, as well as heterogeneity of choice behavior among deciders. The main functionality includes model fitting via Gibbs sampling, tools for convergence diagnostics, choice data simulation, in-sample and out-of-sample choice prediction, and model selection using information criteria and Bayes factors. The latent class model extension facilitates preference-based decider classification, where the number of latent classes can be inferred via the Dirichlet process or a weight-based updating heuristic. This allows for flexible modeling of choice behavior without the need to impose structural constraints. For a reference on the method, see Oelschlaeger and Bauer (2021) <https://trid.trb.org/view/1759753>.
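A generic base-R illustration of the binary probit data-generating process (not this package's simulator or API): a decider chooses the alternative when latent utility X %*% beta + eps exceeds zero, with standard normal eps; all values below are made up.

set.seed(1)
n    <- 500
X    <- cbind(1, rnorm(n))   # intercept plus one covariate
beta <- c(-0.5, 1.2)         # illustrative "true" coefficients
eps  <- rnorm(n)             # standard normal utility shock
y    <- as.integer(X %*% beta + eps > 0)
# A frequentist sanity check of the same model with base R:
glm(y ~ X[, 2], family = binomial(link = "probit"))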
Encrypt R objects to a raw vector or file using modern cryptographic techniques. Password-based key derivation is with Argon2 (<https://en.wikipedia.org/wiki/Argon2>). Objects are serialized and then encrypted using XChaCha20-Poly1305 (<https://en.wikipedia.org/wiki/ChaCha20-Poly1305>) which follows RFC 8439 for authenticated encryption (<https://en.wikipedia.org/wiki/Authenticated_encryption>). Cryptographic functions are provided by the included monocypher C library (<https://monocypher.org>).