Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
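For example, from R (using the curl and jsonlite packages; the base URL below is a placeholder, substitute the actual host serving /api/packages):

  library(curl)
  library(jsonlite)

  # Placeholder base URL -- replace with the real host.
  base_url <- "https://example.org"

  # Fetch one page of results for the query "hello".
  res <- curl_fetch_memory(paste0(base_url, "/api/packages?search=hello&page=1&limit=20"))

  # The JSON body holds the matching packages ...
  packages <- fromJSON(rawToChar(res$content))

  # ... while pagination details (e.g. the number of pages) arrive as response headers.
  parse_headers(res$headers)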
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Construct an explainable nomogram for a machine learning (ML) prediction model so that the model remains available beyond a computer application, particularly in situations where a computer, a mobile phone, an internet connection, or access to the application is unreliable. This package enables nomogram creation for any ML prediction model, something conventionally limited to linear/logistic regression models. The nomogram can also display an explainability value per feature for each individual, e.g., the Shapley additive explanation value. However, this package only supports nomogram creation for models using categorical predictors, either without or with a single numerical predictor. Detailed methodologies and examples are documented in our vignette, available at <https://htmlpreview.github.io/?https://github.com/herdiantrisufriyana/rmlnomogram/blob/master/doc/ml_nomogram_exemplar.html>.
Generate utils::globalVariables() from roxygen2 @global and @autoglobal tags.
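A minimal sketch of how the @autoglobal tag might be used on a function that refers to data columns non-standardly (the function and column names here are made up for illustration):

  #' Summarise miles per gallon by cylinder count
  #'
  #' @autoglobal
  #' @export
  mpg_by_cyl <- function(data) {
    # `cyl` and `mpg` are bare column names; @autoglobal scans the function body
    # so a matching utils::globalVariables() entry can be generated, silencing
    # "no visible binding" notes from R CMD check.
    dplyr::summarise(dplyr::group_by(data, cyl), mean_mpg = mean(mpg))
  }

Running the roxygen documentation step with this package's roclet enabled would then emit a utils::globalVariables(c("cyl", "mpg")) declaration, typically into a generated globals file, though the exact location depends on the package's configuration.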
An integrated solution to perform a series of text mining tasks, such as importing and cleaning a corpus, and analyses like term and document counts, lexical summaries, term co-occurrences, document similarity measures, graphs of terms, correspondence analysis, and hierarchical clustering. Corpora can be imported from spreadsheet-like files and directories of raw text files, as well as from Dow Jones Factiva, LexisNexis, Europresse, and Alceste files.
Create plots to visualize the alignment of a corporate lending financial portfolio to climate change scenarios based on climate indicators (production and emission intensities) across key climate-relevant sectors of the PACTA methodology (Paris Agreement Capital Transition Assessment; <https://www.transitionmonitor.com/>). Financial institutions use PACTA to study how their capital allocation decisions align with climate change mitigation goals.
An algorithm is provided which can be used to determine an objective threshold for signal-noise separation in large random matrices (correlation matrices, mutual information matrices, network adjacency matrices). The package makes use of results from Random Matrix Theory (RMT). The algorithm increases a candidate threshold monotonically, recording the eigenvalue spacing distribution of the thresholded matrix at each step. According to RMT, that distribution undergoes a characteristic change when the threshold properly separates signal from noise. By using the algorithm, the modular structure of a matrix - or of the corresponding network - can be unraveled.
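The core idea can be sketched in base R, independently of the package's own interface: threshold the matrix at increasing values and track the distribution of spacings between adjacent eigenvalues (the crude mean normalisation below stands in for proper spectral unfolding):

  # Toy illustration with base R, not the package's own functions.
  set.seed(1)
  m <- matrix(rnorm(200 * 200), 200)
  corr <- cor(m)                       # a correlation matrix to be thresholded

  spacing_at <- function(A, thr) {
    A[abs(A) < thr] <- 0               # suppress entries below the candidate threshold
    ev <- sort(eigen(A, symmetric = TRUE, only.values = TRUE)$values)
    diff(ev) / mean(diff(ev))          # normalised nearest-neighbour eigenvalue spacings
  }

  # Increment the candidate threshold and watch the spacing distribution change shape.
  for (thr in seq(0.05, 0.5, by = 0.05)) {
    s <- spacing_at(corr, thr)
    cat(sprintf("threshold %.2f: mean spacing %.3f, variance %.3f\n", thr, mean(s), var(s)))
  }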
The Nearest Neighbor Descent method for finding approximate nearest neighbors, by Dong and co-workers (2011) <doi:10.1145/1963405.1963487>. Based on the Python package PyNNDescent <https://github.com/lmcinnes/pynndescent>.
This package provides a collection of methods for the robust analysis of univariate and multivariate functional data, possibly in high-dimensional cases, and hence with attention to computational efficiency and simplicity of use. See the R Journal publication of Ieva et al. (2019) <doi:10.32614/RJ-2019-032> for an in-depth presentation of the roahd package. See Aleman-Gomez et al. (2021) <arXiv:2103.08874> for details about the concept of depthgram.
This package provides efficient functions for detecting multiple change points in multidimensional time series. The models can be piecewise constant or polynomial. Adaptive threshold selection methods are available, see Fan and Wu (2024) <arXiv:2403.00600>.
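As a rough, base-R illustration of the change point setting (a simple CUSUM scan for a single change in mean; the package's adaptive, multidimensional methods go well beyond this):

  # Toy piecewise-constant signal with one change point at t = 120.
  set.seed(42)
  x <- c(rnorm(120, mean = 0), rnorm(80, mean = 1.5))
  n <- length(x)

  # CUSUM statistic: scaled difference in means left/right of each candidate split;
  # its maximiser estimates the change point location.
  cusum <- sapply(1:(n - 1), function(k) {
    sqrt(k * (n - k) / n) * abs(mean(x[1:k]) - mean(x[(k + 1):n]))
  })
  which.max(cusum)   # estimated change point, close to 120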
Convert REDCap exports into tidy tables for easy handling of REDCap repeat instruments and event arms.
Rcpp bindings to the native C++ implementation of MS Numpress, which provides compression schemes for numeric data from mass spectrometers. The library provides implementations of three different algorithms: one designed to compress first-order smooth data such as retention time or m/z arrays, and two for compressing non-smooth data with lower precision requirements, such as ion count arrays. Refer to the publication (Teleman et al. (2014) <doi:10.1074/mcp.O114.037879>) for more details.
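As a rough sketch of why the first scheme works: smooth series such as retention times are nearly linear locally, so residuals against a linear extrapolation are tiny integers after scaling. The R snippet below only illustrates that idea; the actual MS Numpress encoding adds its own fixed-point scaling conventions and variable-length byte packing.

  # Retention times are smooth, so extrapolation residuals are tiny and compress well.
  rt <- seq(0, 60, by = 0.01) + rnorm(6001, sd = 1e-4)
  scaled <- round(rt * 1e5)                       # illustrative fixed-point scaling factor

  # Residuals against the linear extrapolation 2*x[i-1] - x[i-2].
  pred <- 2 * scaled[-c(1, length(scaled))] - scaled[1:(length(scaled) - 2)]
  residuals <- scaled[-(1:2)] - pred
  range(residuals)                                # small integers, cheap to store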
R interface to the LTP-Cloud service for Natural Language Processing in Chinese (<http://www.ltp-cloud.com/>).
This package provides a novel ensemble method employing Support Vector Machines (SVMs) as base learners. This powerful ensemble model is designed for both classification (Ara A. et al., 2021) <doi:10.6339/21-JDS1014> and regression (Ara A. et al., 2021) <doi:10.1016/j.eswa.2022.117107> problems, offering versatility and robust performance across different datasets compared with other established methods such as Random Forests (Maia M. et al., 2021) <doi:10.6339/21-JDS1025>.
The Drift-Diffusion Model (DDM) has been widely used to model binary decision-making tasks, and much research studies the relationship between DDM parameters and other characteristics of the subject. This package uses RStan to perform generalized linear regression analysis over DDM parameters via a single Bayesian hierarchical model. Compared to estimating DDM parameters followed by a separate regression model, RegDDM reduces bias and improves statistical power.
Rectangle packing is a packing problem where rectangles are placed into a larger rectangular region (without overlapping) in order to maximise the use of space. Rectangles are packed using the skyline heuristic as discussed in Lijun et al. (2011), A Skyline-Based Heuristic for the 2D Rectangular Strip Packing Problem <doi:10.1007/978-3-642-21827-9_29>. A function is also included for determining a good small-sized box for containing a given set of rectangles.
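A toy, discretised sketch of the skyline idea in R (keep a profile of column heights and rest each rectangle on the lowest span it fits); this is an illustration only, not the package's implementation or interface:

  # Skyline as a vector of column heights across a strip of width 10.
  strip_width <- 10
  skyline <- rep(0, strip_width)
  rects <- list(c(w = 4, h = 3), c(w = 3, h = 2), c(w = 5, h = 1))

  for (r in rects) {
    # Try every horizontal position; the rectangle rests on the tallest column it spans.
    tops <- sapply(1:(strip_width - r["w"] + 1),
                   function(x) max(skyline[x:(x + r["w"] - 1)]))
    x <- which.min(tops)                       # leftmost lowest placement
    skyline[x:(x + r["w"] - 1)] <- tops[x] + r["h"]
  }
  max(skyline)                                 # resulting packing height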
ViennaCL is a free open-source linear algebra library for computations on many-core architectures (GPUs, MIC) and multi-core CPUs. The library is written in C++ and supports CUDA, OpenCL, and OpenMP (including switches at runtime). I have placed these libraries in this package as a more efficient distribution system for CRAN. The idea is that you can write a package that depends on the ViennaCL library and yet you do not need to distribute a copy of this code with your package.
This package contains functions for analysing relative survival data, including nonparametric estimators of net (marginal relative) survival, the relative survival ratio, and crude mortality; methods for fitting and checking additive and multiplicative regression models; a transformation approach; and methods for dealing with population mortality tables. The work is described in Pohar Perme and Pavlic (2018) <doi:10.18637/jss.v087.i08>.
Designed to create and display complex tables with R, the rtables R package allows cells in an rtables object to contain any high-dimensional data structure, which can then be displayed with cell-specific formatting instructions. Additionally, the rtables.officer package supports export formats related to the Microsoft Office software suite, including Microsoft Word ('docx') and Microsoft PowerPoint ('pptx').
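A minimal usage sketch; the exporter name export_as_docx() and its file argument are assumed here, so verify them against the package documentation before relying on them:

  library(rtables)
  library(rtables.officer)

  # Build a small rtables table, then hand it to the Word exporter.
  tbl <- build_table(analyze(basic_table(), "Sepal.Length"), iris)
  export_as_docx(tbl, file = "iris_summary.docx")   # assumed exporter name; see package docs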
Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Candes, Li, Ma, and Wright (2011, 'Robust principal component analysis?', Journal of the ACM 58(3), 11) prove that we can recover each component individually under suitable assumptions. It is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This package implements this decomposition algorithm, providing a Robust PCA approach.
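The two proximal operators at the heart of typical Principal Component Pursuit solvers can be sketched in a few lines of R (soft-thresholding for the sparse part, singular-value thresholding for the low-rank part); this illustrates the general approach rather than the package's exact iteration:

  # Soft-thresholding: the proximal operator of the (weighted) L1 norm.
  soft <- function(X, tau) sign(X) * pmax(abs(X) - tau, 0)

  # Singular-value thresholding: the proximal operator of the nuclear norm.
  svt <- function(X, tau) {
    s <- svd(X)
    s$u %*% diag(soft(s$d, tau)) %*% t(s$v)
  }

  # One alternating step on M = L + S with weight lambda on the L1 term.
  M <- tcrossprod(rnorm(10), rnorm(10)) + matrix(rnorm(100, sd = 0.1), 10)
  lambda <- 1 / sqrt(10)
  L <- svt(M, 1)              # update the low-rank part (sparse part initialised to zero)
  S <- soft(M - L, lambda)    # update the sparse part given the low-rank part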
Encrypt R objects to a raw vector or file using modern cryptographic techniques. Password-based key derivation is with Argon2 (<https://en.wikipedia.org/wiki/Argon2>). Objects are serialized and then encrypted using XChaCha20-Poly1305 (<https://en.wikipedia.org/wiki/ChaCha20-Poly1305>) which follows RFC 8439 for authenticated encryption (<https://en.wikipedia.org/wiki/Authenticated_encryption>). Cryptographic functions are provided by the included monocypher C library (<https://monocypher.org>).
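A minimal usage sketch, assuming exported helpers named encrypt() and decrypt() that accept a key or password argument (names assumed for illustration; consult the package documentation):

  # Assumed helper names: encrypt()/decrypt() with a password-derived key (Argon2).
  secret <- list(user = "alice", scores = c(0.93, 0.81))
  blob <- encrypt(secret, key = "correct horse battery staple")   # raw vector or file
  identical(decrypt(blob, key = "correct horse battery staple"), secret)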
External jars required for package RWeka.
Adaptation of the Matlab tsEVA toolbox developed by Lorenzo Mentaschi, available at <https://github.com/menta78/tsEva>. It contains an implementation of the Transformed-Stationary (TS) methodology for non-stationary extreme value analysis (EVA) as described in Mentaschi et al. (2016) <doi:10.5194/hess-20-3527-2016>. In summary, this approach consists of: (i) transforming a non-stationary time series into a stationary one to which stationary extreme value theory can be applied; and (ii) reverse-transforming the result into a non-stationary extreme value distribution. RtsEva offers several options for trend estimation (mean, extremes, seasonal) and contains multiple plotting functions displaying different aspects of the non-stationarity of extremes.
Finds the k nearest neighbours for every point in a given dataset using Jose Luis Blanco's nanoflann library. There is support for exact searches, fixed radius searches with kd-trees, and two distance metrics, Euclidean and Manhattan. For more information see <https://github.com/jlblancoc/nanoflann>. Also, the nanoflann library is exported and ready to be used via the LinkingTo mechanism.
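A small usage sketch, assuming the exported search function is named knn() and returns nearest-neighbour index and distance matrices (verify against the package's own help pages):

  # Assumed interface: knn(data, query, k) -> nearest-neighbour indices and distances.
  data  <- matrix(runif(1000 * 3), ncol = 3)   # 1000 reference points in 3D
  query <- matrix(runif(10 * 3), ncol = 3)     # 10 query points
  res <- knn(data, query, k = 5)
  str(res)   # expected: an integer index matrix plus a matrix of (Euclidean) distances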
This package provides a tool for multiply imputing missing data using MIDAS, a deep learning method based on denoising autoencoder neural networks (see Lall and Robinson, 2022; <doi:10.1017/pan.2020.49>). This algorithm offers significant accuracy and efficiency advantages over other multiple imputation strategies, particularly when applied to large datasets with complex features. Alongside interfacing with Python to run the core algorithm, this package contains functions for processing data before and after model training, running imputation model diagnostics, generating multiple completed datasets, and estimating regression models on these datasets. For more information see Lall and Robinson (2023) <doi:10.18637/jss.v107.i09>.
Collection of functions to evaluate sequences, decode hidden states and estimate parameters from a single or multiple sequences of a discrete time Hidden Markov Model. The observed values can be modeled by a multinomial distribution for categorical/labeled emissions, a mixture of Gaussians for continuous data and also a mixture of Poissons for discrete values. It includes functions for random initialization, simulation, backward or forward sequence evaluation, Viterbi or forward-backward decoding and parameter estimation using an Expectation-Maximization approach.
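To make the decoding step concrete, here is a compact base-R Viterbi pass for a two-state HMM with categorical emissions; it illustrates the algorithm itself rather than this package's interface:

  # Two hidden states, three possible symbols; log probabilities for numerical stability.
  log_A  <- log(matrix(c(0.8, 0.2, 0.3, 0.7), 2, byrow = TRUE))              # transitions
  log_B  <- log(matrix(c(0.6, 0.3, 0.1, 0.1, 0.4, 0.5), 2, byrow = TRUE))    # emissions
  log_pi <- log(c(0.5, 0.5))
  obs <- c(1, 3, 2, 3, 3)

  n <- length(obs); k <- 2
  delta <- matrix(-Inf, k, n); psi <- matrix(0L, k, n)
  delta[, 1] <- log_pi + log_B[, obs[1]]
  for (t in 2:n) {
    for (j in 1:k) {
      cand <- delta[, t - 1] + log_A[, j]     # best score of reaching state j at time t
      psi[j, t]   <- which.max(cand)
      delta[j, t] <- max(cand) + log_B[j, obs[t]]
    }
  }
  # Backtrack the most probable state path.
  path <- integer(n); path[n] <- which.max(delta[, n])
  for (t in (n - 1):1) path[t] <- psi[path[t + 1], t + 1]
  path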