Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
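For example, a minimal Python sketch of a client for this endpoint (the base URL and the search_packages helper are illustrative placeholders, not part of the API):

import requests  # third-party HTTP client

BASE_URL = "https://example.org"  # placeholder; use your instance's address

def search_packages(query, page=1, limit=20):
    """Query the package search API and return the parsed JSON body."""
    response = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": query, "page": page, "limit": limit},
    )
    response.raise_for_status()
    # Pagination metadata (number of pages, etc.) arrives in the headers.
    print(dict(response.headers))
    return response.json()

results = search_packages("gcc@10")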
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Emulation of an application originally created by Paul Pukite: Computer Aided Rate Modeling and Simulation. See Jan Pukite and Paul Pukite (1998, ISBN 978-0-7803-3482) and William J. Stewart (1994, ISBN 0-691-03699-3).
This package implements a changepoint-aware ensemble forecasting algorithm that combines Theta, TBATS (Trigonometric, Box-Cox transformation, ARMA errors, Trend, Seasonal components), and ARFIMA (AutoRegressive, Fractionally Integrated, Moving Average) using a product-of-experts approach for robust probabilistic prediction.
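As a hedged illustration of the product-of-experts step (a sketch of the general idea, not this package's code): when each expert issues a Gaussian forecast density, their normalized product is again Gaussian, with precision equal to the sum of the experts' precisions and mean equal to the precision-weighted average of their means.

import numpy as np

def product_of_gaussian_experts(means, sds):
    """Combine independent Gaussian forecast densities as a product of experts."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(sds, dtype=float) ** 2
    combined_precision = precisions.sum()
    # Precision-weighted mean; uncertainty shrinks as the experts agree.
    combined_mean = (precisions * means).sum() / combined_precision
    return combined_mean, np.sqrt(1.0 / combined_precision)

# e.g. pooling illustrative Theta, TBATS, and ARFIMA forecasts:
mean, sd = product_of_gaussian_experts([102.0, 98.5, 100.4], [4.0, 5.5, 3.2])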
This package implements non-parametric analyses for clustered binary and multinomial data. The elements of each cluster are assumed exchangeable, and an identical joint distribution (also known as marginal compatibility, or reproducibility) is assumed for clusters of different sizes. A trend test based on stochastic ordering is implemented. See Szabo A, George EO (2010) <doi:10.1093/biomet/asp077> and George EO, Cheon K, Yuan Y, Szabo A (2016) <doi:10.1093/biomet/asw009>.
Implements a tableGrob object as a clickable image map. The clickableImageMap package is designed to be more convenient and more configurable than the edit() function. Limitations I have encountered with edit() are that one cannot control (1) positioning, (2) size, or (3) the appearance and formatting of fonts. In contrast, when the table is implemented as a tableGrob, all of these features are controllable. In particular, the ggplot2 grid system allows exact positioning of the table relative to other graphics, etc.
Clustering methods, which (if asked) can provide step-by-step explanations of the algorithms used, as described in Ezugwu et al. (2022) <doi:10.1016/j.engappai.2022.104743>; and datasets to test them on, which highlight the strengths and weaknesses of each technique, as presented in the clustering section of scikit-learn (Pedregosa et al., 2011) <https://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html>.
This package performs Correspondence Analysis on the given dataframe and plots the results in a scatterplot that emphasizes the geometric interpretation of the analysis, following Borg and Groenen (2005) and Yelland (2010). It is particularly useful for highlighting the relationships between a selected row (or column) category and the column (or row) categories. See Borg and Groenen (2005, ISBN:978-0-387-28981-6); Yelland (2010) <doi:10.3888/tmj.12-4>.
This package provides functions for computing the density and the log-likelihood function of closed-skew normal variates, and for generating random vectors sampled from this distribution. See Gonzalez-Farias, G., Dominguez-Molina, J., and Gupta, A. (2004). The closed skew normal distribution, Skew-elliptical distributions and their applications: a journey beyond normality, Chapman and Hall/CRC, Boca Raton, FL, pp. 25-42.
Seeks significant cutoff values for a continuous variable, which is then transformed into a classification, for linear regression, logistic regression, logrank analysis, and Cox regression. First, all cutoff combinations are obtained with the combn() function. Then the n.per argument (short for total number percentage) is used to remove combinations that produce overly small data groups. In logistic regression, Cox regression, and logrank analysis, the p.per argument (patient percentage) additionally filters out combinations with too low a proportion of patients in a group. Finally, the p values from the regression results are used to select the significant combinations and output the relevant parameters. There is no limit to the number of cutoff points, which can be 1, 2, 3, or more. Two methods are provided to adjust the p values: the typical Bonferroni correction and that of Altman et al. (1994) <doi:10.1093/jnci/86.11.829>. Missing values are deleted by the na.omit() function before analysis.
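A rough Python analogue of that search loop (illustrative only: itertools.combinations stands in for combn(), a one-way ANOVA stands in for the regression models, plain Bonferroni adjusts the p values, and find_cutoffs is a hypothetical name):

from itertools import combinations

import numpy as np
from scipy import stats

def find_cutoffs(x, y, n_cuts=2, n_per=0.1, alpha=0.05):
    """Enumerate cutoff combinations for x; keep those significant for y."""
    candidates = np.unique(x)[1:-1]       # interior values as candidate cutpoints
    combos = list(combinations(candidates, n_cuts))
    results = []
    for cuts in combos:
        groups = np.digitize(x, cuts)     # assign observations to groups
        sizes = np.bincount(groups, minlength=n_cuts + 1)
        if sizes.min() < n_per * len(x):  # drop combinations with tiny groups
            continue
        p = stats.f_oneway(*(y[groups == g] for g in range(n_cuts + 1))).pvalue
        if p * len(combos) < alpha:       # Bonferroni-adjusted significance
            results.append((cuts, p))
    return results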
This package provides function declarations and inline function definitions that facilitate communication between R and the Eigen C++ library for linear algebra and scientific computing.
This package provides functions and a workflow to easily and powerfully calculate the specificity, sensitivity, and ROC curves of biomarker combinations. It allows users to rank and select multi-marker signatures, as well as to find the best-performing sub-signatures, now also from single-cell RNA-seq datasets. The method was first published as a Shiny app and described in Mazzara et al. (2017) <doi:10.1038/srep45477>, further described in Bombaci & Rossi (2019) <doi:10.1007/978-1-4939-9164-8_16>, and widely expanded as a package as presented in the bioRxiv preprint Ferrari et al. <doi:10.1101/2022.01.17.476603>.
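As a generic illustration of scoring one biomarker combination's ROC performance (scikit-learn shown for familiarity; this is not the package's own ranking workflow, and the data are simulated):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # two simulated biomarkers
y = (X.sum(axis=1) + rng.normal(size=200) > 0).astype(int)

# Combine the two markers into one score, then evaluate the combination.
combo_score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, combo_score)  # 1 - specificity, sensitivity
print("AUC of the combination:", roc_auc_score(y, combo_score))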
Classification using Richard A. Harshman's Parallel Factor Analysis-1 (Parafac) model or Parallel Factor Analysis-2 (Parafac2) model fit to a three-way or four-way data array. See Harshman and Lundy (1994): <doi:10.1016/0167-9473(94)90132-5>. Uses component weights from one mode of a Parafac or Parafac2 model as features to tune parameters for one or more classification methods via a k-fold cross-validation procedure. Allows for constraints on different tensor modes. Supports penalized logistic regression, support vector machine, random forest, feed-forward neural network, regularized discriminant analysis, and gradient boosting machine. Supports binary and multiclass classification. Predicts class labels or class probabilities and calculates multiple classification performance measures. Implements parallel computing via the parallel, doParallel, and doRNG packages.
This is an opinionated wrapper around the python-chess package. It allows users to read and write PGN files as well as create and explore game trees such as the ones seen in chess books.
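For orientation, a minimal sketch using the underlying python-chess library directly (the wrapper's own API is not detailed here, so this shows only the base package):

import chess.pgn  # the python-chess package this wrapper builds on

# Read the first game from a PGN file and replay its main line.
with open("game.pgn") as pgn_file:
    game = chess.pgn.read_game(pgn_file)

board = game.board()
for move in game.mainline_moves():
    board.push(move)

print(board)  # final position of the main line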
Draws systematic samples from a population that follows a linear trend. The function returns a matrix comprising the required samples as its column vectors. The samples produced are highly efficient, and the inter-sample variance is minimal. The scheme will be useful in fields like bioinformatics, where samples are expensive and must reflect the population precisely by possessing the least sampling variance.
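For contrast, plain linear systematic sampling can be sketched as below (a hedged baseline, not this package's optimized scheme; it assumes the population size is a multiple of the sample size):

import numpy as np

def systematic_samples(population, n):
    """Return all k = N/n linear systematic samples as columns of an (n, k) matrix."""
    pop = np.asarray(population)
    k = len(pop) // n                     # sampling interval
    # Sample r takes elements r, r+k, r+2k, ... for each start r.
    return np.column_stack([pop[r::k][:n] for r in range(k)])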
DNA copy number data evaluation using both their initial form (copy number as a noisy function of genomic position) and their approximation by a piecewise-constant function (segmentation), for the purpose of identifying genomic regions where the copy number differs from the norm.
This package provides functions to perform matching algorithms for causal inference with clustered data, as described in B. Arpino and M. Cannas (2016) <doi:10.1002/sim.6880>. Pure within-cluster and preferential within-cluster matching are implemented. Both algorithms provide causal estimates with cluster-adjusted estimates of standard errors.
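A hedged sketch of the pure within-cluster idea (illustrative, not the package's implementation): each treated unit is greedily matched, without replacement, to the nearest control on a score, considering only controls from its own cluster.

import pandas as pd

def pure_within_cluster_match(df, cluster="cluster", treat="treat", score="ps"):
    """Greedy 1:1 nearest-neighbor matching on `score`, within each cluster."""
    pairs = []
    for _, g in df.groupby(cluster):
        controls = g[g[treat] == 0].copy()
        for i, row in g[g[treat] == 1].iterrows():
            if controls.empty:
                break                      # no controls left in this cluster
            j = (controls[score] - row[score]).abs().idxmin()
            pairs.append((i, j))
            controls = controls.drop(j)    # matching without replacement
    return pairs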
For measurement instruments composed of polytomously scored items, the psychological distances between the response categories are commonly assumed to be equal. According to Muraki, this assumption must be tested; in the examination process, fit indices are obtained and evaluated. This package removes the need for that assumption: the converted scale values of all items in a measurement instrument can be calculated by estimating a separate category parameter set for each item, so the calculations can be made without resorting to a common category parameter set. Through this package, the psychological distances of the items are scaled. Scaling a separate category parameter set for each item differentiates the scores obtained from the items' categories, and the total instrument score of an individual can be calculated according to the scaled item score categories. The package thus allows the position of individuals on the construct measured by an instrument of polytomously scored items to be revealed more accurately. In this way, the results obtained about individuals can be made more sensitive, the differences between individuals can be revealed more accurately, and more accurate evidence can be obtained regarding the psychometric properties of the measurement instruments.
Estimates tree crown scorch from terrestrial lidar scans collected with a RIEGL vz400i. The methods follow those described in Cannon et al. (2025, Fire Ecology 21:71, <doi:10.1186/s42408-025-00420-0>).
This package provides functions to calculate the relative crystallinity of starch by X-ray Diffraction (XRD) and Infrared Spectroscopy (FTIR). Starch is biosynthesized by plants in the form of semicrystalline granules. For XRD, the relative crystallinity is obtained by separating the crystalline peaks from the amorphous scattering region. For FTIR, the relative crystallinity is obtained by fitting a Gaussian holocrystalline peak in the 800-1300 cm-1 region of the FTIR spectrum of starch, which is divided into an amorphous region and a crystalline region. The relative crystallinity of native starch granules varies from 14 to 45 percent. Support from FONDECYT 3150630 and CIPA Conicyt-Regional R08C1002 is gratefully acknowledged.
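In essence, the relative crystallinity is an area ratio. A hedged numeric sketch (assuming the amorphous scattering curve has already been estimated on the same grid; the function name is illustrative):

import numpy as np
from scipy.integrate import trapezoid

def relative_crystallinity(two_theta, intensity, amorphous_background):
    """Relative crystallinity (%) as the crystalline-to-total area ratio."""
    total_area = trapezoid(intensity, two_theta)
    crystalline = np.clip(intensity - amorphous_background, 0.0, None)
    return 100.0 * trapezoid(crystalline, two_theta) / total_area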
Analysis of network community objects with applications to neuroimaging data. There are two main components to this package. The first is the hierarchical multimodal spinglass (HMS) algorithm, which is a novel community detection algorithm specifically tailored to the unique issues within brain connectivity. The other is a suite of semiparametric kernel machine methods that allow for statistical inference to be performed to test for potential associations between these community structures and an outcome of interest (binary or continuous).
This package provides a minimum set of functions to perform compositional data analysis using the log-ratio approach introduced by John Aitchison (1982). The main functions have been implemented in C++ for better performance.
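For instance, the centred log-ratio (clr) transform from the log-ratio approach can be written as follows (a standard textbook formula, not this package's code):

import numpy as np

def clr(x):
    """Centred log-ratio transform: clr(x)_i = log(x_i) - mean(log(x))."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx - logx.mean()           # components of the result sum to zero

z = clr([0.2, 0.3, 0.5])                # e.g. a 3-part composition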
Performs multiple-testing procedures beyond those in p.adjust(), such as the weighted Hochberg method (Tamhane, A. C., & Liu, L., 2008) <doi:10.1093/biomet/asn018>, the ICC-adjusted Bonferroni method (Shi, Q., Pavey, E. S., & Carter, R. E., 2012) <doi:10.1002/pst.1514>, and a new correlation-corrected weighted Hochberg for correlated endpoints.
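For reference, the classical (unweighted) Hochberg step-up procedure that these weighted variants extend can be sketched as follows (the weighted and ICC-adjusted versions follow the cited papers and are not reproduced here):

import numpy as np

def hochberg_reject(pvalues, alpha=0.05):
    """Hochberg step-up: with sorted p_(1) <= ... <= p_(m), reject H_(1..k)
    for the largest k satisfying p_(k) <= alpha / (m - k + 1)."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for k in range(m, 0, -1):            # step up from the largest p value
        if p[order[k - 1]] <= alpha / (m - k + 1):
            reject[order[:k]] = True
            break
    return reject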
High dimensional discriminant analysis with compositional data is performed. The compositional data are first transformed using the alpha-transformation of Tsagris M., Preston S. and Wood A.T.A. (2011) <doi:10.48550/arXiv.1106.1451>, and then the High Dimensional Discriminant Analysis (HDDA) algorithm of Bouveyron C. Girard S. and Schmid C. (2007) <doi:10.1080/03610920701271095> is applied.
Set of generalised tools for the flexible computation of climate related indicators defined by the user. Each method represents a specific mathematical approach which is combined with the possibility to select an arbitrary time period to define the indicator. This enables a wide range of possibilities to tailor the most suitable indicator for each particular climate service application (agriculture, food security, energy, water management, health, ...). This package is intended for sub-seasonal, seasonal and decadal climate predictions, but its methods are also applicable to other time-scales, provided the dimensional structure of the input is maintained. Additionally, the outputs of the functions in this package are compatible with CSTools. This package is described in Pérez-Zanón et al. (2023) <doi:10.1016/j.cliser.2023.100393> and it was developed in the context of the H2020 MED-GOLD (776467) and S2S4E (776787) projects. See Lledó et al. (2019) <doi:10.1016/j.renene.2019.04.135> and Chou et al. (2023) <doi:10.1016/j.cliser.2023.100345> for details.
The broken adaptive ridge estimator for censored data is used to select variables and estimate their coefficients in the semi-parametric accelerated failure time model for right-censored survival data.