Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
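For example, the endpoint can be queried from R with the httr package; this is a minimal sketch, and the host name below is only a placeholder for wherever this service is running.
library(httr)
# Placeholder host; substitute the address of this search service.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
matches <- content(resp)   # parsed list of matching packages (assuming a JSON body)
headers(resp)              # pagination details are carried in the response headers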
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to calculate several ecological indices of individual and population niche width (Araujo's E, clustering and pairwise similarity among individuals, IS, Petraitis W, and Roughgarden's WIC/TNW) to assess individual specialization from resource-use data. Resource use can be quantified by counts of categories, measures of mass or length, or proportions. Monte Carlo resampling procedures are available for hypothesis testing against multinomial null models. Details are provided in Zaccarelli et al. (2013) <doi:10.1111/2041-210X.12079> and associated references.
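For orientation (a standard result from the individual-specialization literature, not specific to this package): Roughgarden's decomposition splits the total niche width into within-individual and between-individual components, TNW = WIC + BIC, and the ratio WIC/TNW is close to 1 when each individual uses the full population niche and approaches 0 as individuals specialize.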
Conversion between attitude representations: DCM, Euler angles, quaternions, and Euler vectors. Also converts between any two Euler angle set types (xyx, yzy, zxz, xzx, yxy, zyz, xyz, yzx, zxy, xzy, yxz, zyx). Fully vectorized code, with warnings/errors for Euler angles (singularity, out of range, invalid angle order), DCMs (not orthogonal, not a proper rotation, determinant outside the tolerance around unity) and Euler vectors (not unit length). Also includes quaternion and other useful functions. Based on SpinCalc by John Fuller and SpinConv by Paolo de Leva.
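As a self-contained illustration of one such conversion (a sketch in plain R, not the package's own functions; a scalar-first unit quaternion q = (w, x, y, z) and an active-rotation convention are assumed, so depending on your convention the transpose may be the DCM you want):
q2dcm <- function(q) {
  # q = c(w, x, y, z), assumed to be a unit quaternion (scalar-first)
  w <- q[1]; x <- q[2]; y <- q[3]; z <- q[4]
  matrix(c(1 - 2 * (y^2 + z^2), 2 * (x * y - z * w), 2 * (x * z + y * w),
           2 * (x * y + z * w), 1 - 2 * (x^2 + z^2), 2 * (y * z - x * w),
           2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x^2 + y^2)),
         nrow = 3, byrow = TRUE)
}
q2dcm(c(1, 0, 0, 0))   # the identity quaternion gives the identity matrix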
Ensemble model for classification, regression, and unsupervised learning, based on a forest of unpruned and randomized binary decision trees. Each tree is grown by sampling, with replacement, a set of variables at each node. Each cut-point is generated randomly, according to the continuous uniform distribution. For each tree, data are either bootstrapped or subsampled. The unsupervised mode introduces clustering, dimension reduction, and variable importance, using a three-layer engine. Random Uniform Forests are mainly aimed at lowering the correlation between trees (or tree residuals), providing a deep analysis of variable importance, and allowing native distributed and incremental learning. The characteristic cut-point step is sketched below.
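The step that gives the method its name can be written in a line of plain R (illustrative only, not the package's internals): a candidate cut-point for a sampled variable is drawn uniformly over that variable's observed range rather than chosen by an optimality criterion.
# Illustrative sketch: a random cut-point for a candidate variable x
random_cutpoint <- function(x) runif(1, min = min(x), max = max(x))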
This package provides a checkbox group input for use in a Shiny application. The checkbox group has a head checkbox that allows checking or unchecking all the checkboxes in the group. The checkboxes are customizable.
This package provides C++ routines for popular sampling distributions, based on Armadillo, through a header-only approach.
Download the lyrics of your favorite songs in text and table formats. Also search for related songs or song information. More information: <https://docs.genius.com/>.
This package provides a set of functions to generate, access and analyze standard data products from archival tagging data.
Using the efficient implementation in the Boost C++ library, functions are provided to generate vectors of Universally Unique Identifiers (UUIDs) from R, supporting random (version 4), name (version 5), and time (version 7) UUIDs. The initial repository was at <https://gitlab.com/artemklevtsov/rcppuuid>.
Estimating repeatability (intra-class correlation) from Gaussian, binary, proportion and Poisson data.
An implementation of Bayesian model-averaged t-tests that allows users to draw inferences about the presence versus absence of an effect, variance heterogeneity, and potential outliers. The RoBTT package estimates ensembles of models created by combining competing hypotheses and applies Bayesian model averaging using posterior model probabilities. Users can obtain model-averaged posterior distributions and inclusion Bayes factors, accounting for uncertainty in the data-generating process (Maier et al., 2024, <doi:10.3758/s13423-024-02590-5>). The package also provides a truncated likelihood version of the model-averaged t-test, enabling users to exclude potential outliers without introducing bias (Godmann et al., 2024, <doi:10.31234/osf.io/j9f3s>). Users can specify a wide range of informative priors for all parameters of interest. The package offers convenient functions for summary, visualization, and fit diagnostics.
Much as roxygen2 allows one to document functions in the same file as the function itself, roxut allows one to write the unit tests in the same file as the function. Once processed, the unit tests are moved to the appropriate directory. Currently supports testthat and tinytest frameworks. The roxygen2 package provides much of the infrastructure.
This package provides tools for RFM (recency, frequency, and monetary value) analysis. Generate RFM scores from both transaction-level and customer-level data. Visualize the relationship between recency, frequency, and monetary value using heatmaps, histograms, bar charts, and scatter plots. Includes a Shiny app for interactive segmentation. References: Blattberg R.C., Kim B.D., Neslin S.A. (2008) <doi:10.1007/978-0-387-72579-6_12>.
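As a rough illustration of the underlying calculation (a base R sketch with hypothetical column names, not the package's own interface):
# transactions: hypothetical data frame with customer_id, order_date, amount
analysis_date <- as.Date("2020-12-31")
rfm_values <- do.call(rbind, lapply(split(transactions, transactions$customer_id),
  function(d) data.frame(customer_id = d$customer_id[1],
                         recency     = as.numeric(analysis_date - max(d$order_date)),
                         frequency   = nrow(d),
                         monetary    = sum(d$amount))))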
This package provides functions to read and write ImageJ (<https://imagej.net>) Region of Interest (ROI) files, to plot the ROIs and to convert them to spatstat (<https://spatstat.org/>) spatial patterns.
This package provides XML parsing capability through the Rapidxml C++ header-only library.
An R package for multiple imputation using chained random forests. The implemented methods can handle missing data in mixed types of variables by using prediction-based or node-based conditional distributions constructed with random forests. For prediction-based imputation, continuous variables can be imputed using either the empirical distribution of out-of-bag prediction errors or a normality assumption for those prediction errors, while categorical variables are imputed from predicted probabilities. For node-based imputation, methods based on the conditional distribution formed by the predicting nodes of random forests and on random-forest proximity measures are provided. More details of the statistical methods can be found in Hong et al. (2020) <arXiv:2004.14823>.
An R Commander plug-in for robust principal component analysis, providing a graphical user interface for Principal Component Analysis (PCA) with the Hubert algorithm.
Network-based regularization has achieved success in variable selection for high-dimensional biological data due to its ability to incorporate correlations among genomic features. This package provides procedures for network-based variable selection in generalized linear models (Ren et al. (2017) <doi:10.1186/s12863-017-0495-5> and Ren et al. (2019) <doi:10.1002/gepi.22194>). Continuous, binary, and survival responses are supported. Robust network-based methods are available for continuous and survival responses.
This package provides a collection of efficient and effective tools and algorithms for subgroup discovery and analytics. The package integrates an R interface to the org.vikamine.kernel library of the VIKAMINE system <http://www.vikamine.org> implementing subgroup discovery, pattern mining and analytics in Java.
This package implements a robust Partial Least-Squares (PLS) method that is robust to outliers in the residuals as well as to leverage points. A specific weighting scheme is applied which avoids iterations and leads to a highly efficient robust PLS estimator.
Regularized calibrated estimation for causal inference and missing-data problems with high-dimensional data, based on Tan (2020a) <doi:10.1093/biomet/asz059>, Tan (2020b) <doi:10.1214/19-AOS1824> and Sun and Tan (2020) <arXiv:2009.09286>.
We provide an Rcmdr plug-in based on the depthTools package, which implements several robust statistical tools for the description and analysis of gene expression data based on the Modified Band Depth: scale curves for visualizing the dispersion of one or more groups of samples (e.g. types of tumors), a rank test to decide whether two groups of samples come from a single distribution, and two supervised classification techniques, the DS and TAD methods.
Defines the underlying pipeline structure for reproducible neuroscience, adopted by RAVE (reproducible analysis and visualization of intracranial electroencephalography); provides high-level class definitions to build, compile, set, execute, and share analysis pipelines. Both R and Python are supported, with Markdown and Shiny dashboard templates for extending and building customized pipelines. See the full documentation at <https://rave.wiki>; to cite us, check out our paper by Magnotti, Wang, and Beauchamp (2020, <doi:10.1016/j.neuroimage.2020.117341>), or run citation("ravepipeline") for details.
This package provides a collection of efficient implementations of popular offline change-point detection algorithms, featuring a consistent, object-oriented interface for practical use.
Provides a simple interface to Bloomberg's OpenFIGI API. Please see <https://openfigi.com> for API details and registration. You may be eligible for an API key, which accelerates the loading process.