Enter your query into the form above. You can search for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
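For example, the endpoint can be queried from R with httr (a sketch only; the base URL below is a placeholder for wherever this service is hosted, and the exact header names depend on the deployment):

```r
library(httr)

# Placeholder host; substitute the actual address of this service.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

results <- content(resp, as = "parsed")   # list of matching packages
headers(resp)                             # pagination details are returned here
```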
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Visualizes variables from descriptive tables produced by descsuppR::buildDescrTbl() using ggstatsplot. It automatically maps each variable to a suitable ggstatsplot plotting function based on the applied or suggested statistical test. Users can override the automatic mapping via a named list of plot specifications. The package supports grouped and ungrouped tables, and forwards additional arguments to the underlying ggstatsplot functions, providing quick, reproducible, and customizable default visualizations for descriptive summaries.
This package provides functions that facilitate the analysis of SNP (single nucleotide polymorphism) data to answer questions regarding captive breeding and relatedness between individuals. dartR.captive is part of the dartRverse suite of packages. Gruber et al. (2018) <doi:10.1111/1755-0998.12745>. Mijangos et al. (2022) <doi:10.1111/2041-210X.13918>.
The natural increase in the complexity of current research experiments and data demands better tools to enhance productivity in Data Analytics. The package is a framework designed to address the modern challenges in data analytics workflows. The package is inspired by Experiment Line concepts. It aims to provide seamless support for users in developing their data mining workflows by offering a uniform data model and method API. It enables the integration of various data mining activities, including data preprocessing, classification, regression, clustering, and time series prediction. It also offers options for hyper-parameter tuning and supports integration with existing libraries and languages. Overall, the package provides researchers with a comprehensive set of functionalities for data science, promoting ease of use, extensibility, and integration with various tools and libraries. Information on Experiment Line is based on Ogasawara et al. (2009) <doi:10.1007/978-3-642-02279-1_20>.
This package provides a direct approach to optimal designs for copula models based on the Fisher information. It provides flexible functions for building joint PDFs, evaluating the Fisher information, and finding optimal designs. It includes an extensible solution to summation and integration called nint, functions for transforming, plotting, and comparing designs, as well as a set of tools for common low-level tasks.
This package implements a generalized linear model approach for detecting differentially expressed genes across treatment groups in count data. The package supports both quasi-Poisson and negative binomial models to handle over-dispersion, ensuring robust identification of differential expression. It allows for the inclusion of treatment effects and gene-wise covariates, as well as normalization factors for accurate scaling across samples. Additionally, it incorporates statistical significance testing with options for p-value adjustment and log2 fold-change thresholds, making it suitable for RNA-seq analysis as described by Xu et al. (2024) <doi:10.1371/journal.pone.0300565>.
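As a rough sketch of the kind of model described above (plain base-R calls, not this package's interface), a quasi-Poisson GLM for a single gene with a normalization offset looks like this; the counts and size factors are made-up illustration values:

```r
# Toy data: one gene's counts across two treatment groups.
counts    <- c(12, 15, 9, 30, 41, 35)
group     <- factor(c("A", "A", "A", "B", "B", "B"))
size_fact <- c(1.0, 1.1, 0.9, 1.0, 1.2, 0.95)    # assumed normalization factors

# Quasi-Poisson GLM with an offset for library-size scaling.
fit <- glm(counts ~ group, family = quasipoisson(link = "log"),
           offset = log(size_fact))

log2_fc <- coef(fit)["groupB"] / log(2)                       # log2 fold change
p_value <- summary(fit)$coefficients["groupB", "Pr(>|t|)"]    # treatment-effect test
p.adjust(p_value, method = "BH")                              # adjustment (trivial for one gene)
```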
Direction analysis is a set of tools designed to identify combinatorial effects of multiple treatments/conditions on pathways and kinases profiled by microarray, RNA-seq, proteomics, or phosphoproteomics data. See Yang P et al (2014) <doi:10.1093/bioinformatics/btt616>; and Yang P et al. (2016) <doi:10.1002/pmic.201600068>.
This package provides methods for reading, displaying, processing and writing files originally arranged for the DSSAT-CSM fixed width format. The DSSAT-CSM cropping system model is described in J.W. Jones, G. Hoogenboom, C.H. Porter, K.J. Boote, W.D. Batchelor, L.A. Hunt, P.W. Wilkens, U. Singh, A.J. Gijsman, J.T. Ritchie (2003) <doi:10.1016/S1161-0301(02)00107-7>.
This package provides functions to impute large gaps within multivariate time series based on Dynamic Time Warping methods. Gaps of size 1 and gaps smaller than a defined threshold are filled using a simple average and a weighted moving average, respectively. Larger gaps are filled using the methodology of Phan et al. (2017) <DOI:10.1109/MLSP.2017.8168165>: a query is built immediately before/after a gap and a moving window is used to find the most similar sequence to this query using Dynamic Time Warping. To lower the calculation time, similar sequences are pre-selected using global features. Unlike the univariate method (package DTWBI), these global features are not estimated over the sequence containing the gap(s); instead, a feature matrix is built to summarize general features of the whole multivariate signal. Once the most similar sequence to the query has been identified, the sequence adjacent to this window is used to fill the gap. This function can deal with multiple gaps over all the sequences composing the input multivariate signal. However, for better consistency, large gaps at the same location over all sequences should be avoided.
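A minimal single-series sketch of the matching step described above, assuming the dtw package for the distance computation (this illustrates the idea only and is not this package's implementation):

```r
library(dtw)

set.seed(1)
x <- sin(seq(0, 30, by = 0.1)) + rnorm(301, sd = 0.05)
gap <- 150:159                                  # indices of a large gap
x[gap] <- NA
L <- length(gap)
query <- x[(min(gap) - L):(min(gap) - 1)]       # window just before the gap

# Slide a window over the series; keep the candidate most similar to the query.
starts <- seq_len(length(x) - 2 * L + 1)
scores <- sapply(starts, function(s) {
  w    <- x[s:(s + L - 1)]                      # candidate matched against the query
  fill <- x[(s + L):(s + 2 * L - 1)]            # values that would fill the gap
  if (anyNA(w) || anyNA(fill)) return(Inf)
  dtw(query, w)$distance
})
best <- starts[which.min(scores)]
x[gap] <- x[(best + L):(best + 2 * L - 1)]      # fill with the adjacent window
```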
Probability mass function, distribution function, quantile function, random generation and estimation for the skew discrete Laplace distributions.
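As an illustration, the probability mass function under one standard parameterization of the skew discrete Laplace distribution (Inusah & Kozubowski, 2006), with parameters p, q in (0, 1), can be written directly in R; this is a sketch based on that reference, and the helper name is hypothetical, not this package's function:

```r
# P(X = k) = (1-p)(1-q)/(1-p*q) * p^k      for k = 0, 1, 2, ...
#          = (1-p)(1-q)/(1-p*q) * q^(-k)   for k = -1, -2, ...
dsdlaplace <- function(k, p, q) {
  const <- (1 - p) * (1 - q) / (1 - p * q)
  ifelse(k >= 0, const * p^k, const * q^(-k))
}

k <- -10:10
probs <- dsdlaplace(k, p = 0.5, q = 0.3)
sum(probs)                     # close to 1 (tails beyond -10..10 are tiny)
plot(k, probs, type = "h")     # asymmetric (skewed) mass function
```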
This package creates survey designs for distance sampling surveys. These designs can be assessed for various effort and coverage statistics. Once the user is satisfied with the design characteristics, they can generate a set of transects to use in their distance sampling survey. Many of the designs implemented in this R package were first made available in our Distance for Windows software and are detailed in Chapter 7 of Advanced Distance Sampling, Buckland et al. (2008, ISBN-13: 978-0199225873). Find out more about estimating animal/plant abundance with distance sampling at <https://distancesampling.org/>.
This package implements drifting Markov models (DMM), which are non-homogeneous Markov models designed for modeling the heterogeneities of sequences in a more flexible way than homogeneous Markov chains or even hidden Markov models. The package is dedicated to the estimation and simulation of drifting Markov models and to the exact computation of their associated reliability. The implemented methods are described in Vergne, N. (2008) <doi:10.2202/1544-6115.1326> and Barbu, V.S., Vergne, N. (2019) <doi:10.1007/s11009-018-9682-8>.
Dynamic CUR (dCUR) builds on the CUR decomposition (Mahoney MW., Drineas P. (2009) <doi:10.1073/pnas.0803205106>) by varying k, the number of columns and rows used, with the final purpose of finding the stage that minimizes the relative error when reducing the matrix dimension. The goal of CUR decomposition is to give a better interpretation of the matrix decomposition by employing proper variable selection in the data matrix, in a way that yields a simplified structure. Its origins come from analysis in genetics. The goal of this package is to offer an alternative for selecting variables (columns) or individuals (rows). The proposed idea consists of fitting probability distributions to the leverage scores and selecting the best columns and rows that minimize the reconstruction error of the matrix approximation ||A-CUR||. It also includes a method that recalibrates the relative importance of the leverage scores according to an external variable of the user's interest.
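A minimal sketch of a basic leverage-score CUR step and of the relative error it aims to minimize (an illustration of the general technique with random data, not this package's dynamic procedure):

```r
set.seed(1)
A <- matrix(rnorm(100 * 20), nrow = 100, ncol = 20)
k <- 5                                            # number of columns and rows to keep

sv <- svd(A, nu = k, nv = k)
col_lev <- rowSums(sv$v^2) / k                    # column leverage scores
row_lev <- rowSums(sv$u^2) / k                    # row leverage scores

cols <- order(col_lev, decreasing = TRUE)[1:k]    # highest-leverage columns
rows <- order(row_lev, decreasing = TRUE)[1:k]    # highest-leverage rows

C <- A[, cols, drop = FALSE]
R <- A[rows, , drop = FALSE]
U <- MASS::ginv(C) %*% A %*% MASS::ginv(R)        # linking matrix via pseudoinverses

norm(A - C %*% U %*% R, type = "F") / norm(A, type = "F")   # relative error ||A-CUR|| / ||A||
```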
Detection of differential item functioning (DIF) among dichotomously scored items and differential distractor functioning (DDF) among unscored items with non-linear regression procedures based on generalized logistic regression models (Hladka & Martinkova, 2020, <doi:10.32614/RJ-2020-014>).
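For comparison, the simpler classical logistic-regression DIF screen (nested binomial GLMs with a group-by-score interaction) can be sketched in base R on simulated data; this is the plain logistic procedure, not the generalized non-linear models this package estimates:

```r
set.seed(1)
n <- 400
group <- rbinom(n, 1, 0.5)                        # focal (1) vs. reference (0) group
theta <- rnorm(n)                                 # matching score / ability
# Simulate one item with a group effect (uniform DIF).
item  <- rbinom(n, 1, plogis(1.2 * theta - 0.3 - 0.8 * group))

m0 <- glm(item ~ theta,         family = binomial)   # no DIF
m1 <- glm(item ~ theta * group, family = binomial)   # uniform + non-uniform DIF
anova(m0, m1, test = "LRT")                          # likelihood-ratio test for DIF
```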
Doubly censored data, as described in Chang and Yang (1987) <doi:10.1214/aos/1176350608>, are commonly seen in many fields. We use the EM algorithm to compute the non-parametric MLE (NPMLE) of the cumulative probability function/survival function and the two censoring distributions. One can also specify a constraint F(T)=C; the function will then return the constrained NPMLE and the -2 log empirical likelihood ratio for this constraint. This can be used to test the hypothesis about the constraint and, by inverting the test, to find confidence intervals for a probability or quantile via the empirical likelihood ratio theorem. Influence functions of the estimated F may also be calculated, but this may currently be slow.
An R DataBase Interface (DBI) compatible interface to various database platforms (PostgreSQL, Oracle, Microsoft SQL Server, Amazon Redshift, Microsoft Parallel Data Warehouse, IBM Netezza, Apache Impala, Google BigQuery, Snowflake, Spark, SQLite, and InterSystems IRIS). Also includes support for fetching data as Andromeda objects. Uses either Java Database Connectivity (JDBC) or other DBI drivers to connect to databases.
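A minimal connection sketch in the style described above; the server, user, and query values are placeholders, and the createConnectionDetails()/connect() workflow is assumed from the package's documentation:

```r
library(DatabaseConnector)

# Placeholder connection details for a PostgreSQL server.
details <- createConnectionDetails(
  dbms     = "postgresql",
  server   = "localhost/mydb",
  user     = "analyst",
  password = Sys.getenv("DB_PASSWORD")
)

conn    <- connect(details)
persons <- querySql(conn, "SELECT COUNT(*) AS n FROM person")   # plain SQL query
disconnect(conn)
```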
This package provides an interface between Matlab and R that facilitates fast processing for reading and saving DICOM images.
Finds regular and chaotic intervals in the data using the 0-1 test for chaos proposed by Gottwald and Melbourne (2004) <DOI:10.1137/080718851>.
This package provides a `.` object which can be used for unpacking assignments. For example, `.[rows, columns] <- dim(cars)` could be used to pull the number of rows and number of columns from `dim(cars)` into individual variables `rows` and `columns` in a single step.
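For example (assuming the package that exports `.` is attached):

```r
.[rows, columns] <- dim(cars)   # unpack the two values returned by dim()
rows      # 50
columns   # 2
```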
Access and manage the application programming interface (API) of the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) ReliefWeb disaster events at <https://reliefweb.int/disasters>. The package requires a minimal number of dependencies. It offers functionality to retrieve a user-defined sample of disaster events from ReliefWeb, providing an easy alternative to scraping the ReliefWeb website. It enables seamless integration of regular data updates into the research workflow.
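As a rough sketch of what such a retrieval looks like against ReliefWeb's public API (using httr directly rather than this package; the appname value is a placeholder and the parameter/field names are assumed from ReliefWeb's API documentation):

```r
library(httr)

resp <- GET("https://api.reliefweb.int/v1/disasters",
            query = list(appname = "my-research-app",   # placeholder identifier
                         limit   = 10))
events <- content(resp, as = "parsed")   # parsed JSON; records assumed under $data
length(events$data)                      # up to 10 disaster event records
```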
This package provides a toolkit for parsing dice notation, analyzing rolls, calculating success probabilities, and plotting outcome distributions.
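For instance, the distribution and success probability of a simple roll such as 2d6 can be checked by exhaustive enumeration (a plain-R illustration of the concept, not this package's notation parser):

```r
rolls  <- expand.grid(d1 = 1:6, d2 = 1:6)     # all 36 outcomes of 2d6
totals <- rolls$d1 + rolls$d2

dist <- table(totals) / nrow(rolls)           # outcome distribution
p_10_plus <- mean(totals >= 10)               # P(total >= 10) = 6/36, about 0.167

barplot(dist, xlab = "2d6 total", ylab = "probability")
```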
Motifs within biological sequences play a significant role. This package uses a user-defined threshold (window size and similarity) to create consensus segments, or motifs, through dynamic-programming local alignment with gaps, and it calculates the frequency of each identified motif, offering a detailed view of their prevalence within the dataset. It allows for thorough exploration and understanding of sequence patterns and their biological importance.
Dual Scaling, developed by Professor Shizuhiko Nishisato (1994, ISBN: 0-9691785-3-6), is a fundamental technique in multivariate analysis used for data scaling and correspondence analysis. Its utility lies in its ability to represent multidimensional data in a lower-dimensional space, making it easier to visualize and understand underlying patterns in complex data. This technique has been implemented to handle various types of data, including Contingency and Frequency data (CF), Multiple-Choice data (MC), Sorting data (SO), Paired-Comparison data (PC), and Rank-Order data (RO), providing users with a powerful tool to explore relationships between variables and observations in various fields, from sociology to ecology, enabling deeper and more efficient analysis of multivariate datasets.
Gives you the ability to use arbitrary Docker images (including custom ones) to process Rmarkdown code chunks.
This package provides methods for distance covariance and distance correlation (Szekely, et al. (2007) <doi:10.1214/009053607000000505>), generalized versions thereof (Sejdinovic, et al. (2013) <doi:10.1214/13-AOS1140>), and corresponding tests (Berschneider, Bottcher (2018) <doi:10.48550/arXiv.1808.07280>). Distance standard deviation methods (Edelmann, et al. (2020) <doi:10.1214/19-AOS1935>) and distance correlation methods for survival endpoints (Edelmann, et al. (2021) <doi:10.1111/biom.13470>) are also included.
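As a small illustration of the sample statistic underlying these methods, written from the definition in Szekely et al. (2007) rather than taken from this package's code:

```r
# Sample distance covariance: double-center the pairwise distance matrices
# of x and y, then average their elementwise product.
dcov_sample <- function(x, y) {
  A <- as.matrix(dist(x))
  B <- as.matrix(dist(y))
  Ac <- A - rowMeans(A) - rep(colMeans(A), each = nrow(A)) + mean(A)
  Bc <- B - rowMeans(B) - rep(colMeans(B), each = nrow(B)) + mean(B)
  sqrt(mean(Ac * Bc))
}

set.seed(1)
x <- rnorm(100)
y <- x^2 + rnorm(100, sd = 0.1)
dcov_sample(x, y)   # clearly positive for this nonlinear dependence
```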