Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
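For example, a minimal sketch of calling this endpoint from R with the httr and jsonlite packages (the base URL below is a placeholder; point it at this site's host):

library(httr)
library(jsonlite)

# Search for "hello", first page, 20 results per page.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

# Pagination information (number of pages etc.) is carried in the response headers.
headers(resp)

# The body contains the matching packages.
packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))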
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Bayesian (and some likelihoodist) alternatives to the hypothesis-testing functions in base R, with a user interface patterned after those of R's hypothesis-testing functions. See McElreath (2016, ISBN: 978-1-4822-5344-3), Gelman and Hill (2007, ISBN: 0-521-68689-X) (new edition in preparation) and Albert (2009, ISBN: 978-0-387-71384-7) for good introductions to Bayesian analysis, and Pawitan (2002, ISBN: 0-19-850765-8) for the likelihood approach. The functions in the package also make extensive use of graphical displays for data exploration and model comparison.
This package implements species distribution modeling and ecological niche modeling, including: bias correction, spatial cross-validation, model evaluation, raster interpolation, biotic "velocity" (speed and direction of movement of a "mass" represented by a raster), interpolating across a time series of rasters, and use of spatially imprecise records. The heart of the package is a set of "training" functions which automatically optimize model complexity based on the number of available occurrences. These algorithms include MaxEnt, MaxNet, boosted regression trees/gradient boosting machines, generalized additive models, generalized linear models, natural splines, and random forests. To enhance interoperability with other modeling packages, no new classes are created. The package works with PROJ6 geodetic objects and coordinate reference systems.
Enables users to incorporate expert opinion with parametric survival analysis using a Bayesian or frequentist approach. Expert opinion can be provided on the survival probabilities at certain time-point(s) or for the difference in mean survival between two treatment arms. Please reference its use as Cooney, P., White, A. (2023) <doi:10.1177/0272989X221150212>.
In agricultural, post-harvest and processing, engineering, and industrial experiments, factors are often differentiated by the ease with which they can be changed from one experimental run to the next. One or more factors may be expensive or time consuming to change, i.e. hard-to-change factors. These factors restrict the use of complete randomization, as it may make the experiment expensive and time consuming. Split-plot designs can be used in such situations. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs, ordinary least squares (OLS) estimates are equivalent to GLS estimates. These designs are known in the literature as equivalent-estimation split-plot designs. For method details see Macharia, H. and Goos, P. (2010) <doi:10.1080/00224065.2010.11917833>. Balanced split-plot designs are designs that have an equal number of subplots within every whole plot. This package constructs equivalent-estimation balanced split-plot designs for different experimental set-ups, along with different statistical criteria to measure the performance of these designs. It consists of the function equivalent_BSPD().
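A minimal usage sketch (the argument names below are illustrative assumptions, not the documented signature; consult the package help page for equivalent_BSPD()):

# Construct an equivalent-estimation balanced split-plot design.
# The argument names are hypothetical placeholders.
design <- equivalent_BSPD(whole_plot_factors = 2,   # hypothetical argument
                          sub_plot_factors   = 3,   # hypothetical argument
                          replications       = 2)   # hypothetical argument
design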
An ensemble method for the statistical detection of a rare class in two-class classification problems. The method uses an ensemble of classifiers where the constituent models of the ensemble use disjoint subsets (phalanxes) of explanatory variables. We provide an implementation of the phalanx-formation algorithm. Please see Tomal et al. (2015) <doi:10.1214/14-AOAS778>, Tomal et al. (2016) <doi:10.1021/acs.jcim.5b00663>, and Tomal et al. (2019) <arXiv:1706.06971> for more details.
Use SQLite3 as a database system via a completely SQL-free R interface, treating the data as if it were a single spreadsheet.
Computes various effect sizes of the difference, their variances, and confidence intervals. This package covers Cohen's d, Hedges' d, biased/unbiased c (an effect size between a mean and a constant), and e (an effect size between means without assuming equal variances).
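For orientation, a minimal base-R sketch of the textbook Cohen's d for two independent samples (this illustrates the quantity itself, not this package's interface):

# Two independent samples.
x <- c(5.1, 6.3, 5.8, 7.0, 6.1)
y <- c(4.2, 5.0, 4.8, 5.5, 4.9)

# Pooled standard deviation, then d = (difference in means) / pooled SD.
n_x <- length(x); n_y <- length(y)
s_pooled <- sqrt(((n_x - 1) * var(x) + (n_y - 1) * var(y)) / (n_x + n_y - 2))
cohens_d <- (mean(x) - mean(y)) / s_pooled
cohens_d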
This package provides functions to read and write files from Egnyte cloud storage using the Egnyte API <https://developers.egnyte.com/docs>. Supports both API key and OAuth 2.0 authentication for file transfer operations.
Fast and easy computation of Euclidean Minimum Spanning Trees (EMST) from data, relying on the R API for mlpack, the C++ machine learning library (Curtin et al., 2013). emstreeR uses the Dual-Tree Boruvka algorithm (March, Ram, Gray, 2010, <doi:10.1145/1835804.1835882>), which is theoretically and empirically the fastest algorithm for computing an EMST. This package also provides functions and an S3 method for readily visualizing Minimum Spanning Trees (MST) using either the style of the base, scatterplot3d, or ggplot2 libraries, and functions to export the MST output to shapefiles.
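A minimal sketch, assuming the package's main entry point is ComputeMST() with an accompanying plot method (verify against the installed version's documentation):

library(emstreeR)

set.seed(1)
pts <- matrix(rnorm(200), ncol = 2)   # 100 random points in the plane

# Compute the Euclidean minimum spanning tree over the points.
mst <- ComputeMST(pts)

# Base-graphics visualization of the tree.
plot(mst)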
Training and prediction functions are provided for the Extreme Learning Machine (ELM) algorithm. The ELM uses a single hidden layer feedforward neural network (SLFN) with randomly generated weights and no gradient-based backpropagation. Training time is very short, and the online version allows the model to be updated using a small chunk of the training set at each iteration. The only parameters to tune are the hidden layer size and the learning function.
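To illustrate the idea, a from-scratch sketch of an ELM in base R (not this package's API): draw random hidden-layer weights, compute the hidden activations, then solve for the output weights by least squares.

set.seed(42)
X <- as.matrix(iris[, 1:4])
y <- as.numeric(iris$Species == "setosa")

n_hidden <- 20
W <- matrix(rnorm(ncol(X) * n_hidden), ncol = n_hidden)   # random input weights
b <- rnorm(n_hidden)                                       # random biases
H <- 1 / (1 + exp(-(X %*% W + matrix(b, nrow(X), n_hidden, byrow = TRUE))))

beta   <- MASS::ginv(H) %*% y   # output weights via the Moore-Penrose pseudoinverse
fitted <- H %*% beta            # predictions on the training data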
Small toolbox for data analyses in environmental chemistry and ecotoxicology. Provides, for example, calibration() to calculate calibration curves and corresponding limits of detection (LODs) and limits of quantification (LOQs) according to German DIN 32645 (2008). texture() makes it easy to estimate soil particle size distributions from hydrometer measurements (ASTM D422-63, 2007).
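For orientation, a base-R sketch of a simple linear calibration curve together with the widely used 3.3*s/slope and 10*s/slope approximations for the LOD and LOQ (a generic textbook shortcut, not the DIN 32645 procedure that calibration() implements):

# Calibration standards: known concentrations and measured signals.
conc   <- c(0, 1, 2, 4, 8, 16)
signal <- c(0.02, 0.11, 0.20, 0.41, 0.79, 1.62)

fit     <- lm(signal ~ conc)
s_resid <- summary(fit)$sigma        # residual standard deviation
slope   <- coef(fit)[["conc"]]

lod <- 3.3 * s_resid / slope         # approximate limit of detection
loq <- 10  * s_resid / slope         # approximate limit of quantification
c(LOD = lod, LOQ = loq)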
Routines for combining causal effect estimates and study diagnostics across multiple data sites in a distributed study, without sharing patient-level data. Allows for normal and non-normal approximations of the data-site likelihood of the effect parameter.
This package provides a simple interface to search and retrieve scientific articles from the SciELO (Scientific Electronic Library Online) database <https://scielo.org>. It allows querying, filtering, and visualizing results in an interactive table.
Given the scores from decision makers, the analytic hierarchy process can be conducted easily.
Chat with large language models from a range of providers including Claude <https://claude.ai>, OpenAI <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.
This package provides unsupervised selection and clustering of microarray data using mixture models. Following the methods described in McLachlan, Bean and Peel (2002) <doi:10.1093/bioinformatics/18.3.413>, a subset of genes is selected based on the likelihood ratio statistic for the test of one versus two components when fitting mixtures of t-distributions to the expression data for each gene. The dimensionality of this gene subset is further reduced through the use of mixtures of factor analyzers, allowing the tissue samples to be clustered by fitting mixtures of normal distributions.
The package integrates three functional modules: genetic features, differential expression analysis, and non-additive expression analysis. It is suitable for RNA-seq and small RNA sequencing data. Two methods of non-additive expression analysis are provided: one calculates the additive (a) and dominance (d) effects, the other evaluates expression-level dominance by comparing the total expression of a gene in the hybrid offspring with its expression level in the parents. For RNA-seq data, non-additive expression analysis is currently only applicable to hybrid offspring species containing two sub-genomes.
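As a reference for the first method, the standard quantitative-genetics definitions of the additive and dominance effects can be written down directly (the package's exact calculation may differ):

# Mean expression of one gene in the two parents and the hybrid (made-up values).
p1 <- 120   # parent 1
p2 <- 60    # parent 2
f1 <- 110   # hybrid offspring

midparent <- (p1 + p2) / 2
a <- (p1 - p2) / 2      # additive effect: half the parental difference
d <- f1 - midparent     # dominance effect: hybrid deviation from the midparent
c(additive = a, dominance = d, d_over_a = d / a)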
Gene information from Ensembl genome builds GRCh38.p14 and GRCh37.p13 to use with the topr package. The datasets were originally downloaded from <https://ftp.ensembl.org/pub/current/gtf/homo_sapiens/Homo_sapiens.GRCh38.111.gtf.gz> and <https://ftp.ensembl.org/pub/grch37/current/gtf/homo_sapiens/Homo_sapiens.GRCh37.87.gtf.gz> and converted into the format required by the topr package. See <https://github.com/totajuliusd/topr?tab=readme-ov-file#how-to-use-topr-with-other-species-than-human> for the required format.
This package performs analysis of regression in simple designs with quantitative treatments, including mixed models and nonlinear models.
The equality of a large number k of densities is tested by measuring the L2 distance between the corresponding kernel density estimators and the one based on the pooled sample. The test even works for sample sizes as small as 2.
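A sketch of the underlying idea in base R (the L2 distance between one group's kernel density estimate and the pooled-sample estimate, approximated on a grid; this is not the package's interface):

set.seed(1)
x1 <- rnorm(30); x2 <- rnorm(30, mean = 0.5)
pooled <- c(x1, x2)

lo <- min(pooled) - 1; hi <- max(pooled) + 1
f1 <- density(x1,     from = lo, to = hi, n = 512)$y
f0 <- density(pooled, from = lo, to = hi, n = 512)$y

step <- (hi - lo) / 511
l2 <- sum((f1 - f0)^2) * step   # numerical approximation of the L2 distance
l2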
This package provides functions to test for gene x gene interactions in a bi-parental population of inbred lines. The data are fitted with the mixed linear model described in Rio et al. (2022) <doi:10.1101/2022.12.18.520958>, that accounts for gene x gene interactions at both the fixed effect and variance levels. The package also provides graphical tools to display the gene x gene interaction trend at the mean level and the variance component analysis.
Illustrates the concepts developed in Sarkar and Rashid (2019, ISSN:0025-5742) <https://www.indianmathsociety.org.in/mathstudent-part-2-2019.pdf>. This package helps a user guess four things (the mean, MD, scaled MSD, and RMSD) before they get the SD. 1) The package displays the Empirical Cumulative Distribution Function (ECDF) of the given data. The user must choose the value of the mean by equating the areas of two colored (blue and green) regions. The package gives feedback to improve the choice until it is correct. Alternatively, the reader may continue with a different guess for the center (not necessarily the mean). 2) The user chooses the value of the Mean Deviation (MD) based on the ECDF of the deviations by equating the areas of two newly colored (blue and green) regions, with feedback from the package until the user guesses correctly. 3) The user chooses the Scaled Mean Squared Deviation (MSD) based on the ECDF of the scaled squared deviations by equating the areas of two newly colored (blue and green) regions, with feedback from the package until the user guesses correctly. 4) The user chooses the Root Mean Squared Deviation (RMSD) by ensuring that its intersection with the ECDF of the deviations is at the same height as the intersection between the scaled MSD and the ECDF of the scaled squared deviations. Additionally, the intersection of the two blue lines (the green dot) should fall on the vertical line at the maximum deviation. 5) Finally, only if the mean is chosen correctly can the user view the population SD (the same as the RMSD) and the sample SD (sqrt(n/(n-1))*RMSD) by clicking the respective buttons. If the mean is chosen incorrectly, the user is asked to correct it.
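The quantities the user is asked to guess can be checked directly in base R; in particular, the RMSD equals the population SD, and sqrt(n/(n-1))*RMSD equals the sample SD returned by sd():

x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)

md   <- mean(abs(x - mean(x)))          # mean deviation (MD)
rmsd <- sqrt(mean((x - mean(x))^2))     # RMSD = population SD
sd_sample <- sqrt(n / (n - 1)) * rmsd   # sample SD

c(MD = md, RMSD = rmsd, sampleSD = sd_sample, check = sd(x))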
This package allows structuring electoral data of different sizes and formats to calculate various indicators frequently used in studies of electoral systems and party systems. Indicators of electoral volatility, electoral disproportionality, party nationalization, and the effective number of parties are included.
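Two of these indicators follow standard formulas that are easy to state in base R (the Laakso-Taagepera effective number of parties and the Pedersen volatility index; shown for illustration, not as this package's interface):

shares_t1 <- c(0.42, 0.31, 0.17, 0.10)   # vote shares, first election
shares_t2 <- c(0.38, 0.29, 0.21, 0.12)   # vote shares, second election

enp        <- 1 / sum(shares_t2^2)                  # effective number of parties
volatility <- sum(abs(shares_t2 - shares_t1)) / 2   # Pedersen volatility index
c(ENP = enp, volatility = volatility)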
An alternative to Exploratory Factor Analysis (EFA) for metrical data in R. Drawing on characteristics of classical test theory, Exploratory Likert Scaling (ELiS) supports the user in exploring multiple one-dimensional data structures. In common research practice, however, EFA remains the go-to method to uncover the (underlying) structure of a data set. Orthogonal dimensions and the potential of overextraction are often accepted as side effects. As described in Müller-Schneider (2001) <doi:10.1515/zfsoz-2001-0404>, ELiS confronts these problems. As a result, elisr provides the platform to fully exploit the exploratory potential of the multiple scaling approach itself.