Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
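For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL below is a placeholder for this site's address, and the exact names of the pagination headers are not specified here, so all response headers are printed):

    library(httr)
    # Placeholder base URL: substitute this site's own address.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp, as = "parsed")  # the matching packages
    headers(resp)                 # pagination information lives in these headers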
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a simple and effective tool for computing and visualizing statistical power for meta-analysis, including power analysis of main effects (Jackson & Turner, 2017)<doi:10.1002/jrsm.1240>, test of homogeneity (Pigott, 2012)<doi:10.1007/978-1-4614-2278-5>, subgroup analysis, and categorical moderator analysis (Hedges & Pigott, 2004)<doi:10.1037/1082-989X.9.4.426>.
This package provides various functions for parameter estimation of one-dimensional stable distributions and their mixtures. It implements a diverse set of estimation methods, including quantile-based approaches, regression methods based on the empirical characteristic function (empirical, kernel, and recursive), and maximum likelihood estimation. For mixture models, it provides stochastic expectation-maximization (SEM) algorithms and Bayesian estimation methods using sampling and importance sampling to overcome the long burn-in period of Markov Chain Monte Carlo (MCMC) strategies. The package also includes tools and statistical tests for analyzing whether a dataset follows a stable distribution. Some of the implemented methods are described in Hajjaji, O., Manou-Abi, S. M., and Slaoui, Y. (2024) <doi:10.1080/02664763.2024.2434627>.
Some basic math calculators, for example for finding the angles of triangles, computing the greatest common divisor of two numbers, and so on.
Various utilities to manipulate multivariate polynomials. The package is almost completely superseded by the spray and mvp packages, which are much more efficient.
Extends the mlr3 package with a backend to transparently work with databases such as SQLite, DuckDB, MySQL, MariaDB, or PostgreSQL. The package provides three additional backends: DataBackendDplyr relies on the abstraction of the dbplyr package to interact with most DBMS; DataBackendDuckDB operates on DuckDB databases and also on Apache Parquet files; DataBackendPolars operates on Polars data frames.
This package provides a simple function, mwsApp(), that runs a shiny app spanning multiple, connected windows. This uses all standard shiny conventions, and depends only on the shiny package.
This package offers three important components: (1) constructing a user-defined linear mixed model; (2) employing a linear mixed model approach, minimum norm quadratic unbiased estimation (MINQUE) (Rao, 1971), for variance component estimation and random effect prediction; and (3) employing a jackknife resampling technique to conduct various statistical tests. In addition, this package provides functions for model and data evaluation. It offers fast computation for analyses of large data sets with various irregular data structures.
Sentiment analysis is a popular technique in text mining that attempts to determine the emotional state of some text. We provide a new implementation of a common method for computing sentiment, whereby words are scored as positive or negative according to a dictionary lookup. Then the sum of those scores is returned for the document. We use the Hu and Liu sentiment dictionary (Hu and Liu, 2004) <doi:10.1145/1014052.1014073> for determining sentiment. The scoring function is vectorized by document, and scores for multiple documents are computed in parallel via OpenMP.
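As a rough illustration of the dictionary-lookup method described above (this is not the package's API, and the tiny word lists are invented for the example; the real implementation uses the full Hu and Liu dictionary):

    # Minimal sketch of dictionary-based sentiment scoring in base R.
    positive <- c("good", "great", "excellent")   # hypothetical mini-dictionary
    negative <- c("bad", "poor", "terrible")
    score_document <- function(text) {
      words <- unlist(strsplit(tolower(text), "\\W+"))
      sum(words %in% positive) - sum(words %in% negative)
    }
    score_document("The plot was great but the ending was terrible")  # 1 - 1 = 0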
Combination of either p-values or modified effect sizes from different studies to find differentially expressed genes.
Simulate a (bivariate) multivariate renewal Hawkes (MRHawkes) self-exciting process, with given immigrant hazard rate functions and offspring density function. Calculate the likelihood of a MRHawkes process with given hazard rate functions and offspring density function for an (increasing) sequence of event times. Calculate the Rosenblatt residuals of the event times. Predict future event times based on observed event times up to a given time. For details see Stindl and Chen (2018) <doi:10.1016/j.csda.2018.01.021>.
Calculate Krippendorff's alpha for multi-valued data using the methods introduced by Krippendorff and Craggs (2016) <doi:10.1080/19312458.2016.1228863>. Nominal, ordinal, interval, and ratio data types are supported, with options to create bootstrapped estimates of alpha and/or parallelize calculations.
R Client for the Microsoft Cognitive Services Text Analytics REST API, including Sentiment Analysis, Topic Detection, Language Detection, and Key Phrase Extraction. An account MUST be registered at the Microsoft Cognitive Services website <https://www.microsoft.com/cognitive-services/> in order to obtain a (free) API key. Without an API key, this package will not work properly.
Spontaneous adverse event reports have a high potential for detecting adverse drug reactions. However, due to their dimension, the analysis of such databases requires statistical methods. We propose to use a logistic regression whose sparsity is viewed as a model selection challenge. Since the model space is huge, a Metropolis-Hastings algorithm carries out the model selection by maximizing the BIC criterion.
This package provides functions and S4 methods to create and manage discrete time Markov chains more easily. In addition, functions to perform statistical analysis (fitting and drawing random variates) and probabilistic analysis (of their structural properties) are provided. See Spedicato (2017) <doi:10.32614/RJ-2017-036>. Some functions for continuous-time Markov chains depend on the suggested ctmcd package.
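A minimal sketch of the kind of usage described above, assuming the markovchain S4 class together with the rmarkovchain() and steadyStates() helpers (representative examples only; see the package documentation for the exact interface):

    library(markovchain)
    # Define a two-state chain via the S4 constructor.
    mc <- new("markovchain",
              states = c("sunny", "rainy"),
              transitionMatrix = matrix(c(0.8, 0.2,
                                          0.4, 0.6),
                                        nrow = 2, byrow = TRUE),
              name = "weather")
    rmarkovchain(n = 5, object = mc)  # draw random variates from the chain
    steadyStates(mc)                  # one analysis of its structural properties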
This package provides a flexible framework for estimating the variance-covariance matrix of estimated parameters. Estimation relies on unbiased estimating functions to compute the empirical sandwich variance (i.e., M-estimation in the vein of Tsiatis et al. (2019) <doi:10.1201/9780429192692>).
An implementation of Multi-Task Logistic Regression (MTLR) for R. This package is based on the method proposed by Yu et al. (2011) which utilized MTLR for generating individual survival curves by learning feature weights which vary across time. This model was further extended to account for left and interval censored data.
This package provides functions to perform posterior estimation for some distributions, with emphasis on extreme value distributions. It contains some extreme-value datasets and functions that generate posterior samples for the GPD and GEV distributions. The package calculates important extreme-value measures, such as the return level for each period of t units of time, and produces plots such as the predictive distribution and return level plots.
Highly variable gene selection methods, including popular publicly available methods, as well as the mixture of multiple highly variable gene selection methods, <https://github.com/RuzhangZhao/mixhvg>. Reference: <doi:10.1101/2024.08.25.608519>.
Machine learning algorithms have been used for performing single missing data imputation and, most recently, multiple imputation. However, this is the first attempt at using automated machine learning algorithms for performing both single and multiple imputation. Automated machine learning is a procedure for fine-tuning a model automatically, performing a random search for a model that results in less error without overfitting the data. The main idea is to allow the model to set its own parameters for imputing each variable separately, instead of setting fixed predefined parameters to impute all variables of the dataset. Using automated machine learning, the package fine-tunes an Elastic Net (default), Gradient Boosting, Random Forest, Deep Learning, Extreme Gradient Boosting, or Stacked Ensemble machine learning model (from one or a combination of the other supported algorithms) for imputing the missing observations. This procedure has been implemented for the first time by this package and is expected to outperform other packages for imputing missing data that do not fine-tune their models. The multiple imputation is implemented via bootstrapping, without letting the duplicated observations harm the cross-validation procedure, which is how imputed variables are evaluated. Most notably, the package implements an automated procedure for handling the imputation of imbalanced data (the class rarity problem), which happens when a factor variable has a level that is far more prevalent than the other(s). This is known to result in biased predictions and hence biased imputation of missing data. The autobalancing procedure ensures that instead of focusing on maximizing overall accuracy (i.e., minimizing classification error) in imputing factor variables, a fairer procedure and imputation method is practiced.
Effect sizes, diagnostics and performance metrics for multilevel and mixed effects models. Includes marginal and conditional R2 estimates for linear mixed effects models based on Johnson (2014) <doi:10.1111/2041-210X.12225>.
Dataset and functions from the meta-analysis published in Medicine & Science in Sports & Exercise. It contains all the data and functions to reproduce the analysis. "Effectiveness of HIIE versus MICT in Improving Cardiometabolic Risk Factors in Health and Disease: A Meta-analysis". Felipe Mattioni Maturana, Peter Martus, Stephan Zipfel, Andreas M Nieß (2020) <doi:10.1249/MSS.0000000000002506>.
This package contains functions for converting existing HTML/JavaScript source into equivalent shiny functions. Bootstraps the process of making new shiny functions by allowing us to turn HTML snippets directly into R functions.
Facilitates the incorporation of biological processes in biogeographical analyses. It offers conveniences in fitting, comparing and extrapolating models of biological processes such as physiology and phenology. These spatial extrapolations can be informative by themselves, but also complement traditional correlative species distribution models, by mixing environmental and process-based predictors. Caetano et al (2020) <doi:10.1111/oik.07123>.
An implementation of matrix mathematics wherein operations are performed "by name".