Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
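For example, a minimal query from R using the jsonlite and curl packages (the host below is a placeholder; substitute this site's address):

    library(jsonlite)
    # Fetch the first page of results for the query "hello".
    res <- fromJSON("https://example.org/api/packages?search=hello&page=1&limit=20")
    # Pagination lives in the response headers, so inspect them with curl:
    h <- curl::curl_fetch_memory("https://example.org/api/packages?search=hello")
    curl::parse_headers(h$headers)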
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Power analysis is used to estimate sample sizes for experimental designs. Most programs and R packages output only the highest recommended sample size to the user. The user input can often be complicated, and computing multiple power analyses for different treatment comparisons can be time-consuming. This package simplifies the user input and allows the user to view all of the sample size recommendations, or just the ones they want to see. The calculations behind the recommended sample sizes come from the pwr package.
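As a hedged illustration of the kind of calculation involved, the underlying pwr package can be called directly; leaving n unspecified makes pwr.t.test() solve for it:

    library(pwr)
    # Sample size per group for a two-sample t-test with a medium
    # effect (d = 0.5), 5% significance level, and 80% power.
    pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.8)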
Implementations of the expected shortfall backtests of Bayer and Dimitriadis (2020) <doi:10.1093/jjfinec/nbaa013> as well as other well-known backtests from the literature. These can be used to assess the correctness of forecasts of the expected shortfall risk measure, which is used, for example, in the banking and finance industry for quantifying the market risk of investments. A special feature of the backtests of Bayer and Dimitriadis (2020) is that they only require forecasts of the expected shortfall, in striking contrast to all other existing backtests, making them particularly attractive for practitioners.
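For reference, the expected shortfall at level alpha is ES_alpha = E[X | X <= VaR_alpha], i.e., the expected return conditional on falling below the alpha-quantile of the return distribution.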
Extension of the testthat package for unit testing the empirical distributions of estimators, with functions for diagnosing their finite-sample performance.
Three semi-parametric methods for detection of outliers in environmental data based on kernel regression and subsequent analysis of smoothing residuals. The first method (Campulova, Michalek, Mikuska and Bokal (2018) <DOI: 10.1002/cem.2997>) analyzes the residuals using changepoint analysis, the second method is based on control charts (Campulova, Veselik and Michalek (2017) <DOI: 10.1016/j.apr.2017.01.004>), and the third method (Holesovsky, Campulova and Michalek (2018) <DOI: 10.1016/j.apr.2017.06.005>) analyzes the residuals using extreme value theory.
Routines for performing empirical calibration of observational study estimates. By using a set of negative control hypotheses we can estimate the empirical null distribution of a particular observational study setup. This empirical null distribution can be used to compute a calibrated p-value, which reflects the probability of observing an estimated effect size when the null hypothesis is true, taking both random and systematic error into account. A similar approach can be used to calibrate confidence intervals, using both negative and positive controls. For more details, see Schuemie et al. (2013) <doi:10.1002/sim.5925> and Schuemie et al. (2018) <doi:10.1073/pnas.1708282114>.
This package provides a consistent set of functions for enriching and analyzing sovereign-level economic data. Economists, data scientists, and financial professionals can use the package to add standardized identifiers, demographic and macroeconomic indicators, and derived metrics such as gross domestic product per capita or government expenditure shares.
This package provides functions for eleven procedures for determining the number of factors, including functions for parallel analysis and the minimum average partial test. There are also functions for conducting principal components analysis, principal axis factor analysis, maximum likelihood factor analysis, image factor analysis, and extension factor analysis, all of which can take raw data or correlation matrices as input, with options for conducting the analyses using Pearson, Kendall, Spearman, gamma, or polychoric correlations. Varimax, promax, and Procrustes rotations can be performed. Additional functions focus on the factorability of a correlation matrix, the congruence between factors from different datasets, the assessment of local independence, the assessment of factor solution complexity, and internal consistency. Auerswald & Moshagen (2019, ISSN:1939-1463); Field, Miles, & Field (2012, ISBN:978-1-4462-0045-2); Mulaik (2010, ISBN:978-1-4200-9981-2); O'Connor (2000, <doi:10.3758/bf03200807>); O'Connor (2001, ISSN:0146-6216).
This package provides a flexible framework for calculating Elo ratings and resulting rankings for any two-team-per-matchup system (chess, sports leagues, Go, etc.). This implementation can evaluate a variety of matchups, Elo rating updates, and win probabilities, all based on the basic Elo rating system. It also includes methods to benchmark performance, including logistic regression and Markov chain models.
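A minimal sketch of the basic Elo update such systems build on (the function here is illustrative, not this package's API):

    # Expected score of player A against player B, and the rating update
    # with K-factor k; s is the observed score (1 win, 0.5 draw, 0 loss).
    elo_update <- function(r_a, r_b, s, k = 20) {
      e_a <- 1 / (1 + 10^((r_b - r_a) / 400))
      r_a + k * (s - e_a)
    }
    elo_update(1500, 1600, s = 1)  # A upsets B and gains rating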
The purpose of this library is to compute the optimal charging cost function for an electric vehicle (EV). It is well known that the charging function of an EV is a concave function that can be approximated by a piecewise linear function: the higher the state of charge, the slower the charging process. The other important input is the function giving the electricity price, which is usually stepwise, since the price of electricity depends on the time of day. The problem of charging an EV to a certain state of charge is therefore not trivial. This library implements an algorithm to compute the optimal charging cost function, that is, it plots, for a given state of charge r (between 0 and 1), the minimum cost to be paid to charge the EV to that state of charge. The details of the algorithm are described in González-Rodríguez et al. (2023) <https://inria.hal.science/hal-04362876v1>.
This package performs frequentist inference for the extremal index of a stationary time series, a measure of the degree of clustering of extremes whose reciprocal gives the limiting mean cluster size. Two types of methodology are used. One type is based on a model that relates the distribution of block maxima to the marginal distribution of the series and leads to the semiparametric maxima estimators described in Northrop (2015) <doi:10.1007/s10687-015-0221-5> and Berghaus and Bucher (2018) <doi:10.1214/17-AOS1621>. Sliding block maxima are used to increase the precision of estimation. A graphical block size diagnostic is provided. The other type of methodology uses a model for the distribution of threshold inter-exceedance times (Ferro and Segers (2003) <doi:10.1111/1467-9868.00401>). Three versions of this approach are provided: the iterated weighted least squares approach of Suveges (2007) <doi:10.1007/s10687-007-0034-2>, the K-gaps model of Suveges and Davison (2010) <doi:10.1214/09-AOAS292>, and a similar approach of Holesovsky and Fusek (2020) <doi:10.1007/s10687-020-00374-3> that we refer to as D-gaps. For the K-gaps and D-gaps models, the package allows missing values in the data, can accommodate independent subsets of data, such as monthly or seasonal time series from different years, and can incorporate information from right-censored inter-exceedance times. Graphical diagnostics for the threshold level and the respective tuning parameters K and D are provided.
This package provides a set of functions for organising and analysing datasets from experiments run using EyeLink eye-trackers. Organising functions help to clean and prepare eye-tracking datasets for analysis, and mark up key events such as display changes and responses made by participants. Analysing functions help to compute means for a wide range of standard measures (such as mean fixation durations), which can then be fed into the appropriate statistical analyses and graphing packages as necessary.
Parametric proportional hazards fitting with left truncation and right censoring for common families of distributions, piecewise constant hazards, and discrete models. Parametric accelerated failure time models for left truncated and right censored data. Proportional hazards models for tabular and register data. Sampling of risk sets in Cox regression, selections in the Lexis diagram, bootstrapping. Broström (2022) <doi:10.1201/9780429503764>.
The summation notation suggested by Einstein (1916) <doi:10.1002/andp.19163540702> is a concise mathematical notation that implicitly sums over repeated indices of n-dimensional arrays. Many ordinary matrix operations (e.g. transpose, matrix multiplication, scalar product, diag(), trace, etc.) can be written in Einstein notation. The notation is particularly convenient for expressing operations on arrays with more than two dimensions because the respective operators ('tensor products') might not have a standardized name.
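For instance, matrix multiplication in Einstein notation reads C_ik = A_ij B_jk, with an implicit sum over the repeated index j; a quick base R check of that contraction:

    A <- matrix(1:6, 2, 3); B <- matrix(1:12, 3, 4)
    # C_ik = A_ij B_jk: sum over the repeated index j.
    C <- matrix(0, nrow(A), ncol(B))
    for (i in seq_len(nrow(A)))
      for (k in seq_len(ncol(B)))
        C[i, k] <- sum(A[i, ] * B[, k])
    all.equal(C, A %*% B)  # TRUE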
An implementation of sequential testing that uses evidence ratios computed from the weights of a set of models. These weights correspond either to Akaike weights, computed from the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) following the recommendations of Burnham & Anderson (2004, <doi:10.1177/0049124104268644>), or to pseudo-BMA weights, computed from the WAIC or the LOO-IC of models fitted with brms following Yao et al. (2017, <arXiv:1704.02030v3>).
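As a hedged sketch (illustrative AIC values, not this package's API), Akaike weights and the resulting evidence ratio follow directly from AIC differences:

    # Akaike weights from a vector of AIC values.
    aic <- c(m1 = 102.1, m2 = 104.7, m3 = 110.3)  # made-up values
    delta <- aic - min(aic)
    w <- exp(-delta / 2) / sum(exp(-delta / 2))
    w
    # Evidence ratio of the best model over the runner-up.
    sw <- sort(w, decreasing = TRUE)
    sw[1] / sw[2]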
Speed up common tasks, particularly logical or relational comparisons and routine follow-up tasks such as finding indices and subsetting. Inspired by mathematics, where something like 3 < x < 6 is a standard, elegant, and clear way to assert that x is both greater than 3 and less than 6 (see for example <https://en.wikipedia.org/wiki/Relational_operator>), a chaining operator is implemented. The chaining operator, %c%, allows multiple relational operations, given in quotes on the right-hand side, to be applied to the same object on the left-hand side. The %e% operator allows something like set-builder notation (see for example <https://en.wikipedia.org/wiki/Set-builder_notation>) to be used on the right-hand side. All operators have built-in prefixes defined for all, subset, and which to reduce the amount of code needed for common tasks, such as returning the values that are true.
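For reference, the base R spelling of the chained comparison 3 < x < 6 that these operators abbreviate (this sketch does not use the package's own %c% syntax):

    x <- c(1, 4, 5, 9)
    # Both comparisons written out and combined by hand.
    x[x > 3 & x < 6]      # the values strictly between 3 and 6
    which(x > 3 & x < 6)  # their indices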
Computes empirical Bayes estimators and confidence intervals in a normal means model. The intervals are robust in the sense that they achieve correct coverage regardless of the distribution of the means. If the means are treated as fixed, the intervals have an average coverage guarantee. The implementation is based on Armstrong, Kolesár and Plagborg-Møller (2020) <arXiv:2004.03448>.
We provide the main R functions to compute the posterior interval for the noncentrality parameter of the chi-squared distribution. The skewness estimate of the posterior distribution is also available to improve the coverage rate of posterior intervals. Details can be found in Du and Hu (2020) <doi:10.1080/01621459.2020.1777137>.
Fit models of modularity to morphological landmarks. Perform model selection on results. Fit models with a single within-module correlation or with separate within-module correlations fitted to each module.
Estimate prior variable weights for Bayesian Additive Regression Trees (BART). These weights correspond to the probabilities of the variables being selected in the splitting rules of the sum-of-trees. Weights are estimated using empirical Bayes and external information on the explanatory variables (co-data). BART models are fitted using the dbarts R package. See Goedhart et al. (2023) <doi:10.1002/sim.70004> for details.
This package provides tools for calculating evolvability parameters from estimated G-matrices as defined in Hansen and Houle (2008) <doi:10.1111/j.1420-9101.2008.01573.x> and fits phylogenetic comparative models that link the rate of evolution of a trait to the state of another evolving trait (see Hansen et al. 2021 Systematic Biology <doi:10.1093/sysbio/syab079>). The package was released with Bolstad et al. (2014) <doi:10.1098/rstb.2013.0255>, which contains some examples of use.
DNA methylation (6mA) is a major epigenetic process by which gene expression is altered without changing the DNA sequence. Predicting these sites in vitro is laborious, time-consuming, and costly. The EpiSemble package is an in-silico pipeline for predicting DNA sequences containing 6mA sites. It uses an ensemble-based machine learning approach combining Support Vector Machine (SVM), Random Forest (RF), and Gradient Boosting to predict sequences with 6mA sites in them. This package was developed using the concept of Chen et al. (2019) <doi:10.1093/bioinformatics/btz015>.
This package provides an R interface to the Evolution API <https://evoapicloud.com>, enabling sending and receiving WhatsApp messages directly from R. Functions include sending text, images, documents, stickers, geographic locations, and interactive messages (lists). Also includes webhook parsing utilities and channel health checks.
Prints out information about the R working environment (system, R version, loaded and attached packages and versions) from a single function "env_doc()". Optionally adds information on the git repository, tags, commits, and remotes (if available).
Functions, data sets, and shiny apps for "Epidemics: Models and Data in R" by Ottar N. Bjornstad (ISBN 978-3-319-97487-3) <https://www.springer.com/gp/book/9783319974866>. The package contains functions to study the S(E)IR model; spatial and age-structured SIR models; time-series SIR and chain-binomial stochastic models; catalytic disease models; coupled map lattice models of spatial transmission; and network models for social spread of infection. The package is also an advanced quantitative companion to the Coursera Epidemics Massive Open Online Course <https://www.coursera.org/learn/epidemics>.
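As a hedged sketch of the kind of model the book covers, the classic SIR system can be integrated with the deSolve package (not part of this package's API):

    library(deSolve)
    # Classic SIR: dS = -beta*S*I, dI = beta*S*I - gamma*I, dR = gamma*I.
    sir <- function(t, y, parms) {
      with(as.list(c(y, parms)), {
        dS <- -beta * S * I
        dI <- beta * S * I - gamma * I
        dR <- gamma * I
        list(c(dS, dI, dR))
      })
    }
    out <- ode(y = c(S = 0.999, I = 0.001, R = 0),
               times = seq(0, 100, by = 1),
               func = sir, parms = c(beta = 0.5, gamma = 0.1))
    head(out)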