Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
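For example, a minimal R sketch of calling this endpoint with the httr package (the base URL placeholder and the shape of the parsed response are assumptions; substitute this site's actual address):

library(httr)
# Query the package search API: first page of results for "hello", 20 items per page.
# Replace <host> with this site's address.
resp <- GET("https://<host>/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
packages <- content(resp, as = "parsed")  # package entries for this page
headers(resp)                             # pagination details live in the response headers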
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analyzing censored variables usually requires the use of optimization algorithms. This package provides an alternative algebraic approach to the task of determining the expected value of a random censored variable with a known censoring point. Likewise, this approach allows for the determination of the censoring point if the expected value is known. These results are derived under the assumption that the variable follows an Epanechnikov kernel distribution with known mean and range prior to censoring. Statistical functions related to the uncensored Epanechnikov distribution are also provided by this package.
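For reference (a standard fact, not taken from this package's description): the Epanechnikov kernel density on [-1, 1] is f(u) = (3/4)(1 - u^2); the package works with this distribution rescaled to the stated mean and range.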
The purpose of this package is to generate trees and validate unverified code. Trees are made by parsing a statement into a verification tree data structure, which makes it easy to port the statement to another language. Safe statement evaluation is done by executing the verification trees.
This package contains a set of clustering methods and evaluation metrics to select the best number of clusters based on clustering stability. The methodology is described in two references: Fahimeh Nezhadmoghadam and Jose Tamez-Pena (2021) <doi:10.1016/j.compbiomed.2021.104753>, and Fahimeh Nezhadmoghadam et al. (2021) <doi:10.2174/1567205018666210831145825>.
This package provides a simple and trustworthy methodology for the analysis of extreme values and multiple threshold tests for a generalized Pareto distribution, together with an automatic threshold selection algorithm. See del Castillo, J., Daoudi, J. and Lockhart, R. (2014) <doi:10.1111/sjos.12037>.
This package implements two estimations related to the foundations of info-metrics applied to ecological inference. These methodologies address the lack of disaggregated data and provide an approach to obtaining disaggregated territorial-level data. For more details, see the following references: Fernández-Vázquez, E., Díaz-Dapena, A., Rubiera-Morollón, F. et al. (2020) "Spatial Disaggregation of Social Indicators: An Info-Metrics Approach." <doi:10.1007/s11205-020-02455-z>. Díaz-Dapena, A., Fernández-Vázquez, E., Rubiera-Morollón, F., & Vinuela, A. (2021) "Mapping poverty at the local level in Europe: A consistent spatial disaggregation of the AROPE indicator for France, Spain, Portugal and the United Kingdom." <doi:10.1111/rsp3.12379>.
The production of certified reference materials (CRMs) requires various statistical tests depending on the task and recorded data to ensure that reported values of CRMs are appropriate. Often these tests are performed according to the procedures described in ISO GUIDE 35:2017. The eCerto package contains a Shiny app which provides functionality to load, process, report and back up data recorded during CRM production and facilitates following the recommended procedures. It is described in Lisec et al (2023) <doi:10.1007/s00216-023-05099-3> and can also be accessed online <https://apps.bam.de/shn00/eCerto/> without package installation.
Programmatic interface to the European Centre for Medium-Range Weather Forecasts dataset web services (ECMWF; <https://www.ecmwf.int/>) and Copernicus's Data Stores. Allows for easy downloads of weather forecasts and climate reanalysis data in R. Data stores covered include the Climate Data Store (CDS; <https://cds.climate.copernicus.eu>), Atmosphere Data Store (ADS; <https://ads.atmosphere.copernicus.eu>) and Early Warning Data Store (CEMS; <https://ewds.climate.copernicus.eu>).
Evaluates the performance of binary classifiers. Computes confusion measures (TP, TN, FP, FN), derived measures (TPR, FDR, accuracy, F1, DOR, ...), and area under the curve. Outputs are well suited for nested dataframes.
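For reference, the standard definitions of the derived measures listed above (not specific to this package): TPR = TP / (TP + FN), FDR = FP / (FP + TP), accuracy = (TP + TN) / (TP + TN + FP + FN), F1 = 2·TP / (2·TP + FP + FN), and DOR = (TP·TN) / (FP·FN).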
Computes the empirical likelihood ratio (the -2 log-likelihood-ratio, or Wilks, statistic) based on current status data for hypotheses about the mean, a probability, or a weighted cumulative hazard.
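For reference, the Wilks statistic referred to above is -2 log(likelihood ratio), i.e. -2 times the difference between the log-likelihood under the null hypothesis and the maximized log-likelihood; under the null hypothesis it is asymptotically chi-squared distributed (a standard result, not specific to this package).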
Automatic generation of quizzes or individual questions for learnr tutorials based on R/exams exercises.
Fits easy-to-interpret Gaussian process models to datasets and predicts responses for new inputs. The input variables of the datasets can be quantitative, qualitative/categorical, or mixed. The output variable of the datasets is a scalar (quantitative). The optimization of the likelihood function can be chosen by the users (see the documentation of EzGP_fit()). The modeling method is published in "EzGP: Easy-to-Interpret Gaussian Process Models for Computer Experiments with Both Quantitative and Qualitative Factors" by Qian Xiao, Abhyuday Mandal, C. Devon Lin, and Xinwei Deng (2022) <doi:10.1137/19M1288462>.
Tailored explicitly for Experience Sampling Method (ESM) data, it contains a suite of functions designed to simplify preprocessing steps and create subsequent reporting. It empowers users to extract critical insights during preprocessing, conducts thorough data quality assessments (e.g., design and sampling scheme checks, compliance rate, careless responses), and generates visualizations and concise summary tables tailored specifically for ESM data. Additionally, it streamlines the creation of informative and interactive preprocessing reports, enabling researchers to transparently share their dataset preprocessing methodologies. Finally, it is part of a larger ecosystem that includes a framework and a web gallery (<https://preprocess.esmtools.com/>).
An implementation of the algorithm described in "Efficient Large-Scale Internet Media Selection Optimization for Online Display Advertising" by Paulson, Luo, and James (Journal of Marketing Research 2018; see URL below for journal text/citation and <http://faculty.marshall.usc.edu/gareth-james/Research/ELMSO.pdf> for a full-text version of the paper). The algorithm here is designed to allocate budget across a set of online advertising opportunities using a coordinate-descent approach, but it can be used in any resource-allocation problem with a matrix of visitation (in the case of the paper, website page-views) and channels (in the paper, websites). The package contains allocation functions both in the presence of bidding, when allocation is dependent on channel-specific cost curves, and when advertising costs are fixed at each channel.
Two classifiers for open set recognition and novelty detection based on extreme value theory. The first classifier is based on the generalized Pareto distribution (GPD) and the second classifier is based on the generalized extreme value (GEV) distribution. For details, see Vignotto, E., & Engelke, S. (2018) <arXiv:1808.09902>.
Saturation of ionic substances in urine is calculated based on sodium, potassium, calcium, magnesium, ammonia, chloride, phosphate, sulfate, oxalate, citrate, pH, and urate. This program is intended for research use only. The code within is translated to R from the EQUIL2 Visual Basic code based on Werness et al. (1985) "EQUIL2: a BASIC computer program for the calculation of urinary saturation" <doi:10.1016/s0022-5347(17)47703-2>. The Visual Basic code was kindly provided by Dr. John Lieske of the Mayo Clinic.
This package provides functions to perform exploratory factor analysis (EFA) procedures and compare their solutions. The goal is to provide state-of-the-art factor retention methods and a high degree of flexibility in the EFA procedures. This way, for example, implementations from R psych and SPSS can be compared. Moreover, functions for Schmid-Leiman transformation and the computation of omegas are provided. To speed up the analyses, some of the iterative procedures, like principal axis factoring (PAF), are implemented in C++.
Estimate a total causal effect from observational data under linearity and causal sufficiency. The observational data is supposed to be generated from a linear structural equation model (SEM) with independent and additive noise. The underlying causal DAG associated with the SEM is required to be known up to a maximally oriented partially directed graph (MPDAG), which is a general class of graphs consisting of both directed and undirected edges, including CPDAGs (i.e., essential graphs) and DAGs. Such graphs are usually obtained with structure learning algorithms with added background knowledge. The program is able to estimate every identified effect, including single and multiple treatment variables. Moreover, the resulting estimate has the minimal asymptotic covariance (and hence shortest confidence intervals) among all estimators that are based on the sample covariance.
Estimates power by simulation for multivariate abundance data to be used for sample size estimates. Multivariate equivalence testing by simulation from a Gaussian copula model. The package also provides functions for parameterising multivariate effect sizes and simulating multivariate abundance data jointly. The discrete Gaussian copula approach is described in Popovic et al. (2018) <doi:10.1016/j.jmva.2017.12.002>.
Correlation chart of two sets of data (x and y), using quantiles. Visualizes the effect of a factor.
Fit the hierarchical and non-hierarchical Bayesian measurement models proposed by Bullock, Imai, and Shapiro (2011) <DOI:10.1093/pan/mpr031> to analyze endorsement experiments. Endorsement experiments are a survey methodology for eliciting truthful responses to sensitive questions. This methodology is helpful when measuring support for socially sensitive political actors such as militant groups. The model is fitted with a Markov chain Monte Carlo algorithm and produces the output containing draws from the posterior distribution.
This package provides a non-parametric framework based on the estimation statistics principle. Its main purpose is to infer orders of empirical distributions from different categories based on the probability of finding a value in one distribution that is greater than the expectation of another distribution. Given a set of ordered pairs of real-category values, the framework is capable of 1) inferring orders of domination of categories and representing the orders in the form of a graph; 2) estimating the magnitude of difference between a pair of categories in the form of mean-difference confidence intervals; and 3) visualizing domination orders and magnitudes of difference between categories. The methodology of this package is published in Chainarong Amornbunchornvej, Navaporn Surasvadi, Anon Plangprasopchok, and Suttipong Thajchayapong (2020) <doi:10.1016/j.heliyon.2020.e05435>.
Researchers often use the bootstrap to understand a sample drawn from a population with unknown distribution. The exact bootstrap method is a practical tool for exploring the distribution of small sample size data. For a sample of size n, the exact bootstrap method generates the entire space of n to the power of n resamples and calculates all realizations of the selected statistic. The exactamente package includes functions for implementing two bootstrap methods, the exact bootstrap and the regular bootstrap. The exact_bootstrap() function applies the exact bootstrap method following methodologies outlined in Kisielinska (2013) <doi:10.1007/s00180-012-0350-0>. The regular_bootstrap() function offers a more traditional bootstrap approach, where users can determine the number of resamples. The e_vs_r() function allows users to directly compare results from these bootstrap methods. To augment user experience, exactamente includes the function exactamente_app() which launches an interactive shiny web application. This application facilitates exploration and comparison of the bootstrap methods, providing options for modifying various parameters and visualizing results.
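A minimal R sketch of how the functions named above might be called (the argument forms are assumptions; see the package documentation for the actual signatures):

library(exactamente)
x <- c(2.1, 3.4, 1.8, 5.0, 2.7)   # keep n small: the exact method enumerates n^n resamples
eb <- exact_bootstrap(x)          # exhaustive n^n resamples (Kisielinska 2013)
rb <- regular_bootstrap(x)        # conventional resampling with a chosen number of resamples
e_vs_r(x)                         # side-by-side comparison of the two methods
# exactamente_app()               # interactive Shiny exploration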
Tests the equality of two covariance matrices, as used in the paper "Two sample tests for high dimensional covariance matrices" by Li and Chen (2012) <arXiv:1206.0917>.
This package provides API access to data from the U.S. Energy Information Administration ('EIA') <https://www.eia.gov/>. Use of the EIA's API and this package requires a free API key obtainable at <https://www.eia.gov/opendata/register.php>. This package includes functions for searching the EIA data directory and returning time series and geoset time series datasets. Datasets returned by these functions are provided by default in a tidy format, or alternatively, in more raw formats. It also offers helper functions for working with EIA date strings and time formats and for inspecting different summaries of series metadata. The package also provides control over API key storage and caching of API request results.