Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
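For example, the same request could be issued from R with the 'httr' package; this is a minimal sketch, and the base URL below is a placeholder for this site's host:

    library(httr)
    resp <- GET("https://example.org/api/packages",          # placeholder host
                query = list(search = "hello", page = 1, limit = 20))
    results <- content(resp, as = "parsed")   # the list of matching packages
    headers(resp)                             # pagination information lives here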
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This is my collection of R Markdown templates, mostly for compilation to PDF. They are useful for all things academic and professional, if you are using R Markdown for things like your CV or your articles and manuscripts.
Sample size calculation to detect dynamic treatment regime (DTR) effects based on change in clinical attachment level (CAL) outcomes from a non-surgical chronic periodontitis treatment study. The experiment is performed under a Sequential Multiple Assignment Randomized Trial (SMART) design. The clustered tooth (sub-unit) level CAL outcomes are skewed, spatially referenced, and non-randomly missing. The implemented algorithm is available in Xu et al. (2019+) <arXiv:1902.09386>.
Working efficiently with structured query language ('SQL') scripts often requires the creation of temporary tables, and few clean and simple R SQL execution approaches allow you to complete this kind of work from within the R environment. This package seeks to give SQL implementations in R a little love by providing functions that let you run complex SQL scripts within a typical R workflow.
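As an illustration of the kind of workflow such a package streamlines, the same steps can be performed by hand with 'DBI' and 'RSQLite' (a generic sketch, not this package's own functions):

    library(DBI)
    con <- dbConnect(RSQLite::SQLite(), ":memory:")
    dbWriteTable(con, "raw", data.frame(id = 1:3, value = c(10, 20, 30)))
    # one script step: build a temporary table, then query it
    dbExecute(con, "CREATE TEMP TABLE filtered AS SELECT * FROM raw WHERE value > 15")
    dbGetQuery(con, "SELECT COUNT(*) AS n FROM filtered")
    dbDisconnect(con)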
This package provides a set of tools to assist statistical programmers in validating Study Data Tabulation Model (SDTM) domain data sets. Statistical programmers are required to validate that an SDTM domain data set has been programmed correctly, per the SDTM Implementation Guide (SDTMIG) by CDISC (<https://www.cdisc.org/standards/foundational/sdtmig>), the study specification, and the study protocol, using a process called double programming. Double programming involves two different programmers independently converting the raw electronic data capture (EDC) data into an SDTM domain data table and comparing their results to ensure accurate standardization of the data. One of these attempts is termed 'production' and the other 'validation'. Generally, production runs are the official programs for submittals and these are written in 'SAS'. Validation runs can be programmed in another language, in this case 'R'.
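A minimal sketch of the comparison step in double programming, using only base R and hypothetical data frames (not this package's validation tools):

    # production and validation versions of the same (hypothetical) SDTM domain
    dm_prod <- data.frame(USUBJID = c("01-001", "01-002"), AGE = c(34L, 51L))
    dm_val  <- data.frame(USUBJID = c("01-001", "01-002"), AGE = c(34L, 51L))
    all.equal(dm_prod, dm_val)   # TRUE when the two independent attempts agree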
Proxy forward modelling for sediment-archived climate proxies such as Mg/Ca, d18O or alkenones. The user provides a hypothesised "true" past climate, such as output from a climate model, and details of the sedimentation rate and sampling scheme of a sediment core. Sedproxy returns simulated proxy records. Implements the methods described in Dolman and Laepple (2018) <doi:10.5194/cp-14-1851-2018>.
Convenience functions to connect R with the Spotify application programming interface ('API'). First, the package helps with setting up the OAuth 2.0 authentication flow. The default output of the get_*() functions is tidy, but optionally the functions can return the raw response from the API as well. The search_*() and get_*() functions can be combined. See the vignette for more information and examples, and the official Spotify for Developers website <https://developer.spotify.com/documentation/web-api/> for information about the Web API.
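A hedged sketch of how such calls might be combined; the function and column names below (search_artists(), get_artist(), artist_id) are illustrative placeholders for the package's search_*() and get_*() families, not confirmed exports:

    # hypothetical names, shown only to illustrate chaining search_*() and get_*()
    library(dplyr)
    search_artists("Radiohead") |>   # hypothetical: tidy search results
      slice(1) |>
      pull(artist_id) |>             # hypothetical column name
      get_artist()                   # hypothetical: details for one artist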
Create a skeleton shiny application with create_template() that is reproducible, can be saved, and meets academic standards for attribution. Forked from 'wallace'. Code is split into modules that are loaded and linked together automatically, each of which calls a single function. Guidance pages explain the modules to users, and flexible logging informs them of any errors. Options enable asynchronous operations, viewing of source code, interactive maps and data tables. Use it to create complex analytical applications that follow best practices in open science and software development. Includes functions for automating repetitive development tasks and an example application, launched with run_shinyscholar(), which requires install.packages("shinyscholar", dependencies = TRUE). A guide to developing applications can be found on the package website.
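A short getting-started sketch based only on the functions named above; create_template() takes arguments describing the new application, which are omitted here (see its help page):

    install.packages("shinyscholar", dependencies = TRUE)
    library(shinyscholar)
    run_shinyscholar()        # launch the bundled example application
    # create_template(...)    # scaffold a new modular application; see ?create_template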
Sampling procedures from the book "Stichproben - Methoden und praktische Umsetzung mit R" by Goeran Kauermann and Helmut Kuechenhoff (2010).
Implements the K-nearest neighbor classifier, weighted nearest neighbor classifier, bagged nearest neighbor classifier, optimal weighted nearest neighbor classifier and stabilized nearest neighbor classifier, and performs model selection for them via 5-fold cross-validation. This package also provides functions for computing the classification error and classification instability of a classification procedure.
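To illustrate what model selection over the number of neighbours looks like, here is a generic 5-fold cross-validation sketch using class::knn() rather than this package's functions:

    library(class)
    set.seed(1)
    x <- as.matrix(iris[, 1:4]); y <- iris$Species
    folds <- sample(rep(1:5, length.out = nrow(x)))
    cv_error <- sapply(c(1, 3, 5, 7, 9), function(k) {
      mean(sapply(1:5, function(f) {
        pred <- knn(x[folds != f, ], x[folds == f, ], y[folds != f], k = k)
        mean(pred != y[folds == f])   # misclassification rate on the held-out fold
      }))
    })
    cv_error   # pick the k with the smallest cross-validated error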
This package provides methods for sensory discrimination protocols: duotrio, tetrad, triangle, 2-AFC, 3-AFC, A-not A, same-different, 2-AC and degree-of-difference. It enables the calculation of d-primes, standard errors of d-primes, sample size and power computations, and comparisons of different d-primes. Methods for profile likelihood confidence intervals and plotting are included. Most methods are described in Brockhoff, P.B. and Christensen, R.H.B. (2010) <doi:10.1016/j.foodqual.2009.04.003>.
This package implements the s-values proposed by Ed Leamer. It provides a context-minimal approach to sensitivity analysis, using extreme bounds to assess the sturdiness of regression coefficients.
Various functions for creating spherical coordinate system plots via extensions to rgl.
Automatically replaces "misspelled" words in a character vector based on their string distance from a list of words sorted by their frequency in a corpus. The default word list provided in the package comes from the Corpus of Contemporary American English. Uses the Jaro-Winkler distance metric for string similarity as implemented in van der Loo (2014) <doi:10.32614/RJ-2014-011>. The word frequency data is derived from Davies (2008-) "The Corpus of Contemporary American English (COCA)" <https://www.english-corpora.org/coca/>.
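The underlying idea can be illustrated with the 'stringdist' package, which implements the Jaro-Winkler metric described in van der Loo (2014); this is a generic sketch, not this package's interface:

    library(stringdist)
    words <- c("the", "hello", "world", "help")      # hypothetical frequency-sorted word list
    d <- stringdist("helo", words, method = "jw")    # Jaro-Winkler distances
    words[which.min(d)]                              # closest candidate replacement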
This package provides a set of functions implementing the iterative SpiceFP approach. It involves transforming functional predictors into several candidate explanatory matrices (based on contingency tables), with which edge matrices encoding contiguity constraints are associated. A generalized fused lasso regression is performed at each iteration in order to identify the best candidate matrix, the best class intervals and the related coefficients. The approach stops when the maximal number of iterations is reached or when the retained coefficients are all zero. Supplementary functions allow retrieval of the coefficients of any candidate matrix, or the mean of the coefficients over several candidates. The methods in this package are described in Girault Gnanguenon Guesse, Patrice Loisel, Bénedicte Fontez, Thierry Simonneau, Nadine Hilgert (2021) "An exploratory penalized regression to identify combined effects of functional variables - Application to agri-environmental issues" <https://hal.archives-ouvertes.fr/hal-03298977>.
Balancing computational and statistical efficiency, subsampling techniques offer a practical solution for large-scale data analysis. Subsampling methods enhance statistical modeling for massive datasets by efficiently drawing representative subsamples from the full dataset based on tailored sampling probabilities. These probabilities are optimized for specific goals, such as minimizing the variance of coefficient estimates or reducing prediction error.
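A minimal base-R sketch of the general idea; the sampling probabilities below are an arbitrary illustration, whereas the package derives them from a specific optimality criterion:

    set.seed(1)
    n <- 1e6
    x <- rnorm(n); y <- 2 * x + rnorm(n)
    p <- abs(x) / sum(abs(x))                          # illustrative, leverage-like probabilities
    idx <- sample(n, size = 1000, prob = p)
    fit <- lm(y[idx] ~ x[idx], weights = 1 / p[idx])   # inverse-probability-weighted fit
    coef(fit)                                          # approximates the full-data estimate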
Simulates regression models, including both simple regression and generalized linear mixed models with up to three levels of nesting. Flexible power simulations, allowing the specification of missing data, unbalanced designs, and different random error distributions, are built into the package.
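To illustrate the kind of computation the package automates, here is a generic power simulation for a simple regression slope in base R (not this package's interface):

    set.seed(42)
    power <- mean(replicate(1000, {
      x <- rnorm(50)
      y <- 0.4 * x + rnorm(50)   # assumed slope of 0.4 with n = 50
      summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"] < 0.05
    }))
    power   # proportion of simulated datasets in which the slope is detected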
Closed-loop simulation tools are provided for the MSEtool operating model to inform data-rich fisheries. SAMtool provides a conditioning model, assessment models of varying complexity with standardized reporting, model-based management procedures, and diagnostic tools for evaluating assessments within closed-loop simulation.
Generate objects that simulate survival times. Random values for the distributions are generated using the method described by Bender (2003) <https://epub.ub.uni-muenchen.de/id/eprint/1716> and Leemis (1987) in Operations Research, 35(6), 892–894.
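The cited approach inverts the cumulative hazard function; a minimal base-R sketch for a Weibull baseline hazard and a single binary covariate (illustrative, not this package's API):

    set.seed(1)
    n <- 100
    x <- rbinom(n, 1, 0.5)                     # binary treatment indicator
    beta <- 0.7; lambda <- 0.1; gamma <- 1.5   # assumed effect and Weibull parameters
    u <- runif(n)
    time <- (-log(u) / (lambda * exp(x * beta)))^(1 / gamma)   # simulated survival times
    head(time)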
An interface to explore trends in Twitter data using the Storywrangler Application Programming Interface (API), which can be found here: <https://github.com/janeadams/storywrangler>.
This package implements a three-dimensional stochastic model of cancer growth and mutation similar to the one described in Waclaw et al. (2015) <doi:10.1038/nature14971>. Allows for interactive 3D visualizations of the simulated tumor. Provides a comprehensive summary of the spatial distribution of mutants within the tumor. Contains functions which create synthetic sequencing datasets from the generated tumor.
Many packages use htmlwidgets <https://CRAN.R-project.org/package=htmlwidgets> for interactive plotting of spatial data. This package provides functions for converting R objects, such as simple features, into structures suitable for use in htmlwidgets mapping libraries.
This package implements an approach aimed at assessing the accuracy and effectiveness of raw scores obtained in scales that contain locally dependent items. The program uses as input the calibration (structural) item estimates obtained from fitting extended unidimensional factor-analytic solutions in which the existing local dependencies are included. Measures of reliability (Omega) and information are proposed at three levels: (a) total score, (b) bivariate-doublet, and (c) item-by-item deletion, and are compared to those that would be obtained if all the items had been locally independent. All the implemented procedures can be obtained from: (a) linear factor-analytic solutions in which the item scores are treated as approximately continuous, and (b) non-linear solutions in which the item scores are treated as ordered-categorical. A detailed guide can be obtained at the following url.
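For reference, the classical omega-total that these measures are compared against can be computed from standardized loadings and uniquenesses under local independence (a generic sketch, not this package's functions):

    lambda <- c(0.7, 0.6, 0.8, 0.5)   # hypothetical standardized factor loadings
    psi <- 1 - lambda^2               # uniquenesses assuming local independence
    omega <- sum(lambda)^2 / (sum(lambda)^2 + sum(psi))
    omega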
In practice, it is difficult to determine the number of decomposition modes, K, for Variational Mode Decomposition (VMD). To overcome this issue, this study offers Spearman Variational Mode Decomposition (SVMD), a method that uses the Spearman correlation coefficient to choose the mode number. Unlike the Pearson correlation coefficient, which attains a perfect value only when X and Y are linearly related, the Spearman correlation can be calculated without knowing the probability distributions of X and Y. The Spearman correlation coefficient, also called Spearman's rank correlation coefficient, is the Pearson correlation computed on the ranks of the data. As VMD decomposes a signal, the Spearman correlation coefficient between the reconstructed and original sequences rises as the mode number K increases. Once the signal has been fully decomposed, further increases in K cause the correlation to level off gradually. When the correlation reaches a specified threshold, VMD is considered to have adequately decomposed the signal. Numerous experiments showed that a threshold of 0.997 produces the best denoising effect, so the threshold is set at 0.997. This package was developed based on the concept of Yang et al. (2021) <doi:10.1016/j.aej.2021.01.055>.
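The stopping rule can be sketched schematically; vmd_reconstruct() below is a hypothetical placeholder for summing the first K VMD modes, and 0.997 is the threshold quoted above:

    choose_K <- function(signal, K_max = 10, threshold = 0.997) {
      for (K in seq_len(K_max)) {
        reconstructed <- vmd_reconstruct(signal, K)   # hypothetical: sum of the first K modes
        rho <- cor(signal, reconstructed, method = "spearman")
        if (rho >= threshold) return(K)               # stop once the correlation levels off
      }
      K_max
    }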
Extracts and summarizes metadata from data frames, including variable names, labels, types, and missing values. Computes compact descriptive statistics, frequency tables, and cross-tabulations to assist with efficient data exploration. Includes an interactive and exportable codebook generator for documenting variable metadata. Facilitates the identification of missing data patterns and structural issues in datasets. Designed to streamline initial data management and exploratory analysis workflows within R.
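A compact base-R sketch of the kind of metadata summary described, not the package's own functions:

    df <- data.frame(age = c(34, NA, 51), sex = c("f", "m", NA))
    data.frame(
      variable  = names(df),
      type      = vapply(df, function(v) class(v)[1], character(1)),
      n_missing = vapply(df, function(v) sum(is.na(v)), integer(1))
    )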