Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
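For example, a minimal sketch of calling this endpoint from R with the httr package (the host below is a placeholder; substitute this site's actual address):

library(httr)

resp <- GET("https://example.org/api/packages",   # placeholder host
            query = list(search = "hello", page = 1, limit = 20))
results <- content(resp, as = "parsed")   # parsed JSON body
headers(resp)                             # pagination details live here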
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
ETS stands for Error, Trend, and Seasonality, and it is a popular time series forecasting method. Wavelet decomposition can be used for denoising, compression, and feature extraction of signals; by removing the high-frequency components, it can remove noise from the data while preserving important features. A hybrid wavelet-ETS model has been developed for time series forecasting using the algorithm of Anjoy and Paul (2017) <DOI:10.1007/s00521-017-3289-9>.
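As a rough illustration of the hybrid idea (not necessarily this package's exact algorithm or API), one can decompose a series with MODWT from the wavelets package, forecast each sub-series with ets() from the forecast package, and sum the component forecasts:

library(wavelets)
library(forecast)

y <- as.numeric(AirPassengers)
# Additive multiresolution analysis: details D1..D3 plus smooth S3 sum to y.
dec <- mra(y, filter = "haar", n.levels = 3, method = "modwt")
parts <- c(lapply(dec@D, as.numeric), list(as.numeric(dec@S[[3]])))

# Fit ETS to each component and recombine the forecasts by summation.
h <- 12
fc <- sapply(parts, function(p) forecast(ets(ts(p, frequency = 12)), h = h)$mean)
hybrid <- rowSums(fc)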
Within-subject mediation analysis using structural equation modeling. Examine how changes in an outcome variable between two conditions are mediated through one or more variables. Supports within-subject mediation analysis using the lavaan package by Rosseel (2012) <doi:10.18637/jss.v048.i02>, and extends Monte Carlo confidence interval estimation to missing data scenarios using the semmcci package by Pesigan and Cheung (2023) <doi:10.3758/s13428-023-02114-4>.
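For flavor, a generic mediation model in plain lavaan syntax (this shows the underlying machinery the description refers to, not this package's within-subject API; df, x, m, and y are assumed illustrative names):

library(lavaan)

model <- "
  m ~ a * x             # predictor -> mediator
  y ~ b * m + cp * x    # mediator and direct effect -> outcome
  indirect := a * b     # defined indirect effect
"
fit <- sem(model, data = df)
summary(fit, ci = TRUE)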
Allows form managers to download entries from their respondents using the Wufoo JSON API (<https://www.wufoo.com>). Additionally, Wufoo reports, when public, can also be acquired programmatically. Note that building new forms within this package is not supported.
This package performs the Wilcoxon-Mann-Whitney test in the presence of missing data, with controlled Type I error regardless of the values of the missing data.
Client for the World Register of Marine Species (<https://www.marinespecies.org/>). Includes functions for each of the API methods, including searching for names by scientific name, date, and common name, searching using external identifiers, fetching synonyms, and fetching taxonomic children and taxonomic classifications.
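A brief sketch of typical calls (function names follow the package's documentation; verify against your installed version):

library(worrms)

id <- wm_name2id(name = "Rhincodon typus")  # AphiaID for the whale shark
wm_classification(id = id)                  # taxonomic classification
wm_synonyms(id = id)                        # known synonyms
wm_children(id = id)                        # direct taxonomic children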
It generates the roster of turns for an outlet that flows (water) 24x7, i.e. 168 hours per week, towards the area under command (the agricultural area to be irrigated). The area under command is owned by different individual farmers, and the outlet runs free of cost to irrigate it. The outlet's flow time therefore has to be divided according to the area owned by each farmer and the location of their land or farm. This roster is known as warabandi, and generating it in agricultural practice is a very tedious task; calculating the times by hand is error-prone. The warabandi package computes each farmer's share of flow time and generates a full publishable report for an outlet and all the farmers whose farms are to be irrigated, reducing the risk of error and making the roster more reproducible. For more details about the warabandi system, see Bandaragoda DJ (1995) <https://publications.iwmi.org/pdf/H_17571i.pdf>.
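The core arithmetic the package automates can be illustrated in a few lines of base R (illustrative data; the package additionally accounts for farm location, report generation, etc.):

farmers <- data.frame(name = c("A", "B", "C"),
                      area_ha = c(2.5, 4.0, 1.5))
total_hours <- 168   # one week of continuous flow
farmers$share_hours <- total_hours * farmers$area_ha / sum(farmers$area_ha)
farmers
#   name area_ha share_hours
# 1    A     2.5        52.5
# 2    B     4.0        84.0
# 3    C     1.5        31.5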
This package provides access to various word embedding methods (GloVe, fastText, and word2vec) for extracting word vectors through a unified framework, increasing reproducibility and correctness.
Additional options for making graphics in the context of analyzing high-throughput data are available here. This includes automatic segmenting of the current device (e.g., window) to accommodate multiple new plots, automatic checking for the optimal location of legends in plots, small histograms to insert as legends, histograms re-transforming axis labels to linear when plotting log2-transformed data, a violin-plot <doi:10.1080/00031305.1998.10480559> function for a wide variety of input formats, principal components analysis (PCA) <doi:10.1080/14786440109462720> with bag-plots <doi:10.1080/00031305.1999.10474494> to highlight and compare the center areas for groups of samples, generic MA-plots (differential- versus average-value plots) <doi:10.1093/nar/30.4.e15>, staggered count plots, and generation of mouse-over interactive HTML pages.
This package provides functions for accessing domains and a number of search engines.
Lets you temporarily execute an expression or a local block with a different root for here() from the here package. This is useful for sourcing code from other projects that expect the here() root to be their own project directory, as may be the case with git submodules, for example.
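Hypothetically, usage looks something like the following (with_here() is an assumed name for illustration only; consult the package's documentation for the actual API):

library(here)

here("data", "raw.csv")   # resolves against the main project root

# Temporarily re-root here() inside a git submodule while sourcing it:
with_here("modules/analysis", {   # with_here() is hypothetical
  source(here("R", "run.R"))      # resolves inside modules/analysis
})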
This is a collection of tools for conducting both basic and advanced statistical power analysis including correlation, proportion, t-test, one-way ANOVA, two-way ANOVA, linear regression, logistic regression, Poisson regression, mediation analysis, longitudinal data analysis, structural equation modeling and multilevel modeling. It also serves as the engine for conducting power analysis online at <https://webpower.psychstat.org>.
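A small example: WebPower's functions solve for whichever parameter is left unspecified, e.g. for a correlation test (argument names as in the package documentation; verify against your installed version):

library(WebPower)

wp.correlation(n = 50, r = 0.3, alpha = 0.05)  # solve for power
wp.correlation(r = 0.3, power = 0.8)           # solve for the required n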
This package implements Weighted-Average Least Squares model averaging for negative binomial regression models of Huynh (2024) <doi:10.48550/arXiv.2404.11324>, generalized linear models of De Luca, Magnus, Peracchi (2018) <doi:10.1016/j.jeconom.2017.12.007> and linear regression models of Magnus, Powell, Pruefer (2010) <doi:10.1016/j.jeconom.2009.07.004>, see also Magnus, De Luca (2016) <doi:10.1111/joes.12094>. Weighted-Average Least Squares for the linear regression model is based on the original MATLAB code by Magnus and De Luca <https://www.janmagnus.nl/items/WALS.pdf>, see also Kumar, Magnus (2013) <doi:10.1007/s13571-013-0060-9> and De Luca, Magnus (2011) <doi:10.1177/1536867X1201100402>.
Easily override the default visual choices in ggplot2 to make your time series plots look more like the Wall Street Journal. Specific theme design choices include omitting x-axis grid lines and displaying sparse light grey y-axis grid lines. Additionally, this allows you to label the y-axis scales with your units displayed only on the top-most number, while also removing the bottom-most number (unless specifically overridden). The goal is visual simplicity, because who has time to waste looking at a cluttered graph?
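The same design choices can be expressed directly in plain ggplot2 theme() calls, which is roughly what the theme automates (this sketch does not use the package's own functions):

library(ggplot2)

ggplot(economics, aes(date, unemploy)) +
  geom_line() +
  theme_minimal() +
  theme(panel.grid.major.x = element_blank(),                  # omit x-axis grid lines
        panel.grid.minor   = element_blank(),
        panel.grid.major.y = element_line(colour = "grey85"))  # sparse light grey y grid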
Calculate the win statistics (win ratio, net benefit, and win odds) for prioritized multiple endpoints, plot the win statistics and win proportions over study time if at least one time-to-event endpoint is analyzed, and simulate datasets with dependent endpoints. The package can handle any type of outcome (continuous, ordinal, binary, time-to-event) and allows users to perform stratified analysis, inverse probability of censoring weighting (IPCW), and inverse probability of treatment weighting (IPTW) analysis.
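As a first-principles illustration of the statistics named above, for a single continuous endpoint (higher is better) the win ratio and net benefit come from comparing every treatment-control pair (illustrative data, not the package's API):

treat   <- c(5.1, 6.3, 7.0, 4.8)
control <- c(4.0, 5.5, 6.1)

cmp    <- expand.grid(t = treat, c = control)
wins   <- sum(cmp$t > cmp$c)     # 8
losses <- sum(cmp$t < cmp$c)     # 4
wins / losses                    # win ratio: 2
(wins - losses) / nrow(cmp)      # net benefit: 1/3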
Shinohara (2014) <doi:10.1016/j.nicl.2014.08.008> introduced WhiteStripe, an intensity-based normalization of T1 and T2 images. Normalization based on normal-appearing white matter performs well but ordinarily requires segmentation; this method instead estimates the white matter mean and standard deviation using histogram-based methods on data that have been rigidly registered to the MNI template.
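A hedged sketch of the typical workflow (function names as in the package documentation; img is assumed to be a T1-weighted image already loaded, e.g. with oro.nifti):

library(WhiteStripe)

ws   <- whitestripe(img, type = "T1")              # locate the white stripe
norm <- whitestripe_norm(img, ws$whitestripe.ind)  # normalize by stripe mean/sd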
This package provides a comprehensive data analysis framework for NIH-funded research that streamlines workflows for both data cleaning and preparing NIH Data Archive ('NDA') submission templates. Provides unified access to multiple data sources ('REDCap', 'MongoDB', 'Qualtrics', 'SQL', 'ORACLE') through interfaces to their APIs, with specialized functions for data cleaning, filtering, merging, and parsing. Features automatic validation, field harmonization, and memory-aware processing to enhance reproducibility in multi-site collaborative research as described in Mittal et al. (2021) <doi:10.20900/jpbs.20210011>.
Encapsulates the pattern of untidying data into a wide matrix, performing some processing, then turning it back into a tidy form. This is useful for several operations such as co-occurrence counts, correlations, or clustering that are mathematically convenient on wide matrices.
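For example, counting word co-occurrence within documents (pairwise_count() is one of widyr's documented verbs; the data here is illustrative):

library(widyr)

dat <- data.frame(doc  = c(1, 1, 2, 2, 2, 3),
                  word = c("apple", "banana", "apple", "cherry", "banana", "apple"))

pairwise_count(dat, word, doc, sort = TRUE)  # widen, count pairs, re-tidy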
Data structures and methods to work with web tracking data. The functions cover data preprocessing steps, enrichment of web tracking data with external information, and methods for the analysis of digital behavior as used in several academic papers (e.g., Clemm von Hohenberg et al., 2023 <doi:10.17605/OSF.IO/M3U9P>; Stier et al., 2022 <doi:10.1017/S0003055421001222>).
This package provides a set of wrappers intended to check, read, and download information from the Wikimedia sources. It is specifically created to work with names of celebrities, in which case their information and statistics can be downloaded. Additionally, it builds links and snippets to use in combination with the function gallery() in the netCoin package.
This collection of diverse functions facilitates the efficient treatment and convenient analysis of experimental high-throughput (omics) data. Several functions address advanced object conversions, like manipulating lists of lists or lists of arrays, reorganizing lists into arrays or into separate vectors, merging multiple entries, etc. Another set of functions provides speed-optimized calculation of the standard deviation (sd), coefficient of variance (CV), or standard error of the mean (SEM) for data in matrices, or of means per line with respect to additional grouping (e.g., n groups of replicates). A group of functions facilitates dealing with non-redundant information, by indexing unique lines, adding counters to redundant lines, or eliminating lines with respect to redundancy in a given reference column, etc. Help is provided to identify very closely matching numeric values, to generate (partial) distance matrices for very big data in a memory-efficient manner, or to reduce the complexity of large data sets by combining very close values. Other functions help align a matrix or data.frame to a reference using partial matching, or mine an experimental setup to extract patterns of replicate samples. Large experimental datasets often need additional filtering, and adequate functions are provided. Convenient data normalization is supported in various modes, and parameter estimation via permutations or bootstrap, as well as flexible testing of multiple pair-wise combinations using the limma framework, is provided too. Batch reading (or writing) of sets of files and combining data into arrays is also supported.
Implementation of the methodologies described in 1) Alexander Petersen, Xi Liu and Afshin A. Divani (2021) <doi:10.1214/20-aos1971>, including global F tests, partial F tests, intrinsic Wasserstein-infinity bands and Wasserstein density bands, and 2) Chao Zhang, Piotr Kokoszka and Alexander Petersen (2022) <doi:10.1111/jtsa.12590>, including estimation, prediction, and inference of the Wasserstein autoregressive models.
Allows you to turn standard R code into offensive programming code. Provides code instrumentation to ease this change, and tools to assist and accelerate code production and tuning while using offensive programming techniques. Should improve code robustness and quality. Function calls can be easily verified on demand or in batch mode to assess parameter type and length conformity. Should improve coder productivity, as offensive programming reduces code size due to a reduced number of controls along the call chain. Should speed up processing, as many checks are reduced to one single check.
Logging of scripts suitable for clinical trials, using Quarto to create nice human-readable logs. whirl enables execution of scripts in batch, while simultaneously creating logs for the execution of each script and providing an overview summary log of the entire batch execution.
Calculate the win ratio for prioritized outcomes and the 95% confidence interval based on Bebu and Lachin (2016) <doi:10.1093/biostatistics/kxv032>. Three types of outcomes can be analyzed: survival ("failure-time") events, repeated survival ("failure-time") events, and continuous or ordinal ("non-failure time") events that are captured at specific time points in the study.