Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
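For example, a minimal client sketch in Python; the base URL below is a placeholder for this site's address, and no specific pagination header names are assumed:

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's address

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.json())     # the matching packages for the requested page
print(resp.headers)    # pagination information lives in the response headers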
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements the Welch-Satterthwaite approximation for differences of non-standardized t-distributed random variables in both univariate and multivariate settings. The package provides methods for computing effective degrees of freedom and scale parameters, as well as distribution functions for the approximated difference distribution. The methodology extends the classical Welch-Satterthwaite framework from variance combinations to t-distribution differences through careful moment matching. Methods build on the classical Welch-Satterthwaite approach described in Welch (1947) <doi:10.1093/biomet/34.1-2.28> and Satterthwaite (1946) <doi:10.2307/3002019>.
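For reference, the classical Welch-Satterthwaite formula that this framework extends approximates the effective degrees of freedom of a combination of sample variances s_i^2 on n_i observations as (textbook form; the package's exact parameterization may differ):

\nu \;\approx\; \frac{\left( \sum_i s_i^2 / n_i \right)^2}{\sum_i \dfrac{\left( s_i^2 / n_i \right)^2}{n_i - 1}}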
This package provides a weighted selection probability to locate rare variants associated with multiple phenotypes.
Collect multichannel marketing data from sources such as Google Analytics, Facebook Ads, and many others using the Windsor.ai API <https://www.windsor.ai/api-fields/>.
The main aim of this package is to combine the advantages of wavelet and Support Vector Regression (SVR) models for time series forecasting. The package fits a hybrid wavelet-SVR model and reports accuracy in terms of Root Mean Square Error (RMSE) and Mean Absolute Prediction Error (MAPE). It is based on the algorithm of Raimundo and Okamoto (2018) <doi:10.1109/INFOCT.2018.8356851>.
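As an illustration of the general decompose-model-recombine scheme, here is a minimal sketch in Python (not the package's actual implementation; the wavelet, decomposition level, and lag count below are arbitrary choices):

import numpy as np
import pywt
from sklearn.svm import SVR

def wavelet_svr_forecast(y, wavelet="db4", level=2, lags=4):
    # 1. Decompose the series and rebuild one component per coefficient band;
    #    by linearity of the inverse transform, the components sum to y
    coeffs = pywt.wavedec(y, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(keep, wavelet)[: len(y)])
    # 2. Fit one SVR per component on its lagged values and forecast one step;
    # 3. recombine: the component forecasts sum to the series forecast
    forecast = 0.0
    for comp in components:
        X = np.column_stack([comp[k : len(comp) - lags + k] for k in range(lags)])
        model = SVR().fit(X, comp[lags:])
        forecast += model.predict(comp[-lags:].reshape(1, -1))[0]
    return forecast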
Encapsulates the pattern of untidying data into a wide matrix, performing some processing, then turning it back into a tidy form. This is useful for several operations such as co-occurrence counts, correlations, or clustering that are mathematically convenient on wide matrices.
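For intuition, the same untidy-process-retidy pattern sketched with pandas in Python (an illustrative analogue of the workflow, not this package's API):

import pandas as pd

# A tidy table: one (item, feature, value) observation per row
tidy = pd.DataFrame({
    "item":    ["a", "a", "b", "b", "c", "c"],
    "feature": ["x", "y", "x", "y", "x", "y"],
    "value":   [1.0, 2.0, 2.0, 4.0, 5.0, 1.0],
})

# 1. Untidy: cast into a wide feature-by-item matrix
wide = tidy.pivot(index="feature", columns="item", values="value")
# 2. Process: pairwise item correlations are convenient on the wide form
corr = wide.corr().rename_axis(index="item1", columns="item2")
# 3. Re-tidy: melt the correlation matrix back to one row per item pair
pairs = corr.stack().reset_index(name="correlation")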
This package provides a set of wrappers to check, read, and download information from Wikimedia sources. It is specifically designed to work with names of celebrities, whose information and statistics can be downloaded. It also builds links and snippets for use with the gallery() function in the netCoin package.
This package implements inferential and graphical procedures for the semiparametric proportional means regression of the weighted composite endpoint of recurrent events and death (Mao and Lin, 2016, <doi:10.1093/biostatistics/kxv050>).
This package implements a functional approximation of the four panel cointegration tests developed by Westerlund (2007) <doi:10.1111/j.1468-0084.2007.00477.x>. The tests are based on structural rather than residual dynamics and allow for heterogeneity in both the long-run cointegrating relationship and the short-run dynamics. The package includes logic for automated lag and lead selection via AIC, Bartlett kernel long-run variance estimation, and a bootstrap procedure to handle cross-sectional dependence. It also includes a bootstrapping distribution visualization function for diagnostic purposes.
R clients to the Web of Science and InCites <https://clarivate.com/products/data-integration/> APIs, which allow you to programmatically download publication and citation data indexed in the Web of Science and InCites databases.
Weighted descriptive statistics is the discipline of quantitatively describing the main features of real-valued fuzzy data, usually drawn from a fuzzy population. This package summarizes such data numerically or graphically. To interpret the properties of one or several sets of real-valued fuzzy data, it provides numerical summaries via weighted statistics such as the mean, variance, covariance, and correlation coefficient, and graphical summaries via weighted histograms and weighted scatter plots.
This package implements the Whale Optimization Algorithm (WOA) for k-medoids clustering, providing tools for effective and efficient cluster analysis in various data sets. The methodology is based on "The Whale Optimization Algorithm" by Mirjalili and Lewis (2016) <doi:10.1016/j.advengsoft.2016.01.008>.
This package provides Water Year Hydrologic Classification Indices based on measured unimpaired runoff (in million acre-feet). Data are provided by the California Department of Water Resources and are subject to revision.
Computes Weighted Topological Overlap networks with positive and negative signs (wTO) given a data frame containing the mRNA count/expression/abundance per sample and a vector containing the nodes of interest (a subset of the elements of the full data frame). It also computes the cut-off threshold or p-value based on bootstrapping individuals or reshuffling values per individual, and it allows the construction of a consensus network based on multiple wTO networks. The package includes a visualization tool for the networks. More about the methodology can be found at <doi:10.1186/s12859-018-2351-7>.
Easily collect walk scores, bike scores, and transit scores (where available) from the Walk Score API <https://www.walkscore.com/professional/api.php>, a proprietary API that assigns locations a walkability score between 0 and 100.
Calculate the win statistics (win ratio, net benefit, and win odds) for prioritized multiple endpoints, plot the win statistics and win proportions over study time if at least one time-to-event endpoint is analyzed, and simulate datasets with dependent endpoints. The package can handle any type of outcome (continuous, ordinal, binary, time-to-event) and allows users to perform stratified analysis, inverse probability of censoring weighting (IPCW), and inverse probability of treatment weighting (IPTW) analysis.
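For orientation, writing P_w, P_l, and P_t for the probabilities that a random treatment-control pair is a win, a loss, or a tie for treatment, the three win statistics are commonly defined as follows (standard definitions; the package's estimators add stratification and weighting on top of these):

\text{win ratio} = \frac{P_w}{P_l}, \qquad \text{net benefit} = P_w - P_l, \qquad \text{win odds} = \frac{P_w + \tfrac{1}{2} P_t}{P_l + \tfrac{1}{2} P_t}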
This package provides methods for retrieving United States Geological Survey (USGS) water data using sequential and parallel processing (Bengtsson, 2022 <doi:10.32614/RJ-2021-048>). In addition to parallel methods, data wrangling and additional statistical attributes are provided.
Process GPS and accelerometry data to generate walk bouts. A walk bout is a period of activity with accelerometer movement matching the patterns of walking and corresponding GPS measurements that confirm travel. The inputs of the walkboutr package are individual-level accelerometry and GPS data. The outputs of the model are walk bouts with corresponding times and durations, plus summary statistics on the sample population, which collapse all personally identifying information. These bouts can be used to measure walking either as an outcome of a change to the built environment or as a predictor of health outcomes such as cardioprotective behavior. Kang B, Moudon AV, Hurvitz PM, Saelens BE (2017) <doi:10.1016/j.trd.2017.09.026>.
Efficient Bayesian generalized linear models with time-varying coefficients as in Helske (2022, <doi:10.1016/j.softx.2022.101016>). Gaussian, Poisson, and binomial observations are supported. The Markov chain Monte Carlo (MCMC) computations are done using Hamiltonian Monte Carlo provided by Stan, using a state space representation of the model in order to marginalise over the coefficients for efficient sampling. For non-Gaussian models, the package uses the importance sampling type estimators based on approximate marginal MCMC as in Vihola, Helske, Franks (2020, <doi:10.1111/sjos.12492>).
This package provides a toolkit to set up an R data package in a consistent structure. Automates tasks like tidy data export, data dictionary documentation, README and website creation, and citation management.
Monetary valuation of wood in German forests (stumpage values), including estimations of harvest quantities, wood revenues, and harvest costs. The functions are sensitive to tree species, mean diameter of the harvested trees, stand quality, and logging method. The functions include estimations for the consequences of disturbances on revenues and costs. The underlying assortment tables are taken from Offer and Staupendahl (2018) with corresponding functions for salable and skidded volume derived in Fuchs et al. (2023). Wood revenue and harvest cost functions were taken from v. Bodelschwingh (2018). The consequences of disturbances refer to Dieter (2001), Moellmann and Moehring (2017), and Fuchs et al. (2022a, 2022b). For the full references see documentation of the functions, package README, and Fuchs et al. (2023). Apart from Dieter (2001) and Moellmann and Moehring (2017), all functions and factors are based on data from HessenForst, the forest administration of the Federal State of Hesse in Germany.
Logging of scripts suitable for clinical trials, using Quarto to create nice human-readable logs. whirl enables execution of scripts in batch while simultaneously creating a log for the execution of each script and providing an overview summary log of the entire batch execution.
Calculate the win ratio for prioritized outcomes and the 95% confidence interval based on Bebu and Lachin (2016) <doi:10.1093/biostatistics/kxv032>. Three types of outcomes can be analyzed: survival "failure-time" events, repeated survival "failure-time" events, and continuous or ordinal "non-failure time" events that are captured at specific time points in the study.
This package provides functions for determining the effect of data weights on the variance of survey data: users load a data set that has a weights column, and the package calculates the design effect (DEFF), weighting loss, root design effect (DEFT), effective sample size (ESS), and/or weighted margin of error.
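For reference, with weights w_1, ..., w_n on n respondents, the classical Kish formulas for these quantities are (standard survey-statistics definitions, not necessarily this package's exact implementation):

\text{DEFF} = \frac{n \sum_i w_i^2}{\left( \sum_i w_i \right)^2}, \qquad \text{DEFT} = \sqrt{\text{DEFF}}, \qquad \text{ESS} = \frac{\left( \sum_i w_i \right)^2}{\sum_i w_i^2} = \frac{n}{\text{DEFF}}

with the weighting loss given by \text{DEFF} - 1.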
Allows you to turn standard R code into offensive programming code. Provides code instrumentation to ease this change, and tools to assist and accelerate code production and tuning while using offensive programming techniques. Should improve code robustness and quality. Function calls can be easily verified on demand or in batch mode to check parameter types and lengths for conformity. Should improve coder productivity, as offensive programming reduces code size due to the reduced number of controls along the call chain. Should speed up processing, as many checks are reduced to one single check.