Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
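A minimal sketch of calling this endpoint from Python. The base URL is a placeholder (substitute the actual host), and since the pagination header names are not documented above, the sketch simply prints whatever headers come back:

    import requests

    # Placeholder base URL; substitute the actual host serving this API.
    BASE_URL = "https://example.org"

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination details arrive in the response headers; their exact
    # names are not documented above, so inspect them directly.
    print(dict(resp.headers))
    print(resp.json())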
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The wavelet decomposition method is very useful for modelling noisy time series data. Wavelet decomposition using the Haar algorithm has been implemented to develop a hybrid Wavelet GBM (Gradient Boosting Method) model for time series forecasting, using the algorithm of Anjoy and Paul (2017) <DOI:10.1007/s00521-017-3289-9>.
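As an illustration of the decomposition step (a sketch in Python, not this package's code; haar_level is a hypothetical helper), a single level of the Haar transform splits a series into pairwise approximation and detail coefficients:

    import numpy as np

    def haar_level(x):
        """One level of the Haar wavelet transform.

        Returns (approximation, detail) coefficients; the input
        length is assumed even for simplicity.
        """
        x = np.asarray(x, dtype=float)
        pairs = x.reshape(-1, 2)
        approx = pairs.sum(axis=1) / np.sqrt(2)             # smooth component
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # fluctuation/noise
        return approx, detail

    # In a hybrid wavelet-GBM scheme, each coefficient series would then
    # be forecast separately (e.g., by gradient boosting) and recombined.
    approx, detail = haar_level([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
    print(approx, detail)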
A multi-visit clinical trial may collect participant responses on an ordinal scale and may utilize a stratified design, such as randomization within centers, to assess treatment efficacy across multiple visits. Baseline characteristics may be strongly associated with the outcome, and adjustment for them can improve power. The win ratio (ignores ties) and the win odds (accounts for ties) can be useful when analyzing these types of data from randomized controlled trials. This package provides straightforward functions for adjustment of the win ratio and win odds for stratification and baseline covariates, facilitating the comparison of test and control treatments in multi-visit clinical trials. For additional information concerning the methodologies and applied examples within this package, please refer to the following publications: 1. Weideman, A.M.K., Kowalewski, E.K., & Koch, G.G. (2024). "Randomization-based covariance adjustment of win ratios and win odds for randomized multi-visit studies with ordinal outcomes." Journal of Statistical Research, 58(1), 33-48. <doi:10.3329/jsr.v58i1.75411>. 2. Kowalewski, E.K., Weideman, A.M.K., & Koch, G.G. (2023). "SAS macro for randomization-based methods for covariance and stratified adjustment of win ratios and win odds for ordinal outcomes." SESUG 2023 Proceedings, Paper 139-2023.
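For intuition about the unadjusted statistics themselves (a generic sketch, not this package's covariate-adjusted procedure): each treatment participant is compared pairwise with each control participant, and wins, losses, and ties are tallied. Higher ordinal scores are assumed here to be better outcomes:

    import numpy as np

    def win_statistics(treat, control):
        """Unadjusted win ratio, net benefit, and win odds from all
        pairwise treatment-vs-control comparisons."""
        t = np.asarray(treat)[:, None]
        c = np.asarray(control)[None, :]
        wins, losses, ties = np.sum(t > c), np.sum(t < c), np.sum(t == c)
        win_ratio = wins / losses                               # ties ignored
        net_benefit = (wins - losses) / (wins + losses + ties)
        win_odds = (wins + 0.5 * ties) / (losses + 0.5 * ties)  # ties split
        return win_ratio, net_benefit, win_odds

    print(win_statistics([3, 2, 3, 4], [1, 2, 2, 3]))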
This package provides a framework for developing n-gram models for text prediction. It provides data cleaning, data sampling, token extraction from text, model generation, model evaluation, and word prediction. For information on how n-gram models work, we referred to "Speech and Language Processing" <https://web.archive.org/web/20240919222934/https%3A%2F%2Fweb.stanford.edu%2F~jurafsky%2Fslp3%2F3.pdf>. For optimizing R code and using R6 classes, we referred to "Advanced R" <https://adv-r.hadley.nz/r6.html>. For writing R extensions, we referred to "R Packages" <https://r-pkgs.org/index.html>.
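A toy bigram predictor (an illustrative sketch in Python, not the package's R6-based API) shows the core idea of counting continuations and predicting the most frequent one:

    from collections import Counter, defaultdict

    def train_bigram(tokens):
        """Count bigram continuations for next-word prediction."""
        model = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
        return model

    def predict(model, word):
        """Return the most frequent continuation of `word`, if any."""
        continuations = model.get(word)
        return continuations.most_common(1)[0][0] if continuations else None

    tokens = "the cat sat on the mat the cat ran".split()
    model = train_bigram(tokens)
    print(predict(model, "the"))  # -> 'cat'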
Post-construction fatality monitoring studies at wind facilities are based on data from searches for bird and bat carcasses in plots beneath turbines. However, carcasses often fall outside of the searched area. To compensate, area correction (AC) values are estimated: the proportion of fatalities expected to fall within the searched area versus outside of it. This package provides two likelihood-based methods and one physics-based method (Hull and Muir (2010) <doi:10.1080/14486563.2010.9725253>, Huso and Dalthorp (2014) <doi:10.1002/jwmg.663>) to estimate the carcass fall distribution. There are also functions for calculating the proportion of area searched within one-unit annuli, log-logistic distribution functions, and truncated distribution functions.
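As a generic illustration of the truncated-distribution idea (not this package's exact functions, and with a hypothetical fall distribution), a density truncated to the searched radius is renormalised by the probability mass inside the truncation bounds:

    from scipy import stats

    def truncated_pdf(dist, x, a, b):
        """Density of `dist` truncated to the interval [a, b]."""
        mass = dist.cdf(b) - dist.cdf(a)
        return dist.pdf(x) / mass if a <= x <= b else 0.0

    # A hypothetical carcass-distance distribution, truncated at a
    # 50-unit search-plot radius.
    fall = stats.gamma(a=2.0, scale=15.0)
    print(truncated_pdf(fall, 20.0, 0.0, 50.0))
    # The AC value itself would be the untruncated mass inside the plot:
    print(fall.cdf(50.0))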
Create plots and tables in a consistent style with WaSHI (Washington Soil Health Initiative) branding. Use washi to easily style your ggplot2 plots and flextable tables.
This package implements the Whale Optimization Algorithm (WOA) for k-medoids clustering, providing tools for effective and efficient cluster analysis in various data sets. The methodology is based on "The Whale Optimization Algorithm" by Mirjalili and Lewis (2016) <doi:10.1016/j.advengsoft.2016.01.008>.
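A compact sketch of the WOA search dynamics of Mirjalili and Lewis on a continuous toy objective (the package's discrete k-medoids encoding is not reproduced here, and the spiral constant is fixed at 1):

    import numpy as np

    def woa_minimize(f, dim, n_whales=20, iters=200, bounds=(-10, 10), seed=0):
        """Minimal Whale Optimization Algorithm sketch on a continuous objective."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, size=(n_whales, dim))
        best = X[np.argmin([f(x) for x in X])].copy()
        for t in range(iters):
            a = 2 - 2 * t / iters                     # decreases linearly 2 -> 0
            for i in range(n_whales):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                if rng.random() < 0.5:
                    if np.all(np.abs(A) < 1):         # encircle the best whale
                        X[i] = best - A * np.abs(C * best - X[i])
                    else:                             # explore toward a random whale
                        rand = X[rng.integers(n_whales)]
                        X[i] = rand - A * np.abs(C * rand - X[i])
                else:                                 # spiral update around the best
                    l = rng.uniform(-1, 1)
                    D = np.abs(best - X[i])
                    X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
                X[i] = np.clip(X[i], lo, hi)
                if f(X[i]) < f(best):
                    best = X[i].copy()
        return best

    print(woa_minimize(lambda x: np.sum(x**2), dim=3))  # approaches the origin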
Entropy weighted k-means (ewkm) by Liping Jing, Michael K. Ng and Joshua Zhexue Huang (2007) <doi:10.1109/TKDE.2007.1048> is a weighted subspace clustering algorithm that is well suited to very high dimensional data. Weights are calculated as the importance of a variable with regard to cluster membership. The two-level variable weighting clustering algorithm tw-k-means (twkm) by Xiaojun Chen, Xiaofei Xu, Joshua Zhexue Huang and Yunming Ye (2013) <doi:10.1109/TKDE.2011.262> introduces two types of weights, the weights on individual variables and the weights on variable groups, and they are calculated during the clustering process. The feature group weighted k-means (fgkm) by Xiaojun Chen, Yunming Ye, Xiaofei Xu and Joshua Zhexue Huang (2012) <doi:10.1016/j.patcog.2011.06.004> extends this concept by grouping features and weighting the group in addition to weighting individual features.
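The core of the entropy-weighted scheme is a closed-form weight update: within each cluster, variables with less within-cluster dispersion receive exponentially larger weights, with gamma controlling the strength of that incentive. A sketch of that single step (assuming the formulation in Jing, Ng and Huang 2007; not the package's implementation):

    import numpy as np

    def ewkm_weights(cluster_points, center, gamma):
        """Entropy-weighted variable weights for one cluster.

        D[j] is the within-cluster dispersion of variable j; the weights
        follow a softmax of -D/gamma, so low-dispersion variables get
        more weight.
        """
        D = np.sum((cluster_points - center) ** 2, axis=0)
        w = np.exp(-D / gamma)
        return w / w.sum()

    pts = np.array([[1.0, 5.0], [1.1, 9.0], [0.9, 1.0]])
    print(ewkm_weights(pts, pts.mean(axis=0), gamma=1.0))
    # Variable 0 varies little within the cluster, so it dominates the weights.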
This package provides a collection of functions to perform the Application Programming Interface (API) calls associated with the Walk Score website (www.walkscore.com) within the R environment. These functions can be used to query the Walk Score and Transit Score database for a wide variety of information using R scripts. This package includes the simple Walk Score and Transit Score API calls, which return the scores associated with an input location, as well as calls which return some data used to calculate the scores. These functions are especially useful for mass data collection and gathering Walk Score and Transit Score values for large lists of locations.
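A sketch of the underlying HTTP call from Python rather than R. The endpoint and parameter names follow Walk Score's public API as commonly documented; treat them as assumptions and check the current documentation, and note that the address, coordinates, and key below are placeholders:

    import requests

    # Hypothetical location; an API key from walkscore.com is required.
    params = {
        "format": "json",
        "address": "1119 8th Avenue Seattle WA 98101",
        "lat": 47.6085,
        "lon": -122.3295,
        "wsapikey": "YOUR_API_KEY",
    }
    resp = requests.get("https://api.walkscore.com/score", params=params)
    print(resp.json())  # includes the Walk Score for the input location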
This package provides functions to create factor variables with contrasts based on weighted effect coding, and their interactions. In weighted effect coding, the estimates from a first order regression model show the deviations per group from the sample mean. This is especially useful when a researcher has no directional hypotheses and uses a sample from a population in which the number of observations per group differs.
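A sketch of the coding itself (a generic construction in Python, not this package's functions): with the last category as the omitted reference, each contrast codes its own group 1 and the reference group -n_j/n_ref, so that the model's intercept equals the sample mean:

    import numpy as np

    def wec_matrix(groups):
        """Weighted-effect-coding design columns for a vector of group labels."""
        labels, counts = np.unique(groups, return_counts=True)
        ref, n_ref = labels[-1], counts[-1]   # omitted reference category
        cols = []
        for lab, n in zip(labels[:-1], counts[:-1]):
            col = np.where(groups == lab, 1.0, 0.0)
            col[groups == ref] = -n / n_ref   # weighted code for the reference
            cols.append(col)
        return np.column_stack(cols)

    groups = np.array(["a", "a", "a", "b", "c"])
    y = np.array([1.0, 2.0, 3.0, 10.0, 20.0])
    X = np.column_stack([np.ones(len(y)), wec_matrix(groups)])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    print(b[0], y.mean())   # intercept recovers the sample mean
    print(b[1:])            # per-group deviations from that mean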
This package implements the diagnostic "theta" developed in Poetscher and Preinerstorfer (2020), "How Reliable are Bootstrap-based Heteroskedasticity Robust Tests?" <arXiv:2005.04089>. This diagnostic can be used to detect and weed out bootstrap-based procedures that provably have size equal to one for a given testing problem. The implementation covers a large variety of bootstrap-based procedures; cf. the above-mentioned article for details. A function for computing bootstrap p-values is provided.
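For context on the last point, a generic restricted wild-bootstrap p-value for one regression coefficient looks like this (an illustrative sketch with HC0 standard errors and Rademacher weights; not the package's theta diagnostic or its exact procedure):

    import numpy as np

    def hc0_se(y, X, j):
        """HC0 (White) standard error of the j-th OLS coefficient."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        u = y - X @ beta
        XtX_inv = np.linalg.inv(X.T @ X)
        V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv
        return np.sqrt(V[j, j])

    def wild_bootstrap_pvalue(y, X, j, B=999, seed=0):
        """Wild bootstrap p-value for H0: beta_j = 0, imposing the null."""
        rng = np.random.default_rng(seed)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        t_obs = abs(beta[j]) / hc0_se(y, X, j)
        Xr = np.delete(X, j, axis=1)                 # restricted design under H0
        br = np.linalg.lstsq(Xr, y, rcond=None)[0]
        fitted, resid = Xr @ br, y - Xr @ br
        exceed = 0
        for _ in range(B):
            y_b = fitted + resid * rng.choice([-1.0, 1.0], size=len(y))
            b_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
            exceed += abs(b_b[j]) / hc0_se(y_b, X, j) >= t_obs
        return (1 + exceed) / (1 + B)

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(50), rng.normal(size=50)])
    y = rng.normal(size=50) * (1 + np.abs(X[:, 1]))  # heteroskedastic errors
    print(wild_bootstrap_pvalue(y, X, j=1))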
K-means clustering, hierarchical clustering, and PCA with observational weights and/or variable weights. It also includes the corresponding functions for data nuggets, which serve as representative samples of large datasets. Cherasia et al. (2022) <doi:10.1007/978-3-031-22687-8_20>. Amaratunga et al. (2009) <doi:10.1002/9780470317129>.
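The essential change relative to ordinary k-means is that centroids become weighted means. A minimal sketch of one Lloyd iteration with observation weights (data-nugget support in the package goes beyond this):

    import numpy as np

    def weighted_kmeans_step(X, w, centers):
        """One Lloyd iteration of k-means with observation weights w."""
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            np.average(X[labels == k], axis=0, weights=w[labels == k])
            for k in range(len(centers))
        ])
        return labels, new_centers

    X = np.array([[0.0, 0], [0, 1], [10, 10], [10, 11]])
    w = np.array([1.0, 3.0, 1.0, 1.0])   # e.g., nugget sizes as weights
    labels, centers = weighted_kmeans_step(X, w, X[[0, 2]])
    print(labels, centers)  # cluster 0's center is pulled toward the heavier point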
Efficient Bayesian generalized linear models with time-varying coefficients as in Helske (2022, <doi:10.1016/j.softx.2022.101016>). Gaussian, Poisson, and binomial observations are supported. The Markov chain Monte Carlo (MCMC) computations are done using Hamiltonian Monte Carlo provided by Stan, using a state space representation of the model in order to marginalise over the coefficients for efficient sampling. For non-Gaussian models, the package uses importance sampling type estimators based on approximate marginal MCMC, as in Vihola, Helske and Franks (2020, <doi:10.1111/sjos.12492>).
Top-down mass spectrometry aims to identify entire proteins as well as their (post-translational) modifications or bound ions (e.g., Chen et al. (2018) <doi:10.1021/acs.analchem.7b04747>). The pattern of internal fragments (Haverland et al. (2017) <doi:10.1007/s13361-017-1635-x>) may reveal important information about the original structure of the proteins studied (Skinner et al. (2018) <doi:10.1038/nchembio.2515> and Li et al. (2018) <doi:10.1038/nchem.2908>). However, the number of possible internal fragments grows rapidly with longer proteins, and subsequent identification of internal fragments remains challenging, in particular since the accuracy of measurements with current mass spectrometers represents a limiting factor. This package attempts to deal with the complexity of internal fragments and allows identification of terminal and internal fragments from deconvoluted mass-spectrometry data.
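To see why the search space explodes: a protein of length n has on the order of n^2/2 internal substrings. A toy enumeration (an illustrative sketch using standard monoisotopic residue masses for a few amino acids; ion-type mass adjustments are deliberately omitted):

    # Monoisotopic residue masses (Da) for a few amino acids.
    RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841}

    def internal_fragments(seq, min_len=2):
        """All internal substrings (excluding both termini) with residue-mass sums."""
        frags = []
        for i in range(1, len(seq)):                 # start after the N-terminus
            for j in range(i + min_len, len(seq)):   # stop before the C-terminus
                sub = seq[i:j]
                frags.append((sub, sum(RESIDUE[r] for r in sub)))
        return frags

    for frag, mass in internal_fragments("GAVSAG"):
        print(f"{frag}: {mass:.5f}")
    # The count grows roughly quadratically with sequence length.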
Several analysis-related functions for the book entitled "Web-based Analysis without R in Your Computer" (written in Korean, ISBN 978-89-5566-185-9) by Keon-Woong Moon. The main function plot.htest() shows the distribution of the statistic for an object of class 'htest'.
This package provides a fast visualization tool for creating word clouds using 'wordcloud2.js', a JavaScript library that creates wordle presentations on a 2D canvas or in HTML <https://timdream.org/wordcloud2.js/>.
Implementation of Johansen's general formulation of the Welch-James statistic with Approximate Degrees of Freedom, which makes it suitable for testing any linear hypothesis concerning cell means in univariate and multivariate mixed model designs when the data exhibit non-normality and non-homogeneous variance. Some improvements, namely trimmed means and Winsorized variances, and bootstrapping for calculating an empirical critical value, have been added to the classical formulation. The code departs from a previous SAS implementation by L.M. Lix and H.J. Keselman, available at <http://supp.apa.org/psycarticles/supplemental/met_13_2_110/SAS_Program.pdf> and published in Keselman, H.J., Wilcox, R.R., and Lix, L.M. (2003) <DOI:10.1111/1469-8986.00060>.
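The two robust ingredients are easy to illustrate (a generic sketch using SciPy equivalents, not this package's code):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 50.0])  # one gross outlier

    # 20% trimmed mean: drop the most extreme 20% in each tail before averaging.
    print(stats.trim_mean(x, proportiontocut=0.2))

    # Winsorized variance: clamp (rather than drop) the extreme 20% in each tail.
    xw = stats.mstats.winsorize(x, limits=(0.2, 0.2))
    print(np.var(xw, ddof=1))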
Data structures and methods to work with web tracking data. The functions cover data preprocessing steps, enriching web tracking data with external information and methods for the analysis of digital behavior as used in several academic papers (e.g., Clemm von Hohenberg et al., 2023 <doi:10.17605/OSF.IO/M3U9P>; Stier et al., 2022 <doi:10.1017/S0003055421001222>).
Generates random data sets including: data.frames, lists, and vectors.
Calculate the win statistics (win ratio, net benefit and win odds) for prioritized multiple endpoints, plot the win statistics and win proportions over study time if at least one time-to-event endpoint is analyzed, and simulate datasets with dependent endpoints. The package can handle any type of outcome (continuous, ordinal, binary, time-to-event) and allows users to perform stratified analysis, inverse probability of censoring weighting (IPCW), and inverse probability of treatment weighting (IPTW) analysis.
This package estimates weaning ages for a given skeletal population by analyzing the stable nitrogen isotope ratios of its individuals. Newly estimated bone collagen turnover rates and approximate Bayesian computation (ABC) are adopted in this package.
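ABC itself is simple to sketch in the abstract (a toy rejection sampler, not this package's isotope model): draw parameters from the prior, simulate data, and keep the draws whose simulated summary lies close to the observation:

    import numpy as np

    def abc_rejection(observed, simulate, prior_draw, n=20_000, eps=0.1, seed=0):
        """Toy ABC rejection sampler: keep prior draws whose simulated
        summary statistic falls within eps of the observed one."""
        rng = np.random.default_rng(seed)
        accepted = []
        for _ in range(n):
            theta = prior_draw(rng)
            if abs(simulate(theta, rng) - observed) < eps:
                accepted.append(theta)
        return np.array(accepted)

    # Hypothetical example: infer a mean from a single noisy observation.
    post = abc_rejection(
        observed=1.5,
        simulate=lambda th, rng: rng.normal(th, 0.5),
        prior_draw=lambda rng: rng.uniform(0, 3),
    )
    print(post.mean(), post.size)  # approximate posterior mean and sample count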
Read from, interrogate, and write to Wikidata <https://www.wikidata.org>, the multilingual, interdisciplinary, semantic knowledge base. Includes functions to: read from Wikidata (single items or properties); query Wikidata (retrieving all items that match a set of criteria via the Wikidata SPARQL query service); write to Wikidata (adding new items or statements via QuickStatements); and handle and manipulate Wikidata objects (as lists and tibbles). Uses the Wikidata and QuickStatements APIs.
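The SPARQL query service behind the querying functions can also be called directly; a minimal sketch from Python against Wikidata's public endpoint:

    import requests

    # Fetch a handful of items that are instances of (P31) house cat (Q146).
    query = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146.
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
    """
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "example-script/0.1"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row["itemLabel"]["value"])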
Select data analysis plots, under a standardized calling interface implemented on top of 'ggplot2' and 'plotly'. Plots of interest include: ROC, gain curve, scatter plot with marginal distributions, conditioned scatter plot with marginal densities, box and stem with matching theoretical distribution, and density with matching theoretical distribution.
Obtain information on peak flow data from the National River Flow Archive (NRFA) in the United Kingdom, either from the Peak Flow Dataset files <https://nrfa.ceh.ac.uk/data/peak-flow-dataset> once these have been downloaded to the user's computer or using the NRFA's API. These files are in a format suitable for direct use in the WINFAP software, hence the name of the package.
Implementation of the methodologies described in 1) Alexander Petersen, Xi Liu and Afshin A. Divani (2021) <doi:10.1214/20-aos1971>, including global F tests, partial F tests, intrinsic Wasserstein-infinity bands and Wasserstein density bands, and 2) Chao Zhang, Piotr Kokoszka and Alexander Petersen (2022) <doi:10.1111/jtsa.12590>, including estimation, prediction, and inference of the Wasserstein autoregressive models.