Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
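For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL is a placeholder for this site's address, and a JSON response body is assumed):

    library(httr)

    base_url <- "https://example.org"   # placeholder; use this site's address

    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    content(resp, as = "parsed")   # the matching packages (assumed JSON)
    headers(resp)                  # pagination information lives here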
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. A calibration plot provides a simple, yet useful, way of assessing the calibration assumption. The Wally plot consists of a sequence of usual calibration plots. Among the plots contained within the sequence, one is the actual calibration plot obtained from the data and the others are obtained from similar simulated data under the calibration assumption. It provides the investigator with a direct visual understanding of the shape and sampling variability that are common under the calibration assumption. The original calibration plot from the data is included randomly among the simulated calibration plots, similarly to a police lineup. If the original calibration plot is not easily identified, then the calibration assumption is not contradicted by the data. The method handles the common situations in which the data contain censored observations and occurrences of competing events.
An implementation of the one-sample Wilcoxon signed-rank test for medians. It includes two functions: W_stat(), which computes the exact probabilities of the Wilcoxon signed-rank test statistic W, and Wilcox.m.test(), which allows the user to conduct the one-sample Wilcoxon signed-rank hypothesis test for medians, including a version based on the normal approximation, following the techniques of Bickel and Doksum (1973, ISBN: 013850363X).
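For orientation, base R's wilcox.test() performs the same one-sample signed-rank test; this sketch uses the base R counterpart rather than this package's API, and the data and hypothesized median below are made up:

    # Test whether the median of a made-up sample equals 10.
    x <- c(8.2, 9.5, 10.1, 11.3, 9.9, 10.8, 12.0, 9.1)

    # exact = TRUE requests the exact distribution of the statistic W;
    # exact = FALSE falls back to the normal approximation.
    wilcox.test(x, mu = 10, alternative = "two.sided", exact = TRUE)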
Entropy weighted k-means (ewkm) by Liping Jing, Michael K. Ng and Joshua Zhexue Huang (2007) <doi:10.1109/TKDE.2007.1048> is a weighted subspace clustering algorithm that is well suited to very high dimensional data. Weights are calculated as the importance of a variable with regard to cluster membership. The two-level variable weighting clustering algorithm tw-k-means (twkm) by Xiaojun Chen, Xiaofei Xu, Joshua Zhexue Huang and Yunming Ye (2013) <doi:10.1109/TKDE.2011.262> introduces two types of weights, the weights on individual variables and the weights on variable groups, and they are calculated during the clustering process. The feature group weighted k-means (fgkm) by Xiaojun Chen, Yunming Ye, Xiaofei Xu and Joshua Zhexue Huang (2012) <doi:10.1016/j.patcog.2011.06.004> extends this concept by grouping features and weighting the group in addition to weighting individual features.
This package provides statistical methods and visualizations that are often used in reliability engineering. It comprises a compact and easily accessible set of methods and visualization tools that make the examination and adjustment as well as the analysis and interpretation of field data (and bench tests) as simple as possible. Non-parametric estimators like Median Ranks, Kaplan-Meier (Abernethy, 2006, <ISBN:978-0-9653062-3-2>), Johnson (Johnson, 1964, <ISBN:978-0444403223>), and Nelson-Aalen for failure probability estimation within samples that contain failures as well as censored data are included. The package supports methods like Maximum Likelihood and Rank Regression (Genschel and Meeker, 2010, <DOI:10.1080/08982112.2010.503447>) for the estimation of multiple parametric lifetime distributions, as well as the computation of confidence intervals of quantiles and probabilities using the delta method related to Fisher's confidence intervals (Meeker and Escobar, 1998, <ISBN:9780471673279>) and the beta-binomial confidence bounds. If desired, mixture model analysis can be done with segmented regression and the EM algorithm. Besides the well-known Weibull analysis, the package also contains Monte Carlo methods for the correction and completion of imprecisely recorded or unknown lifetime characteristics (Verband der Automobilindustrie e.V. (VDA), 2016, <ISSN:0943-9412>). Plots are created statically ('ggplot2') or interactively ('plotly') and can be customized with functions of the respective visualization package. The graphical technique of probability plotting as well as the addition of regression lines and confidence bounds to existing plots are supported.
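As one concrete example of the estimators listed above, Median Ranks are commonly computed with Benard's approximation; a minimal base R sketch on made-up, uncensored failure times (not this package's API):

    failures <- sort(c(120, 340, 560, 780, 1100))   # made-up failure times in hours
    n <- length(failures)
    i <- seq_len(n)

    # Benard's approximation to the median ranks: F_i = (i - 0.3) / (n + 0.4)
    median_ranks <- (i - 0.3) / (n + 0.4)

    # Weibull probability plot coordinates: points on a straight line
    # indicate a plausible Weibull fit.
    plot(log(failures), log(-log(1 - median_ranks)),
         xlab = "log(time)", ylab = "log(-log(1 - F))")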
Enables interaction with the National Weather Service application programming interface (API) for fetching real-time and forecast meteorological data. Users can provide latitude and longitude, an Automated Surface Observing System identifier, or an Automated Weather Observing System identifier to fetch recent weather observations and recent forecasts for the given location or station. Additionally, auxiliary functions exist to identify stations nearest to a point, convert wind direction from character to degrees, and fetch active warnings. Results are returned as simple feature objects whenever possible.
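For context, the underlying service is the public National Weather Service API; a minimal sketch of the kind of raw request such a package wraps (the coordinates are the NWS documentation example, and the contact address is a placeholder):

    library(httr)

    # Resolve a latitude/longitude to its forecast office and grid cell.
    # NWS asks clients to identify themselves via the User-Agent header.
    resp <- GET("https://api.weather.gov/points/39.7456,-97.0892",
                user_agent("example-client (contact@example.org)"))

    point <- content(resp, as = "parsed")
    point$properties$forecast   # URL of the forecast for this grid cell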
High-level tools to attach gridded weather data from the NASA POWER Project to event-based datasets. The package plans efficient spatio-temporal API calls via the nasapower R package, caches downloaded segments locally, and joins weather variables back to the input table using exact or rolling joins. This package is not affiliated with or endorsed by NASA.
An easy-to-use interface for interacting with WebDAV servers, including OwnCloud. It simplifies the use of WebDAV methods such as COPY, MOVE, DELETE, and others. With built-in authentication and request handling, it allows for easy management of files and directories over the WebDAV protocol.
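For orientation, a WebDAV MOVE is an ordinary HTTP request with a custom method and a Destination header; a minimal sketch using httr rather than this package's own API (the server URL and credentials are placeholders, and the remote.php/webdav path follows OwnCloud's convention):

    library(httr)

    # Rename a file on a WebDAV server by MOVE-ing it to a new path.
    resp <- VERB("MOVE",
                 "https://cloud.example.org/remote.php/webdav/old-name.txt",
                 add_headers(Destination =
                   "https://cloud.example.org/remote.php/webdav/new-name.txt"),
                 authenticate("user", "password"))

    status_code(resp)   # 201 Created on success (RFC 4918)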
Converts pathways from WikiPathways GPML format or KEGG KGML format into igraph objects. Includes tools to find all cycles in the resulting graphs and determine which ones involve negative feedback (inhibition).
This package provides a set of utility functions to prevent the spread of utility scripts across W4M (Workflow4Metabolomics) tools by centralizing them in a single package. Note that some are meant to be replaced by dedicated packages in the future, like the parse_args() function: it is here only to prepare the ground for more global changes in W4M scripts and tools. This package is used by part of the W4M Galaxy modules, some of which are available in the community-maintained GitHub repository for Metabolomics Galaxy tools <https://github.com/workflow4metabolomics/tools-metabolomics>. See Delporte et al. (2025) <doi:10.1002/cpz1.70095> for more details.
Retrieve geographical information for airports using their IATA or ICAO codes.
This package provides data to be used by the wordpiece algorithm in order to tokenize text into somewhat meaningful chunks. Included vocabularies were retrieved from <https://huggingface.co/bert-base-cased/resolve/main/vocab.txt> and <https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt> and parsed into an R-friendly format.
This package provides functions to calculate the Water Deficit Index (WDI) and the Evaporative Fraction (EF) using geospatial raster data such as fractional vegetation cover (FVC) and surface-air temperature difference (TS-TA). The package automates regression-based edge fitting and produces continuous spatial maps of surface moisture and evaporative dynamics.
Implementation of Johansen's general formulation of the Welch-James statistic with Approximate Degrees of Freedom, which makes it suitable for testing any linear hypothesis concerning cell means in univariate and multivariate mixed model designs when the data exhibit non-normality and non-homogeneous variance. Some improvements, namely trimmed means and Winsorized variances, and bootstrapping for calculating an empirical critical value, have been added to the classical formulation. The code departs from a previous SAS implementation by L.M. Lix and H.J. Keselman, available at <http://supp.apa.org/psycarticles/supplemental/met_13_2_110/SAS_Program.pdf> and published in Keselman, H.J., Wilcox, R.R., and Lix, L.M. (2003) <DOI:10.1111/1469-8986.00060>.
This package performs Wasserstein projections from the predictive distributions of any model into the space of predictive distributions of linear models. We utilize L1 penalties to also reduce the complexity of the model space. This package employs the methods described in Dunipace, Eric and Lorenzo Trippa (2020) <doi:10.48550/arXiv.2012.09999>.
This package provides a parallel implementation of Weighted Subspace Random Forest. The Weighted Subspace Random Forest algorithm was proposed in the International Journal of Data Warehousing and Mining by Baoxun Xu, Joshua Zhexue Huang, Graham Williams, Qiang Wang, and Yunming Ye (2012) <DOI:10.4018/jdwm.2012040103>. The algorithm can classify very high-dimensional data with random forests built using small subspaces. A novel variable weighting method is used for variable subspace selection in place of the traditional random variable sampling. This new approach is particularly useful in building models from high-dimensional data.
Formal implementation of White's test of heteroskedasticity and a bootstrapped version of it, developed following the methodology of Jeong, J. and Lee, K. (1999) <https://yonsei.pure.elsevier.com/en/publications/bootstrapped-whites-test-for-heteroskedasticity-in-regression-mod>.
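The classical (non-bootstrapped) White test can be sketched in a few lines of base R: regress the squared OLS residuals on the regressors, their squares, and cross-products, then compare n times the auxiliary R-squared with a chi-squared distribution (simulated data, not this package's API):

    set.seed(1)
    n <- 200
    x1 <- rnorm(n); x2 <- rnorm(n)
    y <- 1 + x1 + x2 + rnorm(n, sd = abs(x1))   # heteroskedastic errors

    fit <- lm(y ~ x1 + x2)
    u2 <- residuals(fit)^2

    # Auxiliary regression on regressors, their squares, and the cross-product.
    aux <- lm(u2 ~ x1 + x2 + I(x1^2) + I(x2^2) + I(x1 * x2))

    stat <- n * summary(aux)$r.squared          # n * R^2
    df <- length(coef(aux)) - 1
    pchisq(stat, df, lower.tail = FALSE)        # p-value of White's test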
Calculates the minimal sample size for the Wilcoxon-Mann-Whitney test that is needed for a given power and two-sided type I error rate. The method works for metric data with and without ties, count data, ordered categorical data, and even dichotomous data. However, data from the reference group are needed to generate synthetic data for the treatment group based on a relevant effect. See Happ et al. (2019, <doi:10.1002/sim.7983>) for details.
This package provides a set of tools for processing and analyzing data developed in the context of the "Who Has Eaten the Planet" (WHEP) project, funded by the European Research Council (ERC). For more details on the multi-regional input-output model "Food and Agriculture Biomass Input-Output" (FABIO), see Bruckner et al. (2019) <doi:10.1021/acs.est.9b03554>.
Easily plot heat maps of the world, based on continuous or categorical data. Country labels can also be added to the map.
Introduce weights into Ordered Weighted Averages and extend bivariate means based on n-ary tree construction. Please refer to the following: G. Beliakov, H. Bustince, and T. Calvo (2016, ISBN: 978-3-319-24753-3), G. Beliakov (2018) <doi:10.1002/int.21913>, G. Beliakov and J.J. Dujmovic (2016) <doi:10.1016/j.ins.2015.10.040>, J.J. Dujmovic and G. Beliakov (2017) <doi:10.1002/int.21828>.
This package provides inference for the Wilcoxon-Mann-Whitney test under the null hypothesis H0: AUC = 0.5 for continuous, discrete or mixed random variables. Traditional implementations test H0: F = G, which is inappropriately broad and leads to erroneous inferences. Methods are described in M. Grendar (2025) "Wilcoxon-Mann-Whitney Test of No Group Discrimination" <doi:10.48550/arXiv.2511.20308>.
This package provides a collection of color palettes extracted from various books on my son Wren's bookshelf. Also included are a number of functions and wrappers to utilize them, as well as to subset the palettes to a desired number of colors or to specific colors.
This package provides a collection of white noise hypothesis tests for functional time series and related visualizations. These include tests based on the norms of autocovariance operators that are built under both strong and weak white noise assumptions. Additionally, tests based on the spectral density operator and on principal component dimension reduction are included, which are built under strong white noise assumptions. Also, this package provides goodness-of-fit tests for functional autoregressive models of order 1. These methods are described in Kokoszka et al. (2017) <doi:10.1016/j.jmva.2017.08.004>, Characiejus and Rice (2019) <doi:10.1016/j.ecosta.2019.01.003>, Gabrys and Kokoszka (2007) <doi:10.1198/016214507000001111>, and Kim et al. (2023) <doi:10.1214/23-SS143>, respectively.
For multivariate datasets, this function enables the estimation of missing data using the Weighted AVERage of all possible Regressions computed from the available data.