Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
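For example, a minimal Python sketch of calling this endpoint (the base URL is a placeholder and the exact pagination header names depend on the service, so both are assumptions here):

import requests

# Placeholder host; substitute the actual address of this service.
BASE_URL = "https://example.org"

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

print(resp.headers)   # pagination information (e.g. number of pages) is returned here
print(resp.json())    # the matching packages for this page, assuming a JSON body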
If you'd like to join our channel search, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a wavelet-based LSTM model, a type of neural network architecture that uses a wavelet technique to pre-process the input data before passing it through a Long Short-Term Memory (LSTM) network. The wavelet-based LSTM model is a powerful approach that combines the benefits of wavelet analysis and LSTM networks to improve the accuracy of predictions in various applications. This package has been developed using the algorithms of Anjoy and Paul (2017) and Paul and Garai (2021) <DOI:10.1007/s00521-017-3289-9> <doi:10.1007/s00500-021-06087-4>.
Generate data frames from templates.
Allows you to generate, on demand or in batch, any R documentation file, whatever its kind: data, function, class, or package. It populates documentation sections, either automatically or by considering your input. Input code can be standard R code or offensive programming code. Documentation content completeness depends on the type of code you use. With offensive programming code, expect generated documentation to be fully completed, from both a format and a content point of view. With some standard R code, you will have to activate post-processing to fill in any section that requires complements. Produced manual page validity is automatically tested against R documentation compliance rules. Documentation language proficiency, wording style, and phrasal adjustments remain your job.
This package provides a set of wrappers intended to check, read and download information from Wikimedia sources. It is specifically created to work with names of celebrities, in which case their information and statistics can be downloaded. It also builds links and snippets to use in combination with the function gallery() in the netCoin package.
This package performs an analysis of time-to-event clinical trial data using various "win time" methods, including 'ewt', 'ewtr', 'rmt', 'ewtp', 'rewtp', 'ewtpr', 'rewtpr', 'max', 'wtr', 'rwtr', 'pwt', and 'rpwt'. These methods are used to calculate and compare treatment effects on ordered composite endpoints. The package handles event times, event indicators, and treatment arm indicators and supports calculations on observed and resampled data. Detailed explanations of each method and usage examples are provided in "Use of win time for ordered composite endpoints in clinical trials" by Troendle et al. (2024) <https://pubmed.ncbi.nlm.nih.gov/38417455/>. For more information, see the package documentation or the vignette titled "Introduction to wintime".
Interactive tools for generating random samples. Users select an .xlsx, .csv, or delimited .txt file with population data and are walked through selecting the sample type (Simple Random Sample or Stratified), the number of backups desired, and a "stratify_on" value (if desired). The sample size is determined using a normal approximation to the hypergeometric distribution based on Nicholson (1956) <doi:10.1214/aoms/1177728270>. An .xlsx file is created with the sample and key metadata for reference. It is menu-driven and lets users pick an output directory. See vignettes for a detailed walk-through.
This package provides an efficient implementation of the Wild Binary Segmentation and Binary Segmentation algorithms for estimating the number and locations of multiple change-points in the piecewise constant function plus Gaussian noise model.
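For reference, a minimal sketch of the underlying model in generic notation (not the package's own): X_t = f_t + \varepsilon_t for t = 1, \dots, T, where f_t is piecewise constant with an unknown number and locations of change-points and the \varepsilon_t are i.i.d. N(0, \sigma^2) noise; the algorithms estimate where f_t jumps.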
Inferences about counterfactuals are essential for prediction, answering what if questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based largely on speculation hidden in convenient modeling assumptions that few would be willing to defend. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model-dependence, which makes this problem hard to detect. WhatIf offers easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests offered here, then we know that substantive inferences will be sensitive to at least some modeling choices that are not based on empirical evidence, no matter what method of inference one chooses to use. WhatIf implements the methods for evaluating counterfactuals discussed in Gary King and Langche Zeng, 2006, "The Dangers of Extreme Counterfactuals," Political Analysis 14 (2) <DOI:10.1093/pan/mpj004>; and Gary King and Langche Zeng, 2007, "When Can History Be Our Guide? The Pitfalls of Counterfactual Inference," International Studies Quarterly 51 (March) <DOI:10.1111/j.1468-2478.2007.00445.x>.
Fast computation of Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) for weighted binary classification problems (weights are example-specific cost values).
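As a rough illustration of the idea only (a generic weighted-AUC computation in Python, not this package's API; it ignores score ties for simplicity):

import numpy as np

def weighted_auc(labels, scores, weights):
    """AUC where each example contributes its cost weight instead of a unit count."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # sort by decreasing score
    labels = np.asarray(labels)[order]
    weights = np.asarray(weights, dtype=float)[order]
    tps = np.cumsum(weights * (labels == 1))                # weighted true positives
    fps = np.cumsum(weights * (labels == 0))                # weighted false positives
    tpr = np.concatenate(([0.0], tps / tps[-1]))            # weighted true positive rate
    fpr = np.concatenate(([0.0], fps / fps[-1]))            # weighted false positive rate
    return np.trapz(tpr, fpr)                               # trapezoidal area under the ROC curve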
Four filters have been chosen, namely 'haar', 'c6', 'la8', and 'bl14' (kindly refer to the wavelets package in the CRAN repository for more supported filters). Levels of decomposition are 2, 3, 4, etc., up to the maximum decomposition level, which is the ceiling of the base-2 logarithm of the series length. For each combination, two models are run separately. Results are stored in 'input'. The first five metrics are expected to be at a minimum and the last three at a maximum for a model to be considered good. Firstly, every metric value (among the first five) is searched across all columns; the minimum values are denoted 'MIN' and the other values are denoted 'NA'. Secondly, every metric (among the last three) is searched across all columns; the maximum values are denoted 'MAX' and the other values are denoted 'NA'. 'output' contains the same number of rows (8) and columns (the number of filter-level combinations) as 'input'. Values in 'output' are the corresponding 'NA', 'MIN' or 'MAX'. Finally, the column containing the minimum number of 'NA' values is denoted as the best ('FL'). In the special case where two columns have an equal number of 'NA' values, the one with the fewest 'NA' values in the first five rows is inferred as the best. 'FL_metrics_values' are the corresponding metric values. 'WARIGAANbest' is the data frame (dimension: 1*8) containing the different metrics of the best filter-level combination. More details can be found in Garai and others (2023) <doi:10.13140/RG.2.2.11977.42087>.
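A minimal Python sketch of the selection logic described above, under the stated assumptions (8 metric rows, columns are filter-level combinations; the names here are generic, not the package's own objects):

import numpy as np

def best_filter_level(metrics):
    """metrics: 8 x n array; rows 0-4 are minimised metrics, rows 5-7 are maximised."""
    metrics = np.asarray(metrics, dtype=float)
    marks = np.full(metrics.shape, np.nan)                # 'NA' everywhere by default
    for i in range(5):                                    # first five rows: mark the minima ('MIN')
        marks[i, metrics[i] == metrics[i].min()] = 0.0
    for i in range(5, 8):                                 # last three rows: mark the maxima ('MAX')
        marks[i, metrics[i] == metrics[i].max()] = 1.0
    na_counts = np.isnan(marks).sum(axis=0)               # fewer 'NA's means a better column
    best = np.flatnonzero(na_counts == na_counts.min())
    if len(best) > 1:                                     # tie-break on the first five rows only
        best = best[np.argmin(np.isnan(marks[:5, best]).sum(axis=0))]
    else:
        best = best[0]
    return int(best)                                      # column index of the best combination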
This package provides a set of tools for processing and analyzing data developed in the context of the "Who Has Eaten the Planet" (WHEP) project, funded by the European Research Council (ERC). For more details on the multi-regional input-output model "Food and Agriculture Biomass Input-Output" (FABIO), see Bruckner et al. (2019) <doi:10.1021/acs.est.9b03554>.
This package provides automated downloading, parsing and formatting of weather data for Australia through API endpoints provided by the Department of Primary Industries and Regional Development (DPIRD) of Western Australia and by the Science and Technology Division of the Queensland Government's Department of Environment and Science (DES). It also provides the Australian Government Bureau of Meteorology (BOM) précis and coastal forecasts, and supports downloading and importing radar and satellite imagery files. DPIRD weather data are accessed through public APIs provided by DPIRD, <https://www.dpird.wa.gov.au/online-tools/apis/>, providing access to weather station data from the DPIRD weather station network. Australia-wide weather data are based on Australian Bureau of Meteorology (BOM) data accessed through SILO (Scientific Information for Land Owners), Jeffrey et al. (2001) <doi:10.1016/S1364-8152(01)00008-1>. DPIRD data are made available under a Creative Commons Attribution 3.0 Licence (CC BY 3.0 AU) <https://creativecommons.org/licenses/by/3.0/au/deed.en>. SILO data are released under a Creative Commons Attribution 4.0 International licence (CC BY 4.0) <https://creativecommons.org/licenses/by/4.0/>. BOM data are (c) Australian Government Bureau of Meteorology and released under a Creative Commons (CC) Attribution 3.0 licence or Public Access Licence (PAL) as appropriate; see <https://www.bom.gov.au/copyright> for further details.
Computation of approximate potentials for both gradient and non-gradient fields. It is known from physics that only gradient fields, also known as conservative fields, have a well-defined potential function. Here we present an algorithm, based on the classical Helmholtz decomposition, to obtain an approximate potential function for non-gradient fields. More information in Rodríguez-Sánchez (2020) <doi:10.1371/journal.pcbi.1007788>.
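As a brief illustration of the underlying idea (standard vector-calculus notation, not the package's own): a conservative field satisfies F = -\nabla V for a scalar potential V, whereas the Helmholtz decomposition writes a general field as F = -\nabla V + R with a divergence-free remainder R, which is why an approximate potential can still be associated with the gradient part of a non-gradient field.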
This package provides a WebSocket client interface for R. WebSocket is a protocol for low-overhead real-time communication: <https://en.wikipedia.org/wiki/WebSocket>.
Post-construction fatality monitoring studies at wind facilities are based on data from searches for bird and bat carcasses in plots beneath turbines. Bird and bat carcasses, however, often fall outside of the searched area. To compensate, area correction (AC) estimations are calculated to estimate the percentage of fatalities that fall within the searched area versus those that fall outside of it. This package provides two likelihood-based methods and one physics-based method (Hull and Muir (2010) <doi:10.1080/14486563.2010.9725253>, Huso and Dalthorp (2014) <doi:10.1002/jwmg.663>) to estimate the carcass fall distribution. There are also functions for calculating the proportion of area searched within one-unit annuli, log-logistic distribution functions, and truncated distribution functions.
This package implements a functional approximation of the four panel cointegration tests developed by Westerlund (2007) <doi:10.1111/j.1468-0084.2007.00477.x>. The tests are based on structural rather than residual dynamics and allow for heterogeneity in both the long-run cointegrating relationship and the short-run dynamics. The package includes logic for automated lag and lead selection via AIC/BIC, Bartlett kernel long-run variance estimation, and a bootstrap procedure to handle cross-sectional dependence. It also includes a bootstrapping distribution visualization function for diagnostic purposes.
Time series outlier detection with a non-parametric test. This is a new outlier detection methodology (washer): efficient, with time-saving elaboration and implementation procedures; adaptable, requiring only general assumptions and very short time series; and reliable and effective, as it involves a robust non-parametric test. Two approaches are available: single time series (a vector) and grouped time series (a data frame). For more information: Andrea Venturini (2011), Statistica - Universita di Bologna, Vol. 71, pp. 329-344. For an informal explanation, see R-bloggers on the web.
Easily plot heat maps of the world, based on continuous or categorical data. Country labels can also be added to the map.
Power calculator for the two-sample Wilcoxon-Mann-Whitney rank-sum test for a continuous outcome (Mollan, Trumble, Reifeis et al., Mar. 2020) <doi:10.1080/10543406.2020.1730866> <arXiv:1901.04597>, (Mann and Whitney 1947) <doi:10.1214/aoms/1177730491>, (Shieh, Jan, and Randles 2006) <doi:10.1080/10485250500473099>.
Explore data in the wpp2019 (or 2017, 2015, ...) package using a shiny interface.
This package provides functions to convert between weather metrics, including conversions for metrics of temperature, air moisture, wind speed, and precipitation. This package also includes functions to calculate the heat index from air temperature and air moisture.
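For instance, the kind of temperature conversion involved looks like this (a generic Python sketch, not this package's functions):

def celsius_to_fahrenheit(temp_c):
    # F = C * 9/5 + 32
    return temp_c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(temp_f):
    # C = (F - 32) * 5/9
    return (temp_f - 32.0) * 5.0 / 9.0

print(celsius_to_fahrenheit(30))   # 86.0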
Calibrates Bayesian two-stage designs for single-arm phase II trials with time-to-event endpoints using a window-cohort with controlled roll-on. Interim monitoring is anchored to a locked interim cohort and a pre-specified follow-up requirement, so analysis timing remains predictable while preserving follow-up maturity. The package searches feasible interim rules, optimizes final sample size and decision thresholds, evaluates operating characteristics by Monte Carlo simulation, and supports exponential, Weibull, log-normal, log-logistic, and user-defined baseline survival models. Related published foundations include Simon (1989) <doi:10.1016/0197-2456(89)90015-9> and Cotterill and Whitehead (2015) <doi:10.1002/sim.6426>.
Meta testing is the ability to test a function without having to provide its parameter values. Those values are generated, based on semantic naming of parameters, as introduced by the package 'wyz.code.offensiveProgramming'. Value generation logic can be extended with your own data types and generation schemes, to meet your most specific requirements and to answer a wide variety of usages, from general use cases to very specific ones. With meta testing, it becomes easier to generate stress test campaigns, non-regression test campaigns and robustness test campaigns, as generated tests can be saved and reused from session to session. The main benefits of using 'wyz.code.metaTesting' are the ability to discover valid and invalid function parameter combinations, the ability to infer valid parameter values, and smart summaries that allow you to focus on dysfunctional cases.
The German Wikibook "GNU R" introduces R to new users. This package is a collection of functions and data used in the German Wikibook "GNU R".