Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
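For example, the endpoint can be queried from R with the 'httr' package (a minimal sketch; the host below is a placeholder, substitute this site's actual base URL):

    library(httr)

    # "https://toys.example" is a placeholder host, not the real one.
    resp <- GET("https://toys.example/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    results <- content(resp, as = "parsed")  # parse the body (assuming JSON)
    headers(resp)                            # pagination info lives here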
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This collection of diverse functions facilitates the efficient treatment and convenient analysis of experimental high-throughput (omics) data. Several functions address advanced object conversions, such as manipulating lists of lists or lists of arrays, reorganizing lists into arrays or into separate vectors, merging multiple entries, etc. Another set of functions provides speed-optimized calculation of the standard deviation (SD), coefficient of variation (CV), or standard error of the mean (SEM) for data in matrices, or of means per line with respect to additional grouping (e.g. n groups of replicates). A group of functions facilitates dealing with non-redundant information by indexing unique entries, adding counters to redundant ones, or eliminating lines with respect to redundancy in a given reference column, etc. Help is provided to identify very closely matching numeric values, to generate (partial) distance matrices for very big data in a memory-efficient manner, or to reduce the complexity of large data sets by combining very close values. Other functions help align a matrix or data.frame to a reference using partial matching, or mine an experimental setup to extract patterns of replicate samples. Large experimental data sets often need additional filtering, and adequate functions are provided. Convenient data normalization is supported in various modes, as is parameter estimation via permutation or bootstrap, and flexible testing of multiple pairwise combinations using the framework of limma. Batch reading (or writing) of sets of files and combining the data into arrays is supported, too.
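The flavor of these speed-oriented row summaries can be sketched in base R (a generic illustration with invented helper names, not this package's own functions):

    # Row-wise mean, SD, CV and SEM per group of replicate columns
    # (illustrative sketch only; function and argument names are made up).
    row_grp_stats <- function(mat, grp) {
      lapply(split(seq_len(ncol(mat)), grp), function(cols) {
        sub <- mat[, cols, drop = FALSE]
        m   <- rowMeans(sub)
        s   <- apply(sub, 1, sd)
        cbind(mean = m, sd = s, cv = s / m, sem = s / sqrt(ncol(sub)))
      })
    }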
Query Wikidata and get facts from current and historic Wikipedia main pages.
This package implements the Whale Optimization Algorithm (WOA) for k-medoids clustering, providing tools for effective and efficient cluster analysis in various data sets. The methodology is based on "The Whale Optimization Algorithm" by Mirjalili and Lewis (2016) <doi:10.1016/j.advengsoft.2016.01.008>.
Access Wikipedia through the several MediaWiki APIs (<https://www.mediawiki.org/wiki/API>), as well as through the XTools API (<https://www.mediawiki.org/wiki/XTools/API>). Ensure your API calls are correct, and receive results in tidy tibbles.
This package provides a single function to fit data from an input data frame to one of the selected Weibull functions (w2, w3, and their truncated versions), calculating the scale, location, and shape parameters accordingly. The resulting plots and files are saved into the folder provided by the user via the folder parameter. References: a) John C. Nash, Ravi Varadhan (2011). "Unifying Optimization Algorithms to Aid Software System Users: optimx for R" <doi:10.18637/jss.v043.i09>.
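The underlying task, a two-parameter Weibull fit, can be illustrated with MASS::fitdistr (a generic sketch of the idea, not this package's interface):

    library(MASS)
    set.seed(1)
    x   <- rweibull(200, shape = 1.8, scale = 5)  # simulated lifetimes
    fit <- fitdistr(x, "weibull")                 # w2: shape and scale
    fit$estimate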
This package implements detection of the number and locations of change-points in a time series using the Wild Binary Segmentation and the Locally Stationary Wavelet model of Korkas and Fryzlewicz (2017) <doi:10.5705/ss.202015.0262>.
Shows the relationship between an independent and dependent variable through Weight of Evidence and Information Value.
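The standard definitions behind these two measures can be computed by hand (an illustrative sketch, not this package's own functions):

    # Weight of Evidence per bin and the overall Information Value,
    # for a binned predictor x and a 0/1 outcome y (illustrative only).
    woe_iv <- function(x, y) {
      tab  <- table(x, y)
      good <- tab[, "1"] / sum(tab[, "1"])  # share of events per bin
      bad  <- tab[, "0"] / sum(tab[, "0"])  # share of non-events per bin
      woe  <- log(good / bad)
      list(woe = woe, iv = sum((good - bad) * woe))
    }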
This package provides functions for determining the effect of data weights on the variance of survey data: users will load a data set which has a weights column, and the package will calculate the design effect (DEFF), weighting loss, root design effect (DEFT), effective sample size (ESS), and/or weighted margin of error.
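Kish's classic approximations give the flavor of these quantities (illustrative formulas; the package's exact function names may differ):

    w    <- c(0.8, 1.2, 1.0, 1.5, 0.5)   # hypothetical weights column
    n    <- length(w)
    deff <- n * sum(w^2) / sum(w)^2      # design effect (DEFF), Kish
    deft <- sqrt(deff)                   # root design effect (DEFT)
    ess  <- n / deff                     # effective sample size (ESS)
    loss <- deff - 1                     # weighting loss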
Efficient Bayesian generalized linear models with time-varying coefficients as in Helske (2022, <doi:10.1016/j.softx.2022.101016>). Gaussian, Poisson, and binomial observations are supported. The Markov chain Monte Carlo (MCMC) computations are done using Hamiltonian Monte Carlo provided by Stan, using a state space representation of the model in order to marginalise over the coefficients for efficient sampling. For non-Gaussian models, the package uses the importance sampling type estimators based on approximate marginal MCMC as in Vihola, Helske, Franks (2020, <doi:10.1111/sjos.12492>).
Adds ... to a function's argument list so that it can tolerate non-matching arguments.
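In base R terms the idea looks like this (a generic illustration of the pattern, not this package's API):

    f <- function(x) x^2
    # f(2, y = 3)              # error: unused argument (y = 3)

    g <- function(x, ...) x^2  # the added ... swallows unmatched arguments
    g(2, y = 3)                # 4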
This package provides a collection of tools to fit and work with trophic Species Distribution Models. Trophic Species Distribution Models combine knowledge of trophic interactions with Bayesian structural equation models that model each species as a function of its prey (or predators) and environmental conditions. It exploits the topological ordering of the known trophic interaction network to predict species distribution in space and/or time, where the prey (or predator) distribution is unavailable. The method implemented by the package is described in Poggiato, Andréoletti, Pollock and Thuiller (2022) <doi:10.22541/au.166853394.45823739/v1>.
Supplies permutation-test alternatives to traditional hypothesis-test procedures such as two-sample tests for means, medians, and standard deviations; correlation tests; tests for homogeneity and independence; and more. Suitable for general audiences, including individual and group users, introductory statistics courses, and more advanced statistics courses that desire an introduction to permutation tests.
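A two-sample permutation test for a difference in means, the simplest of these procedures, can be sketched in a few lines (illustrative only, not this package's interface):

    # Approximate two-sided permutation p-value for mean(x) - mean(y).
    perm_test <- function(x, y, B = 9999) {
      obs  <- mean(x) - mean(y)
      pool <- c(x, y)
      stat <- replicate(B, {
        idx <- sample(length(pool), length(x))
        mean(pool[idx]) - mean(pool[-idx])
      })
      mean(abs(stat) >= abs(obs))
    }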
This package provides efficient implementation of the Wild Binary Segmentation and Binary Segmentation algorithms for estimation of the number and locations of multiple change-points in the piecewise constant function plus Gaussian noise model.
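Plain Binary Segmentation, the simpler of the two algorithms, can be sketched as recursive CUSUM maximization (a minimal illustration; the package's implementation is far more efficient and also provides Wild Binary Segmentation):

    # CUSUM statistics for a single change in mean at each split point.
    cusum <- function(x) {
      n <- length(x); b <- seq_len(n - 1); S <- cumsum(x)
      abs(sqrt((n - b) / (n * b)) * S[b] -
          sqrt(b / (n * (n - b))) * (S[n] - S[b]))
    }

    # Recursively split while the maximal CUSUM exceeds a threshold;
    # the robust noise estimate in the default is an illustrative choice.
    binseg <- function(x,
                       thr = mad(diff(x)) / sqrt(2) * sqrt(2 * log(length(x)))) {
      rec <- function(s, e) {
        if (e - s < 1) return(integer(0))
        cs <- cusum(x[s:e]); b <- which.max(cs)
        if (cs[b] < thr) return(integer(0))
        k <- s + b - 1
        c(rec(s, k), k, rec(k + 1, e))
      }
      rec(1, length(x))
    }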
Noise in time-series data significantly affects the accuracy of ARIMA models. Wavelet transformation decomposes the time series into subcomponents to reduce the noise and helps improve model performance. The wavelet-ARIMA model can achieve higher prediction accuracy than the traditional ARIMA model. This package provides a wavelet-ARIMA model for time series forecasting based on the algorithms of Aminghafari and Poggi (2012) <doi:10.1142/S0219691307002002> and Paul and Anjoy (2018) <doi:10.1007/s00704-017-2271-x>.
It generates the turn roster for an outlet that flows (water) 24x7, i.e. 168 hours a week, towards the area under command (the agricultural area to be irrigated). The area under command is owned in different shares by individual farmers. The outlet runs free of cost to irrigate the area under command 24x7, so the flow time of the outlet has to be divided based on the area owned by each individual farmer and the location of his land or farm. This roster is known as warabandi, and its generation in agricultural practice is a very tedious task: calculations of time down to the microsecond are error-prone, especially when performed by hand. That division of flow time for each individual farmer can be calculated with 'warabandi'. It also generates a full publishable report for an outlet and all the farmers whose farms are to be irrigated, reducing the risk of error and making the roster more reproducible. For more details about the warabandi system, see Bandaragoda DJ (1995) <https://publications.iwmi.org/pdf/H_17571i.pdf>.
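At its core the division is proportional arithmetic (a minimal sketch with hypothetical holdings; the package additionally accounts for farm location and produces a full report):

    area_ha   <- c(A = 2.5, B = 4.0, C = 1.5)   # hypothetical farm areas
    share_hrs <- 168 * area_ha / sum(area_ha)   # weekly flow time per farmer
    share_hrs
    #    A     B     C
    # 52.5  84.0  31.5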
Create dense vector representations of words and documents using 'quanteda'. Currently implements word2vec (Mikolov et al., 2013) <doi:10.48550/arXiv.1310.4546> and Latent Semantic Analysis (Deerwester et al., 1990) <doi:10.1002/(SICI)1097-4571(199009)41:6%3C391::AID-ASI1%3E3.0.CO;2-9>.
This package provides insight into how the best hand for a poker game changes based on the game dealt, players who stay in until the showdown and wildcards added to the base game. At this time the package does not support player tactics, so draw poker variants are not included.
This package performs an analysis of time-to-event clinical trial data using various "win time" methods, including 'ewt', 'ewtr', 'rmt', 'ewtp', 'rewtp', 'ewtpr', 'rewtpr', 'max', 'wtr', 'rwtr', 'pwt', and 'rpwt'. These methods are used to calculate and compare treatment effects on ordered composite endpoints. The package handles event times, event indicators, and treatment arm indicators and supports calculations on observed and resampled data. Detailed explanations of each method and usage examples are provided in "Use of win time for ordered composite endpoints in clinical trials" by Troendle et al. (2024) <https://pubmed.ncbi.nlm.nih.gov/38417455/>. For more information, see the package documentation or the vignette titled "Introduction to wintime".
This package provides statistical methods and visualizations that are often used in reliability engineering. It comprises a compact and easily accessible set of methods and visualization tools that make the examination and adjustment, as well as the analysis and interpretation, of field data (and bench tests) as simple as possible. Non-parametric estimators like Median Ranks, Kaplan-Meier (Abernethy, 2006, <ISBN:978-0-9653062-3-2>), Johnson (Johnson, 1964, <ISBN:978-0444403223>), and Nelson-Aalen for failure probability estimation within samples that contain failures as well as censored data are included. The package supports methods like Maximum Likelihood and Rank Regression (Genschel and Meeker, 2010, <DOI:10.1080/08982112.2010.503447>) for the estimation of multiple parametric lifetime distributions, as well as the computation of confidence intervals of quantiles and probabilities using the delta method related to Fisher's confidence intervals (Meeker and Escobar, 1998, <ISBN:9780471673279>) and the beta-binomial confidence bounds. If desired, mixture model analysis can be done with segmented regression and the EM algorithm. Besides the well-known Weibull analysis, the package also contains Monte Carlo methods for the correction and completion of imprecisely recorded or unknown lifetime characteristics (Verband der Automobilindustrie e.V. (VDA), 2016, <ISSN:0943-9412>). Plots are created statically ('ggplot2') or interactively ('plotly') and can be customized with functions of the respective visualization package. The graphical technique of probability plotting as well as the addition of regression lines and confidence bounds to existing plots are supported.
This is a small, lightweight package that lets users investigate the distribution of genotypes in genotype-by-sequencing (GBS) data where they expect (by and large) Hardy-Weinberg equilibrium, in order to assess rates of genotyping errors and the dependence of those rates on read depth. It implements a Markov chain Monte Carlo (MCMC) sampler using Rcpp to compute a Bayesian estimate of what we call the heterozygote miscall rate for restriction-associated digest (RAD) sequencing data and other types of reduced representation GBS data. It also provides functions to generate plots of expected and observed genotype frequencies. Some background on these topics can be found in a recent paper "Recent advances in conservation and population genomics data analysis" by Hendricks et al. (2018) <doi:10.1111/eva.12659>, and another paper describing the MCMC approach is in preparation with Gordon Luikart and Thierry Gosselin.
A prediction model is calibrated if, roughly, for any percentage x, we can expect that x out of 100 subjects experience the event among all subjects that have a predicted risk of x%. A calibration plot provides a simple, yet useful, way of assessing the calibration assumption. The Wally plot consists of a sequence of usual calibration plots. Among the plots contained within the sequence, one is the actual calibration plot obtained from the data; the others are obtained from similar simulated data under the calibration assumption. It provides the investigator with a direct visual understanding of the shape and sampling variability that are common under the calibration assumption. The original calibration plot from the data is included randomly among the simulated calibration plots, similarly to a police lineup. If the original calibration plot is not easily identified, the calibration assumption is not contradicted by the data. The method handles the common situations in which the data contain censored observations and occurrences of competing events.
Computationally easy modeling, interpolation, and forecasting of massive temporal-spatial data.
This package implements a functional approximation of the four panel cointegration tests developed by Westerlund (2007) <doi:10.1111/j.1468-0084.2007.00477.x>. The tests are based on structural rather than residual dynamics and allow for heterogeneity in both the long-run cointegrating relationship and the short-run dynamics. The package includes logic for automated lag and lead selection via AIC/BIC, Bartlett kernel long-run variance estimation, and a bootstrap procedure to handle cross-sectional dependence. It also includes a bootstrapping distribution visualization function for diagnostic purposes.
Encapsulates the pattern of untidying data into a wide matrix, performing some processing, then turning it back into a tidy form. This is useful for several operations such as co-occurrence counts, correlations, or clustering that are mathematically convenient on wide matrices.
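The pattern can be sketched with generic tidyverse tools (an illustration of the idea, not this package's own verbs):

    library(tidyr)
    library(tibble)

    tidy <- tibble(item    = rep(c("a", "b"), each = 3),
                   feature = rep(c("x", "y", "z"), 2),
                   value   = c(1, 2, 3, 2, 4, 6))

    # Untidy into a wide item-by-feature matrix...
    wide <- as.matrix(column_to_rownames(
      pivot_wider(tidy, names_from = feature, values_from = value), "item"))

    # ...do the mathematically convenient operation...
    cc <- cor(t(wide))   # item-item correlations

    # ...and return to a tidy form.
    setNames(as.data.frame(as.table(cc)), c("item1", "item2", "correlation"))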