Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
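For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL below is a placeholder for wherever this site is hosted):

    library(httr)

    base_url <- "https://example.org"   # placeholder host; substitute the real one

    res <- GET(paste0(base_url, "/api/packages"),
               query = list(search = "hello", page = 1, limit = 20))

    content(res)   # the list of matching packages
    headers(res)   # pagination information is returned here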
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Allows users to create high-quality heatmaps from labelled, hierarchical data. Specifically, for data with a two-level hierarchical structure, it will produce a heatmap where each row and column represents a category at the lower level. These rows and columns are then grouped by the higher-level group each category belongs to, with the names of each category and group shown in the margins. While other packages (e.g. 'dendextend') allow heatmap rows and columns to be arranged by groups only, hhmR also allows the labelling of the data at both the category and group level.
Given a database of previous treatment/placebo estimates, their standard errors and sample sizes, the program calculates a significance criterion and power estimate that take into account the among-trial variation.
HAR files are generated and consumed by GEMPACK applications. This package reads/writes HAR files (and SL4 files) directly using basic R functions.
The most common exact, asymptotic and resampling-based tests are provided for testing the homogeneity of variances of k normal distributions under normality. These tests are Bartlett, Bhandary & Dai, Brown & Forsythe, Chang et al., Gokpinar & Gokpinar, Levene, Liu and Xu, and Gokpinar. Also, a function for generating data from multiple normal distributions with any given parameters is provided. Bartlett, M. S. (1937) <doi:10.1098/rspa.1937.0109>; Bhandary, M., & Dai, H. (2008) <doi:10.1080/03610910802431011>; Brown, M. B., & Forsythe, A. B. (1974) <doi:10.1080/01621459.1974.10482955>; Chang, C. H., Pal, N., & Lin, J. J. (2017) <doi:10.1080/03610918.2016.1202277>; Gokpinar E. & Gokpinar F. (2017) <doi:10.1080/03610918.2014.955110>; Liu, X., & Xu, X. (2010) <doi:10.1016/j.spl.2010.05.017>; Levene, H. (1960) <https://cir.nii.ac.jp/crid/1573950400526848896>; Gökpınar, E. (2020) <doi:10.1080/03610918.2020.1800037>.
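For reference, one of the tests listed above, Bartlett (1937), is already available in base R; a small illustration on simulated data (this uses the stats package, not this package's own functions):

    # Three normal samples with equal variances
    set.seed(1)
    x <- c(rnorm(30, sd = 1), rnorm(30, sd = 1), rnorm(30, sd = 1))
    g <- factor(rep(1:3, each = 30))

    # Bartlett's test for homogeneity of variances (base R, stats package)
    bartlett.test(x, g)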
An algorithm for flexible conditional density estimation based on application of pooled hazard regression to an artificial repeated measures dataset constructed by discretizing the support of the outcome variable. To facilitate flexible estimation of the conditional density, the highly adaptive lasso, a non-parametric regression function shown to estimate cadlag (RCLL) functions at a suitably fast convergence rate, is used. The use of pooled hazards regression for conditional density estimation as implemented here was first described by Díaz and van der Laan (2011) <doi:10.2202/1557-4679.1356>. Building on the conditional density estimation utilities, non-parametric inverse probability weighted (IPW) estimators of the causal effects of additive modified treatment policies are implemented, using conditional density estimation to estimate the generalized propensity score. Non-parametric IPW estimators based on this can be coupled with undersmoothing of the generalized propensity score estimator to attain the semi-parametric efficiency bound (per Hejazi, Díaz, and van der Laan <doi:10.48550/arXiv.2205.05777>).
Type I error and optimal critical values to test statistical hypotheses based on the Neyman-Pearson lemma and the likelihood ratio test, using random samples from several distributions. The families of distributions are Bernoulli, Exponential, Geometric, Inverse Normal, Normal, Gamma, Gumbel, Lognormal, Poisson, and Weibull. This package is an ideal resource to help with the teaching of Statistics. The main references for this package are Casella G. and Berger R. (2003, ISBN:0-534-24312-6, "Statistical Inference. Second Edition", Duxbury Press) and Hogg, R., McKean, J., and Craig, A. (2019, ISBN:013468699, "Introduction to Mathematical Statistics. Eighth edition", Pearson).
We provide an R tool for computation and nonparametric plug-in estimation of Highest Density Regions (HDRs) and general level sets in the directional setting. Concretely, circular and spherical HDRs can be reconstructed from a data sample following Saavedra-Nieves and Crujeiras (2021) <doi:10.1007/s11634-021-00457-4>. This library also contains two real datasets in the circular and spherical settings. The first one concerns a problem from animal orientation studies and the second one is related to earthquake occurrences.
This package provides a method for identifying responses to experimental stimulation in mass or flow cytometry that uses high-dimensional analysis of measured parameters and can be performed with an end-to-end unsupervised approach. In the context of in vitro stimulation assays where high-parameter cytometry was used to monitor intracellular response markers, using cell populations annotated either through automated clustering or manual gating for a combined set of stimulated and unstimulated samples, HDStIM labels cells as responding or non-responding. The package also provides auxiliary functions to rank intracellular markers based on their contribution to identifying responses and to generate diagnostic plots.
h(x) is the h-index for the past x years. Here, the h(x) of a scientist/department/etc. can be calculated using the exported Excel file from a Web of Science citation report of a search. Also calculated are the year of first publication, the total number of publications, and the sum of times cited for the specified period. Therefore, for h-10: the year of first publication, total number of publications, and sum of times cited in the past 10 years are calculated. Note: the Excel file has to first be saved in .csv format.
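As a sketch of the underlying calculation only (generic base R, not this package's exported interface; the column names are hypothetical stand-ins for the Web of Science export):

    # Citation report previously saved as .csv (column names are hypothetical)
    report <- read.csv("citation_report.csv")

    # Restrict to the past 10 years for h-10
    cutoff <- as.integer(format(Sys.Date(), "%Y")) - 10
    recent <- report[report$PublicationYear >= cutoff, ]

    # h-index: largest h such that at least h papers have >= h citations
    cites <- sort(recent$TimesCited, decreasing = TRUE)
    h10   <- sum(cites >= seq_along(cites))

    first_year  <- min(recent$PublicationYear)   # year of first publication in the window
    n_pubs      <- nrow(recent)                  # total number of publications
    total_cites <- sum(recent$TimesCited)        # sum of times cited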
Harmony is an AI-based tool that allows you to compare items from questionnaires and identify similar content. You can try Harmony at <https://harmonydata.ac.uk/app/> and you can read our blog at <https://harmonydata.ac.uk/blog/> or at <https://fastdatascience.com/how-does-harmony-work/>. Documentation is available at <https://harmonydata.ac.uk/harmony-r-released/>.
By analyzing time series, it is possible to observe significant changes in the behavior of observations that frequently characterize events. Events present themselves as anomalies, change points, or motifs. In the literature, there are several methods for detecting events. However, searching for a suitable time series method is a complex task, especially considering that the nature of events is often unknown. This work presents Harbinger, a framework for integrating and analyzing event detection methods. Harbinger contains several state-of-the-art methods described in Salles et al. (2020) <doi:10.5753/sbbd.2020.13626>.
This package provides a dummy package to demonstrate how to interface to a jar file that resides inside an R package.
This package provides a procedure that fits derivative curves based on a sequence of quotient differences. In a hierarchical setting the package produces estimates of subject-specific and group-specific derivative curves. In a non-hierarchical setting the package produces a single derivative curve.
This package implements the Clarke-Wright algorithm to find a quasi-optimal solution to the Capacitated Vehicle Routing Problem. See Clarke, G. and Wright, J.W. (1964) <doi:10.1287/opre.12.4.568> for details. The implementation is accompanied by helper functions to inspect its solution.
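The heuristic is built on the savings s(i, j) = d(0, i) + d(0, j) - d(i, j) gained by serving customers i and j on one route instead of two separate depot round trips; a minimal sketch of computing and ranking these savings (generic R, not this package's own interface) could look like:

    # Distance matrix with the depot as point 1 and customers 2..n
    set.seed(42)
    pts <- matrix(runif(12), ncol = 2)   # depot plus 5 customers at random coordinates
    d   <- as.matrix(dist(pts))
    n   <- nrow(d)

    pairs   <- expand.grid(i = 2:n, j = 2:n)
    pairs   <- pairs[pairs$i < pairs$j, ]
    pairs$s <- d[1, pairs$i] + d[1, pairs$j] - d[cbind(pairs$i, pairs$j)]

    # Clarke-Wright then merges routes in decreasing order of savings,
    # subject to the vehicle capacity constraint.
    pairs[order(-pairs$s), ]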
We provide a toolbox to conduct a Bayesian meta-analysis for estimating the current expansion rate of the Universe, called the Hubble constant H0, via time delay cosmography. The input data are Fermat potential difference and time delay estimates. For a robust inference, we assume a Student's t error for these inputs. Given these inputs, the meta-analysis produces posterior samples of the model parameters including the Hubble constant via Metropolis-Hastings within Gibbs. The package provides an option to implement repelling-attracting Metropolis-Hastings within Gibbs in a case where the parameter space has multiple modes.
Computes the expectation of the number of transmissions and receptions considering a hop-by-hop transport model with a limited number of retransmissions per packet. It provides the theoretical results shown in Palma et al. (2016) <DOI:10.1109/TLA.2016.7555237> and also estimated values based on Monte Carlo simulations. It is also possible to consider random data and ACK probabilities.
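As a rough illustration of the simulated quantity (a generic Monte Carlo sketch under the stated model assumptions, not the package's own estimator): each hop retries a packet up to r times with per-attempt success probability p, and the transmissions used are counted.

    # Expected number of transmissions for one packet crossing H hops,
    # with at most r attempts per hop and per-attempt success probability p.
    simulate_packet <- function(H = 5, r = 3, p = 0.9) {
      total <- 0
      for (hop in seq_len(H)) {
        attempts <- 0
        success  <- FALSE
        while (attempts < r && !success) {
          attempts <- attempts + 1
          success  <- runif(1) < p
        }
        total <- total + attempts
        if (!success) break   # packet dropped after r failed attempts on this hop
      }
      total
    }

    mean(replicate(1e4, simulate_packet()))   # Monte Carlo estimate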
There are two interesting games in this package. One is 2048 (for Windows), using the up and down keys to control the direction until a 2048 tile appears. The other is 'what to eat today', prepared for people who have difficulty choosing, and it covers much of the delicious Cantonese cuisine.
Several functions are provided to harmonize CN8 (Combined Nomenclature 8 digits) and PC8 (Production Communautaire 8 digits) product codes over time, as well as the classification systems HS6 and BEC. Harmonization of CN8 codes is possible by default from 1995 to 2022, and of PC8 codes from 2001 to 2021.
This package provides a histogram slider input binding for use in Shiny. Currently supports creating histograms from numeric, date, and date-time vectors.
Compute 21 summary measures of health inequality and their corresponding confidence intervals for ordered and non-ordered dimensions using disaggregated data. Measures for ordered dimensions (e.g., Slope Index of Inequality, Absolute Concentration Index) also accept individual and survey data.
Simple tools for converting columns to new data types. Intuitive functions for columns with missing values.
This package provides a protocol that facilitates the processing and analysis of Hydrogen-Deuterium Exchange Mass Spectrometry data using p-value statistics and Critical Interval analysis. It provides a pipeline for analyzing data from HDXExaminer (Sierra Analytics, Trajan Scientific), automating matching and comparison of protein states through Welch's T-test and the Critical Interval statistical framework. Additionally, it simplifies data export, generates PyMol scripts, and ensures calculations meet publication standards. HDXBoxeR assists in various aspects of hydrogen-deuterium exchange data analysis, including reprocessing data, calculating parameters, identifying significant peptides, generating plots, and facilitating comparison between protein states. For details check papers by Hageman and Weis (2019) <doi:10.1021/acs.analchem.9b01325> and Masson et al. (2019) <doi:10.1038/s41592-019-0459-y>. HDXBoxeR citation: Janowska et al. (2024) <doi:10.1093/bioinformatics/btae479>.
Suite of tools for managing cached files, targeting use in other R packages. Uses rappdirs for cross-platform paths. Provides utilities to manage cache directories, including targeting files by path or by key; cached directories can be compressed and uncompressed easily to save disk space.
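For instance, a minimal sketch of the cross-platform path handling this builds on, calling rappdirs directly (the package name and key-to-file mapping below are hypothetical simplifications):

    library(rappdirs)

    # Per-user, OS-appropriate cache directory for a hypothetical package
    cache_dir <- user_cache_dir("mypackage")
    dir.create(cache_dir, recursive = TRUE, showWarnings = FALSE)

    # A simple key -> file mapping inside that directory
    cache_file <- function(key) file.path(cache_dir, paste0(key, ".rds"))
    saveRDS(mtcars, cache_file("example-key"))
    readRDS(cache_file("example-key"))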
An S4 class and several functions which utilize internally stored datasets and gauging data enable 1d water level interpolation. The S4 class (WaterLevelDataFrame) structures the computation and visualisation of 1d water level information along the German federal waterways Elbe and Rhine. hyd1d delivers 1d water level data - extracted from the FLYS database - and validated gauging data - extracted from the hydrological database WISKI7 - package-internally. For near-real-time computations, gauging data are queried externally from the PEGELONLINE REST API <https://pegelonline.wsv.de/webservice/dokuRestapi>.