Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
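For instance, the same query can be issued from R with the httr package. This is a minimal sketch; <host> is a placeholder for this site's address, which is not part of the API description above, and the exact shape of the JSON body and header names are not specified here.

    library(httr)
    # Query the package search API; substitute the actual host of this site.
    resp <- GET("https://<host>/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp, as = "parsed")   # matching packages (parsed JSON body)
    headers(resp)                  # pagination information is carried in the headers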
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Urban water and sanitation survey dataset collected by Water and Sanitation for the Urban Poor (WSUP) with technical support from Valid International. These citywide surveys collect data that allow water and sanitation service levels across an entire city to be characterised, while also allowing more detailed data to be collected in areas of particular interest. The surveys are intended to generate useful information for others working in the water and sanitation sector. The current release includes datasets from a survey conducted in Dhaka, Bangladesh, in March 2017. The Dhaka survey is one of a series to be conducted by WSUP in the cities in which it operates, including Accra, Ghana; Nakuru, Kenya; Antananarivo, Madagascar; Maputo, Mozambique; and Lusaka, Zambia. This package will be updated once the surveys in other cities are completed and their datasets have been made available.
Convert, validate, format, and elegantly print geographic coordinates and waypoints (paired latitude and longitude values) in decimal degrees; degrees and minutes; and degrees, minutes, and seconds. High-performance C++ code enables rapid conversion and formatting of large coordinate and waypoint datasets.
This package provides efficient implementations of weighted dependence measures and related asymptotic tests for independence. Implemented measures are the Pearson correlation, Spearman's rho, Kendall's tau, Blomqvist's beta, and Hoeffding's D; see, e.g., Nelsen (2006) <doi:10.1007/0-387-28678-0> and Hollander et al. (2015, ISBN:9780470387375).
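For example, the unweighted case can be computed as follows. This is a sketch assuming the package's wdm() and indep_test() interface as documented on CRAN; the data are simulated.

    library(wdm)
    set.seed(1)
    x <- rnorm(100)
    y <- x + rnorm(100)
    wdm(x, y, method = "kendall")            # Kendall's tau (no weights supplied)
    indep_test(x, y, method = "hoeffding")   # asymptotic test of independence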
Calculate the win ratio for prioritized outcomes and the 95% confidence interval based on Bebu and Lachin (2016) <doi:10.1093/biostatistics/kxv032>. Three types of outcomes can be analyzed: survival "failure-time" events, repeated survival "failure-time" events, and continuous or ordinal "non-failure time" events that are captured at specific time points in the study.
This package provides a toolbox of common robust statistical tests, including robust descriptives, robust t-tests, and robust ANOVA. It is also available as a module for jamovi (see <https://www.jamovi.org> for more information). Walrus is based on the WRS2 package by Patrick Mair, which is in turn based on the scripts and work of Rand Wilcox. These analyses are described in depth in the book 'Introduction to Robust Estimation and Hypothesis Testing'.
This package provides tools for generating simulated sawn timber strength grading data, with a main focus on statistical simulation based on covariance matrices. Simulation data for Norway spruce sawn timber from Austria are provided, as well as reference values from the literature (means and standard deviations of grade-determining properties) for a number of European countries.
Collect multichannel marketing data from sources such as Google Analytics, Facebook Ads, and many others using the Windsor.ai API <https://www.windsor.ai/api-fields/>.
This dataset was collected using a new four-arm within-study comparison design. The study aimed to examine the impact of a mathematics training intervention and a vocabulary study session on post-test scores in mathematics and vocabulary, respectively. This design facilitates both experimental and quasi-experimental identification of average causal effects.
Lossless WebP images are 26% smaller than PNG; lossy WebP images are 25-34% smaller than JPEG. This package reads and writes WebP images as 3-channel (RGB) or 4-channel (RGBA) bitmap arrays, using the conventions of the jpeg and png packages.
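A small round-trip sketch, assuming the package's read_webp()/write_webp() interface as documented on CRAN; the test image comes from the png package, and the file name and quality setting are illustrative.

    library(webp)
    library(png)
    img <- readPNG(system.file("img", "Rlogo.png", package = "png"))  # RGBA bitmap array
    write_webp(img, "Rlogo.webp", quality = 80)    # lossy encoding; quality in 0-100
    bitmap <- read_webp("Rlogo.webp")              # back to a 3- or 4-channel array
    dim(bitmap)                                    # height x width x channels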
Scrape lake metadata tables from Wikipedia <https://www.wikipedia.org/>.
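For example, metadata for a single lake can be pulled with the package's lake_wiki() function (a sketch assuming that interface; requires an internet connection):

    library(wikilake)
    lake_wiki("Lake Mendota")   # scrape the metadata table from the lake's Wikipedia page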
This package provides functions aiming to facilitate the analysis of the structure of animal acoustic signals in R. warbleR makes use of the basic sound analysis tools from the tuneR and seewave packages, and offers new tools for exploring and quantifying acoustic signal structure. The package allows users to organize and manipulate multiple sound files, create spectrograms of complete recordings or individual signals in different formats, run several measures of acoustic structure, and characterize different structural levels in acoustic signals (Araya-Salas et al. 2016 <doi:10.1111/2041-210X.12624>).
This package provides functions to compute Wasserstein barycenters of subset posteriors using the swapping algorithm developed by Puccetti, Rüschendorf and Vanduffel (2020) <doi:10.1016/j.jmaa.2017.02.003>. The Wasserstein barycenter is a geometric approach for combining subset posteriors. It allows for parallel and distributed computation of the posterior in case of complex models and/or big datasets, thereby increasing computational speed tremendously.
Book is "Linear Mixed Models: A Practical Guide Using Statistical Software" published in 2006 by Chapman Hall / CRC Press.
This package provides a collection of functions to perform the Application Programming Interface (API) calls associated with the Walk Score website (www.walkscore.com) within the R environment. These functions can be used to query the Walk Score and Transit Score database for a wide variety of information using R scripts. This package includes the simple Walk Score and Transit Score API calls, which return the scores associated with an input location, as well as calls which return some data used to calculate the scores. These functions are especially useful for mass data collection and gathering Walk Score and Transit Score values for large lists of locations.
The efficient treatment and convenient analysis of experimental high-throughput (omics) data is facilitated by this collection of diverse functions. Several functions address advanced object conversions, such as manipulating lists of lists or lists of arrays, reorganizing lists into arrays or separate vectors, merging multiple entries, etc. Another set of functions provides speed-optimized calculation of the standard deviation (sd), coefficient of variation (CV), or standard error of the mean (SEM) for data in matrices, or of means per line with respect to an additional grouping (e.g. n groups of replicates). A group of functions facilitates dealing with non-redundant information, by indexing unique entries, adding counters to redundant ones, or eliminating lines with respect to redundancy in a given reference column, etc. Help is provided to identify very closely matching numeric values, to generate (partial) distance matrices for very big data in a memory-efficient manner, or to reduce the complexity of large datasets by combining very close values. Other functions help align a matrix or data.frame to a reference using partial matching, or mine an experimental setup to extract patterns of replicate samples. Large experimental datasets often need additional filtering; adequate functions are provided for this, too. Convenient data normalization is supported in various modes, and parameter estimation via permutations or bootstrap, as well as flexible testing of multiple pairwise combinations using the limma framework, is also provided. Batch reading (or writing) of sets of files and combining data into arrays is supported as well.
Analyze a given data frame with multiple endpoints and return Kaplan-Meier survival probabilities together with the specified confidence interval. See Nabipoor M, Westerhout CM, Rathwell S, and Bakal JA (2023) <doi:10.1186/s12874-023-01857-0>.
Set of tools for manipulating gas exchange data from cardiopulmonary exercise testing.
K-means clustering, hierarchical clustering, and PCA with observational weights and/or variable weights. This package also includes the corresponding functions for data nuggets, which serve as representative samples of large datasets. See Cherasia et al. (2022) <doi:10.1007/978-3-031-22687-8_20> and Amaratunga et al. (2009) <doi:10.1002/9780470317129>.
Taxonomic information from Wikipedia, Wikicommons, Wikispecies, and Wikidata. Functions are included for getting taxonomic information from each of the sources just listed, as well as for performing taxonomic search.
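A brief sketch of per-source retrieval, assuming the package's wt_*-prefixed functions as documented on CRAN; an internet connection is required, and the example taxon is arbitrary.

    library(wikitaxa)
    wt_wikipedia("Malus domestica")    # taxonomic info from Wikipedia
    wt_wikispecies("Malus domestica")  # taxonomic info from Wikispecies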
A wavelet transform decomposes a series into multiple sub-series, called detail and smooth components, which helps capture volatility at multiple resolution levels with various models. Two hybrid machine learning (ML) models (an Artificial Neural Network and Support Vector Regression) have been developed in combination with stochastic models, feature selection, and optimization algorithms for prediction of the data. The algorithms follow Paul and Garai (2021) <doi:10.1007/s00500-021-06087-4>.
Infectious disease surveillance requires early outbreak detection. This package provides statistical tools for analyzing time-series monitoring data through three core methods: (a) EWMA (Exponentially Weighted Moving Average), (b) modified CUSUM (Cumulative Sum), and (c) adjusted Serfling models. Methodologies are based on Wang et al. (2010) <doi:10.1016/j.jbi.2009.08.003> and Wang et al. (2015) <doi:10.1371/journal.pone.0119923>. The package is designed for epidemiologists and public health researchers working with disease surveillance systems.
Additional options for making graphics in the context of analyzing high-throughput data are available here. This includes automatic segmenting of the current device (e.g. window) to accommodate multiple new plots, automatic checking for the optimal location of legends in plots, small histograms to insert as legends, histograms re-transforming axis labels to linear when plotting log2-transformed data, a violin plot <doi:10.1080/00031305.1998.10480559> function for a wide variety of input formats, principal component analysis (PCA) <doi:10.1080/14786440109462720> with bag plots <doi:10.1080/00031305.1999.10474494> to highlight and compare the center areas for groups of samples, generic MA plots (differential- versus average-value plots) <doi:10.1093/nar/30.4.e15>, staggered count plots, and generation of mouse-over interactive HTML pages.
Treemaps are a visually appealing graphical representation of numerical data using a space-filling approach. A plane or map is subdivided into smaller areas called cells. The cells in the map are scaled according to an underlying metric, which makes it possible to grasp the hierarchical organization and relative importance of many objects at once. This package contains two different implementations of treemaps: Voronoi treemaps and Sunburst treemaps. The Voronoi treemap function subdivides the plot area into polygonal cells according to the highest hierarchical level, then continues to subdivide those parental cells on the next lower hierarchical level, and so on. The Sunburst treemap is a computationally less demanding treemap that does not require iterative refinement, but simply generates circle sectors sized according to predefined weights. The Voronoi tessellation is based on functions from Paul Murrell (2012) <https://www.stat.auckland.ac.nz/~paul/Reports/VoronoiTreemap/voronoiTreeMap.html>.
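A minimal sketch of generating and drawing a Voronoi treemap, assuming the package's voronoiTreemap() and drawTreemap() functions and their documented arguments; the mtcars columns chosen for the hierarchy and cell sizes are arbitrary.

    library(WeightedTreemaps)
    df <- mtcars
    df$car <- rownames(mtcars)
    # Two-level hierarchy (gear > car), cells sized by car weight; the Voronoi
    # layout is computed iteratively and may take a moment.
    tm <- voronoiTreemap(data = df, levels = c("gear", "car"), cell_size = "wt")
    drawTreemap(tm)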
This package provides a single function to fit data from an input data frame to one of the selected Weibull functions (w2, w3, and their truncated versions), calculating the scale, location, and shape parameters accordingly. The resulting plots and files are saved into the folder specified by the user via the folder parameter. References: a) John C. Nash, Ravi Varadhan (2011). "Unifying Optimization Algorithms to Aid Software System Users: optimx for R" <doi:10.18637/jss.v043.i09>.