Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
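For example, a rough sketch of calling this endpoint from R with the httr package (the base URL is a placeholder, not the real address of this service):

library(httr)
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)   # the packages matching the query on this page
headers(resp)   # pagination information from the response headers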
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package covers several approaches to analysis of variance. It includes an assumption-testing section with a decision diagram for selecting the most appropriate technique, and it provides classical analysis of variance, its nonparametric equivalent (the Kruskal-Wallis test), and a Bayesian approach. Results are shown in an interactive Shiny panel, which allows modifying the arguments of the tests, contains interactive graphics, and presents automatic conclusions for the tests to aid interpretation of these analyses. AovBay uses Stan and FactorBayes for the Bayesian analysis and Highcharts for interactive charts.
This package provides algorithms for frequency-based pairing of alpha-beta T cell receptors.
This package implements existing adaptive design methods for clinical trials. It includes functions for calculating power and stopping boundaries (sample sizes) for two-group group sequential designs, adaptive designs with co-primary endpoints, biomarker-informed adaptive designs, and more.
An R Shiny application for the visual and statistical exploration and web communication of archaeological spatial data, whether remains or sites. It offers interactive 3D and 2D visualisations (cross sections and maps of remains, a timeline of the work carried out at a site) which can be exported in SVG and HTML formats. It performs simple spatial statistics (convex hull, regression surfaces, 2D kernel density estimation) and allows exporting data to other online applications for more complex methods. archeoViz can be used locally offline or deployed on a server, either with interactive input of data or with a static data set. An example is provided at <https://analytics.huma-num.fr/archeoviz/en>.
This package provides functions to convert origin-destination data, represented as straight desire lines in the sf Simple Features class system, into JSON files that can be directly imported into A/B Street <https://www.abstreet.org>, a free and open source tool for simulating urban transport systems and scenarios of change <doi:10.1007/s10109-020-00342-2>.
The functions in this package evaluate the measurement process for the chemical components of water, numerically or graphically. The TSSS(), ICHS and datacheck() functions are useful for controlling the quality of measurements of the chemical components of a water sample. If one or more measurements contain an error, the generated graph indicates it by placing the point representing the sample outside the confidence interval. The CI() function evaluates the possibility that a water sample was contaminated after it was obtained. Validation() calculates the quality parameters of a technique for measuring a chemical component.
Interact with the Google Ads Data Hub API <https://developers.google.com/ads-data-hub/reference/rest>. The functionality allows fetching customer details and submitting queries to ADH.
This package provides a collection of tools for the estimation of animal home ranges.
This package provides functions to produce accessible HTML slides, HTML, Word and PDF documents from input R markdown files. Accessible PDF files are produced only on a Windows operating system. One aspect of accessibility is providing a headings structure that is recognised by a screen reader, giving a blind or partially-sighted person a navigational tool. A key aim is to produce documents of different formats easily from each of a collection of R markdown source files. Input R markdown files are rendered using the render() function from the rmarkdown package <https://cran.r-project.org/package=rmarkdown>. A zip file containing multiple output files can be produced from one function call. A user-supplied template Word document can be used to determine the formatting of an output Word document. Accessible PDF files are produced from Word documents using OfficeToPDF <https://github.com/cognidox/OfficeToPDF>. A convenience function, install_otp(), is provided to install this software. The option to print HTML output to (non-accessible) PDF files is also available.
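As a rough sketch of the set-up step described above (the package name accessr and the input file name are assumptions made here for illustration):

library(accessr)                  # package name assumed for illustration
install_otp()                     # one-off: installs OfficeToPDF for accessible PDF output (Windows only)
rmarkdown::render("report.Rmd")   # render a hypothetical R markdown source file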
This package provides a client for AWS Polly <http://aws.amazon.com/documentation/polly>, a speech synthesis service.
This package implements the adaptive sampling procedure, a framework for both positive unlabeled learning and learning with class label noise. Yang, P., Ormerod, J., Liu, W., Ma, C., Zomaya, A., Yang, J. (2018) <doi:10.1109/TCYB.2018.2816984>.
The ggarrow package is a ggplot2 extension that plots a variety of arrow segments with many customization options. The arrowheadr package makes it easy to create custom arrowheads and fins within the parameters that ggarrow functions expect. It includes preset arrowheads and a collection of functions to create and transform data for customizing arrows.
This package simulates auto-regressive data using sparse precision matrices and Cholesky factorization.
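As a generic illustration of the technique (not this package's interface), an AR(1) series can be drawn by factorizing its tridiagonal precision matrix:

n   <- 500
phi <- 0.7
# Precision matrix of a stationary AR(1) process with unit innovation variance
Q <- diag(c(1, rep(1 + phi^2, n - 2), 1))
Q[cbind(1:(n - 1), 2:n)] <- -phi
Q[cbind(2:n, 1:(n - 1))] <- -phi
R <- chol(Q)                 # Q = t(R) %*% R, with R upper triangular
x <- backsolve(R, rnorm(n))  # x ~ N(0, solve(Q)), i.e. one AR(1) realization

Storing Q as a sparse matrix (e.g. with the Matrix package) avoids holding the mostly-zero entries in memory when the series is long.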
This package provides a customisable set of tools for assessing and grading R or R-markdown scripts from students. It allows checking the correctness of code output, collecting runtime statistics, and performing static code analysis. The latter feature is made possible by representing R expressions using a tree structure.
The anomalize package enables a "tidy" workflow for detecting anomalies in data. The main functions are time_decompose(), anomalize(), and time_recompose(). When combined, they make it simple to decompose time series, detect anomalies, and create bands separating the "normal" data from the anomalous data at scale (i.e. for multiple time series). Time series decomposition is used to remove trend and seasonal components via the time_decompose() function; methods include seasonal decomposition of time series by Loess ("stl") and seasonal decomposition by piecewise medians ("twitter"). The anomalize() function implements two methods for anomaly detection of residuals: the interquartile range ("iqr") and generalized extreme studentized deviation ("gesd"). These methods are based on those used in the forecast package and the Twitter AnomalyDetection package. Refer to the associated functions for specific references for these methods.
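A rough sketch of that workflow on a small synthetic daily series (the object and column names here are illustrative, not from the package):

library(dplyr)
library(anomalize)
daily <- tibble(
  date  = seq(as.Date("2020-01-01"), by = "day", length.out = 120),
  value = sin(2 * pi * seq_len(120) / 7) + rnorm(120, sd = 0.1)
)
daily$value[60] <- 5                          # inject one obvious anomaly
daily %>%
  time_decompose(value, method = "stl") %>%   # split into trend, season, remainder
  anomalize(remainder, method = "iqr") %>%    # flag anomalous remainder values
  time_recompose()                            # add bands around the "normal" range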
Stanford ATLAS (Advanced Temporal Search Engine) is a powerful tool for constructing cohorts of patients extremely quickly and efficiently. This package is designed to interface directly with an instance of the ATLAS search engine and facilitates API queries and data dumps. A prerequisite is a good knowledge of the temporal language, so that queries can be constructed efficiently. More information is available at <https://shahlab.stanford.edu/start>.
Implementation of gene-level rare variant association tests targeting allelic series: genes where increasingly deleterious mutations have increasingly large phenotypic effects. The COding-variant Allelic Series Test (COAST) operates on the benign missense variants (BMVs), deleterious missense variants (DMVs), and protein truncating variants (PTVs) within a gene. COAST uses a set of adjustable weights that tailor the test towards rejecting the null hypothesis for genes where the average magnitude of effect increases monotonically from BMVs to DMVs to PTVs. See McCaw ZR, O'Dushlaine C, Somineni H, Bereket M, Klein C, Karaletsos T, Casale FP, Koller D, Soare TW. (2023) "An allelic series rare variant association test for candidate gene discovery" <doi:10.1016/j.ajhg.2023.07.001>.
This package provides a lightweight, dependency-free toolbox for pre-processing XY data from experimental methods (i.e. any signal that can be measured along a continuous variable). This package provides methods for baseline estimation and correction, smoothing, normalization, integration and peak detection. Baseline correction methods include polynomial fitting as described in Lieber and Mahadevan-Jansen (2003) <doi:10.1366/000370203322554518>, the Rolling Ball algorithm after Kneen and Annegarn (1996) <doi:10.1016/0168-583X(95)00908-6>, the SNIP algorithm after Ryan et al. (1988) <doi:10.1016/0168-583X(88)90063-8>, 4S Peak Filling after Liland (2015) <doi:10.1016/j.mex.2015.02.009> and more.
This package implements a credential chain for Azure OAuth 2.0 authentication based on the httr2 package's OAuth framework. It sequentially attempts authentication methods until one succeeds. During development it allows interactive browser-based flows (Device Code and Auth Code), and in batch mode it uses a non-interactive flow (Client Secret).
This package provides a Tcl/Tk GUI for some basic functions in the ade4 package.
An interface to the ArcGIS arcpy and arcgis Python APIs <https://pro.arcgis.com/en/pro-app/latest/arcpy/get-started/arcgis-api-for-python.htm>. It provides various tools for installing and configuring a Conda environment for accessing ArcGIS geoprocessing functions. Helper functions for manipulating and converting ArcGIS objects from R are also provided.
This package provides capabilities for processing Apache HTTPD log files. Its main functionality is extracting data from access and error log files into data frames.
Find an upper bound for the total amount of overstatement of assets in a set of accounts, or estimate the amount of sales tax owed on a collection of transactions (Meeden and Sargent, 2007, <doi:10.1080/03610920701386802>).
Lets you open a fixed-width ASCII file (.txt or .dat) that has an accompanying setup file (.sps or .sas). These file combinations are sometimes referred to as .txt+.sps, .txt+.sas, .dat+.sps, or .dat+.sas. This only works on a text-and-setup pair in which the setup file contains instructions for reading that text file. It will NOT open other text files, or .sav, .sas, or .por data files. Fixed-width ASCII files with setup files are common in older (pre-2000) government data.