Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
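For example, a minimal sketch in R using the httr2 package (the base URL below is a placeholder for wherever this service is hosted):

    library(httr2)

    ## Placeholder host; substitute the actual address of this service.
    resp <- request("https://example.org") |>
      req_url_path("/api/packages") |>
      req_url_query(search = "hello", page = 1, limit = 20) |>
      req_perform()

    resp_headers(resp)    # pagination information lives here
    resp_body_json(resp)  # the matching packages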
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Interface to Altair <https://altair-viz.github.io>, which itself is a Python interface to Vega-Lite <https://vega.github.io/vega-lite/>. This package uses the Reticulate framework <https://rstudio.github.io/reticulate/> to manage the interface between R and Python.
The anomalize package enables a "tidy" workflow for detecting anomalies in data. The main functions are time_decompose(), anomalize(), and time_recompose(). Combined, they make it simple to decompose time series, detect anomalies, and create bands separating the "normal" data from the anomalous data at scale (i.e. for multiple time series). Time series decomposition is used to remove trend and seasonal components via the time_decompose() function; methods include seasonal decomposition of time series by Loess ("stl") and seasonal decomposition by piecewise medians ("twitter"). The anomalize() function implements two methods for anomaly detection of residuals: the interquartile range ("iqr") and the generalized extreme studentized deviate ("gesd"). These methods are based on those used in the forecast package and the Twitter AnomalyDetection package. Refer to the associated functions for specific references for these methods.
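As a quick sketch of this workflow (the demo dataset and column names below are assumptions drawn from typical usage, not guaranteed by this description):

    library(anomalize)
    library(dplyr)

    ## Decompose, flag anomalous residuals, then recompose the bands.
    ## tidyverse_cran_downloads is a demo dataset assumed to ship with anomalize.
    tidyverse_cran_downloads %>%
      time_decompose(count, method = "stl") %>%
      anomalize(remainder, method = "iqr") %>%
      time_recompose()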
AnaCoDa is a collection of models for analyzing genome-scale codon data using a Bayesian framework. It provides visualization routines and checkpointing for model fittings. The currently published models for analyzing gene data for selection on codon usage based on Ribosome Overhead Cost (ROC) are: ROC (Gilchrist et al. (2015) <doi:10.1093/gbe/evv087>) and ROC with phi (Wallace & Drummond (2013) <doi:10.1093/molbev/mst051>). In addition, AnaCoDa contains three currently unpublished models. The FONSE (First order approximation On NonSense Error) model analyzes gene data for selection on codon usage against nonsense error rates. The PA (PAusing time) and PANSE (PAusing time + NonSense Error) models use ribosome footprinting data to estimate ribosome pausing times with and without nonsense error rates.
This package provides a simple interface to the Microsoft Graph API <https://learn.microsoft.com/en-us/graph/overview>. Graph is a comprehensive framework for accessing data in various online Microsoft services. This package was originally intended to provide an R interface only to the Azure Active Directory part, with a view to supporting interoperability of R and Azure: users, groups, registered apps, and service principals. However, it has since been expanded into a more general tool for interacting with Graph. Part of the AzureR family of packages.
This package provides a simple client for the Amazon Web Services ('AWS') Identity and Access Management ('IAM') API <https://aws.amazon.com/iam/>.
This package simulates adaptive (multi-arm, multi-stage) clinical trials using adaptive stopping, adaptive arm dropping, and/or adaptive randomisation. It was developed as part of the INCEPT (Intensive Care Platform Trial) project (<https://incept.dk/>), primarily supported by a grant from Sygeforsikringen "danmark" (<https://www.sygeforsikring.dk/>).
This package provides a powerful tool for automating the early detection of disease outbreaks in time series data. aeddo employs advanced statistical methods, including hierarchical models, in an innovative manner to effectively characterize outbreak signals. It is particularly useful for epidemiologists, public health professionals, and researchers seeking to identify and respond to disease outbreaks in a timely fashion. For a detailed reference on hierarchical models, consult Henrik Madsen and Poul Thyregod's book (2011), ISBN: 9781420091557.
Lightweight interface for finding the locations of addresses or businesses around the world using the ArcGIS REST API geocoding service <https://developers.arcgis.com/rest/geocode/api-reference/overview-world-geocoding-service.htm>. Address text can be converted to location candidates, and a location can be converted into an address. No API key is required.
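Assuming this is the arcgeocoder package, a minimal sketch of both directions (the package and function names are my assumption, not stated above):

    library(arcgeocoder)  # assumed package name for this interface

    ## Forward geocoding: address text to location candidates.
    arc_geo("Plaza Mayor, Madrid, Spain")

    ## Reverse geocoding: a longitude/latitude pair to an address.
    arc_reverse_geo(x = -3.7075, y = 40.4154)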
Edit an Antares simulation before running it: create new areas, links, thermal clusters or binding constraints, or edit existing ones. Update Antares general and optimization settings. Antares is an open-source power system simulator; more information is available at <https://antares-simulator.org/>.
We provide tools to estimate two prediction accuracy metrics for risk scores: the average positive predictive value (AP) and the well-known AUC (the area under the receiver operating characteristic curve). The outcome of interest is either binary or a censored event time. Note that for censored event times, our estimates of the AP and the AUC are time-dependent, for pre-specified time interval(s). A function that compares the APs of two risk scores/markers is also included. Optional outputs include positive predictive values and true positive fractions at specified marker cut-off values, and a plot of the time-dependent AP versus time (available for event time data).
This package provides functions for generating and visualising networks that characterise the physical attributes and spatial organisation of habitat components (i.e. habitat physical configurations). The network-generating algorithm first determines the X and Y coordinates of N nodes within a rectangle with side length L and area A. It then computes the pair-wise Euclidean distance Dij between nodes i and j and constructs a complete network with 1/Dij as link weights. Next, the algorithm removes links from the complete network with the probabilities given by the function ahn_prob(). Such link removals can make the network disconnected, whereas a connected network may be wanted. In such cases, the algorithm rewires one network component to its spatially nearest neighbouring component and repeats this until the network is connected again. Finally, it outputs an undirected network (weighted or unweighted, connected or disconnected). This package accompanies a manuscript on modelling the physical configurations of animal habitats using networks (in preparation).
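The generating steps can be sketched in base R as follows (an illustration of the idea only, not the package's own code; the removal probability is a placeholder for whatever ahn_prob() computes):

    set.seed(1)
    N <- 10; L <- 5; A <- 25                 # rectangle with side L and area A
    xy <- cbind(runif(N, 0, L), runif(N, 0, A / L))
    D  <- as.matrix(dist(xy))                # pair-wise Euclidean distances Dij
    W  <- ifelse(D > 0, 1 / D, 0)            # complete network, 1/Dij link weights
    p  <- 1 - exp(-D)                        # placeholder removal probabilities
    drop <- upper.tri(D) & matrix(runif(N^2), N) < p
    W[drop | t(drop)] <- 0                   # remove links; rewire if disconnected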
It covers various approaches to the analysis of variance and provides an assumption-testing section with a decision diagram for selecting the most appropriate technique. It provides the classical analysis of variance, the nonparametric Kruskal-Wallis equivalent, and the Bayesian approach. The results are shown in an interactive Shiny panel, which allows modifying the arguments of the tests, contains interactive graphics, and presents automatic conclusions depending on the tests, in order to aid the interpretation of these analyses. AovBay uses Stan and FactorBayes for the Bayesian analysis and Highcharts for the interactive charts.
This package provides a set of functions for interacting with the DigitalOcean API <https://www.digitalocean.com/>, including creating and destroying images, rebooting, and getting details on regions and available images.
Solves the problem of identifying the densest submatrix in a given or sampled binary matrix; see Bombina et al. (2019) <arXiv:1904.03272>.
This package provides a tidy framework for automatic knowledge classification and visualization. Currently, the core functionality of the framework is mainly supported by modularity-based clustering (community detection) in keyword co-occurrence networks, and it focuses on the co-word analysis of bibliometric research. However, the functions in akc are general and could be extended to other text-mining tasks as well.
This package implements a recent method proposed by Yi and Chen (2023) <doi:10.1177/09622802221146308> for estimating average treatment effects using noisy data containing both measurement error and spurious variables. The AteMeVs package contains a set of functions providing a step-by-step estimation procedure, including correction of the measurement error effects, variable selection for building the model used to estimate the propensity scores, and estimation of the average treatment effects. The functions offer multiple options, including different ways to correct for the measurement error effects, distinct choices of penalty functions for variable selection, and various regression models to characterize propensity scores.
Wraps the AT Protocol (Authenticated Transfer Protocol) behind Bluesky <https://bsky.social>. Functions can be used for, among other things, retrieving posts and followers from the network and posting content.
Flagger to detect acute kidney injury (AKI) in a patient dataset.
Generation of natural-looking noise has many applications within simulation, procedural generation, and art, to name a few. The ambient package provides an interface to the FastNoise C++ library and allows for efficient generation of Perlin, simplex, Worley, cubic, value, and white noise, with optional perturbation, in either 2, 3, or 4 (in the case of simplex and white noise) dimensions.
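For instance, a minimal sketch (noise_perlin() and its dim argument reflect my reading of ambient's interface; treat the exact call as an assumption):

    library(ambient)

    ## A 64 x 64 grid of Perlin noise values, then a quick visual check.
    m <- noise_perlin(dim = c(64, 64))
    image(m, useRaster = TRUE)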
This package provides functions for age-period-cohort analysis. Aggregate data can be organised in matrices indexed by age-cohort, age-period, or cohort-period. The data can include dose and response or just doses. The statistical model is a generalized linear model (GLM) allowing for 3, 2, 1, or 0 of the age-period-cohort factors. Two-sample analysis is possible, as are mixed-frequency data. Individual-level data should have a row for each individual and columns for age, period, and cohort. The statistical model for repeated cross-sections is a generalized linear model; the statistical model for panel data is ordinary least squares. The canonical parametrisation of Kuang, Nielsen and Nielsen (2008) <DOI:10.1093/biomet/asn026> is used, so the analysis does not rely on ad hoc identification.
Age-Period-Cohort (APC) analyses are used to differentiate relevant drivers for long-term developments. The APCtools package offers visualization techniques and general routines to simplify the workflow of an APC analysis. Sophisticated functions are available both for descriptive and regression model-based analyses. For the former, we use density (or ridgeline) matrices and (hexagonally binned) heatmaps as innovative visualization techniques building on the concept of Lexis diagrams. Model-based analyses build on the separation of the temporal dimensions based on generalized additive models, where a tensor product interaction surface (usually between age and period) is utilized to represent the third dimension (usually cohort) on its diagonal. Such tensor product surfaces can also be estimated while accounting for further covariates in the regression model. See Weigert et al. (2021) <doi:10.1177/1354816620987198> for methodological details.
Auto-GO is a framework that enables automated, high-quality Gene Ontology enrichment analysis visualizations. It also features a handy wrapper for Differential Expression analysis around the DESeq2 package described in Love et al. (2014) <doi:10.1186/s13059-014-0550-8>. The whole framework is structured into separate, independent functions, in order to let the user decide which steps of the analysis to perform and which plots to produce.
Uncertainty quantification and inverse estimation with probabilistic generative models from the beginning of the data analysis. An example is a Fourier basis method for inverse estimation in the scattering analysis of microscopy videos. It does not require specifying a particular range of Fourier bases, and it substantially reduces computational cost via the generalized Schur algorithm. See the reference: Mengyang Gu, Yue He, Xubo Liu and Yimin Luo (2023) <doi:10.48550/arXiv.2309.02468>.
This package provides methods for processing corporate balance sheets with a focus on the Brazilian reporting format. Includes data standardization, classification by accounting categories, and aggregation of values. Supports accounting and financial analyses of companies, improving efficiency and ensuring reproducibility of empirical studies.