Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
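For example, from R with the httr package (the host below is a placeholder for this site's address, and the exact pagination header names should be checked in the response):

    library(httr)

    # Query the package search endpoint; replace <host> with this site's address.
    resp <- GET("https://<host>/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    headers(resp)   # pagination information (e.g. number of pages) lives here
    content(resp)   # the matching packages for this page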
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Discrete factor analysis for dependent Poisson and negative binomial models with truncation, zero inflation, and zero-inflated truncation.
Algorithms implementing populations of agents that interact with one another and sense their environment may exhibit emergent behavior such as self-organization and swarm intelligence. Here, a swarm system called Databionic swarm (DBS) is introduced, which was published in Thrun, M.C., Ultsch, A.: "Swarm Intelligence for Self-Organized Clustering" (2020), Artificial Intelligence, <DOI:10.1016/j.artint.2020.103237>. DBS is able to adapt itself to structures of high-dimensional data such as natural clusters characterized by distance- and/or density-based structures in the data space. The first module is the parameter-free projection method Pswarm (Pswarm()), which exploits the concepts of self-organization and emergence, game theory, swarm intelligence, and symmetry considerations. The second module is the parameter-free high-dimensional data visualization technique, which generates projected points on a topographic map with hypsometric tints defined by the generalized U-matrix (GeneratePswarmVisualization()). The third module is the clustering method itself, with non-critical parameters (DBSclustering()). Clustering can be verified by the visualization and vice versa. The term DBS refers to the method as a whole. It enables even a non-professional in the field of data mining to apply its algorithms for visualization and/or clustering to data sets with completely different structures drawn from diverse research fields. A comparison to common projection methods can be found in the book of Thrun, M.C.: "Projection Based Clustering through Self-Organization and Swarm Intelligence" (2018) <DOI:10.1007/978-3-658-20540-9>.
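A hedged sketch of the three-module workflow in R, using the function names given above; the argument names and returned fields are assumptions based on this description, so consult the package documentation for the exact interface:

    library(DatabionicSwarm)

    # DataMatrix: a numeric matrix of high-dimensional data (rows = cases).
    # Module 1: parameter-free projection.
    projection <- Pswarm(DataMatrix)

    # Module 2: topographic map with hypsometric tints via the generalized
    # U-matrix (passing the projected points this way is an assumption).
    vis <- GeneratePswarmVisualization(DataMatrix, projection$ProjectedPoints)

    # Module 3: clustering with non-critical parameters (the cluster-number
    # argument shown here is an assumption).
    clusters <- DBSclustering(k = 3, DataMatrix)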
Function to test spatial segregation and association based on contingency table analysis of nearest neighbour counts following Dixon (2002) <doi:10.1080/11956860.2002.11682700>. Some Fortran code has been added to the original dixon2002() function of the ecespa package to improve speed.
Data frame, tibble, or tbl objects are converted to data package objects using specific metadata labels (name, version, title, homepage, description). A data package object ('dpkg') can be written to disk as a parquet file or released to a GitHub repository. Data package objects can be read into R from online repositories and downloaded files are cached locally across R sessions.
Query for metrics from Datadog (<https://www.datadoghq.com/>) via its API.
This package provides a wide collection of univariate discrete data sets from various applied domains related to distribution theory. The functions allow quick, easy, and efficient access to 100 univariate discrete data sets. The data come from a range of applied domains, including medical, reliability analysis, engineering, manufacturing, occupational safety, geological sciences, terrorism, psychology, agriculture, environmental sciences, road traffic accidents, demography, actuarial science, law, and justice. The documentation, along with associated references for further details and uses, is also provided.
Use leaf physiognomic methods to reconstruct mean annual temperature (MAT), mean annual precipitation (MAP), and leaf dry mass per area (Ma), along with other useful quantitative leaf traits. Methods in this package are described in Lowe et al. (in review).
This package performs emulation of dynamic simulators using Gaussian processes via a one-step-ahead approach. The package implements a flexible framework for approximating time-dependent outputs from computationally expensive dynamic systems. It is specifically designed for nonlinear dynamic systems where full simulations may be costly. The underlying Gaussian process model accounts for temporal dependency through the one-step-ahead formulation, allowing for accurate emulation of complex dynamics. Hyperparameters are estimated via maximum likelihood. For methodological details, see Heo (2025, <doi:10.48550/arXiv.2503.20250>) for the exact method, and Mohammadi, Challenor, and Goodfellow (2019, <doi:10.1016/j.csda.2019.05.006>) for the Monte Carlo method.
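The one-step-ahead idea can be summarized as follows (the notation here is ours, not the package's): the simulator's transition map is treated as an unknown function with a Gaussian process prior,

    y_{t+1} = f(y_t) + \epsilon_t, \qquad f \sim \mathcal{GP}\big(m(\cdot), k(\cdot,\cdot)\big),

so the GP is trained on successive pairs (y_t, y_{t+1}) and then iterated forward to emulate a full trajectory, with temporal dependency carried by conditioning each step on the previous one.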
The debar sequence processing pipeline is designed for denoising high-throughput sequencing data for the animal DNA barcode marker cytochrome c oxidase I (COI). The package is designed to detect and correct insertion and deletion errors within sequencer outputs. This is accomplished through comparison of input sequences against a profile hidden Markov model (PHMM) using the Viterbi algorithm (for algorithm details see Durbin et al. 1998, ISBN: 9780521629713). Inserted base pairs are removed, and deleted base pairs are accounted for through the introduction of a placeholder character. Since the PHMM is a probabilistic representation of the COI barcode, corrections are not always perfect. For this reason, debar censors base pairs adjacent to reported indel sites, turning them into placeholder characters (the default is 7 base pairs in either direction; this feature can be disabled). Testing has shown that this censorship results in the correct sequence length being restored and erroneous base pairs being masked the vast majority of the time (>95%).
DataSHIELD is an infrastructure and series of R packages that enables the remote and non-disclosive analysis of sensitive research data. This package is the DataSHIELD interface implementation for Opal, which is the data integration application for biobanks by OBiBa. Participant data, once collected from any data source, must be integrated and stored in a central data repository under a uniform model. Opal is such a central repository. It can import, process, validate, query, analyze, report, and export data. Opal is the reference implementation of the DataSHIELD infrastructure.
Researchers can characterize and learn about the properties of research designs before implementation using DeclareDesign. Ex ante declaration and diagnosis of designs can help researchers clarify the strengths and limitations of their designs and improve their properties, and can help readers evaluate a research strategy prior to implementation and without access to results. It can also make it easier for designs to be shared, replicated, and critiqued.
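A minimal sketch of the declare-then-diagnose pattern in R; the particular two-arm design below is illustrative, not part of this package's description:

    library(DeclareDesign)

    # Declare a simple two-arm experiment, then diagnose it by simulation.
    design <-
      declare_model(N = 100, U = rnorm(N),
                    potential_outcomes(Y ~ 0.2 * Z + U)) +
      declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
      declare_assignment(Z = complete_ra(N)) +
      declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
      declare_estimator(Y ~ Z, inquiry = "ATE")

    diagnose_design(design)   # bias, power, coverage, etc., before fieldwork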
Extends the functionality of other plotting packages (notably ggplot2) to facilitate the plotting of data over long time intervals, including, but not limited to, geological, evolutionary, and ecological data. The primary goal of deeptime is to enable users to add highly customizable timescales to their visualizations. Other functions are also included to assist with other aspects of deep-time visualization.
This package provides a set of three two-census methods to estimate the degree of death registration coverage for a population. Implemented methods include the Generalized Growth Balance method (GGB), the Synthetic Extinct Generation method (SEG), and a hybrid of the two, GGB-SEG. Each method offers automatic estimation, but users may also specify exact parameters or use a graphical interface to guess parameters in the traditional way if desired.
An implementation of the deliberative reasoning index (DRI) and related tools for the analysis of deliberation survey data. Calculation of the DRI, plotting of intersubjective correlations (IC), generation of large-language model (LLM) survey data, and permutation tests are supported. Example datasets and a graphical user interface (GUI) are also available to support analysis. For more information, see Niemeyer and Veri (2022) <doi:10.1093/oso/9780192848925.003.0007>.
Builds interactive d3.js hierarchical visualisations easily. D3partitionR makes it easy to build and customize sunburst charts, circle treemaps, treemaps, partition charts, and more.
Implementations of several multiple testing procedures that control the family-wise error rate (FWER) designed specifically for discrete tests. Included are discrete adaptations of the Bonferroni, Holm, Hochberg and Šidák procedures as described in the papers Döhler (2010) "Validation of credit default probabilities using multiple-testing procedures" <doi:10.21314/JRMV.2010.062> and Zhu & Guo (2019) "Family-Wise Error Rate Controlling Procedures for Discrete Data" <doi:10.1080/19466315.2019.1654912>. The main procedures of this package take as input the results of a test procedure from package DiscreteTests or a set of observed p-values and their discrete support under their nulls. A shortcut function to apply discrete procedures directly to data is also provided.
Evaluate the presence of the disposition effect and other irrational investor behaviors based solely on investors' transactions and financial market data. Experimental data can also be used to perform the analysis. Four different methodologies are implemented to account for the different nature of human behaviors in financial markets. Novel analyses, such as the portfolio-driven and time-series disposition effect, are also supported.
Base DataSHIELD functions for the client side. DataSHIELD is a software package which allows you to do non-disclosive federated analysis on sensitive data. DataSHIELD analytic functions have been designed to share only non-disclosive summary statistics, with built-in automated output checking based on statistical disclosure control; data sites set the threshold values for the automated output checks. For more details, see citation('dsBaseClient').
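A hedged sketch of a typical client-side session (server URL, credentials, and table name are all placeholders; dsBaseClient is used together with DSI and a driver such as DSOpal):

    library(DSI)
    library(DSOpal)
    library(dsBaseClient)

    # Describe the remote data sites (all values below are placeholders).
    builder <- DSI::newDSLoginBuilder()
    builder$append(server = "study1", url = "https://opal.example.org",
                   user = "dsuser", password = "password",
                   table = "project.table", driver = "OpalDriver")
    logindata <- builder$build()

    # Log in and assign the remote table to the server-side symbol D.
    connections <- DSI::datashield.login(logins = logindata,
                                         assign = TRUE, symbol = "D")

    # Only a non-disclosive summary statistic crosses the network.
    ds.mean("D$age", datasources = connections)

    DSI::datashield.logout(connections)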
Efficient estimation of Dynamic Factor Models using the Expectation Maximization (EM) algorithm or Two-Step (2S) estimation, supporting datasets with missing data and mixed-frequency nowcasting applications. Factors follow a stationary VAR process of order p. Estimation options include: running the Kalman Filter and Smoother once with PCA initial values (2S) as in Doz, Giannone and Reichlin (2011) <doi:10.1016/j.jeconom.2011.02.012>; iterated Kalman Filtering and Smoothing until EM convergence as in Doz, Giannone and Reichlin (2012) <doi:10.1162/REST_a_00225>; or the adapted EM algorithm of Banbura and Modugno (2014) <doi:10.1002/jae.2306>, allowing arbitrary missing-data patterns and monthly-quarterly mixed-frequency datasets. The implementation uses the Armadillo C++ library and the collapse package for fast estimation. A comprehensive set of methods supports interpretation and visualization, forecasting, and decomposition of the news content of macroeconomic data releases following Banbura and Modugno (2014). Information criteria to choose the number of factors are also provided, following Bai and Ng (2002) <doi:10.1111/1468-0262.00273>.
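A hedged sketch of fitting a model with this package, assuming the main fitting function is DFM() with arguments r (number of factors) and p (VAR lag order); all names here are assumptions, so check the package index for the exact interface:

    library(dfms)

    # X: a T x n numeric matrix of time series, possibly with missing values.
    model <- DFM(X, r = 3, p = 2)   # 3 factors following a VAR(2)

    summary(model)            # estimated factors and loadings
    plot(model)               # visualize the factor estimates
    predict(model, h = 12)    # 12-step-ahead forecasts (h is an assumption)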
Utilities to represent, visualize, filter, analyse, and summarize time-depth recorder (TDR) data. Miscellaneous functions for handling location data are also provided.
This package implements drifting Markov models (DMMs), which are non-homogeneous Markov models designed for modeling the heterogeneities of sequences in a more flexible way than homogeneous Markov chains or even hidden Markov models. In this context, the package is dedicated to the estimation and simulation of drifting Markov models and the exact computation of their associated reliability. The implemented methods are described in Vergne, N. (2008) <doi:10.2202/1544-6115.1326> and Barbu, V.S., Vergne, N. (2019) <doi:10.1007/s11009-018-9682-8>.
Discriminant Analysis (DA) for evolutionary inference (Qin, X. et al., 2020, <doi:10.22541/au.159256808.83862168>), especially for population genetic structure and community structure inference. This package incorporates commonly used linear and non-linear, local and global supervised learning approaches (discriminant analysis), including Linear Discriminant Analysis of Kernel Principal Components (LDAKPC), Local (Fisher) Linear Discriminant Analysis (LFDA), Local (Fisher) Discriminant Analysis of Kernel Principal Components (LFDAKPC), and Kernel Local (Fisher) Discriminant Analysis (KLFDA). These discriminant analyses can be used for ecological and evolutionary inference, including demography inference, species identification, and population/community structure inference.
Parses command line arguments and supplies values to scripts. Users can specify the names to which parsed inputs are assigned, the value types into which inputs are cast, long or short options, input splitters, and callbacks that define how options should be specified and how input values are supplied.
Fast computation of the distance covariance dcov and distance correlation dcor. The computational cost is only O(n log(n)) for the distance correlation (see Chaudhuri, Hu (2019) <arXiv:1810.11332> <doi:10.1016/j.csda.2019.01.016>). The functions are written entirely in C++ to speed up the computation.
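A short sketch assuming the package exports dcov() and dcor() mirroring the names in the description (check the package index for the exact interface):

    library(dcov)

    set.seed(1)
    x <- rnorm(1000)
    y <- x^2 + rnorm(1000)   # nonlinear dependence, invisible to Pearson's r

    dcov(x, y)   # distance covariance
    dcor(x, y)   # distance correlation, detects the nonlinear association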