Enter a query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
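For example, here is a minimal Python sketch of a paginated query. The base URL is a placeholder for whichever host serves this page, and the exact shape of the JSON body and the header names are assumptions; inspect a real response to confirm.

import requests

BASE_URL = "https://example.org"  # placeholder: use the host serving this page

resp = requests.get(f"{BASE_URL}/api/packages",
                    params={"search": "hello", "page": 1, "limit": 20})
resp.raise_for_status()
print(resp.headers)  # pagination info (e.g. number of pages) is returned in the headers
print(resp.json())   # matching packages for this page (exact structure depends on the service)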
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An efficient, incremental approach for calculating the differences in orbit counts when performing single-edge modifications in a network. Calculating these differences is much more efficient than recalculating all orbit counts from scratch at each time point.
This package implements the compartment model from Tokars (2018) <doi:10.1016/j.vaccine.2018.10.026>. This enables quantification of population-wide impact of vaccination against vaccine-preventable diseases such as influenza.
This package provides advanced functions for image processing based on the package 'imager'.
Create and view tickets in 'gitea', a self-hosted git service <https://gitea.io>, using an RStudio addin, and use helper functions to publish documentation and use git.
Manage GitHub issues using R: wrangle issues, labels, and milestones. It includes functions for storing, prioritizing (sorting), displaying, adding, deleting, and selecting (filtering) issues based on qualitative and quantitative information. Issues (with their labels and milestones) are stored in lists and organized into an S3 class so they can be easily manipulated as datasets in R.
This package provides functions and classes to compute, handle and visualise incidence from dated events for a defined time interval. Dates can be provided in various standard formats. The class 'incidence' is used to store computed incidence and can be easily manipulated, subsetted, and plotted. In addition, log-linear models can be fitted to incidence objects using 'fit'. This package is part of the RECON (<https://www.repidemicsconsortium.org/>) toolkit for outbreak analysis.
Identity by Descent (IBD) distributions in pedigrees. A Hidden Markov Model is used to compute identity coefficients, simulate IBD segments and to derive the distribution of total IBD sharing and segment count across chromosomes. The methods are applied in Kruijver (2025) <doi:10.3390/genes16050492>. The probability that the total IBD sharing is zero can be computed using the method of Donnelly (1983) <doi:10.1016/0040-5809(83)90004-7>.
This package provides functions to support the ICES Transparent Assessment Framework <https://taf.ices.dk> to organize data, methods, and results used in ICES assessments. ICES is an organization facilitating international collaboration in marine science.
This package provides tools for estimating uncertainty in individual polygenic risk scores (PRSs) using both sampling-based and analytical methods, as well as the Best Linear Unbiased Estimator (BLUE). These methods quantify variability in PRS estimates for both binary and quantitative traits. See Henderson (1975) <doi:10.2307/2529430> for more details.
The Dynamic Time Warping (DTW) distance measure for time series allows non-linear alignments of time series to match similar patterns in time series of different lengths and/or different speeds. IncDTW is characterized by (1) the incremental calculation of DTW (reduces runtime complexity to a linear level for updating the DTW distance) - especially for live data streams or subsequence matching, (2) the vector-based implementation of DTW which is faster because no matrices are allocated (reduces the space complexity from a quadratic to a linear level in the number of observations) - for all runtime-intensive DTW computations, (3) the subsequence matching algorithm runDTW, which efficiently finds the k-NN to a query pattern in a long time series, and (4) C++ at its core. For details about DTW see the original paper "Dynamic programming algorithm optimization for spoken word recognition" by Sakoe and Chiba (1978) <DOI:10.1109/TASSP.1978.1163055>. For details about this package, Dynamic Time Warping and Incremental Dynamic Time Warping please see "IncDTW: An R Package for Incremental Calculation of Dynamic Time Warping" by Leodolter et al. (2021) <doi:10.18637/jss.v099.i09>.
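For orientation, the standard dynamic-programming recurrence behind DTW (textbook form, independent of this package's interface) is, with local cost d(x_i, y_j),

D(i,j) = d(x_i, y_j) + \min\{\, D(i-1,j-1),\; D(i-1,j),\; D(i,j-1) \,\}, \qquad D(1,1) = d(x_1, y_1),

so when a new observation arrives at the end of one series, only one additional row or column of D has to be filled in rather than the whole matrix, which is what makes the incremental update linear per step.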
This package provides a dataframe validation framework for package builders who use dataframes as function parameters. It performs checks on column names, coerces data-types, and checks grouping to make sure user inputs conform to a specification provided by the package author. It provides a mechanism for package authors to automatically document supported dataframe inputs and selectively dispatch to functions depending on the format of a dataframe much like S3 does for classes. It also contains some developer tools to make working with and documenting dataframe specifications easier. It helps package developers to improve their documentation and simplifies parameter validation where dataframes are used as function parameters.
The classical Ising model is a landmark system in statistical physics. The model explains the physics of spin glasses and magnetic materials, and cooperative phenomena in general, for example phase transitions and neural networks. This package provides utilities to simulate the one-dimensional Ising model with Metropolis and Glauber Monte Carlo with single-flip dynamics in periodic boundary conditions. Utility functions for exact solutions, such as the transfer matrix for 1D, are provided. Example use cases include measuring effective ergodicity and power-laws in so-called functional-diffusion.
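For reference, the exact 1D solution mentioned above rests on the standard transfer-matrix result (textbook form; the package's parameterization may differ). For H = -J \sum_i s_i s_{i+1} - h \sum_i s_i with periodic boundaries and N spins,

T = \begin{pmatrix} e^{\beta(J+h)} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta(J-h)} \end{pmatrix}, \qquad Z = \operatorname{Tr} T^{N} = \lambda_+^{N} + \lambda_-^{N},

where \lambda_\pm are the eigenvalues of T.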
Collection of R functions to do purely presence-only species distribution modeling with isolation forest (iForest) and its variations such as Extended isolation forest and SCiForest. See the details of these methods in references: Liu, F.T., Ting, K.M. and Zhou, Z.H. (2008) <doi:10.1109/ICDM.2008.17>, Hariri, S., Kind, M.C. and Brunner, R.J. (2019) <doi:10.1109/TKDE.2019.2947676>, Liu, F.T., Ting, K.M. and Zhou, Z.H. (2010) <doi:10.1007/978-3-642-15883-4_18>, Guha, S., Mishra, N., Roy, G. and Schrijvers, O. (2016) <https://proceedings.mlr.press/v48/guha16.html>, Cortes, D. (2021) <doi:10.48550/arXiv.2110.13402>. Additionally, Shapley values are used to explain model inputs and outputs. See details in references: Shapley, L.S. (1953) <doi:10.1515/9781400881970-018>, Lundberg, S.M. and Lee, S.I. (2017) <https://dm-gatech.github.io/CS8803-Fall2018-DML-Papers/shapley.pdf>, Molnar, C. (2020) <ISBN:978-0-244-76852-2>, Štrumbelj, E. and Kononenko, I. (2014) <doi:10.1007/s10115-013-0679-x>. 'itsdm' also provides functions to diagnose variable response, analyze variable importance, draw spatial dependence of variables and examine variable contribution. As utilities, the package includes a few functions to download bioclimatic variables including WorldClim version 2.0 (see Fick, S.E. and Hijmans, R.J. (2017) <doi:10.1002/joc.5086>) and CMCC-BioClimInd (see Noce, S., Caporaso, L. and Santini, M. (2020) <doi:10.1038/s41597-020-00726-5>).
It performs interlaboratory studies (ILS) to detect laboratories that provide results inconsistent with the others. It permits working simultaneously with various testing materials, from both the standard univariate and the functional data analysis (FDA) perspectives. The univariate approach, based on ASTM E691-08, consists of estimating Mandel's h and k statistics to identify laboratories whose results differ significantly, and of testing for the presence of outliers by Cochran and Grubbs tests; analysis of variance (ANOVA) techniques (F and Tukey tests) are provided to test differences in means between laboratories for each material. Taking into account the functional nature of data retrieved in analytical chemistry, applied physics and engineering (spectra, thermograms, etc.), the ILS package provides an FDA approach for finding the distribution of Mandel's k and h statistics by smoothed bootstrap resampling.
Interesting igraph datasets from Melanie Walsh's sample social network datasets repository <https://github.com/melaniewalsh/sample-social-network-datasets>.
Spatial interpolation toolkit designed for environmental and geospatial applications. It includes a range of methods, from traditional techniques to advanced machine learning approaches, ensuring accurate and efficient estimation of values in unobserved locations.
This package provides a library for generic interval manipulations using a new interval vector class. Capabilities include: locating various kinds of relationships between two interval vectors, merging overlaps within a single interval vector, splitting an interval vector on its overlapping endpoints, and applying set theoretical operations on interval vectors. Many of the operations in this package were inspired by James Allen's interval algebra, Allen (1983) <doi:10.1145/182.358434>.
An implementation of the MaxLFQ algorithm by Cox et al. (2014) <doi:10.1074/mcp.M113.031591> in a comprehensive pipeline for processing proteomics data in data-independent acquisition mode (Pham et al. 2020 <doi:10.1093/bioinformatics/btz961>). It offers additional options for protein quantification using the N most intense fragment ions, using all fragment ions, the median polish algorithm by Tukey (1977, ISBN:0201076160), and a robust linear model. In general, the tool can be used to integrate multiple proportional observations into a single quantitative value.
This package provides a collection of functions for working with time series data, including functions for drawing, decomposing, and forecasting. Includes capabilities to compare multiple series and fit both additive and multiplicative models. Used by 'iNZight', a graphical user interface providing easy exploration and visualisation of data for students of statistics, available in both desktop and online versions. Holt (1957) <doi:10.1016/j.ijforecast.2003.09.015>, Winters (1960) <doi:10.1287/mnsc.6.3.324>, Cleveland, Cleveland, & Terpenning (1990) "STL: A Seasonal-Trend Decomposition Procedure Based on Loess".
Row-column designs are widely recommended for experimental situations with two well-identified, cross-classified factors representing known sources of variability. These designs are expected to result in a gain in accuracy when estimating treatment comparisons, as they eliminate the effects of the row and column factors. However, such designs are not readily available when the number of treatments exceeds the number of levels of the row and column blocking factors. This package, iRoCoDe, generates row-column designs with incomplete rows and columns by amalgamating two incomplete block designs (D1 and D2). The input designs D1 and D2 can be selected from the available incomplete block designs, viz., balanced incomplete block designs, partially balanced incomplete block designs, or t-designs. (McSorley, J.P., Phillips, N.C., Wallis, W.D. and Yucas, J.L. (2005) <doi:10.1007/s10623-003-6149-9>.)
An implementation of the Harris corner detector as described in the paper "An Analysis and Implementation of the Harris Corner Detector" by Sánchez J. et al. (2018), available at <doi:10.5201/ipol.2018.229>. The package allows detecting relevant, characteristic points in digital images.
Intensity-duration-frequency (IDF) curves are a widely used analysis tool in hydrology to assess extreme values of precipitation [e.g. Mailhot et al., 2007, <doi:10.1016/j.jhydrol.2007.09.019>]. The package IDF provides functions to estimate IDF parameters for given precipitation time series on the basis of a duration-dependent generalized extreme value distribution [Koutsoyiannis et al., 1998, <doi:10.1016/S0022-1694(98)00097-3>].
Immunotherapy has revolutionized cancer treatment, but predicting patient response remains challenging. Here, we present Intelligent Predicting Response to cancer Immunotherapy through Systematic Modeling (iPRISM), a novel network-based model that integrates multiple data types to predict immunotherapy outcomes. It incorporates gene expression, a biological functional network, tumor microenvironment characteristics, immune-related pathways, and clinical data to provide a comprehensive view of the factors influencing immunotherapy efficacy. By identifying key genetic and immunological factors, it provides insights for more personalized treatment strategies and combination therapies to overcome resistance mechanisms.
Intervention analysis is used to investigate structural changes in data resulting from external events. Traditional time series intervention models, viz. the Autoregressive Integrated Moving Average model with exogenous variables (ARIMA-X) and Artificial Neural Networks with exogenous variables (ANN-X), rely on linear intervention functions such as step or ramp functions, or their combinations. In this package, the Gompertz, Logistic, Monomolecular, Richards and Hoerl functions are used as non-linear intervention functions. The equations of these models are: Gompertz: A * exp(-B * exp(-k * t)); Logistic: K / (1 + ((K - N0) / N0) * exp(-r * t)); Monomolecular: A * exp(-k * t); Richards: A + (K - A) / (1 + exp(-B * (C - t)))^(1/beta); and Hoerl: a*(b^t)*(t^c), as set out after this entry. This package introduces an algorithm for time series intervention analysis employing ARIMA and ANN models with a non-linear intervention function, developed using the algorithms of Yeasin et al. <doi:10.1016/j.hazadv.2023.100325> and Paul and Yeasin <doi:10.1371/journal.pone.0272999>.
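Written out as equations (a direct transcription of the formulas listed above, with f(t) denoting the intervention function and t denoting time):

\text{Gompertz: } f(t) = A\,e^{-B e^{-k t}}, \qquad
\text{Logistic: } f(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-r t}}, \qquad
\text{Monomolecular: } f(t) = A\,e^{-k t},

\text{Richards: } f(t) = A + \frac{K - A}{\left(1 + e^{-B (C - t)}\right)^{1/\beta}}, \qquad
\text{Hoerl: } f(t) = a\, b^{t}\, t^{c}.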