Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
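For example, a minimal Python sketch using only the standard library (the base URL is a placeholder and the JSON response body is an assumption, since neither is documented here):

import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://example.org"  # placeholder host, not given on this page

def search_packages(query, page=1, limit=20):
    params = urlencode({"search": query, "page": page, "limit": limit})
    with urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
        pagination = dict(resp.headers)  # pagination details live in the response headers
        results = json.load(resp)        # assuming the body is JSON
    return results, pagination

results, pagination = search_packages("gcc@10")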
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Estimation of dark diversity and site-specific species pools using species co-occurrences. It includes implementations of probabilistic dark diversity based on the hypergeometric distribution, as well as estimates based on the Beals index, which can be transformed into binary predictions using different thresholds or into a favorability index. All methods can either use a calibration dataset to estimate the indication matrix between pairs of species, or estimate dark diversity directly on a single dataset. See De Caceres and Legendre (2008) <doi:10.1007/s00442-008-1017-y>, Lewis et al. (2016) <doi:10.1111/2041-210X.12443>, Partel et al. (2011) <doi:10.1016/j.tree.2010.12.004>, and Real et al. (2017) <doi:10.1093/sysbio/syw072> for further information.
Easy access to species distribution data for six regions of the world, covering a total of 226 anonymised species. These data are described and made available by Elith et al. (2020) <doi:10.17161/bi.v15i2.13384> to compare species distribution modelling methods.
Integrated differential expression (DE) and differential co-expression (DC) analysis of gene expression data based on the DECODE (DifferEntial CO-expression and Differential Expression) algorithm.
This package provides functions to manage databases: select, update, insert, and delete records; list tables; back up tables as CSV files; and import CSV files as tables.
This package provides a system designed for detecting concept drift in streaming datasets. It offers a comprehensive suite of statistical methods to detect concept drift, including methods for monitoring changes in data distributions over time. The package supports several tests, such as the Drift Detection Method (DDM), Early Drift Detection Method (EDDM), Hoeffding Drift Detection Methods (HDDM_A, HDDM_W), Kolmogorov-Smirnov test-based Windowing (KSWIN), and Page-Hinkley (PH) tests. The methods implemented in this package are based on established research and have been demonstrated to be effective in real-time data analysis. For more details on the methods, please refer to the following sources: Kobylińska et al. (2023) <doi:10.48550/arXiv.2308.11446>, S. Kullback & R.A. Leibler (1951) <doi:10.1214/aoms/1177729694>, Gama et al. (2004) <doi:10.1007/978-3-540-28645-5_29>, Baena-García et al. (2006) <https://www.researchgate.net/publication/245999704_Early_Drift_Detection_Method>, Frías-Blanco et al. (2014) <https://ieeexplore.ieee.org/document/6871418>, Raab et al. (2020) <doi:10.1016/j.neucom.2019.11.111>, Page (1954) <doi:10.1093/biomet/41.1-2.100>, Montiel et al. (2018) <https://jmlr.org/papers/volume19/18-251/18-251.pdf>.
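As a conceptual illustration only (not this package's interface), the Page-Hinkley test can be sketched in a few lines of Python: it accumulates the deviation of each observation from the running mean and signals drift when that sum rises too far above its historical minimum.

# Conceptual Page-Hinkley (PH) sketch; delta is the tolerated magnitude of
# change and lambda_ the detection threshold (both tuning parameters).
class PageHinkley:
    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta, self.lambda_ = delta, lambda_
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        """Feed one observation; return True if upward drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n          # running mean
        self.cum += x - self.mean - self.delta         # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lambda_

detector = PageHinkley(lambda_=25.0)
stream = [0.0] * 100 + [5.0] * 20                      # mean shift after t = 100
drift_at = next((t for t, x in enumerate(stream) if detector.update(x)), None)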
This package provides a set of functions to perform Raju, van der Linden and Fleer's (1995, <doi:10.1177/014662169501900405>) Differential Functioning of Items and Tests (DFIT) analyses. It includes functions that use the Monte Carlo Item Parameter Replication approach (Oshima, Raju, & Nanda, 2006, <doi:10.1111/j.1745-3984.2006.00001.x>) to obtain cut-off points for the associated statistical significance tests. These may also be used for a priori and post-hoc power calculations (Cervantes, 2017, <doi:10.18637/jss.v076.i05>).
Generate balanced factorial designs with crossed and nested random and fixed effects <https://github.com/mmrabe/designr>.
Dynamic path analysis with estimation of the corresponding direct, indirect, and total effects, based on Fosen et al. (2006) <doi:10.1007/s10985-006-9004-2>. The main outcome of interest is a counting process from survival analysis (or recurrent events) data. At each event time, ordinary linear regression is used to estimate the relations between the covariates, while Aalen's additive hazard model is used for the regression of the counting process on the covariates.
Similarity of dissolution profiles is assessed using the similarity factor f2 according to the EMA guideline (European Medicines Agency 2010) "On the investigation of bioequivalence". Dissolution profiles are regarded as similar if the f2 value is between 50 and 100. For the applicability of the similarity factor f2, the variability between profiles needs to be within certain limits. Often, this constraint is violated. One possibility in this situation is to resample the measured profiles in order to obtain a bootstrap estimate of f2 (Shah et al. (1998) <doi:10.1023/A:1011976615750>). Other alternatives are the model-independent non-parametric multivariate confidence region (MCR) procedure (Tsong et al. (1996) <doi:10.1177/009286159603000427>) or the T2-test for equivalence procedure (Hoffelder (2016) <https://www.ecv.de/suse_item.php?suseId=Z|pi|8430>). Functions for estimation of f1, f2, bootstrap f2, MCR / T2-test for equivalence procedure are implemented.
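For reference, a hedged statement of the standard definition of the similarity factor, for a reference profile R_t and a test profile T_t measured at n common time points, is:

f_2 = 50 \cdot \log_{10}\!\left(\left[1 + \frac{1}{n}\sum_{t=1}^{n}(R_t - T_t)^2\right]^{-1/2} \times 100\right)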
Calculate multiple biotic indices using diatoms from environmental samples. Diatom species are recognized by their species name using a heuristic search, and their ecological data is retrieved from multiple sources. It includes number/shape of chloroplasts diversity indices, size classes, ecological guilds, and multiple biotic indices. It outputs both a dataframe with all the results and plots of all the obtained data in a defined output folder. - Sample data was taken from Nicolosi Gelis, Cochero & Gómez (2020, <doi:10.1016/j.ecolind.2019.105951>). - The package uses the Diat.Barcode database to calculate morphological and ecological information by Rimet & Bouchez (2012, <doi:10.1051/kmae/2012018>), and the combined classification of guilds and size classes established by B-Béres et al. (2017, <doi:10.1016/j.ecolind.2017.07.007>). - Current diatom-based biotic indices include the DES index by Descy (1979) - EPID index by Dell'Uomo (1996, ISBN:3950009002) - IDAP index by Prygiel & Coste (1993, <doi:10.1007/BF00028033>) - ID-CH index by Hürlimann & Niederhauser (2007) - IDP index by Gómez & Licursi (2001, <doi:10.1023/A:1011415209445>) - ILM index by Leclercq & Maquet (1987) - IPS index by Coste (1982) - LOBO index by Lobo, Callegaro, & Bender (2002, ISBN:9788585869908) - SLA by Sládeček (1986, <doi:10.1002/aheh.19860140519>) - TDI index by Kelly & Whitton (1995, <doi:10.1007/BF00003802>) - SPEAR(herbicide) index by Wood, Mitrovic, Lim, Warne, Dunlop, & Kefford (2019, <doi:10.1016/j.ecolind.2018.12.035>) - PBIDW index by Castro-Roa & Pinilla-Agudelo (2014) - DISP index by Stenger-Kovács et al. (2018, <doi:10.1016/j.ecolind.2018.07.026>) - EDI index by Chamorro et al. (2024, <doi:10.1021/acsestwater.4c00126>) - DDI index by Álvarez-Blanco et al. (2013, <doi:10.1007/s10661-012-2607-z>) - PDISE index by Kahlert et al. (2023, <doi:10.1007/s10661-023-11378-4>).
Semi-Binary and Semi-Ternary Matrix Decomposition are performed based on Non-negative Matrix Factorization (NMF) and Singular Value Decomposition (SVD). For the details of the methods, see the reference section of GitHub README.md <https://github.com/rikenbit/dcTensor>.
Create quick and easy dot-and-whisker plots of regression results. It takes as input either (1) a coefficient table in standard form or (2) one (or a list of) fitted model objects (of any type that has methods implemented in the parameters package). It returns ggplot objects that can be further customized using tools from the ggplot2 package. The package also includes helper functions for tasks such as rescaling coefficients or relabeling predictor variables. See more methodological discussion of the visualization and data management methods used in this package in Kastellec and Leoni (2007) <doi:10.1017/S1537592707072209> and Gelman (2008) <doi:10.1002/sim.3107>.
This package provides functions to fit temporal lag models to dynamic networks. The models are built on top of the exponential random graph model (ERGM) framework. There are functions for simulating or forecasting networks for future time points. See Abhirup Mallik & Zack W. Almquist (2019), "Stable Multiple Time Step Simulation/Prediction from Lagged Dynamic Network Regression Models", Journal of Computational and Graphical Statistics, 28:4, 967-979, <doi:10.1080/10618600.2019.1594834>.
Density surface modelling of line transect data. A Generalized Additive Model-based approach is used to calculate spatially-explicit estimates of animal abundance from distance sampling (also presence/absence and strip transect) data. Several utility functions are provided for model checking, plotting and variance estimation.
Implementation of different algorithms for analyzing randomly truncated data, both one-sided and two-sided (i.e. doubly) truncated. It computes empirical cumulative distributions as well as kernel density and hazard functions using different bandwidth selectors. Several real data sets are included.
Distributional instrumental variable (DIV) model for estimation of the interventional distribution of the outcome Y under a do intervention on the treatment X. Instruments, predictors and targets can be univariate or multivariate. Functionality includes estimation of the (conditional) interventional mean and quantiles, as well as sampling from the fitted (conditional) interventional distribution.
This package provides select, insert, update, upsert, and delete database operations. Supports PostgreSQL, MySQL, SQLite, and more, and plays nicely with the DBI package.
Generates an RMarkdown data report with two components: a summary of an input dataset and a diff of the dataset relative to an old version.
Functionalities for analyzing high-dimensional and longitudinal biomarker data to facilitate precision medicine, using a joint model of Bayesian sparse factor analysis and dependent Gaussian processes. This paper illustrates the method in detail: J Cai, RJB Goudie, C Starr, BDM Tom (2023) <doi:10.48550/arXiv.2307.02781>.
This package performs drug demand forecasting by modeling drug dispensing data while taking into account predicted enrollment and treatment discontinuation dates. The gap time between randomization and the first drug dispensing visit is modeled using interval-censored exponential, Weibull, log-logistic, or log-normal distributions (Anderson-Bergman (2017) <doi:10.18637/jss.v081.i12>). The number of skipped visits is modeled using Poisson, zero-inflated Poisson, or negative binomial distributions (Zeileis, Kleiber & Jackman (2008) <doi:10.18637/jss.v027.i08>). The gap time between two consecutive drug dispensing visits given the number of skipped visits is modeled using linear regression based on least squares or least absolute deviations (Birkes & Dodge (1993, ISBN:0-471-56881-3)). The number of dispensed doses is modeled using linear or linear mixed-effects models (McCulloch & Searle (2001, ISBN:0-471-19364-X)).
This package implements the DAAREM method for accelerating the convergence of slow, monotone sequences from smooth, fixed-point iterations such as the EM algorithm. For further details about the DAAREM method, see Henderson, N.C. and Varadhan, R. (2019) <doi:10.1080/10618600.2019.1594835>.
This package includes an interactive Shiny application for plotting common univariate distributions.
An easy-to-use yet powerful system for plotting grouped data effect sizes. Various types of effect size can be estimated and then plotted together with a representation of the original data. Select from many possible data representations (box plots, violin plots, raw data points, etc.) and combine them as desired. Durga plots are implemented in base R, so they are compatible with base R methods for combining plots, such as layout(). See Khan & McLean (2023) <doi:10.1101/2023.02.06.526960>.
This package implements an algorithm to split a column of an R data frame that contains multiple delimiter-separated values, automating the creation of a separate binary (present/absent) column for each unique value.
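The same idea can be illustrated generically (outside this package) with Python and pandas, where a delimiter-separated column is expanded into one 0/1 indicator column per unique value:

# Generic illustration, not this package's API: expand a delimiter-separated
# column into one binary column per unique value.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "tags": ["a;b", "b", "a;c"]})
indicators = df["tags"].str.get_dummies(sep=";")              # columns a, b, c with 0/1 entries
result = pd.concat([df.drop(columns="tags"), indicators], axis=1)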