Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
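For example, from R via httr (the host below is a placeholder for this site's base URL, and the exact pagination header names depend on the server):

library(httr)

# Placeholder host; substitute this site's actual base URL.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)    # parsed list of matching packages
headers(resp)    # pagination metadata lives in the response headers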
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Calculates various estimates for measures of educational differentials and for the relative importance of primary and secondary effects in the creation of such differentials, and compares the estimates obtained from two datasets.
Provides six different algorithms that can be used to split the available data into training, test and validation subsets with similar distributions for hydrological model development. The dataSplit() function will help you divide the data according to specific requirements, and you can refer to the par.default() function to set the parameters for data splitting. The getAUC() function will help you measure the similarity of distribution features between the data subsets. For more information about the data splitting algorithms, please refer to: Chen et al. (2022) <doi:10.1016/j.jhydrol.2022.128340>, Zheng et al. (2022) <doi:10.1029/2021WR031818>.
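A minimal sketch of that workflow, assuming a data frame runoff holding your hydrological series (the argument layout is illustrative, not the package's documented signature):

# Illustrative only; consult the package documentation for exact arguments.
pars   <- par.default()              # default data-splitting parameters
splits <- dataSplit(runoff, pars)    # training/test/validation subsets
getAUC(splits)                       # similarity of the subsets' distributions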
Flexible and efficient cleaning of data with interactivity. datacleanr facilitates best practices in data analyses and reproducibility with built-in features and by translating interactive/manual operations to code. The package is designed for interoperability, and so fits seamlessly into reproducible analysis pipelines in R.
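For instance, launching the interactive app on a built-in data set (dcr_app() is, to the best of my knowledge, the package's main entry point; treat this as a sketch):

library(datacleanr)
dcr_app(iris)    # opens the interactive app; manual selections are translated to reproducible code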
In practice, we encounter problems where the longitudinal performance of processes needs to be monitored over time. Dynamic screening systems (DySS) are methods that aim to identify and signal poorly performing processes as early as possible. This package is designed to implement dynamic screening systems and related methods. References: Qiu, P. and Xiang, D. (2014) <doi:10.1080/00401706.2013.822423>; Qiu, P. and Xiang, D. (2015) <doi:10.1002/sim.6477>; Li, J. and Qiu, P. (2016) <doi:10.1080/0740817X.2016.1146423>; Li, J. and Qiu, P. (2017) <doi:10.1002/qre.2160>; You, L. and Qiu, P. (2019) <doi:10.1080/00949655.2018.1552273>; Qiu, P., Xia, Z., and You, L. (2020) <doi:10.1080/00401706.2019.1604434>; You, L., Qiu, A., Huang, B., and Qiu, P. (2020) <doi:10.1002/bimj.201900127>; You, L. and Qiu, P. (2021) <doi:10.1080/00224065.2020.1767006>.
These are data sets for the hit TV show, RuPaul's Drag Race. Data right now include episode-level data, contestant-level data, and episode-contestant-level data. This is a work in progress, and a love letter of a kind to RuPaul's Drag Race and the performers that have appeared on the show. This may not be the most productive use of my time, but I have tenure and what are you going to do about it? I think there is at least some value in this package if it allows the show's fandom to learn more about the R programming language through its contents.
Analysis of preprocessed dramatic texts, with respect to literary research. The package provides functions to analyze and visualize information about characters, stage directions, the dramatic structure and the text itself. The dramatic texts are expected to be in a CSV format that can be installed from within the package; sample texts are provided. The package and the reasoning behind it are described in Reiter et al. (2017) <doi:10.18420/in2017_119>.
An R package for iterative and batched record linkage, and for applying epidemiological case definitions. diyar can be used for deterministic and probabilistic record linkage, or for multistage record linkage combining both approaches. It features the implementation of nested match criteria, and mechanisms to address missing data and conflicting matches during stepwise record linkage. Case definitions are implemented by assigning records to groups based on match criteria such as person or place, and overlapping time or duration of events, e.g. sample collection dates or periods of hospital stays. Matching records are assigned a unique group ID. Index and duplicate records can then be flagged for removal or further analysis as required.
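A minimal sketch of stepwise linkage with the package's links() function (diyar exports links() and episodes(), to the best of my knowledge; the data frame and column names below are hypothetical):

library(diyar)
# Each element of criteria is tried as a successive match stage.
dat$group_id <- links(criteria = list(dat$nhs_number, dat$surname))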
Uses a dynamic programming method to solve L1 convex clustering with identical weights.
Various methods for the identification of trend and seasonal components in time series (TS) are provided. Among them is a data-driven locally weighted regression approach with automatically selected bandwidth for equidistant short-memory time series. The approach is a combination / extension of the algorithms by Feng (2013) <doi:10.1080/02664763.2012.740626> and Feng, Y., Gries, T., and Fritz, M. (2020) <doi:10.1080/10485252.2020.1759598> and a brief description of this new method is provided in the package documentation. Furthermore, the package allows its users to apply the base model of the Berlin procedure, version 4.1, as described in Speth (2004) <https://www.destatis.de/DE/Methoden/Saisonbereinigung/BV41-methodenbericht-Heft3_2004.pdf?__blob=publicationFile>. Permission to include this procedure was kindly provided by the Federal Statistical Office of Germany.
An anonymization algorithm to resist neighbor label attacks in dynamic networks.
Computes the dynamic threshold panel model suggested by Kremer, Bick and Nautz (2013) <doi:10.1007/s00181-012-0553-9>, in which they extended Hansen's (1999) <doi:10.1016/S0304-4076(99)00025-1> original static panel threshold estimation and the Caner and Hansen (2004) <doi:10.1017/S0266466604205011> cross-sectional instrumental variable threshold model, where generalized method of moments type estimators are used.
The disparity filter algorithm is a network reduction technique to identify the backbone structure of a weighted network without destroying its multi-scale nature. The algorithm is documented by M. Angeles Serrano, Marian Boguna and Alessandro Vespignani in "Extracting the multiscale backbone of complex weighted networks", Proceedings of the National Academy of Sciences 106 (16), 2009. This implementation of the algorithm supports both directed and undirected networks.
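The significance test behind the filter has a closed form: for a node of degree k and an edge carrying fraction p of that node's total weight, alpha = (1 - p)^(k - 1). A hand-rolled undirected sketch with igraph (not necessarily this implementation's interface), keeping edges significant at either endpoint:

library(igraph)

# g is assumed to be a weighted igraph object.
disparity_backbone <- function(g, alpha = 0.05) {
  w  <- E(g)$weight
  ep <- ends(g, E(g))                  # the two endpoints of every edge
  a  <- sapply(1:2, function(side) {
    v <- ep[, side]
    k <- degree(g)[v]
    p <- w / strength(g)[v]            # edge weight normalized by node strength
    (1 - p)^(k - 1)                    # null-model p-value at this endpoint
  })
  subgraph.edges(g, E(g)[pmin(a[, 1], a[, 2]) < alpha])
}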
An intuitive, cross-platform graphical data analysis system. It uses menus and dialogs to guide the user efficiently through the data manipulation and analysis process, and has an Excel-like spreadsheet for easy data frame visualization and editing. Deducer works best when used with the Java-based R GUI JGR, but the dialogs can be called from the command line. Dialogs have also been integrated into the Windows Rgui.
This package implements an anomaly detection algorithm based on mutual reachability minimum spanning trees: deadwood trims protruding tree segments and marks small debris as outliers; see Gagolewski (2026) <https://deadwood.gagolewski.com/>. More precisely, the use of a mutual reachability distance pulls peripheral points farther away from each other. Tree edges with weights beyond the detected elbow point are removed. All the resulting connected components whose sizes are smaller than a given threshold are deemed anomalous. The Python version of deadwood is available via PyPI.
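A rough illustration of the idea only (not this package's API): the mutual reachability distance between points i and j is max(core_i, core_j, d_ij), where core_i is the distance from i to its k-th nearest neighbour. Using igraph for the spanning tree, with k, the quantile cut (a stand-in for proper elbow detection) and the size threshold all chosen arbitrarily:

library(igraph)

X    <- as.matrix(iris[, 1:4])
d    <- as.matrix(dist(X))
k    <- 5
core <- apply(d, 1, function(r) sort(r)[k + 1])   # k-NN (core) distance; r includes the 0 self-distance
mrd  <- pmax(d, outer(core, core, pmax))          # mutual reachability distance
g    <- graph_from_adjacency_matrix(mrd, mode = "undirected", weighted = TRUE, diag = FALSE)
tree <- mst(g)
cut  <- E(tree)$weight > quantile(E(tree)$weight, 0.95)   # simplified elbow detection
comp <- components(delete_edges(tree, E(tree)[cut]))
outliers <- which(comp$csize[comp$membership] < 5)        # points in small components are anomalous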
This R function implements the nonstationary Kriging model proposed by Tuo, Wu and Yu (2014) <doi:10.1080/00401706.2013.842935> for analyzing multi-fidelity computer outputs. This function computes the maximum likelihood estimates for the model parameters as well as the predictive means and variances of the exact solution.
Diagnostic classification models are psychometric models used to categorically estimate respondents' mastery, or proficiency, on a set of predefined skills (Bradshaw, 2016, <doi:10.1002/9781118956588.ch13>). Diagnostic models can be estimated with Stan; however, the necessary scripts can be long and complicated. This package automates the creation of Stan scripts for diagnostic classification models. Specify different types of diagnostic models, define prior distributions, and automatically generate the necessary Stan code for estimating the model.
Models for detecting concreteness in natural language. This package is built in support of Yeomans (2021) <doi:10.1016/j.obhdp.2020.10.008>, which reviews linguistic models of concreteness in several domains. Here, we provide an implementation of the best-performing domain-general model (from Brysbaert et al. (2014) <doi:10.3758/s13428-013-0403-5>) as well as two pre-trained models for the feedback and plan-making domains.
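Presumably scoring a text looks something like the sketch below (the function name follows the package name, and the domain value is my assumption; check the package documentation):

library(doc2concrete)
texts <- c("Submit the report by Friday at noon.",
           "Try to be more proactive in general.")
# "open" is assumed to select the domain-general Brysbaert et al. model.
doc2concrete(texts, domain = "open")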
Utility functions for analysing datasets obtained from seed germination/emergence assays. Fits several types of seed germination/emergence models, including those reported in Onofri et al. (2018), "Hydrothermal-time-to-event models for seed germination", European Journal of Agronomy, 101, 129-139 <doi:10.1016/j.eja.2018.08.011>. Contains several datasets for practicing.
Populate data from an R environment into .doc and .docx templates. Create a template document in a program such as Word, and add strings encased in guillemet characters to create flags («example»). Use getDictionary() to create a dictionary of flags and replacement values, then call docket() to generate a populated document.
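A sketch of that round trip (the file names are hypothetical, and the exact shape of the dictionary and the argument order are assumptions):

library(docket)
dict <- getDictionary("template.docx")   # collect the «flags» found in the template
dict[1, 2] <- "Ada Lovelace"             # supply a replacement value for the first flag
docket("template.docx", dict, "output.docx")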
Create and manage fault-tolerant task queues for the foreach package using the Redis key/value database.
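For example, assuming a Redis server is reachable on localhost:

library(doRedis)                          # also attaches foreach
registerDoRedis("jobs")                   # register a Redis-backed parallel backend
startLocalWorkers(n = 2, queue = "jobs")  # spin up two local workers
foreach(i = 1:10, .combine = c) %dopar% sqrt(i)
removeQueue("jobs")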
Statistical deadband algorithms are based on the Send-On-Delta concept, as in Miskowicz (2006) <doi:10.3390/s6010049>. A collection of functions compares the effectiveness and fidelity of sampled signals using statistical deadband algorithms.
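To illustrate the underlying Send-On-Delta concept (a generic sketch, not this package's interface): a new sample is transmitted only once the signal has moved at least delta away from the last transmitted value.

# Generic Send-On-Delta sampler, not this package's API.
send_on_delta <- function(x, delta) {
  keep <- logical(length(x))
  keep[1] <- TRUE
  last <- x[1]
  for (i in seq_along(x)[-1]) {
    if (abs(x[i] - last) >= delta) {
      keep[i] <- TRUE
      last <- x[i]
    }
  }
  which(keep)              # indices of the transmitted samples
}
x <- sin(seq(0, 2 * pi, length.out = 100))
idx <- send_on_delta(x, delta = 0.2)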
Post Global Financial Crisis derivatives reforms have lifted the veil off over-the-counter (OTC) derivative markets. Swap Execution Facilities (SEFs) and Swap Data Repositories (SDRs) now publish data on swaps that are traded on or reported to those facilities, respectively. This package provides the ability to retrieve this data from supported sources.
This package provides a collection of functions that perform jump regression and image analysis such as denoising, deblurring and jump detection. The implemented methods are based on the following research: Qiu, P. (1998) <doi:10.1214/aos/1024691468>, Qiu, P. and Yandell, B. (1997) <doi:10.1080/10618600.1997.10474746>, Qiu, P. (2009) <doi:10.1007/s10463-007-0166-9>, Kang, Y. and Qiu, P. (2014) <doi:10.1080/00401706.2013.844732>, Qiu, P. and Kang, Y. (2015) <doi:10.5705/ss.2014.054>, Kang, Y., Mukherjee, P.S. and Qiu, P. (2018) <doi:10.1080/00401706.2017.1415975>, Kang, Y. (2020) <doi:10.1080/10618600.2019.1665536>.
The data depth concept offers a variety of powerful and user-friendly tools for robust exploration of and inference about multivariate data. The offered techniques can be used successfully when the parametric models generating the data are unknown. The package consists of, among other things, implementations of several data depth techniques, including multivariate quantile-quantile plots, multivariate scatter estimators, multivariate Wilcoxon tests and robust regressions.