Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
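For example, a minimal Python sketch of such a request (the base URL below is a placeholder, and the exact pagination header names are not documented here, so inspect the response headers of your deployment):

import requests

BASE_URL = "https://example.org"  # placeholder; substitute the site's actual host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.json())     # the matching packages for this page
print(resp.headers)    # pagination information (e.g., total number of pages)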
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides methods for building self-organizing maps (SOMs) with a number of distinguishing features, such as automatic centroid detection and cluster visualization using starbursts. For more details see the paper "Improved Interpretability of the Unified Distance Matrix with Connected Components" by Hamel and Brown (2011) <ISBN:1-60132-168-6>. The package provides user-friendly access to two models we construct: (a) a SOM model and (b) a centroid-based clustering model. The package also exposes a number of quality metrics for the quantitative evaluation of the map, Hamel (2016) <doi:10.1007/978-3-319-28518-4_4>. Finally, we reintroduce our fast, vectorized training algorithm for SOMs with substantial improvements; it is about an order of magnitude faster than the canonical, stochastic C implementation <doi:10.1007/978-3-030-01057-7_60>.
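For orientation, the core of one online SOM training step looks roughly like the following; this is a generic, minimal NumPy sketch of the standard update rule, not this package's vectorized algorithm, and all names are hypothetical:

import numpy as np

def som_step(weights, grid, x, lr, sigma):
    # weights: (n_nodes, dim) codebook vectors; grid: (n_nodes, 2) map coordinates
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit for input x
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # squared map distance to the BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                  # Gaussian neighborhood function
    return weights + lr * h[:, None] * (x - weights)    # move neighbors toward the input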
These are useful tools and data sets for the study of quantitative peace science. The goal of this package is to include tools and data sets for doing original research that closely mimics what a user would previously have had to get from a software package that may not be well sourced or well supported. Those software bundles were useful to the extent that they encouraged replications of long-standing analyses by starting the data-generating process from scratch. However, much of this functionality can be done relatively quickly and more transparently in the R programming language.
Partitioning clustering divides the objects in a data set into non-overlapping subsets or clusters by using prototype-based probabilistic and possibilistic clustering algorithms. This package covers a set of functions for Fuzzy C-Means (Bezdek, 1974) <doi:10.1080/01969727308546047>, Possibilistic C-Means (Krishnapuram & Keller, 1993) <doi:10.1109/91.227387>, Possibilistic Fuzzy C-Means (Pal et al., 2005) <doi:10.1109/TFUZZ.2004.840099>, Possibilistic Clustering Algorithm (Yang et al., 2006) <doi:10.1016/j.patcog.2005.07.005>, Possibilistic C-Means with Repulsion (Wachs et al., 2006) <doi:10.1007/3-540-31662-0_6>, and other variants of hard and soft clustering algorithms. The cluster prototypes and membership matrices required by these partitioning algorithms are initialized with different initialization techniques available in the inaparc package. In addition to the Euclidean distance, a set of commonly used distance metrics is available for use with some of the algorithms in the package.
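As a point of reference for this family of methods, the standard Fuzzy C-Means objective (Bezdek, 1974), minimized over memberships u_{ij} and cluster prototypes v_j for data points x_i with fuzzifier m > 1 (standard notation, not symbols from this description), is:

J_m = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \lVert x_i - v_j \rVert^{2}, \qquad \text{subject to } \sum_{j=1}^{c} u_{ij} = 1 \text{ for each } i.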
Interactively explore the various dependencies of one or more packages (on the Comprehensive R Archive Network (CRAN) and CRAN-like repositories) and perform analysis using the tidy philosophy. Most of the functions return a tibble object (an enhancement of a data frame) which can be used for further analysis. The package offers functions to produce network and igraph dependency graphs. The plot method produces a static plot based on ggnetwork, and the plotd3 function produces an interactive D3 plot based on networkD3.
The primary goal of phase I clinical trials is to find the maximum tolerated dose (MTD). To reach this objective, we introduce a new design for phase I clinical trials, the posterior predictive (PoP) design. The PoP design is an innovative model-assisted design that is as simple as conventional algorithmic designs, since its decision rules can be pre-tabulated prior to the onset of the trial, but it offers more flexibility in selecting diverse target toxicity rates and cohort sizes. The PoP design has desirable properties, such as coherence and consistency. Moreover, the PoP design provides better empirical performance than the BOIN and Keyboard designs, with higher average probabilities of choosing the MTD and a slightly lower risk of treating patients at subtherapeutic or overly toxic doses.
This package provides a wrapper for the API (Application Programming Interface) of Paddle, a merchant of record for digital products <https://developer.paddle.com/api-reference/overview>. It provides functions to manage and analyze products, customers, invoices, and more.
Calculates profile repeatability for replicate stress response curves, or similar time-series data. Profile repeatability is an individual repeatability metric that uses the variances at each timepoint, the maximum variance, the number of crossings (lines that cross over each other), and the number of replicates to compute the repeatability score. For more information see Reed et al. (2019) <doi:10.1016/j.ygcen.2018.09.015>.
Fast functions for dealing with prime numbers, such as testing whether a number is prime and generating a sequence of prime numbers. Additional functions include finding prime factors and Ruth-Aaron pairs, finding the next and previous prime numbers in the series, finding or estimating the nth prime, estimating the number of primes less than or equal to an arbitrary number, computing primorials, prime k-tuples (e.g., twin primes), finding the greatest common divisor and smallest (least) common multiple, testing whether two numbers are coprime, and computing Euler's totient function. Most functions are vectorized for speed and convenience.
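To illustrate a few of these computations in generic form (a plain Python sketch for exposition only, not this package's API; the helper names below are hypothetical):

from math import gcd

def is_prime(n):
    # trial division; adequate for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def totient(n):
    # Euler's totient via the product formula over distinct prime factors
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            result -= result // d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        result -= result // m
    return result

def lcm(a, b):
    return a * b // gcd(a, b)

print(is_prime(97), totient(12), gcd(12, 18), lcm(12, 18))  # True 4 6 36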
Convert Chinese characters into Pinyin (the official romanization system for Standard Chinese in mainland China, Malaysia, Singapore, and Taiwan; see <https://en.wikipedia.org/wiki/Pinyin> for details), Sijiao (four or five numerical digits per character; see <https://en.wikipedia.org/wiki/Four-Corner_Method>), Wubi (an input method with five strokes; see <https://en.wikipedia.org/wiki/Wubi_method>), or user-defined codes.
Several functions introduced in Aster et al.'s book on inverse theory. The functions are often translations of MATLAB code developed by the authors to illustrate concepts of inverse theory as applied to geophysics. Generalized inversion, tomographic inversion algorithms (conjugate gradients, ART, and SIRT), non-linear least squares, first- and second-order Tikhonov regularization, roughness constraints, and procedures for estimating smoothing parameters are included.
The Prize-Collecting Steiner Tree problem asks for a subgraph that connects a given set of vertices while collecting the most valuable nodes and using the least costly edges. Since the problem is NP-hard, no exact and efficient algorithm is known. This package provides convenient functionality for obtaining an approximate solution to this problem using a loopy belief propagation algorithm.
Builds a shiny app for visualizing the hierarchy of the PheCode mapping with the International Classification of Diseases (ICD). The same PheCode hierarchy is displayed in two ways: as a sunburst plot and as a tree.
This package implements principal component analysis, orthogonal rotation and multiple factor analysis for a mixture of quantitative and qualitative variables.
Fit a time-series model to crop phenology data, such as time-series rice canopy height. This package returns the model parameters as summary statistics of crop phenology; these parameters are useful for characterizing the growth pattern of each cultivar and predicting manually measured traits, such as days to heading and biomass. Please see Taniguchi et al. (2022) <doi:10.3389/fpls.2022.998803> and Taniguchi et al. (2025) <doi:10.3389/frai.2024.1477637> for details. This package has been designed for scientific use. Use for commercial purposes is not allowed.
This package provides randomization using permutation for applications. To provide a Quality Control (QC) check, QC samples can be randomized within strata. A second function allows samples to be 'switched' to meet set requirements and performs a certain amount of minimization on these switches. The functions are flexible, letting users specify the strata size and the number of QC samples per stratum. The randomization meets the following requirements: QC samples are not adjacent, and QC samples from the same mother must follow certain patterns; matched sample sets must be within a single stratum and next to each other.
Probabilistic factor analysis for spatially aware dimension reduction across multi-section spatial transcriptomics data with millions of spatial locations. More details can be found in Wei Liu et al. (2023) <doi:10.1101/2023.07.11.548486>.
Handles and formats author information in scientific writing in R Markdown and Quarto. plume provides easy-to-use and flexible tools for inserting author data in YAML, as well as generating author and contribution lists (among others) as strings from tabular data.
Estimates (and controls for) phylogenetic signal through phylogenetic eigenvectors regression (PVR) and phylogenetic signal-representation (PSR) curve, along with some plot utilities.
Small self-contained plots for use in larger plots or to delegate plotting in other functions. Also contains a number of alternative color palettes and HSL color space based tools to modify colors or palettes.
Tabular data from statistical institutes and agencies are mostly confidential and must be protected prior to publications. The cell-key method is a post-tabular Statistical Disclosure Control perturbation technique that adds random noise to tabular data. The statistical properties of the perturbations are defined by some noise probability distributions - also referred to as perturbation tables. This tool can be used to create the perturbation tables based on a maximum entropy approach as described for example in Giessing (2016) <doi:10.1007/978-3-319-45381-1_18>. The perturbation tables created can finally be used to apply a cell-key method to frequency count or magnitude tables.
Implementation of the Phoenix and Phoenix-8 Sepsis Criteria as described in "Development and Validation of the Phoenix Criteria for Pediatric Sepsis and Septic Shock" by Sanchez-Pinto, Bennett, DeWitt, Russell et al. (2024) <doi:10.1001/jama.2024.0196> (Drs. Sanchez-Pinto and Bennett contributed equally to this manuscript; Dr. DeWitt and Mr. Russell contributed equally to the manuscript), "International Consensus Criteria for Pediatric Sepsis and Septic Shock" by Schlapbach, Watson, Sorce, Argent, et al. (2024) <doi:10.1001/jama.2024.0179> (Drs Schlapbach, Watson, Sorce, and Argent contributed equally) and the application note "phoenix: an R package and Python module for calculating the Phoenix pediatric sepsis score and criteria" by DeWitt, Russell, Rebull, Sanchez-Pinto, and Bennett (2024) <doi:10.1093/jamiaopen/ooae066>.
Calculates an acceptance sampling plan (sample size and acceptance number) based on MIL-STD-105E, Dodge-Romig, and MIL-STD-414 tables and procedures. The arguments for each function relate to lot size, inspection level, and quality level. The plan's operating characteristic (OC) curve is calculated from the binomial distribution.
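For a single-sampling attribute plan with sample size n and acceptance number c, the OC curve gives the probability of accepting a lot as a function of its fraction defective p; a minimal Python sketch of that calculation (illustrative values only, not this package's interface):

from scipy.stats import binom

n, c = 80, 2                          # example sample size and acceptance number
for p in (0.01, 0.02, 0.05, 0.10):    # assumed lot fraction defective
    print(p, binom.cdf(c, n, p))      # P(accept) = P(defects in sample <= c)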
This package implements an efficient and powerful Bayesian approach for sparse high-dimensional linear regression. It uses minimal prior assumptions on the parameters through plug-in empirical Bayes estimates of hyperparameters. An efficient Parameter-Expanded Expectation-Conditional-Maximization (PX-ECM) algorithm estimates maximum a posteriori (MAP) values of regression parameters and variable selection probabilities. The PX-ECM results in a robust, computationally efficient coordinate-wise optimization, which adjusts for the impact of other predictor variables. The E-step is motivated by the popular two-group approach to multiple testing. The result is a PaRtitiOned empirical Bayes Ecm (PROBE) algorithm applied to sparse high-dimensional linear regression, implemented using one-at-a-time or all-at-once type optimization. More information can be found in McLain, Zgodic, and Bondell (2022) <arXiv:2209.08139>.
The Penn World Table 9.x (<http://www.ggdc.net/pwt/>) provides information on relative levels of income, output, inputs, and productivity for 182 countries between 1950 and 2017.