Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
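For example, the endpoint can be called from a short script. Below is a minimal sketch using Python's requests library; the base URL is a placeholder for wherever this site is hosted, and the response body is assumed to be JSON:

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's address

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.headers)   # pagination information (e.g. number of pages)
print(resp.json())    # matching packages (body assumed to be JSON)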
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Aids in identifying the Koeppen-Geiger (KG) climatic zone for a given location. The Koeppen-Geiger climate zones were first published in 1884 as a system to classify regions of the earth by their relative heat and humidity through the year, for the benefit of human health, plants, agriculture and other human activity [1]. This climate zone classification system, applicable to all of the earth's surface, has continued to be developed by scientists up to the present day. Recently one of us (FZ) has published updated, higher-accuracy KG climate zone definitions [2]. In this package we use these updated high-resolution maps as the data source [3]. We provide functions that return the KG climate zone for a given longitude and latitude, or for a given United States zip code. In addition, the CZUncertainty() function will check nearby climate zones to determine whether the given location is near a climate zone boundary. An interactive shiny app is also provided to determine the KG climate zone for a given longitude and latitude, or United States zip code. Digital data, as well as animated maps showing the shift of the climate zones, are provided on the following website <http://koeppen-geiger.vu-wien.ac.at>. This work was supported by the DOE-EERE SunShot award DE-EE-0007140. [1] W. Koeppen, (2011) <doi:10.1127/0941-2948/2011/105>. [2] F. Rubel and M. Kottek, (2010) <doi:10.1127/0941-2948/2010/0430>. [3] F. Rubel, K. Brugger, K. Haslinger, and I. Auer, (2016) <doi:10.1127/metz/2016/0816>.
This package provides useful functions needed for bioinformatic analysis, such as calculating linear principal components from numeric data and single-nucleotide polymorphism (SNP) datasets, calculating the fixation index (Fst) using Hudson's method, creating scatter plots in three views, handling the PLINK binary file format, detecting rough structures and outliers using unsupervised clustering, and performing matrix multiplication faster for big data.
This package provides a multi-purpose and flexible k-meric enrichment analysis tool. kmeRtone measures the enrichment of k-mers by comparing the population of k-mers in the case loci with a carefully devised internal negative control group, consisting of k-mers from regions close to, yet sufficiently distant from, the case loci to mitigate any potential sequencing bias. This method effectively captures both the local sequencing variations and broader sequence influences, while also correcting for potential biases, thereby ensuring more accurate analysis. The core functionality of kmeRtone is the SCORE() function, which calculates the susceptibility scores for k-mers in case and control regions. Case regions are defined by the genomic coordinates provided in a file by the user, and the control regions can be constructed relative to the case regions or provided directly. The k-meric susceptibility scores are calculated using a one-proportion z-statistic. kmeRtone is highly flexible, allowing users to also specify their target k-mer patterns and quantify the corresponding k-mer enrichment scores in the context of these patterns, enabling a more comprehensive approach to understanding the functional implications of specific DNA sequences on a genomic scale (e.g., CT motifs upon UV radiation damage). Adib A. Abdullah, Patrick Pflughaupt, Claudia Feng, Aleksandr B. Sahakyan (2024) Bioinformatics (submitted).
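As a rough illustration of the statistic involved (a Python sketch under simplifying assumptions, not kmeRtone's SCORE() implementation), a one-proportion z-score for a single k-mer could be computed with the control regions supplying the null proportion:

from math import sqrt

def kmer_z_score(case_count, case_total, control_count, control_total):
    """One-proportion z-statistic for a single k-mer (illustrative only)."""
    p0 = control_count / control_total     # null proportion from the controls
    p_hat = case_count / case_total        # observed proportion in the cases
    se = sqrt(p0 * (1 - p0) / case_total)  # standard error under H0
    return (p_hat - p0) / se

# e.g. a k-mer seen 120 times among 10,000 case k-mers vs 80/10,000 in controls
print(kmer_z_score(120, 10_000, 80, 10_000))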
Distance metrics for mixed-type data consisting of continuous, nominal, and ordinal variables. This methodology uses additive and product kernels to calculate similarity functions and metrics, and selects variables relevant to the underlying distance through bandwidth selection via maximum similarity cross-validation. These methods can be used in any distance-based algorithm, such as distance-based clustering. For further details, we refer the reader to Ghashti and Thompson (2024) <doi:10.1007/s00357-024-09493-z> for dkps() methodology, and Ghashti (2024) <doi:10.14288/1.0443975> for dkss() methodology.
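As a generic illustration of a product-kernel similarity for mixed-type records (a simplified Python sketch with fixed, hand-picked bandwidths, not the package's dkps()/dkss() methodology or its cross-validated bandwidth selection):

import math

def product_kernel_similarity(x, y, kinds, bandwidths):
    """Toy product-kernel similarity; kinds[i] is "cont" or "nom"."""
    sim = 1.0
    for xi, yi, kind, h in zip(x, y, kinds, bandwidths):
        if kind == "cont":
            # Gaussian kernel on the scaled difference
            sim *= math.exp(-0.5 * ((xi - yi) / h) ** 2)
        else:
            # simplified categorical kernel: full weight on a match,
            # bandwidth-sized weight otherwise
            sim *= 1.0 if xi == yi else h
    return sim

a = (1.8, "red", 3)
b = (2.1, "blue", 3)
print(product_kernel_similarity(a, b, ("cont", "nom", "nom"), (0.5, 0.2, 0.2)))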
Restore the underlying numeric data from the rating history graph of KGS (an online platform for the game of Go, <http://www.gokgs.com/>). A shiny application is also provided.
Computes Khattree-Bahuguna's univariate and multivariate skewness, as well as principal-component-based Khattree-Bahuguna multivariate skewness. It also provides several measures of univariate or multivariate skewness, including Pearson's coefficient of skewness, Bowley's univariate skewness and Mardia's multivariate skewness. See Khattree, R. and Bahuguna, M. (2019) <doi:10.1007/s41060-018-0106-1>.
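For reference, two of the classical univariate measures mentioned above can be computed directly from their textbook formulas (a plain Python illustration, not this package's code):

import statistics

def pearson_skewness(x):
    """Pearson's second coefficient of skewness: 3 * (mean - median) / sd."""
    return 3 * (statistics.mean(x) - statistics.median(x)) / statistics.stdev(x)

def bowley_skewness(x):
    """Bowley's quartile skewness: (Q3 + Q1 - 2 * median) / (Q3 - Q1)."""
    q1, q2, q3 = statistics.quantiles(x, n=4)
    return (q3 + q1 - 2 * q2) / (q3 - q1)

data = [1, 2, 2, 3, 3, 3, 4, 4, 10]
print(pearson_skewness(data), bowley_skewness(data))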
KVH is a lightweight format that can be read and written both by humans and machines. It can be useful in situations where XML or similar formats seem to be overkill. We provide the ability to parse KVH files in R fairly quickly thanks to the use of Rcpp.
Allows analyzing time series representing two-dimensional movements. It accepts a data frame with time (t), horizontal (x) and vertical (y) coordinates as columns, and returns several dynamical properties such as speed, acceleration or curvature.
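A rough finite-difference sketch of those quantities (an illustration of the standard formulas in Python with numpy, not this package's API):

import numpy as np

def kinematics(t, x, y):
    """Speed, acceleration magnitude and curvature of a 2-D trajectory.

    Central finite differences via numpy.gradient; curvature follows
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2). Illustrative only.
    """
    t, x, y = map(np.asarray, (t, x, y))
    dx, dy = np.gradient(x, t), np.gradient(y, t)        # velocity components
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)    # acceleration components
    speed = np.hypot(dx, dy)
    accel = np.hypot(ddx, ddy)
    curvature = np.abs(dx * ddy - dy * ddx) / np.clip(speed, 1e-12, None) ** 3
    return speed, accel, curvature

t = np.linspace(0, 2 * np.pi, 200)
speed, accel, curvature = kinematics(t, np.cos(t), np.sin(t))
print(curvature.mean())   # close to 1 for the unit circle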
Kernel smoothing for Wishart random matrices, as described in Daayeb, Khardani and Ouimet (2025) <doi:10.48550/arXiv.2506.08816>, with Gaussian and log-Gaussian models using least-squares or likelihood cross-validation criteria for optimal bandwidth selection.
Training and evaluating k-gram language models in R, supporting several probability smoothing techniques, perplexity computations, random text generation and more.
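A toy bigram model with add-k smoothing and perplexity, just to illustrate the quantities involved (a Python sketch, not this package's API):

import math
from collections import Counter

def train_bigram(tokens):
    """Count unigrams and bigrams in a token sequence."""
    return Counter(tokens), Counter(zip(tokens, tokens[1:])), set(tokens)

def perplexity(tokens, unigrams, bigrams, vocab, k=1.0):
    """Perplexity of a sequence under an add-k smoothed bigram model."""
    log_prob, n = 0.0, 0
    for w1, w2 in zip(tokens, tokens[1:]):
        p = (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * len(vocab))
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)

train = "a b a b a c a b".split()
unigrams, bigrams, vocab = train_bigram(train)
print(perplexity("a b a c".split(), unigrams, bigrams, vocab))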
Detect and test for changes in the covariance structure of functional data, as well as changepoint detection for multivariate data more generally. The method for detecting non-stationarity in resting-state functional Magnetic Resonance Imaging (fMRI) scans described in Ramsay, K., & Chenouri, S. (2025) <doi:10.1080/10485252.2025.2503891> is implemented in fmri_changepoints(). Also includes a depth- and rank-based implementation of the wild binary segmentation algorithm for detecting multiple changepoints in multivariate data.
This package performs variable selection for many types of L1-regularised regressions using the revisited knockoffs procedure. This procedure uses a matrix of knockoffs of the covariates that is independent of the response variable Y. The idea is to determine whether a covariate belongs to the model depending on whether it enters the model before or after its knockoff. The procedure suits a wide range of regressions with various types of response variables. Regression models available are exported from the R packages glmnet and ordinalNet. Based on Gegout A., Gueudin A., Karmann C. (2019) <arXiv:1907.03153>.
The knockoff filter is a general procedure for controlling the false discovery rate (FDR) when performing variable selection. For more information, see the accompanying paper: Candes et al., "Panning for gold: model-X knockoffs for high-dimensional controlled variable selection", J. R. Statist. Soc. B (2018) 80, 3, pp. 551-577.
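Given feature importance statistics W_j (large positive values favour an original variable over its knockoff), the knockoff+ selection rule for a target FDR level q can be sketched as follows (a Python illustration of the published threshold, not this package's code):

import numpy as np

def knockoff_plus_threshold(W, q=0.1):
    """Smallest t among |W_j| with (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q."""
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W[W != 0])):
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
            return t
    return np.inf   # no threshold achieves the target level

W = np.array([3.1, -0.2, 2.4, 0.9, -1.5, 4.0, 0.1, -0.3, 2.2, 1.7])
t = knockoff_plus_threshold(W, q=0.2)
print(t, np.where(W >= t)[0])   # threshold and the selected variable indices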
A new kernel-based test and fast tests for testing whether two samples are from the same distribution. They work particularly well for high-dimensional data. Song, H. and Chen, H. (2023) <arXiv:2011.06127>.
Functional magnetic resonance imaging ('fMRI') data from the Kirby21 reproducibility study <doi:10.1016/j.neuroimage.2010.11.047>.
This package contains basic tools for sample size estimation in studies of interobserver/interrater agreement (reliability). Includes functions for both the power-based and confidence interval-based methods, with binary or multinomial outcomes and two through six raters.
This package provides a collection of personal helper functions to avoid redundancy in the spirit of the "Don't repeat yourself" principle of software development (<https://en.wikipedia.org/wiki/Don%27t_repeat_yourself>).
This function performs the two-sample Kuiper test to assess whether two continuous, one-dimensional samples are drawn from the same probability distribution. References for this method are (1) Kuiper, N. H. (1960) <doi:10.1016/S1385-7258(60)50006-0> and (2) Paltani, S. (2004) <doi:10.1051/0004-6361:20034220>.
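The statistic itself is simple to state: with empirical CDFs F1 and F2, V = max(F1 - F2) + max(F2 - F1). A minimal Python sketch (illustrative, not this package's implementation):

import numpy as np

def kuiper_statistic(x, y):
    """Two-sample Kuiper statistic V = D+ + D-, the sum of the maximum
    positive and negative deviations between the two empirical CDFs."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    pooled = np.concatenate([x, y])
    F1 = np.searchsorted(x, pooled, side="right") / len(x)   # ECDF of x
    F2 = np.searchsorted(y, pooled, side="right") / len(y)   # ECDF of y
    return np.max(F1 - F2) + np.max(F2 - F1)

rng = np.random.default_rng(0)
print(kuiper_statistic(rng.normal(size=200), rng.normal(1.0, 1.0, size=200)))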
Assign and listen to keyboard shortcuts in shiny using the Mousetrap JavaScript library.
Control your keyboard and mouse with R code by simulating key presses and mouse clicks. The input simulation is implemented with the Windows API.
Implementations of the kernel measure of multi-sample dissimilarity (KMD) between several samples using K-nearest neighbor graphs and minimum spanning trees. The KMD measures the dissimilarity between multiple samples, based on the observations from them. It converges to the population quantity (depending on the kernel) which is between 0 and 1. A small value indicates the multiple samples are from the same distribution, and a large value indicates the corresponding distributions are different. The population quantity is 0 if and only if all distributions are the same, and 1 if and only if all distributions are mutually singular. The package also implements the tests based on KMD for H0: the M distributions are equal against H1: not all the distributions are equal. Both permutation test and asymptotic test are available. These tests are consistent against all alternatives where at least two samples have different distributions. For more details on KMD and the associated tests, see Huang, Z. and B. Sen (2022) <arXiv:2210.00634>.
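The permutation test component can be illustrated generically: pool all observations, recompute the statistic under random relabellings, and compare with the observed value. The Python sketch below uses a placeholder statistic (the spread of group means), not the KMD estimator itself:

import numpy as np

def permutation_pvalue(samples, statistic, n_perm=999, seed=0):
    """Generic permutation p-value for a multi-sample statistic.

    samples is a list of 1-D arrays; statistic is any function of such a
    list that grows when the distributions differ. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples)
    observed = statistic(samples)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        exceed += statistic(np.split(perm, np.cumsum(sizes)[:-1])) >= observed
    return (1 + exceed) / (1 + n_perm)

spread = lambda groups: np.ptp([g.mean() for g in groups])   # placeholder statistic
rng = np.random.default_rng(1)
data = [rng.normal(size=50), rng.normal(size=50), rng.normal(0.8, 1, size=50)]
print(permutation_pvalue(data, spread))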
This package provides a phenotype-aware algorithm for resolving cryptic relatedness in genetic studies. It removes related individuals based on kinship or identity-by-descent (IBD) scores while prioritizing subjects with phenotypes of interest. This approach helps maximize the retention of informative subjects, particularly for rare or valuable traits, and improves statistical power in genetic and epidemiological studies. KDPS supports both categorical and quantitative phenotypes, composite scoring, and customizable pruning strategies using a fuzziness parameter. Benchmark results show improved phenotype retention and high computational efficiency on large-scale datasets like the UK Biobank. Methods used include Manichaikul et al. (2010) <doi:10.1093/bioinformatics/btq559> for kinship estimation, Purcell et al. (2007) <doi:10.1086/519795> for IBD estimation, and Bycroft et al. (2018) <doi:10.1038/s41586-018-0579-z> for UK Biobank data reference.
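A simplified sketch of the general idea (greedy pruning of related pairs while keeping the subject with the more valuable phenotype); the Python helper below is hypothetical and does not reproduce KDPS's composite scoring or fuzziness parameter:

def prune_related(pairs, phenotype_score, threshold=0.0884):
    """Greedily prune related pairs, dropping the lower-scoring subject.

    pairs: iterable of (id_a, id_b, kinship); phenotype_score: dict mapping
    subject id to a priority score (higher = keep). The default threshold is
    just an example kinship cut-off; hypothetical sketch only.
    """
    removed = set()
    for a, b, kinship in sorted(pairs, key=lambda p: -p[2]):   # closest pairs first
        if kinship <= threshold or a in removed or b in removed:
            continue
        removed.add(a if phenotype_score.get(a, 0) < phenotype_score.get(b, 0) else b)
    return removed

pairs = [("s1", "s2", 0.25), ("s2", "s3", 0.10), ("s4", "s5", 0.05)]
scores = {"s1": 2, "s2": 0, "s3": 1, "s4": 0, "s5": 0}
print(prune_related(pairs, scores))   # drops "s2", keeping the phenotyped subjects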
Data on houses in and around Seattle, WA are included. Basic characteristics are given along with sale prices.
This package provides a set of tools to analyze texts. Includes, amongst others, functions for automatic language detection, hyphenation, several indices of lexical diversity (e.g., type token ratio, HD-D/vocd-D, MTLD) and readability (e.g., Flesch, SMOG, LIX, Dale-Chall). Basic import functions for language corpora are also provided, to enable frequency analyses (supports Celex and Leipzig Corpora Collection file formats) and measures like tf-idf. Note: For full functionality, a local installation of TreeTagger is recommended. It is also recommended not to load this package directly, but to load one of the available language support packages from the l10n repository <https://undocumeantit.github.io/repos/l10n/>. koRpus also includes a plugin for the R GUI and IDE RKWard, providing graphical dialogs for its basic features. The respective R package rkward cannot be installed directly from a repository, as it is a part of RKWard. To make full use of this feature, please install RKWard from <https://rkward.kde.org> (plugins are detected automatically). Due to some restrictions on CRAN, the full package sources are only available from the project homepage. To ask for help, report bugs, request features, or discuss the development of the package, please subscribe to the koRpus-dev mailing list (<https://korpusml.reaktanz.de>).
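Two of the simpler measures mentioned above follow directly from their definitions (a plain Python illustration, not koRpus code):

import math
from collections import Counter

def type_token_ratio(tokens):
    """Type-token ratio: distinct tokens divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def tf_idf(term, doc, corpus):
    """tf-idf with raw term frequency and log inverse document frequency."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(term in d for d in corpus)
    return tf * (math.log(len(corpus) / df) if df else 0.0)

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
print(type_token_ratio(docs[0]), tf_idf("cat", docs[0], docs))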