Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
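For example, the endpoint can be queried from R roughly as follows; this is a minimal sketch that assumes the httr and jsonlite packages, a placeholder base URL, and a guessed pagination header name.

# Query the package search API (placeholder host; adjust to the real one).
library(httr)
library(jsonlite)

base_url <- "https://example.org"   # assumption: replace with the site's host
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# Pagination details arrive in the response headers; the header name below
# is an assumption, not documented above.
headers(resp)[["x-total-pages"]]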
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Monitor and trace changes in clustering solutions of accumulating datasets at successive time points. The clusters can undergo External and Internal transitions at succeeding time points. The External transitions comprise Survived, Merged, Split, Disappeared, and newly Emerged candidates. In contrast, Internal transitions include changes in the location and cohesion of the survived clusters. The package uses the MONIC framework developed by Spiliopoulou, Ntoutsi, Theodoridis, and Schult (2006) <doi:10.1145/1150402.1150491>.
This package contains some commonly used categorical variable encoders, such as LabelEncoder and OneHotEncoder. It is inspired by the encoders implemented in the Python sklearn.preprocessing package (see <http://scikit-learn.org/stable/modules/preprocessing.html>).
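As a rough illustration of what these encoders do (a base-R sketch, not this package's own API):

# Label encoding: map categories to integers via factor codes.
x <- c("red", "green", "blue", "green")
label_encoded <- as.integer(factor(x))     # e.g. blue = 1, green = 2, red = 3

# One-hot encoding: one indicator column per category, no intercept.
one_hot <- model.matrix(~ f - 1, data = data.frame(f = factor(x)))
one_hot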
Color and visualize wildlife distributions in space-time using raster data. In addition to enabling display of sequential change in distributions through the use of small multiples, colorist provides functions for extracting several features of interest from a sequence of distributions and for visualizing those features using HCL (hue-chroma-luminance) color palettes. Resulting maps allow for "fair" visual comparison of intensity values (e.g., occurrence, abundance, or density) across space and time and can be used to address questions about where, when, and how consistently a species, group, or individual is likely to be found.
A large number of measurements generate count data, a statistical data type that only takes non-negative integer values and arises from counting. Count data are typically found in biomedical applications, such as the analysis of DNA double-strand breaks, which can be counted in individual cells using various bioanalytical methods. For diagnostic applications, it is relevant to record the distribution of the count data in order to determine their biomedical significance (Roediger, S. et al., 2018, Journal of Laboratory and Precision Medicine, <doi:10.21037/jlpm.2018.04.10>). The software offers functions for a comprehensive automated evaluation of distribution models of count data. In addition to programmatic interaction, a graphical user interface (web server) is included, which enables fast and interactive data-scientific analyses. The user is supported in selecting the most suitable count distribution for their own data set.
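As an informal sketch of the underlying task (not this package's interface), candidate count distributions can be compared in R with MASS::fitdistr() and AIC:

library(MASS)

set.seed(1)
counts <- rnbinom(200, size = 2, mu = 3)   # simulated overdispersed count data

fit_pois <- fitdistr(counts, "Poisson")
fit_nb   <- fitdistr(counts, "negative binomial")

AIC(fit_pois, fit_nb)   # the lower AIC points to the better-fitting model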
Data from statistical agencies and other institutions often need to be protected before they can be published. This package can be used to perturb statistical tables in a consistent way. The main idea is to add - at the micro data level - a record key for each unit. Based on these keys, for any cell in a statistical table a cell key is computed as a function of the record keys contributing to that cell. Values that are added to the cell in order to perturb it are derived from a lookup table that maps values of cell keys to specific perturbation values. The theoretical basis for the methods implemented can be found in Thompson, Broadfoot and Elazar (2013) <https://unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.46/2013/Topic_1_ABS.pdf>, which was extended and enhanced by Giessing and Tent (2019) <https://unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.46/2019/mtg1/SDC2019_S2_Germany_Giessing_Tent_AD.pdf>.
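A schematic base-R sketch of the cell key idea described above (uniform record keys, a cell key taken as the fractional part of their sum, and a toy lookup rule are all simplifying assumptions; the package's actual implementation is more elaborate):

set.seed(42)
micro <- data.frame(region = sample(c("A", "B"), 100, replace = TRUE),
                    rkey   = runif(100))               # record key per unit

cell_keys <- tapply(micro$rkey, micro$region,
                    function(k) sum(k) %% 1)           # cell key per table cell
counts    <- table(micro$region)

# Toy lookup: map each cell key to a small perturbation value.
perturb <- ifelse(cell_keys < 0.5, 0, 1)
perturbed_counts <- as.integer(counts) + perturb
perturbed_counts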
Cellular cooperation compromises the plating efficiency-based analysis of clonogenic survival data. This tool provides functions that enable a robust analysis of colony formation assay (CFA) data in the presence or absence of cellular cooperation. The implemented method has been described in Brix et al. (2020) (Brix, N., Samaga, D., Hennel, R. et al. "The clonogenic assay: robustness of plating efficiency-based analysis is strongly compromised by cellular cooperation." Radiat Oncol 15, 248 (2020). <doi:10.1186/s13014-020-01697-y>). Power regression for parameter estimation, calculation of survival fractions, uncertainty analysis and plotting functions are provided.
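The power-regression idea can be sketched with base R's nls(); the simulated numbers and starting values below are assumptions for illustration, not code from the package:

set.seed(7)
cells    <- c(50, 100, 200, 400, 800, 1600)            # cells seeded per dish
colonies <- round(0.3 * cells^1.1 * exp(rnorm(6, sd = 0.05)))

fit <- nls(colonies ~ a * cells^b, start = list(a = 0.3, b = 1))
coef(fit)   # the exponent b describes how colony counts scale with cells seeded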
Includes commands for bootstrapping and permutation tests, a command for creating grouped bar plots, and a demo of the quantile-normal plot for data drawn from different distributions.
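For orientation, a generic two-sample permutation test on the difference in means can be written in a few lines of base R (this sketch is not necessarily identical to the package's commands):

set.seed(123)
x <- rnorm(30, mean = 0)
y <- rnorm(30, mean = 0.5)

obs  <- mean(x) - mean(y)
pool <- c(x, y)
perm <- replicate(5000, {
  idx <- sample(length(pool), length(x))
  mean(pool[idx]) - mean(pool[-idx])
})
p_value <- mean(abs(perm) >= abs(obs))    # two-sided permutation p value
p_value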
This package implements the semiparametric efficient estimators of continuous-time causal models for time-varying treatments and confounders in the presence of dependent censoring (including structural failure time model and Cox proportional hazards marginal structural model). S. Yang, K. Pieper, and F. Cools (2019) <doi:10.1111/biom.12845>.
This package contains functions for solving commonly encountered problems while programming in R. It is intended to provide a lightweight supplement to Base R and will be useful for almost any R user.
Seek the significant cutoff values for a continuous variable, which will be transformed into a classification, for linear regression, logistic regression, logrank analysis and Cox regression. First, all cutoff combinations are generated with the combn() function. Then the n.per argument (total number percentage) is used to remove combinations that produce groups containing too small a share of the data. In logistic regression, Cox regression and logrank analysis, the p.per argument (patient percentage) is additionally used to filter out combinations where the proportion of patients in a group is too low. Finally, the p values from the regression results are used to identify the significant combinations and to output the relevant parameters. There is no limit to the number of cutoff points, which can be 1, 2, 3 or more. Two methods are provided to adjust the p values: the standard Bonferroni correction and the method of Douglas G. Altman (1994) <doi:10.1093/jnci/86.11.829>. Missing values are removed with the na.omit() function before analysis.
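A rough base-R sketch of this search strategy (the data, candidate cutoffs and 10% threshold are assumptions that only mimic the n.per filter; it is not the package's code):

set.seed(1)
dat <- na.omit(data.frame(x = rnorm(200), y = rbinom(200, 1, 0.4)))

cand  <- quantile(dat$x, seq(0.1, 0.9, by = 0.1))   # candidate cutoff values
pairs <- combn(cand, 2, simplify = FALSE)           # all two-cutoff combinations

res <- lapply(pairs, function(cp) {
  cp  <- unname(cp)
  grp <- cut(dat$x, breaks = c(-Inf, cp, Inf))
  if (min(table(grp)) / nrow(dat) < 0.1) return(NULL)   # n.per-style filter
  fit <- glm(y ~ grp, family = binomial, data = dat)
  data.frame(cut1 = cp[1], cut2 = cp[2],
             p = min(coef(summary(fit))[-1, "Pr(>|z|)"]))
})
res <- do.call(rbind, res)
res$p_adj <- p.adjust(res$p, method = "bonferroni")     # Bonferroni adjustment
head(res[order(res$p_adj), ])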
Extract and monitor price and market cap of Cryptocurrencies from Coin Market Cap <https://coinmarketcap.com/api/>.
This package provides a tool to estimate two-parameter logistic (2PL) IRT item parameters using CTT-based item statistics from small samples via artificial neural networks and regression trees.
This package provides functions to access data from public RESTful APIs including the FINDIC API, REST Countries API, World Bank API, and Nager.Date, retrieving real-time or historical data related to Chile such as financial indicators, holidays, international demographic and geopolitical indicators, and more. Additionally, the package includes curated datasets related to Chile, covering topics such as human rights violations during the Pinochet regime, electoral data, census samples, health surveys, seismic events, territorial codes, and environmental measurements. The package supports research and analysis focused on Chile by integrating open APIs with high-quality datasets from multiple domains. For more information on the APIs, see: FINDIC <https://findic.cl/>, REST Countries <https://restcountries.com/>, World Bank API <https://datahelpdesk.worldbank.org/knowledgebase/articles/889392>, and Nager.Date <https://date.nager.at/Api>.
This package contains selected variables from the time series profiles for statistical areas level 2 from the 2006, 2011, and 2016 censuses of population and housing, Australia. Also provides methods for viewing the questions asked for convenience during analysis.
The nonparametric methods for estimating copula entropy, transfer entropy, and the statistics for multivariate normality test and two-sample test are implemented. The methods for estimating transfer entropy and the statistics for multivariate normality test and two-sample test are based on the method for estimating copula entropy. The method for change point detection with copula entropy based two-sample test is also implemented. Please refer to Ma and Sun (2011) <doi:10.1016/S1007-0214(11)70008-6>, Ma (2019) <doi:10.48550/arXiv.1910.04375>, Ma (2022) <doi:10.48550/arXiv.2206.05956>, Ma (2023) <doi:10.48550/arXiv.2307.07247>, and Ma (2024) <doi:10.48550/arXiv.2403.07892> for more information.
Copula-based regression models for multivariate censored data, including bivariate right-censored data, bivariate interval-censored data, and right/interval-censored semi-competing risks data. Currently supports Clayton, Gumbel, Frank, Joe, AMH and Copula2 copula models. For marginal models, it supports parametric (Weibull, Loglogistic, Gompertz) and semiparametric (Cox and transformation) models. Includes methods for convenient prediction and plotting. Also provides a bivariate time-to-event simulation function and an information ratio-based goodness-of-fit test for the copula. Method details can be found in Sun et al. (2019) Lifetime Data Analysis, Sun et al. (2021) Biostatistics, Sun et al. (2022) Statistical Methods in Medical Research, Sun et al. (2022) Biometrics, and Sun et al. (2023+) JRSSC.
This package provides tools for measuring the compositionality of signalling systems (in particular the information-theoretic measure due to Spike (2016) <http://hdl.handle.net/1842/25930> and the Mantel test for distance matrix correlation (after Dietz 1983) <doi:10.1093/sysbio/32.1.21>), functions for computing string and meaning distance matrices as well as an implementation of the Page test for monotonicity of ranks (Page 1963) <doi:10.1080/01621459.1963.10500843> with exact p-values up to k = 22.
Convex Clustering methods, including K-means algorithm, On-line Update algorithm (Hard Competitive Learning) and Neural Gas algorithm (Soft Competitive Learning), and calculation of several indexes for finding the number of clusters in a data set.
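For orientation, the same kind of partition can be obtained with base R's kmeans(); the package adds the on-line update and neural gas variants plus indexes for choosing the number of clusters:

set.seed(2)
km <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 20)
table(km$cluster, iris$Species)   # compare clusters with the known species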
Call the DeOldify <https://github.com/jantic/DeOldify> image colorization API on DeepAI <https://deepai.org/machine-learning-model/colorizer> to colorize black and white images.
Concordance probability estimate (CPE) is a commonly used performance measure in survival analysis that evaluates the predictive accuracy of a survival model. It measures how well a model can distinguish between pairs of individuals with different survival times. Specifically, it calculates the proportion of all pairs of individuals whose predicted survival times are correctly ordered.
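A small sketch of that pairwise idea (censoring handling is omitted here for brevity, so this is a simplification of what the package computes):

time <- c(5, 8, 3, 12, 7)                # observed survival times
risk <- c(0.9, 0.4, 0.8, 0.2, 0.5)       # predicted risk: higher = shorter survival

pairs <- combn(length(time), 2)
concordant <- apply(pairs, 2, function(ij) {
  i <- ij[1]; j <- ij[2]
  (time[i] < time[j] && risk[i] > risk[j]) ||
  (time[j] < time[i] && risk[j] > risk[i])
})
mean(concordant)                         # share of concordant pairs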
Utility functions that provide wrappers around descriptive base functions such as cor, mean and table. They make use of the formula interface to pass variables to the functions, and also provide operators to concatenate (%+%), to repeat (%n%), and to manage character vectors for nice display.
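Purely as an illustration of what such operators look like (these definitions are assumptions, not the package's own):

`%+%` <- function(a, b) paste0(a, b)     # element-wise concatenation
`%n%` <- function(x, n) rep(x, n)        # repeat a value n times

"group " %+% c("A", "B")                 # "group A" "group B"
"-" %n% 3                                # "-" "-" "-"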
Includes wrapper functions around existing functions for the analysis of categorical data and introduces functions for calculating risk differences and matched odds ratios. R currently supports a wide variety of tools for the analysis of categorical data. However, many functions are spread across a variety of packages with differing syntax and poor compatibility with one another. prop_test() combines the functions binom.test(), prop.test() and BinomCI() into one output. prop_power() allows for power and sample size calculations for both balanced and unbalanced designs. riskdiff() is used for calculating risk differences and matched_or() is used for calculating matched odds ratios. For further information on the methods used that are not documented in other packages, see Nathan Mantel and William Haenszel (1959) <doi:10.1093/jnci/22.4.719> and Alan Agresti (2002) <ISBN:0-471-36093-7>.
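Two of the functions that prop_test() combines, binom.test() and prop.test(), are base R and can be called directly; the counts below are made up for illustration:

successes <- c(45, 30)
totals    <- c(100, 100)

prop.test(successes, totals)      # two-sample test of equal proportions
binom.test(45, 100, p = 0.5)      # exact one-sample binomial test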
Color palettes for all people, including those with color vision deficiency. Popular color palette series have been organized by type and scored on several properties such as color-blind-friendliness and fairness (i.e. do colors stand out equally?). Users' own palettes can also be loaded and analysed. Besides the common palette types (categorical, sequential, and diverging), it also includes cyclic and bivariate color palettes. Furthermore, a color for missing values is assigned to each palette.
Simulating and estimating peer effect models and network formation models. The class of peer effect models includes linear-in-means models (Lee, 2004; <doi:10.1111/j.1468-0262.2004.00558.x>), Tobit models (Xu and Lee, 2015; <doi:10.1016/j.jeconom.2015.05.004>), and discrete numerical data models (Houndetoungan, 2025; <doi:10.48550/arXiv.2405.17290>). The network formation models include pair-wise regressions with degree heterogeneity (Graham, 2017; <doi:10.3982/ECTA12679>) and exponential random graph models (Mele, 2017; <doi:10.3982/ECTA10400>).