Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
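For example, here is a minimal sketch of querying the API from R with the httr package; the base URL below is a placeholder for wherever this service is hosted.

    library(httr)

    # Placeholder base URL; substitute the address where this service is hosted.
    base_url <- "https://example.org"

    # Search for packages matching "hello", requesting the first page of 20 items.
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    # The package list is in the body; pagination details are in the response headers.
    content(resp)
    headers(resp)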
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for estimating times of common ancestry and molecular clock rates of evolution using a variety of evolutionary models, parametric and nonparametric bootstrap confidence intervals, methods for detecting outlier lineages, root-to-tip regression, and a statistical test for selecting molecular clock models. For more details see Volz and Frost (2017) <doi:10.1093/ve/vex025>.
For writing tables with custom formats in an Excel file ready to be distributed.
This package provides a novel and fast two-stage method for simultaneous multiple change point detection and variable selection for piecewise stationary autoregressive (PSAR) processes and linear regression models. It also performs variable selection for each autoregressive model and hence order selection.
Finding the best ANN structure for time series data analysis is a pressing need. This package finds the best-fitted ANN model based on forecasting accuracy, and it also determines the optimum size of the hidden layers after selecting the number of lags to include. The package has been developed using the algorithm of Paul and Garai (2021) <doi:10.1007/s00500-021-06087-4>.
This package provides a toolbox for comparing two data frames. It is defunct; the "versus" package is recommended instead.
Interface to TensorFlow Estimators <https://www.tensorflow.org/guide/estimator>, a high-level API that provides implementations of many different model types including linear models and deep neural networks.
Helps the user to build and register schema descriptions of disorganised (messy) tables. Disorganised tables are tables that are not in a topologically coherent form, where packages such as tidyr could be used for reshaping. The schema description documents the arrangement of input tables and is used to reshape them into a standardised (tidy) output format.
Allows users to analyze text and classify emotions such as happiness, sadness, anger, fear, and neutrality. It combines text preprocessing, TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction, and Random Forest classification to predict emotions and map them to corresponding emojis for enhanced sentiment visualization.
This package provides methods for representations (i.e. dimensionality reduction, preprocessing, feature extraction) of time series to support more accurate and effective time series data mining. Non-data-adaptive, data-adaptive, model-based, and data-dictated (clipped) representation methods are implemented, as are various normalisation methods (min-max, z-score, Box-Cox, Yeo-Johnson) and forecasting accuracy measures.
This package implements an algorithm for generating maps, known as tile maps, in which each region is represented by a single tile of the same shape and size. The algorithm was first proposed in "Generating Tile Maps" by Graham McNeill and Scott Hale (2017) <doi:10.1111/cgf.13200>. Functions allow users to generate, plot, and compare square or hexagon tile maps.
This package implements an algorithm for variable selection in high-dimensional linear regression using the "tilted correlation", a new way of measuring the contribution of each variable to the response which takes into account high correlations among the variables in a data-driven way.
Our method introduces mathematically well-defined measures for tightness of branches in a hierarchical tree. Statistical significance of the findings is determined, for all branches of the tree, by performing permutation tests, optionally with generalized Pareto p-value estimation.
Cluster analysis is one of the most fundamental problems in data science. We provide a variety of algorithms, from clustering to learning on the space of partitions. See Hennig, Meila, and Rocci (2016, ISBN:9781466551886) for a general exposition of cluster analysis.
Calculate Expert Team on Climate Change Detection and Indices (ETCCDI) climate indices from daily or hourly temperature and precipitation data. Provides flexible data handling.
Bringing business and financial analysis to the tidyverse. The tidyquant package provides a convenient wrapper to various xts, zoo, quantmod, TTR, and PerformanceAnalytics package functions and returns the objects in the tidy tibble format. The main advantage is being able to use quantitative functions with the tidyverse functions including purrr, dplyr, tidyr, ggplot2, lubridate, etc. See the tidyquant website for more information, documentation, and examples.
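As an illustration, here is a minimal sketch of pulling prices into a tibble and piping them through tidyverse verbs; the ticker and start date are arbitrary choices, and an internet connection is assumed.

    library(tidyquant)
    library(dplyr)

    # Fetch daily stock prices as a tidy tibble (ticker and start date are arbitrary).
    prices <- tq_get("AAPL", get = "stock.prices", from = "2020-01-01")

    # Ordinary dplyr verbs work directly on the result.
    prices %>%
      group_by(symbol) %>%
      summarise(mean_close = mean(close, na.rm = TRUE))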
This package implements harmonic analysis of tidal and sea-level data. Over 400 harmonic tidal constituents can be estimated, all with daily nodal corrections. Time-varying mean sea-levels can also be used.
This package provides tools that stem and lemmatize text. Stemming is a process that removes endings such as affixes. Lemmatization is the process of grouping inflected forms together as a single base form.
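For instance, a minimal sketch using the package's stemming and lemmatization helpers, assuming the stem_words() and lemmatize_words() entry points:

    library(textstem)

    words <- c("running", "ran", "runs", "studies")

    # Stemming strips affixes and can produce non-words.
    stem_words(words)

    # Lemmatization maps inflected forms to a dictionary base form.
    lemmatize_words(words)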
The classical two-sample t-test works well for normally distributed data or data with large sample sizes. The tcfu() and tt() tests implemented in this package provide better type-I-error control with more accurate power when testing the equality of two-sample means for skewed populations having unequal variances. These tests are especially useful when the sample sizes are moderate. The tcfu() test uses the Cornish-Fisher expansion to achieve a better approximation to the true percentiles. The tt() test provides transformations of Welch's t-statistic so that the sampling distribution becomes more symmetric. For more technical details, please refer to Zhang (2019) <http://hdl.handle.net/2097/40235>.
Estimation of transition probabilities for the illness-death model and/or the three-state progressive model.
This package provides a novel feature-wise normalization method based on a zero-inflated negative binomial model. This method assumes that the effects of sequencing depth vary for each taxon on their mean and also incorporates a rational link of zero probability and taxon dispersion as a function of sequencing depth. Ziyue Wang, Dillon Lloyd, Shanshan Zhao, Alison Motsinger-Reif (2023) <doi:10.1101/2023.10.31.563648>.
An integrated R interface to several United States Census Bureau APIs (<https://www.census.gov/data/developers/data-sets.html>) and the US Census Bureau's geographic boundary files. Allows R users to return Census and ACS data as tidyverse-ready data frames, and optionally returns a list-column with feature geometry for mapping and spatial analysis.
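For example, a minimal sketch of pulling an ACS variable as a tidy data frame with geometry; the variable code and state are arbitrary, and a Census API key is assumed to have been registered with census_api_key().

    library(tidycensus)

    # Assumes a Census API key has already been set, e.g. census_api_key("...").

    # Median household income (B19013_001) by county in Texas, with boundary geometry.
    tx_income <- get_acs(geography = "county",
                         variables = "B19013_001",
                         state = "TX",
                         geometry = TRUE)

    head(tx_income)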
This package provides threshold sweep methods for Qualitative Comparative Analysis (QCA). Implements Condition Threshold Sweep-Single (CTS-S), Condition Threshold Sweep-Multiple (CTS-M), Outcome Threshold Sweep (OTS), and Dual Threshold Sweep (DTS) for systematic exploration of threshold calibration effects on crisp-set QCA results. These methods extend traditional robustness approaches by treating threshold variation as an exploratory tool for discovering causal structures. Built on top of the QCA package by Dusa (2019) <doi:10.1007/978-3-319-75668-4>, with function arguments following QCA conventions. Based on set-theoretic methods by Ragin (2008) <doi:10.7208/chicago/9780226702797.001.0001> and established robustness protocols by Rubinson et al. (2019) <doi:10.1177/00491241211036158>.
Partially penalized versions of specific transformation models implemented in package mlt. Available models include a fully parametric version of the Cox model, other parametric survival models (Weibull, etc.), models for binary and ordered categorical variables, normal and transformed-normal (Box-Cox type) linear models, and continuous outcome logistic regression. Hyperparameter tuning is facilitated through model-based optimization functionalities from package mlrMBO. The accompanying vignette describes the methodology used in tramnet in detail. Transformation models and model-based optimization are described in Hothorn et al. (2019) <doi:10.1111/sjos.12291> and Bischl et al. (2016) <doi:10.48550/arXiv.1703.03373>, respectively.
Census and administrative data in South Korea are a basic source of quantitative and mixed-methods research for social and urban scientists. This package provides an sf-based (Pebesma et al., 2024 <doi:10.32614/CRAN.package.sf>) standardized workflow built on direct open API access to the major census and administrative data sources and pre-generated files in South Korea.