Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
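For example, a minimal query from R using the httr package (the host name below is a placeholder; substitute this site's address):

    library(httr)
    # Search for packages matching "hello"; pagination details come back in the headers.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages for this page
    headers(resp)   # pagination information (e.g. total number of pages)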
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Fast, flexible and user-friendly tools for distribution comparison through direct density ratio estimation. The estimated density ratio can be used for covariate shift adjustment, outlier detection, change-point detection, classification and evaluation of synthetic data quality. The package implements multiple non-parametric estimation techniques (unconstrained least-squares importance fitting, ulsif(); Kullback-Leibler importance estimation procedure, kliep(); spectral density ratio estimation, spectral(); kernel mean matching, kmm(); and least-squares hetero-distributional subspace search, lhss()), with automatic tuning of hyperparameters. Helper functions are available for two-sample testing and for visualizing the density ratios. For a general overview of density ratio estimation, see Sugiyama et al. (2012) <doi:10.1017/CBO9781139035613>; see the help files for references on the specific estimation techniques.
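A minimal usage sketch, assuming the package loads as densityratio and that the estimators take the numerator and denominator samples as their first two arguments (these names and the plot/summary methods are assumptions; consult the help files):

    library(densityratio)
    x_nu <- rnorm(200, mean = 1)   # sample from the numerator distribution
    x_de <- rnorm(200, mean = 0)   # sample from the denominator distribution
    fit <- ulsif(x_nu, x_de)       # unconstrained least-squares importance fitting
    summary(fit)                   # fit summary / two-sample comparison
    plot(fit)                      # visualize the estimated density ratio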
S4 distribution classes based on package distr for distributions from packages fBasics and fGarch.
Infer progression of circadian rhythms in transcriptome data in which samples are not labeled with time of day and coverage of the circadian cycle may be incomplete. See Shilts et al. (2018) <doi:10.7717/peerj.4327>.
This package provides the findRFM function, which generates RFM scores on a 1-5 point scale for customer transaction data. The function consumes a data frame with Transaction Number, Customer ID, Date of Purchase (in date format) and Amount of Purchase as its attributes, and returns a data frame with RFM scores for the sales data.
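A minimal sketch of the expected input (the column names and the call below are illustrative assumptions; see the function's documentation for the exact interface):

    # Transaction-level data with the four required attributes.
    sales <- data.frame(
      transaction_no = 1:6,
      customer_id    = c("A", "A", "B", "B", "C", "C"),
      purchase_date  = as.Date("2024-01-01") + c(0, 30, 5, 60, 10, 90),
      amount         = c(100, 250, 80, 40, 500, 120)
    )
    rfm <- findRFM(sales)   # returns a data frame of RFM scores on a 1-5 scale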
Implements some deep learning architectures and neural network algorithms, including backpropagation (BP), restricted Boltzmann machines (RBM), deep belief networks (DBN), and deep autoencoders.
Shiny application that performs bifurcation and phase-plane analysis of systems of ordinary differential equations. The package allows for computation of equilibrium curves as a function of a single free parameter, detection of transcritical, saddle-node and Hopf bifurcation points along these curves, and computation of curves representing these transcritical, saddle-node and Hopf bifurcation points as a function of two free parameters. The Shiny-based GUI allows visualization of the results in both 2D and 3D plots. The implemented methods for solution localisation and curve continuation are based on the book "Elements of Applied Bifurcation Theory" (Kuznetsov, Y. A., 1995; ISBN: 0-387-94418-4).
This package provides functions that facilitate the import and analysis of SNP (single nucleotide polymorphism) and silicodart (presence/absence) data. The main focus is on data generated by DArT (Diversity Arrays Technology); however, data from other sequencing platforms can be used once SNP or related fragment presence/absence data from any source has been imported. Genetic datasets are stored in a derived genlight format (package adegenet), which allows for very compact storage of data and metadata. Functions are available for importing and exporting SNP and silicodart data, and for reporting on and filtering by various criteria (e.g. CallRate, heterozygosity, reproducibility, maximum allele frequency). Additional functions are available for visualization (e.g. Principal Coordinate Analysis) and for creating a spatial representation using maps. dartR also supports analysis via third-party software packages such as newhybrid, structure, NeEstimator and blast. Since version 2.0.3, simulation functions are also included that allow forward simulation of SNP dynamics under different population and evolutionary dynamics. Comprehensive tutorials and support can be found at the GitHub repository: github.com/green-striped-gecko/dartR/. If you want to cite dartR, you can find the citation information by typing citation('dartR') in the console.
This package implements the deflist class, a read-only list-like object that accesses its elements via a function. The deflist class can be used to model deferred access to data or computations by routing indexed list access to a function. This approach is particularly useful when sequential list-like access to data is required but holding all the data in memory at once is not feasible. The package also provides utilities for memoisation and caching to optimize access to frequently requested elements.
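A minimal sketch of the idea, assuming a constructor of the form deflist(fun, len) (the exact interface may differ; see the package documentation):

    library(deflist)
    # Each element is produced on demand by a function of its index,
    # so the full list never has to be materialised in memory.
    dl <- deflist(function(i) i^2, len = 1000)
    dl[[10]]     # computed only when accessed; returns 100
    length(dl)   # 1000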
This package provides an implementation of a mixture of hidden Markov models (HMMs) for discrete sequence data in the Discrete Bayesian HMM Clustering (DBHC) algorithm. The DBHC algorithm is an HMM Clustering algorithm that finds a mixture of discrete-output HMMs while using heuristics based on Bayesian Information Criterion (BIC) to search for the optimal number of HMM states and the optimal number of clusters.
Written to help undergraduate as well as graduate students get started with R for basic econometrics without the need to import specific functions and datasets from many different sources. Primarily, the package is meant to accompany the German textbook by Auer, L.v., Hoffmann, S., Kranz, T. (2024, ISBN: 978-3-662-68263-0), whose exercises cover all the topics of the textbook by Auer, L.v. (2023, ISBN: 978-3-658-42699-6).
As a distributed imputation strategy, the Distributed Full-Information Multiple Imputation method is developed to impute missing response variables in distributed linear regression. The philosophy of the package is described in Guo (2025) <doi:10.1038/s41598-025-93333-6>.
Creates demographic tables in R that combine contingency tables for categorical variables with means and standard deviations for continuous variables. The t-test, chi-squared test and Fisher's exact test are used to calculate p-values between two groups. Standardized mean differences are reported with 95% confidence intervals, and the resulting table can be written to a document file.
This package creates discretised versions of continuous distribution functions by mapping continuous values to an underlying discrete grid, based on a (uniform) frequency of discretisation, a valid discretisation point, and an integration range. For a review of discretisation methods, see Chakraborty (2015) <doi:10.1186/s40488-015-0028-6>.
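As a generic base-R illustration of the idea (not this package's API), a standard normal can be mapped onto a uniform grid by assigning each grid point the probability mass of the interval around it:

    grid <- seq(-4, 4, by = 0.5)                       # discretisation points at frequency 0.5
    pmf  <- pnorm(grid + 0.25) - pnorm(grid - 0.25)    # mass of the interval around each point
    pmf  <- pmf / sum(pmf)                             # renormalise over the integration range [-4, 4]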
Inference functionalities for distributed-lag linear structural equation models (DLSEMs). DLSEMs are Markovian structural causal models where each factor of the joint probability distribution is a distributed-lag linear regression with constrained lag shapes (Magrini, 2018 <doi:10.2478/bile-2018-0012>; Magrini et al., 2019 <doi:10.1007/s11135-019-00855-z>). DLSEMs account for temporal delays in the dependence relationships among the variables through a single parameter per covariate, thus allowing dynamic causal inference to be performed in a feasible fashion. Endpoint-constrained quadratic, quadratic decreasing, linearly decreasing and gamma lag shapes are available.
Models for analyzing site occupancy and count data with detection error, including single-visit based models (Lele et al. 2012 <doi:10.1093/jpe/rtr042>, Moreno et al. 2010 <doi:10.1890/09-1073.1>, Solymos et al. 2012 <doi:10.1002/env.1149>, Denes et al. 2016 <doi:10.1111/1365-2664.12818>), conditional distance sampling and time-removal models (QPAD) (Solymos et al. 2013 <doi:10.1111/2041-210X.12106>, Solymos et al. 2018 <doi:10.1650/CONDOR-18-32.1>), and single bin QPAD (SQPAD) models (Lele & Solymos 2025 <doi:10.1093/ornithapp/duaf078>). Package development was supported by the Alberta Biodiversity Monitoring Institute and the Boreal Avian Modelling Project.
This package provides a wrapper on top of the Domino Data Python SDK library. It lets you query and access Domino Data Sources directly from your R environment. Under the hood, the Domino Data R SDK leverages the API provided by the Domino Data Python SDK, which must be installed as a prerequisite. Domino is a platform that makes it easy to run your code on scalable hardware, with integrated version control and collaboration features designed for analytical workflows. See <https://docs.dominodatalab.com/en/latest/api_guide/140b48/domino-data-api> for more information.
This is the companion package to the Data Visualization Geometries Encyclopedia, providing seamless access to the associated data.
Local linear hazard estimator and its multiplicative bias correction, including three bandwidth selection methods: best one-sided cross-validation, double one-sided cross-validation, and standard cross-validation.
The Desirable Dietary Pattern (DDP)/PPH score measures the variety of food consumption. The (weighted) score is calculated based on the type of food. This package is intended to calculate the DDP/PPH score faster than the traditional method of manual calculation described by BKP (2017) <http://bkp.pertanian.go.id/storage/app/uploads/public/5bf/ca9/06b/5bfca906bc654274163456.pdf>, and more simply than the nutrition survey <http://www.nutrisurvey.de>. The database used to create the weights and baseline values is the 2017 Indonesian national survey.
This package provides tools for estimating the Remaining Useful Life (RUL) of degrading systems using linear mixed-effects models and creating a health index. It supports both univariate and multivariate degradation signals. For multivariate inputs, the signals are merged into a univariate health index prior to modeling. Linear and exponential degradation trajectories are supported (the latter using a log transformation). Remaining Useful Life (RUL) distributions are estimated using Bayesian updating for new units, enabling on-site predictive maintenance. Based on the methodology of Liu and Huang (2016) <doi:10.1109/TASE.2014.2349733>.
Employ time-calibrated phylogenies and trait/range data to test for differences in diversification rates over evolutionary time. Extend the STRAPP test from BAMMtools::traitDependentBAMM() to any time step along phylogenies. See inst/COPYRIGHTS for details on third-party code.
Dual Scaling, developed by Professor Shizuhiko Nishisato (1994, ISBN: 0-9691785-3-6), is a fundamental technique in multivariate analysis used for data scaling and correspondence analysis. Its utility lies in its ability to represent multidimensional data in a lower-dimensional space, making it easier to visualize and understand underlying patterns in complex data. The technique has been implemented to handle various types of data, including Contingency and Frequency data (CF), Multiple-Choice data (MC), Sorting data (SO), Paired-Comparison data (PC), and Rank-Order data (RO). This gives users a powerful tool to explore relationships between variables and observations in fields ranging from sociology to ecology, enabling deeper and more efficient analysis of multivariate datasets.
Provides code for temporal disaggregation as found in Wei and Stram (1990, <doi:10.1111/j.2517-6161.1990.tb01799.x>) and Hodgess and Wei (1996, "Temporal Disaggregation of Time Series" in Statistical Science I, Nova Publishing). The disaggregation models have different orders of the moving average component. These are based on ARIMA models rather than on differencing or similar time-series approaches.
Detection and attribution of climate change using methods including optimal fingerprinting via generalized total least squares or an estimating equation approach (Li et al., 2025, <doi:10.1175/JCLI-D-24-0193.1>; Ma et al., 2023, <doi:10.1175/JCLI-D-22-0681.1>). Provides shrinkage estimators for the covariance matrix following Ledoit and Wolf (2004, <doi:10.1016/S0047-259X(03)00096-4>) and Ledoit and Wolf (2017, <doi:10.2139/ssrn.2383361>).