Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
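Below is a minimal sketch of calling this API from Python's standard library. The base URL here is a placeholder for whichever instance you query, the exact JSON shape of the response body is not documented above, and the pagination header names are not specified, so the example simply decodes the body and prints every response header.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://example.org"  # placeholder: replace with the instance you use

def search_packages(query, page=1, limit=20):
    # Build /api/packages?search=...&page=...&limit=... as described above.
    params = urlencode({"search": query, "page": page, "limit": limit})
    with urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
        headers = dict(resp.headers)  # pagination info is returned in these headers
        results = json.load(resp)     # matching packages for this page
    return results, headers

results, headers = search_packages("gcc@10")
print(headers)   # inspect the pagination headers
print(results)   # inspect the matches for this page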
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Simulates categorical maps on actual geographical realms, starting from either empty landscapes or landscapes provided by the user (e.g. land use maps). It allows the user to tweak or create landscapes while retaining a high degree of control over their features, without the hassle of specifying each location attribute. In this it differs from other tools, which generate null or neutral landscapes in a theoretical space. The basic algorithm currently implemented uses a simple agent-style/cellular-automata growth model, with no rules (apart from areas of exclusion) and a von Neumann neighbourhood (four cells, aka Rook case). Outputs are raster datasets exportable to any common GIS format.
An implementation of algorithms described in Jewell and Witten (2017) <arXiv:1703.08644>.
The shiny package provides interactive web applications in R. The loon package is an interactive toolkit for open-ended, creative, and unscripted data exploration. The loon.shiny package can take loon widgets and display them in a shiny app.
Maximum likelihood estimation of log-binomial regression with special functionality when the MLE is on the boundary of the parameter space.
This package provides a bunch of algorithms based on linear programming for estimating, under the homogeneity hypothesis, RxC ecological contingency tables (or vote transition matrices) using mainly aggregate data (from voting units). References: Pavía and Romero (2024) <doi:10.1177/00491241221092725>. Pavía and Romero (2024) <doi:10.1093/jrsssa/qnae013>. Pavía (2023) <doi:10.1007/s43545-023-00658-y>. Pavía (2024) <doi:10.1080/0022250X.2024.2423943>. Pavía (2024) <doi:10.1177/07591063241277064>. Pavía and Penadés (2024). A bottom-up approach for ecological inference. Romero, Pavía, Martín and Romero (2020) <doi:10.1080/02664763.2020.1804842>. Acknowledgements: The authors wish to thank Consellería de Educación, Cultura, Universidades y Empleo, Generalitat Valenciana (grants AICO/2021/257, CIAICO/2023/031) and MICIU/AEI/10.13039/501100011033/FEDER, UE (grant PID2021-128228NB-I00) for supporting this research.
This package provides a stochastic, spatially-explicit, demo-genetic model simulating the spread and evolution of a plant pathogen in a heterogeneous landscape to assess resistance deployment strategies. It is based on a spatial geometry for describing the landscape and allocation of different cultivars, a dispersal kernel for the dissemination of the pathogen, and a SEIR ('Susceptible-Exposed-Infectious-Removed') structure with a discrete time step. It provides a useful tool to assess the performance of a wide range of deployment options with respect to their epidemiological, evolutionary and economic outcomes. Loup Rimbaud, Julien Papaïx, Jean-François Rey, Luke G Barrett, Peter H Thrall (2018) <doi:10.1371/journal.pcbi.1006067>.
This package provides a largish collection of example datasets, including several classics. Many of these datasets are well suited for regression, classification, and visualization.
Build powerful, linked-view dashboards in shiny applications. With a declarative, one-line setup, you can create bidirectional links between interactive components. When a user interacts with one element (e.g., clicking a map marker), all linked components (such as DT tables or other charts) instantly update. Supports leaflet maps, DT tables, plotly charts, and spatial data via sf objects out-of-the-box, with an extensible API for custom components.
Label-free bottom-up proteomics expression data is often affected by data heterogeneity and missing values. Normalization and missing value imputation are commonly used techniques to address these issues and make the dataset suitable for further downstream analysis. This package provides an optimal combination of normalization and imputation methods for the dataset. The package utilizes three normalization methods and three imputation methods. The statistical evaluation measures named pooled coefficient of variance, pooled estimate of variance and pooled median absolute deviation are used for selecting the best combination of normalization and imputation method for the given dataset. The user can also visualize the results by using various plots available in this package. The user can also perform differential expression analysis between two sample groups with the function included in this package. The three normalization methods, three imputation methods and three evaluation measures were chosen for this study based on the research papers published by Välikangas et al. (2016) <doi:10.1093/bib/bbw095>, Jin et al. (2021) <doi:10.1038/s41598-021-81279-4> and Srivastava et al. (2023) <doi:10.2174/1574893618666230223150253>. This work was published by Sakthivel et al. (2025) <doi:10.1021/acs.jproteome.4c00552>.
Conducts a cointegration test for high-dimensional vector autoregressions (VARs) of order k based on the large N,T asymptotics of Bykhovskaya and Gorin, 2022 (<doi:10.48550/arXiv.2202.07150>). The implemented test is a modification of the Johansen likelihood ratio test. In the absence of cointegration the test converges to the partial sum of the Airy-1 point process. This package contains simulated quantiles of the first ten partial sums of the Airy-1 point process that are precise up to the first three digits.
This package provides a function for classifying a landscape into different categories based on the Topographic Position Index (TPI) and slope. It offers two types of classifications: Slope Position Classification, and Landform Classification. The function internally calculates the TPI for the given landscape and then uses it along with the slope to perform the classification. Optionally, descriptive statistics for every class are calculated and plotted. The classifications are useful for identifying the position of a location on a slope and for identifying broader landform types.
Calculation of rectifying LTPD and AOQL plans for sampling inspection by variables which minimize mean inspection cost per lot of process average quality.
Set of tools for mapping of categorical response variables based on principal component analysis (pca) and multidimensional unfolding (mdu).
Random forests are a statistical learning method widely used in many areas of scientific research, essentially for their ability to learn complex relationships between input and output variables and their capacity to handle high-dimensional data. However, current random forest approaches are not flexible enough to handle longitudinal data. In this package, we propose a general approach of random forests for high-dimensional longitudinal data. It includes a flexible stochastic model which allows the covariance structure to vary over time. Furthermore, we introduce a new method which takes intra-individual covariance into consideration to build random forests. The method is fully detailed in Capitaine et al. (2020) <doi:10.1177/0962280220946080>, Random forests for high-dimensional longitudinal data.
Calculate mean statistics and leaf angle distribution type from measured leaf inclination angles. The LAD distribution is fitted using a two-parameter (mu, nu) Beta distribution and compared with six theoretical LAD distributions. Additional information is provided in Chianucci and Cesaretti (2022) <doi:10.1101/2022.10.28.513998>.
This package provides a collection of helper functions and illustrative datasets to support learning and teaching of data science with R. The package is designed as a companion to the book <https://book-data-science-r.netlify.app>, making key data science techniques accessible to individuals with minimal coding experience. Functions include tools for data partitioning, performance evaluation, and data transformations (e.g., z-score and min-max scaling). The included datasets are curated to highlight practical applications in data exploration, modeling, and multivariate analysis. An early inspiration for the package came from an ancient Persian idiom about "eating the liveR," symbolizing deep and immersive engagement with knowledge.
Imports a data frame containing a single time resolved laser ablation mass spectrometry analysis of a foraminifera (or other carbonate shell), then detects when the laser has burnt through the foraminifera test as a function of change in signal over time.
Computes all the bivariate correlations of a data frame and visualizes them using the visNetwork package. Several different types of correlation coefficients (Pearson's r, Spearman's rho, Kendall's tau, distance correlation, maximal information coefficient and equal-frequency discretization-based maximal normalized mutual information) are used according to the type of variable pair (quantitative vs categorical, quantitative vs quantitative, categorical vs categorical).
Transforms away factors with many levels prior to doing an OLS. Useful for estimating linear models with multiple group fixed effects, and for estimating linear models which use factors with many levels as pure control variables. See Gaure (2013) <doi:10.1016/j.csda.2013.03.024>. Includes support for instrumental variables, conditional F statistics for weak instruments, robust and multi-way clustered standard errors, as well as limited mobility bias correction (Gaure 2014 <doi:10.1002/sta4.68>). Since version 3.0, it provides dedicated functions to estimate Poisson models.
Lipid annotation in untargeted LC-MS lipidomics based on fragmentation rules. Alcoriza-Balaguer MI, Garcia-Canaveras JC, Lopez A, Conde I, Juan O, Carretero J, Lahoz A (2019) <doi:10.1021/acs.analchem.8b03409>.
This package contains different algorithms and construction methods for optimal Latin hypercube designs (LHDs) with flexible sizes. Our package is comprehensive since it is capable of generating maximin distance LHDs, maximum projection LHDs, and orthogonal and nearly orthogonal LHDs. Detailed comparisons and summary of all the algorithms and construction methods in this package can be found at Hongzhi Wang, Qian Xiao and Abhyuday Mandal (2021) <doi:10.48550/arXiv.2010.09154>. This package is particularly useful in the area of Design and Analysis of Experiments (DAE). More specifically, design of computer experiments.
This package provides a collection of tools for the calculation of free-water metabolism from in situ time series of dissolved oxygen, water temperature, and, optionally, additional environmental variables. LakeMetabolizer implements 5 different metabolism models with diverse statistical underpinnings: bookkeeping, ordinary least squares, maximum likelihood, Kalman filter, and Bayesian. Each of these 5 metabolism models can be combined with 1 of 7 models for computing the coefficient of gas exchange across the air-water interface (k). LakeMetabolizer also features a variety of supporting functions that compute conversions and implement calculations commonly applied to raw data prior to estimating metabolism (e.g., oxygen saturation and optical conversion models).
This package provides a system for fitting a logistic curve by Rhodes' method, as described in A. M. Gun, M. K. Gupta and B. Dasgupta (2019, ISBN: 81-87567-81-3).
Set of data science tools created by various members of the Long Term Ecological Research (LTER) community. These functions were initially written largely as standalone operations and have since been aggregated into this package.