Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
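A minimal sketch of calling this endpoint from R, assuming the curl R package; "example.org" is a stand-in for this site's actual host:

library(curl)
res <- curl_fetch_memory("https://example.org/api/packages?search=hello&page=1&limit=20")
cat(rawToChar(res$content))   # JSON body with the matching packages
parse_headers(res$headers)    # response headers, including the pagination info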
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Post-processing functionality for simulated snow profiles is often developed by the snow and avalanche community in Python. This package aims to make some of these tools accessible to R users. Currently integrated modules contain functions to calculate dry snow layer instabilities in support of avalanche hazard assessments, following the publications of Richter, Schweizer, Rotach, and Van Herwijnen (2019) <doi:10.5194/tc-13-3353-2019> and Mayer, Van Herwijnen, Techel, and Schweizer (2022) <doi:10.5194/tc-2022-34>.
This package provides a graphical user interface for calculating the rainfall-runoff relation using the Natural Resources Conservation Service Curve Number method (NRCS-CN method), including the modifications by Hawkins et al. (2002) to the initial abstraction. The GUI follows the programming logic of previously published software (Hernandez-Guzman et al., 2011) <doi:10.1016/j.envsoft.2011.07.006>. It is a raster-based GIS tool that outputs runoff estimates from land use/land cover and hydrologic soil group maps. The package has been published in the Journal of Hydroinformatics (Hernandez-Guzman et al., 2021) <doi:10.2166/hydro.2020.087> and is under constant development at the Institute for Natural Resources Research (INIRENA) of the Universidad Michoacana de San Nicolas de Hidalgo; it represents a collaborative effort between the Hydro-Geomatic Lab (INIRENA) and the Environmental Management Lab (CIAD, A.C.).
This package provides a set of consistent, opinionated functions to quickly check function arguments, coerce them to the desired configuration, or deliver informative error messages when that is not possible.
Routine that allows the user to run several goodness-of-fit tests. It also combines the tests and returns a properly adjusted family-wise p-value. Details can be found in <arXiv:2007.04727>.
Data on standard load profiles from the German Association of Energy and Water Industries (BDEW Bundesverband der Energie- und Wasserwirtschaft e.V.) in a tidy format. The data and methodology are described in VDEW (1999), "Repräsentative VDEW-Lastprofile", <https://www.bdew.de/media/documents/1999_Repraesentative-VDEW-Lastprofile.pdf>. The package also offers an interface for generating a standard load profile over a user-defined period. For the algorithm, see VDEW (2000), "Anwendung der Repräsentativen VDEW-Lastprofile step-by-step", <https://www.bdew.de/media/documents/2000131_Anwendung-repraesentativen_Lastprofile-Step-by-step.pdf>.
This package provides a framework for fitting space- and time-varying coefficient models (varying parameter models) using a Generalized Additive Model (GAM) with smooths. The framework stresses the need to investigate the presence and nature of any space-time dependencies in the data. It proposes a workflow that creates and refines an initial space-time GAM and includes tools to create and evaluate multiple model forms. The workflow sequence is to: i) prepare the data by lengthening it so that each observation has single location and time variables; ii) create all possible space and/or time models, in which each predictor is specified in different ways in smooths; iii) evaluate each model via its AIC value and pick the best one; iv) create the final model; v) calculate the varying coefficient estimates to quantify how the relationships between the target and predictor variables vary over space, time, or space-time; vi) create maps, time series plots, etc. The number of knots used in each smooth can be specified directly or increased iteratively. This is illustrated with a climate point dataset of the dry rain forest in South America. The package builds on work in Comber et al. (2024) <doi:10.1080/13658816.2023.2270285> and Comber et al. (2024) <doi:10.3390/ijgi13120459>.
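As an illustrative sketch in plain mgcv (not this package's own interface), a space-time varying coefficient of a hypothetical predictor x can be specified with a tensor-product smooth; dat, lon, lat, and t are assumed names:

library(mgcv)
# The effect of x varies smoothly over space (lon, lat) and time (t):
m <- gam(y ~ te(lon, lat, t, d = c(2, 1), by = x), data = dat)
AIC(m)   # candidate model forms are compared via AIC, as in step iii)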
This package provides functions to generate K-fold cross-validation (CV) folds and CV test error estimates that take into account how a survey dataset's sampling design was constructed (SRS, clustering, stratification, and/or unequal sampling weights). You can input linear and logistic regression models, along with data and a type of survey design, to get output that helps you determine which model best fits the data using K-fold cross-validation. Our paper, "K-Fold Cross-Validation for Complex Sample Surveys" by Wieczorek, Guerin, and McMahon (2022) <doi:10.1002/sta4.454>, explains why adapting how folds are drawn to the survey design is useful.
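A conceptual sketch of design-respecting folds (not this package's API): folds are assigned within strata, and whole clusters stay together in one fold; dat, stratum, and cluster_id are hypothetical names:

K <- 5
folds <- ave(seq_len(nrow(dat)), dat$stratum, FUN = function(i) {
  cl <- unique(dat$cluster_id[i])           # clusters within this stratum
  f  <- sample(rep_len(1:K, length(cl)))    # spread clusters across the K folds
  f[match(dat$cluster_id[i], cl)]           # map each row to its cluster's fold
})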
This package provides a workflow based on scTenifoldNet to perform in-silico knockout experiments using single-cell RNA sequencing (scRNA-seq) data from wild-type (WT) control samples as input. First, the package constructs a single-cell gene regulatory network (scGRN) and knocks out a target gene from the adjacency matrix of the WT scGRN by setting the gene's outdegree edges to zero. Then, it compares the knocked-out scGRN with the WT scGRN to identify differentially regulated genes, called virtual-knockout perturbed genes, which are used to assess the impact of the gene knockout and reveal the gene's function in the analyzed cells.
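Conceptually (this is not the package's API), the knockout step amounts to zeroing one row of the adjacency matrix, where rows are regulators and columns are targets; A_wt and GENE_X are hypothetical names:

A_ko <- A_wt             # copy the wild-type scGRN adjacency matrix
A_ko["GENE_X", ] <- 0    # delete all outgoing (outdegree) edges of the target gene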
This package provides a pipeline for estimating the average treatment effect via semi-supervised learning. Outcome regression is fit with cross-fitting using various machine learning methods or a user-customized function. Doubly robust ATE estimation leverages both labeled and unlabeled data under a semi-supervised missing-data framework. For more details see Hou et al. (2021) <doi:10.48550/arxiv.2110.12336>. A detailed vignette is included.
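For reference, the standard doubly robust (AIPW) form that such estimators build on, with outcome regressions $\hat m_a$ and propensity score $\hat\pi$ (the semi-supervised variant extends this to unlabeled data), is

$$\hat\tau = \frac{1}{n}\sum_{i=1}^{n}\left[\hat m_1(X_i)-\hat m_0(X_i)+\frac{A_i\,(Y_i-\hat m_1(X_i))}{\hat\pi(X_i)}-\frac{(1-A_i)\,(Y_i-\hat m_0(X_i))}{1-\hat\pi(X_i)}\right].$$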
This package implements estimators for structured covariance matrices in the presence of pairwise and spatial covariates. Metodiev, Perrot-Dockès, Ouadah, Fosdick, Robin, Latouche & Raftery (2025) <doi:10.48550/arXiv.2411.04520>.
This package provides a systematic bioinformatics tool to perform single-sample mutation-based pathway analysis by integrating somatic mutation data with the Protein-Protein Interaction (PPI) network. In this method, we use local and global weighting strategies to evaluate the effects of mutations on network genes according to the network topology, and then calculate the mutation-based pathway enrichment score (ssMutPES) to reflect the accumulated effect of mutations in each pathway. Subsequently, the ssMutPES profiles are used for unsupervised spectral clustering to identify cancer subtypes.
Sample surveys use scientific methods to draw inferences about population parameters by observing a representative part of the population, called a sample. SRSWOR (Simple Random Sampling Without Replacement) is one of the most widely used probability sampling designs, wherein every unit has an equal chance of being selected and units are not repeated. This function draws multiple SRSWOR samples from a finite population and estimates the population total using the Horvitz-Thompson (HT), ratio, and regression estimators. Repeated simulations (e.g., 500 times) are used to assess and compare the estimators using metrics such as percent relative bias (%RB) and percent relative root mean square error (%RRMSE). For details on the sampling methodology, see Cochran (1977), "Sampling Techniques" <https://archive.org/details/samplingtechniqu0000coch_t4x6>.
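A conceptual sketch of the simulation loop (not this function's interface), using the Horvitz-Thompson estimator of the total under SRSWOR; the population y is hypothetical:

N <- 1000; n <- 100; R <- 500
y <- rgamma(N, shape = 2)                              # hypothetical finite population
est <- replicate(R, { s <- sample(N, n); (N / n) * sum(y[s]) })
pRB    <- 100 * (mean(est) - sum(y)) / sum(y)          # percent relative bias
pRRMSE <- 100 * sqrt(mean((est - sum(y))^2)) / sum(y)  # percent relative RMSE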
Analysis of multi-environment data from plant breeding experiments following the analyses described in Malosetti, Ribaut, and van Eeuwijk (2013) <doi:10.3389/fphys.2013.00044>. One of a series of statistical genetics packages for streamlining the analysis of typical plant breeding experiments developed by Biometris. Some functions are designed to be used in conjunction with the R package asreml for the ASReml software, which can be obtained upon purchase from VSN International (<https://vsni.co.uk/software/asreml-r/>).
This package provides the SMOTE with Boosting (SMOTEWB) algorithm; see F. Sağlam and M. A. Cengiz (2022) <doi:10.1016/j.eswa.2022.117023>. It is a SMOTE-based resampling technique that creates synthetic data on the links between nearest neighbors. SMOTEWB uses boosting weights to determine where to generate new samples and automatically decides the number of neighbors for each sample. It is robust to noise and outperforms most of the alternatives according to the Matthews correlation coefficient metric. Alternative resampling methods are also available in the package.
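The basic SMOTE interpolation step that SMOTEWB builds on can be sketched as follows (a conceptual fragment, not the package's API); x is a minority-class sample and x_nn one of its nearest minority-class neighbors:

x     <- c(1.0, 2.0)                  # hypothetical minority-class sample
x_nn  <- c(1.5, 2.5)                  # one of its nearest minority-class neighbors
synth <- x + runif(1) * (x_nn - x)    # synthetic point on the connecting segment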
An index is created using a mathematical model that transforms multi-dimensional variables into a single value. These variables are often correlated, and while PCA-based indices can address the issue of multicollinearity, they typically do not account for survey weights, which can lead to inaccurate rankings of survey units such as households, districts, or states. To resolve this, the current package facilitates the development of a principal component analysis-based composite index by incorporating survey weights for each sample observation, ensuring the generation of a survey-weighted principal component-based normalized composite index. Additionally, the package provides a normalized principal component-based composite index and ranks the sample observations based on the values of the composite indices. For method details see Skinner, C. J., Holmes, D. J. and Smith, T. M. F. (1986) <doi:10.1080/01621459.1986.10478336> and Singh, D., Basak, P., Kumar, R. and Ahmad, T. (2023) <doi:10.3389/fams.2023.1274530>.
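A conceptual sketch of a survey-weighted first principal component (not this package's API); X is a hypothetical data matrix and w the survey weights:

mu  <- apply(X, 2, weighted.mean, w = w)        # survey-weighted column means
Xc  <- sweep(X, 2, mu)                          # center at the weighted means
S   <- crossprod(Xc * sqrt(w / sum(w)))         # survey-weighted covariance matrix
pc1 <- Xc %*% eigen(S)$vectors[, 1]             # first weighted principal component
idx <- (pc1 - min(pc1)) / (max(pc1) - min(pc1)) # normalized composite index in [0, 1]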
This package provides a suite of functions that allow a full, fast, and efficient Bayesian treatment of the Bradley-Terry model. Prior assumptions about the model parameters can be encoded through a multivariate normal prior distribution. Inference is performed using a latent variable representation of the model.
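For reference, the Bradley-Terry model gives each item $i$ a strength parameter $\lambda_i$ (on which the multivariate normal prior is placed), with

$$P(i \text{ beats } j) = \frac{e^{\lambda_i}}{e^{\lambda_i} + e^{\lambda_j}}.$$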
This package provides a tool for producing synthetic versions of microdata containing confidential information so that they are safe to be released to users for exploratory analysis. The key objective of generating synthetic data is to replace sensitive original values with synthetic ones causing minimal distortion of the statistical information contained in the data set. Variables, which can be categorical or continuous, are synthesised one by one using sequential modelling. Replacements are generated by drawing from conditional distributions fitted to the original data using parametric or classification and regression tree (CART) models. Data are synthesised via the function syn(), which can be largely automated if default settings are used, or controlled with methods defined by the user. Optional parameters can be used to influence the disclosure risk and the analytical quality of the synthesised data. For a description of the implemented method see Nowok, Raab and Dibben (2016) <doi:10.18637/jss.v074.i11>. Functions to assess identity and attribute disclosure for the original and for the synthetic data are included in the package, and their use is illustrated in a vignette on disclosure (Practical Privacy Metrics for Synthetic Data).
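Judging by the JSS reference, this describes the synthpop package; a minimal sketch of its central call (SD2011 is an example dataset shipped with synthpop):

library(synthpop)
vars <- c("sex", "age", "edu", "income")
sds  <- syn(SD2011[, vars])   # sequential synthesis with default methods
summary(sds)                  # summarizes the synthesis object, not the raw data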
This package provides functions to format and summarise already computed outputs from commonly used statistical and psychometric functions into compact, single-row tables and simple graphs, with utilities to export results to CSV, Word, and Excel formats. The package does not implement new statistical methods or estimation procedures; instead, it organises and presents results obtained from existing functions such as psych::describe(), psych::alpha(), stats::t.test(), and gtsummary::tbl_summary() to streamline reporting workflows in clinical and psychological research.
This gadget allows you to use the recipes package from tidymodels to carry out data preprocessing tasks interactively. Build your recipe by dragging the variables, visually analyze your data to decide which steps to use, add those steps, and preprocess your data.
Simple and flexible quizzes in shiny. Easily create quizzes from various pre-built question and choice types, or create your own using the htmltools and shiny packages as building blocks. Integrates with larger shiny applications. Ideal for non-web-developers such as educators, data scientists, and anyone who wants to assess responses interactively in a small form factor.
This package provides a fast and flexible set of tools for large-scale estimation. It features many stochastic gradient methods, built-in models, visualization tools, automated hyperparameter tuning, model checking, interval estimation, and convergence diagnostics.
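For reference, the basic explicit stochastic gradient update that these methods share, for parameters $\theta$, learning rate $\gamma_n$, and per-observation loss $\ell$, is

$$\theta_{n+1} = \theta_n - \gamma_n\, \nabla_\theta\, \ell(\theta_n;\, x_n, y_n).$$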
This package provides an abstraction for managing, installing, and switching between sets of installed R packages. This allows users to maintain multiple package libraries simultaneously, e.g. to maintain strict, package-version-specific reproducibility of many analyses, or work within a development/production release paradigm. Introduces a generalized package installation process which supports multiple repository and non-repository sources and tracks package provenance.
Uses parametric and nonparametric methods to quantify the proportion of the estimated selection bias (SB) explained by each observed confounder when estimating propensity score weighted treatment effects. Parast, L. and Griffin, B. A. (2020). "Quantifying the Bias due to Observed Individual Confounders in Causal Treatment Effect Estimates". Statistics in Medicine, 39(18): 2447-2476 <doi:10.1002/sim.8549>.
Routines for creating, manipulating, and performing Bayesian inference about Gaussian processes in one and two dimensions using the Fourier basis approximation: simulation and plotting of processes, calculation of coefficient variances, calculation of process density, coefficient proposals (for use in MCMC). It uses R environments to store GP objects as references/pointers.
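The reference semantics mentioned in the last sentence can be sketched as follows: assigning an environment does not copy it, so every handle sees the same GP state.

gp <- new.env()
gp$coefs <- rnorm(10)            # hypothetical Fourier-basis coefficients
gp2 <- gp                        # a second reference to the same environment, not a copy
gp2$coefs[1] <- 0
identical(gp$coefs, gp2$coefs)   # TRUE: the change is visible through both handles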