Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
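For example, a minimal sketch of calling the API from R using the httr package; the host below is a placeholder and should be replaced with this site's actual address:

    library(httr)
    # Placeholder host; substitute the address of this site.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)    # the matching packages for this page
    headers(resp)    # pagination information, e.g. the number of pages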
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Develop and evaluate treatment rules based on: (1) the standard indirect approach of split-regression, which fits regressions separately in both treatment groups and assigns an individual to the treatment option under which the predicted outcome is more desirable; (2) the direct approach of outcome-weighted learning proposed by Yingqi Zhao, Donglin Zeng, A. John Rush, and Michael Kosorok (2012) <doi:10.1080/01621459.2012.695674>; (3) the direct approach, which we refer to as direct-interactions, proposed by Shuai Chen, Lu Tian, Tianxi Cai, and Menggang Yu (2017) <doi:10.1111/biom.12676>. Please see the vignette for a walk-through of how to start with an observational dataset whose design is understood scientifically and end up with a treatment rule that is trustworthy statistically, along with an estimate of the rule's benefit in an independent sample.
An RStudio addin for teaching and learning data manipulation using the dplyr package. You can learn each step of data manipulation by clicking your mouse, without coding. You can get the resulting data (as a tibble) and the code for the data manipulation.
An easy-to-use yet powerful system for plotting effect sizes for grouped data. Various types of effect size can be estimated, then plotted together with a representation of the original data. Select from many possible data representations (box plots, violin plots, raw data points, etc.) and combine as desired. Durga plots are implemented in base R, so they are compatible with base R methods for combining plots, such as layout(). See Khan & McLean (2023) <doi:10.1101/2023.02.06.526960>.
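For instance, a minimal base R sketch of combining two panels with layout(); the plotted data here are arbitrary, and Durga's own plotting calls could occupy either panel in the same way:

    # Two panels side by side via layout(); any base R plot, including a
    # Durga plot, can be drawn into either panel.
    layout(matrix(1:2, nrow = 1))
    plot(iris$Sepal.Length, iris$Sepal.Width)
    hist(iris$Petal.Length)
    layout(1)    # reset to a single panel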
This package performs Bayesian model averaging for capture-recapture. It includes code to stratify records, check the strata for suitable overlap to be used for capture-recapture, and functions to plot the estimated population size.
The Data Driven I-V Feature Extraction is used to extract current-voltage (I-V) features from I-V curves. I-V curves indicate the relationship between current and voltage for a solar cell or photovoltaic (PV) module. I-V features such as the maximum power point (Pmp), shunt resistance (Rsh), series resistance (Rs), short circuit current (Isc), open circuit voltage (Voc), fill factor (FF), current at maximum power (Imp), and voltage at maximum power (Vmp) contain important information about the performance of PV modules. The traditional method uses the single diode model to model I-V curves and extract I-V features. This package does not use the diode model; instead it uses a data-driven method which selects different linear parts of the I-V curves to extract I-V features. The method also uses a sampling approach to calculate uncertainties for the extracted features. In addition, because partially shaded arrays produce "steps" in I-V curves, the "Segmented Regression" method is used to identify such steps. This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number DE-EE0007140. Further information can be found in Ma, X. et al. (2019) <doi:10.1109/JPHOTOV.2019.2928477>.
This package provides a set of functions and a class to connect to the LSEG Datastream database and to extract and upload information. This package uses the DSWS API and server used by the Datastream DFO addin. Details of this API are available at <https://www.lseg.com/en/data-analytics>. Please report issues at <https://github.com/CharlesCara/DatastreamDSWS2R/issues>.
Written to help undergraduate as well as graduate students get started with R for basic econometrics without the need to import specific functions and datasets from many different sources. Primarily, the package is meant to accompany the German textbook by Auer, L.v., Hoffmann, S., Kranz, T. (2024, ISBN: 978-3-662-68263-0), whose exercises cover all the topics of the textbook by Auer, L.v. (2023, ISBN: 978-3-658-42699-6).
An open, multi-algorithmic pipeline for easy, fast and efficient analysis of cellular sub-populations and the molecular signatures that characterize them. The pipeline consists of four successive steps: data pre-processing, cellular clustering with pseudo-temporal ordering, identifying differentially expressed genes, and biomarker identification. More details in Ghannoum et al. (2021) <doi:10.3390/ijms22031399>. This package implements extensions of the work published by Ghannoum et al. (2019) <doi:10.1101/700989>.
Numerical quadrature of functions of one variable over a finite or infinite interval with double exponential formulas.
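For reference, the double exponential (tanh-sinh) idea, stated generally rather than as this package's internals, transforms the variable so that the integrand decays doubly exponentially, after which an equally spaced trapezoidal sum converges rapidly:

    \int_{-1}^{1} f(x)\,dx = \int_{-\infty}^{\infty} f(\phi(t))\,\phi'(t)\,dt,
    \qquad \phi(t) = \tanh\!\left(\tfrac{\pi}{2}\sinh t\right),
    \qquad \int_{-1}^{1} f(x)\,dx \approx h \sum_{k} f(\phi(kh))\,\phi'(kh).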
Detection of differential item functioning (DIF) among dichotomously scored items and differential distractor functioning (DDF) among unscored items with non-linear regression procedures based on generalized logistic regression models (Hladka & Martinkova, 2020, <doi:10.32614/RJ-2020-014>).
This package provides a Graphical User Interface (GUI) to import, save, detrend and perform standard tree-ring analyses. The interactive detrending allows the user to check how well the detrending curve fits each time-series and change it when needed.
The DYMO package provides tools for multi-feature time-series forecasting using a Dynamic Mode Decomposition (DMD) model combined with conformal predictive sampling for uncertainty quantification.
This function provides an interface between MATLAB and R that facilitates fast processing for reading and saving DICOM images.
The data depth concept offers a variety of powerful and user-friendly tools for robust exploration and inference for multivariate data. The offered techniques may be used successfully when the nature of the data leaves us without knowledge of a parametric model that generated them. The package consists of, among other things, implementations of several data depth techniques, including multivariate quantile-quantile plots, multivariate scatter estimators, multivariate Wilcoxon tests, and robust regressions.
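As one concrete example of a depth function (the halfspace, or Tukey, depth; the package covers several such notions, and this is only an illustration of the concept):

    D(x; X_1, \dots, X_n) = \frac{1}{n} \min_{\|u\| = 1} \#\{\, i : u^{\top} X_i \ge u^{\top} x \,\},

so deep points are those that no halfspace can separate from a large fraction of the sample.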
Diagnostic classification models are psychometric models used to categorically estimate respondents' mastery, or proficiency, of a set of predefined skills (Bradshaw, 2016, <doi:10.1002/9781118956588.ch13>). Diagnostic models can be estimated with Stan; however, the necessary scripts can be long and complicated. This package automates the creation of Stan scripts for diagnostic classification models. Specify different types of diagnostic models, define prior distributions, and automatically generate the necessary Stan code for estimating the model.
This package provides a unified framework for building Area Deprivation Index (ADI), Social Vulnerability Index (SVI), and Neighborhood Deprivation Index (NDI) deprivation measures and for accessing related data from the U.S. Census Bureau, such as Gini coefficient data. Tools are also available for calculating percentiles and quantiles and for creating clear map breaks for data visualization.
Data screening is an important first step of any statistical analysis. dataMaid auto-generates a customizable data report with a thorough summary of the checks and results that a human can use to identify possible errors. It provides an extendable suite of tests for common potential errors in a dataset.
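A minimal sketch of a typical call (assuming makeDataReport() as the package's main entry point, with default arguments):

    library(dataMaid)
    # Generate a data screening report for a built-in dataset; the report
    # summarises the checks performed and flags possible errors per variable.
    makeDataReport(airquality)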
Smooth testing of goodness of fit. These tests are data driven (the alternative hypothesis is selected dynamically based on the data). In this package you will find various tests for the exponential, Gaussian, Gumbel, and uniform distributions.
Estimate the Deterministic Input, Noisy "And" Gate (DINA) cognitive diagnostic model parameters using the Gibbs sampler described by Culpepper (2015) <doi:10.3102/1076998615595403>.
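For background, the DINA item response function in its standard form (not specific to this package's implementation) combines slipping and guessing parameters with a conjunctive mastery indicator:

    \eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}, \qquad
    P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},

where \alpha_{ik} indicates whether respondent i has mastered skill k, q_{jk} is the Q-matrix entry for item j, and s_j and g_j are the slipping and guessing parameters of item j.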
This package provides tools to create and manipulate probability distributions using S3. Generics pdf(), cdf(), quantile(), and random() provide replacements for base R's d/p/q/r style functions. Functions and arguments have been named carefully to minimize confusion for students in intro stats courses. The documentation for each distribution contains detailed mathematical notes.
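A minimal sketch of the intended style (assuming this entry describes the distributions3 package; the Normal() constructor and the calls below follow its documented conventions and should be treated as assumptions here):

    library(distributions3)
    X <- Normal(mu = 0, sigma = 1)   # an S3 distribution object
    pdf(X, 0)                        # density at 0, in place of dnorm(0)
    cdf(X, 1.96)                     # P(X <= 1.96), in place of pnorm(1.96)
    quantile(X, 0.975)               # in place of qnorm(0.975)
    random(X, 5)                     # five draws, in place of rnorm(5)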
Statistical methods for retrospectively detecting changes in location and/or dispersion of univariate and multivariate variables. Data values are assumed to be independent and can be individual (one observation at each instant of time) or subgrouped (more than one observation at each instant of time). Control limits are computed, often using a permutation approach, so that a prescribed false alarm probability is guaranteed without making any parametric assumptions on the stable (in-control) distribution. See G. Capizzi and G. Masarotto (2018) <doi:10.1007/978-3-319-75295-2_1> for an introduction to the package.
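As a generic illustration of the permutation idea (a sketch of the general principle only, not of this package's interface):

    # Permutation-based control limit for a retrospective change-in-location
    # statistic, with a prescribed false alarm probability of 0.01.
    set.seed(1)
    x <- rnorm(60)                                   # stable (in-control) sample
    split_stat <- function(z, k = 30) abs(mean(z[1:k]) - mean(z[-(1:k)]))
    perm <- replicate(2000, split_stat(sample(x)))   # permutation null distribution
    limit <- quantile(perm, 0.99)
    split_stat(x) > limit                            # TRUE would signal a change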
This package provides a function toolkit to facilitate reproducible RNA-Seq Differential Gene Expression (DGE) analysis (Law (2015) <doi:10.12688/f1000research.9005.3>). The tools include both analysis workflow and utility functions: mapping/unit conversion, count normalization, accounting for unknown covariates, and more. This is a companion to the DGEobj package, which provides a flexible container to manage and annotate Differential Gene Expression analysis results.
Offers methods for data analysis and cleaning that can be flexibly applied across multiple variables and within groups. These include cleaning accidental text, contingent calculations, counting missing data, and building summaries of the data.
Joint dimension reduction and spatial clustering for single-cell RNA sequencing and spatial transcriptomics data; for more details see Wei Liu, Xu Liao, Yi Yang, Huazhen Lin, Joe Yeong, Xiang Zhou, Xingjie Shi and Jin Liu (2022) <doi:10.1093/nar/gkac219>. The method is not only computationally efficient and scalable to increasing sample sizes, but is also capable of choosing the smoothness parameter and the number of clusters.