Introduces weights into Ordered Weighted Averages and extends bivariate means based on n-ary tree construction. Please refer to the following: G. Beliakov, H. Bustince, and T. Calvo (2016, ISBN: 978-3-319-24753-3), G. Beliakov (2018) <doi:10.1002/int.21913>, G. Beliakov, J.J. Dujmovic (2016) <doi:10.1016/j.ins.2015.10.040>, J.J. Dujmovic and G. Beliakov (2017) <doi:10.1002/int.21828>.
Our approach provides a way to assign a continuous cell cycle phase using scRNA-seq data and, consequently, allows the identification of cyclic trends in gene expression levels along the cell cycle. This package provides the method and training data, which include scRNA-seq data collected from 6 individual induced pluripotent stem cell (iPSC) lines, as well as the continuous cell cycle phase derived from FUCCI fluorescence imaging data.
Rapidly create a GUI for a function you created by automatically creating widgets for its arguments. This package automatically parses help routines to provide context-sensitive help for these arguments. The interface is essentially a wrapper around some Tcl/Tk routines to both simplify and facilitate GUI creation. More advanced Tcl/Tk routines/GUI objects can be incorporated into the interface for greater customization by more experienced users.
This package provides a toolkit for archaeological time series and time intervals: a system of classes and methods to represent and work with them. Dates are represented as "rata die" and can be converted to (virtually) any calendar defined by Reingold and Dershowitz (2018) <doi:10.1017/9781107415058>. The package offers a simple API that can be used by other, specialized packages.
This package provides statistical tools for analyzing net and relative survival, with a key feature of relaxing the assumption of independent censoring and incorporating the effect of dependent competing risks. It employs a copula-based methodology, specifically the Archimedean copula, to simulate data, conduct survival analysis, and offer comparisons with other methods. This approach is detailed in the work of Adatorwovor et al. (2022) <doi:10.1515/ijb-2021-0016>.
Test for no adverse shift in a two-sample comparison when we have a training set (the reference distribution) and a test set. The approach is flexible and relies on a robust and powerful test statistic, the weighted AUC. Technical details are in Kamulete, V. M. (2021) <arXiv:1908.04000>. Modern notions of outlyingness, such as trust scores and prediction uncertainty, can be used as the underlying scores.
Joint DNA-based disaster victim identification (DVI), as described in Vigeland and Egeland (2021) <doi:10.21203/rs.3.rs-296414/v1>. Identification is performed by optimising the joint likelihood of all victim samples and reference individuals. Individual identification probabilities, conditional on all available information, are derived from the joint solution in the form of posterior pairing probabilities. dvir is part of the pedsuite collection of packages for pedigree analysis.
Analysis of dichotomous and polytomous response data using the explanatory item response modeling framework, as described in Bulut, Gorgun, & Yildirim-Erbasli (2021) <doi:10.3390/psych3030023>, Stanke & Bulut (2019) <doi:10.21449/ijate.515085>, and De Boeck & Wilson (2004) <doi:10.1007/978-1-4757-3990-9>. Generalized linear mixed modeling is used for estimating the effects of item-related and person-related variables on dichotomous and polytomous item responses.
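As a generic illustration of how such models are commonly expressed as generalized linear mixed models with lme4-style syntax (the data and variable names here are hypothetical, and this is not necessarily the eirm interface):

    # Explanatory item response model as a GLMM (illustrative sketch).
    # 'responses' is long-format data with one row per person-item pair;
    # 'item_type' and 'gender' are hypothetical item- and person-related covariates.
    library(lme4)

    fit <- glmer(
      response ~ item_type + gender +    # explanatory item and person variables
        (1 | person) + (1 | item),       # crossed random effects for persons and items
      data   = responses,
      family = binomial(link = "logit")  # dichotomous responses
    )
    summary(fit)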
Elastic net regression models are controlled by two parameters: lambda, a measure of shrinkage, and alpha, a metric defining the model's location on the spectrum between ridge and lasso regression. glmnet provides tools for selecting lambda via cross-validation but no automated method for selecting alpha. Elastic Net SearcheR automates the simultaneous selection of both lambda and alpha. Developed, in part, with support from NICHD R03 HD094912.
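A minimal sketch of such a joint search using plain glmnet, cross-validating lambda over a grid of alpha values (the grid, fold setup, and variable names are illustrative; this is not ensr's actual interface):

    # x: numeric predictor matrix, y: response vector (assumed inputs).
    library(glmnet)

    alphas <- seq(0, 1, by = 0.1)
    foldid <- sample(rep(1:10, length.out = nrow(x)))  # fix folds so alphas are comparable

    fits      <- lapply(alphas, function(a) cv.glmnet(x, y, alpha = a, foldid = foldid))
    cv_errors <- sapply(fits, function(f) min(f$cvm))  # best cross-validated error per alpha

    best        <- which.min(cv_errors)
    best_alpha  <- alphas[best]
    best_lambda <- fits[[best]]$lambda.min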
Estimates causal effects with panel data using counterfactual methods. It is suitable for panel or time-series cross-sectional analysis with binary treatments under (hypothetically) baseline randomization. It allows a treatment to switch on and off and permits limited carryover effects. It supports linear factor models, a generalization of gsynth, and the matrix completion method. Implementation details can be found in Liu, Wang and Xu (2022) <arXiv:2107.00856>.
Simulate and analyze multistate models with general hazard functions. gems provides functionality for preparing hazard functions and parameters, simulating from a general multistate model, and predicting future events. The multistate model is not required to be a Markov model and may take the history of previous events into account. In the basic version, it allows simulation from transition-specific hazard functions whose parameters are multivariate normally distributed.
This tool identifies hydropeaking events, rapid flow variations induced by the hourly-adjusted electricity market, from raw time-series flow records. The novelty of HEDA is to use the vector angle instead of the first-order derivative to detect change points, which not only greatly improves computing efficiency but also accounts for the rate of change of the flow variation. More details in <doi:10.1016/j.jhydrol.2021.126392>.
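As a rough illustration of the vector-angle idea (a generic angle-based change-point score, offered as an assumption rather than HEDA's published algorithm; variable names are hypothetical):

    # Angle between the incoming and outgoing segments at each interior point of a
    # flow series; large angles mark abrupt changes of direction (candidate change points).
    # 'time' and 'flow' are assumed to be rescaled to comparable units beforehand.
    segment_angle <- function(time, flow) {
      n <- length(flow)
      angles <- rep(NA_real_, n)
      for (i in 2:(n - 1)) {
        v1 <- c(time[i] - time[i - 1], flow[i] - flow[i - 1])
        v2 <- c(time[i + 1] - time[i], flow[i + 1] - flow[i])
        cos_theta <- sum(v1 * v2) / (sqrt(sum(v1^2)) * sqrt(sum(v2^2)))
        angles[i] <- acos(min(max(cos_theta, -1), 1))  # clamp to guard against rounding
      }
      angles
    }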
Interpretable nonparametric modeling of longitudinal data using additive Gaussian process regression. Contains functionality for inferring covariate effects and assessing covariate relevances. Models are specified using a convenient formula syntax, and can include shared, group-specific, non-stationary, heterogeneous and temporally uncertain effects. Bayesian inference for model parameters is performed using 'Stan'. The modeling approach and methods are described in detail in Timonen et al. (2021) <doi:10.1093/bioinformatics/btab021>.
This package provides tools for data analysis with partially observed Markov process (POMP) models (also known as stochastic dynamical systems, hidden Markov models, and nonlinear, non-Gaussian, state-space models). The package provides facilities for implementing POMP models, simulating them, and fitting them to time series data by a variety of frequentist and Bayesian methods. It is also a versatile platform for implementation of inference methods for general POMP models.
Efficient statistical inference for two-sample MR (Mendelian Randomization) analysis. It can account for correlated instruments and horizontal pleiotropy, and can provide accurate estimates of both the causal effect and the horizontal pleiotropy effect, as well as the two corresponding p-values. There are two main functions in the PPMR package: PMR_individual() for individual-level data and PMR_summary() for summary data.
Option pricing (financial derivatives) techniques mainly following the textbook 'Options, Futures and Other Derivatives', 9th ed., by John C. Hull (2014, Prentice Hall). Implementations are via the binomial tree option pricing model (BOPM), the Black-Scholes model, Monte Carlo simulations, etc. This package is a result of the Quantitative Financial Risk Management course (STAT 449 and STAT 649) at Rice University, Houston, TX, USA, taught by Oleg Melnikov, a statistics PhD student, in Spring 2015.
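For reference, a minimal base-R sketch of the Black-Scholes price of a European call, one of the models the package covers (the helper name and arguments are illustrative, not the package's API):

    # Black-Scholes European call price (illustrative helper).
    # S0: spot price, K: strike, r: risk-free rate, sigma: volatility, T: years to maturity.
    bs_call <- function(S0, K, r, sigma, T) {
      d1 <- (log(S0 / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
      d2 <- d1 - sigma * sqrt(T)
      S0 * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
    }

    bs_call(S0 = 100, K = 105, r = 0.05, sigma = 0.2, T = 1)  # roughly 8.02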
Analyse species-habitat associations in R. For this, information about the locations of the species (as a point pattern) is needed together with environmental conditions (as a categorical raster). To test habitat associations for significance, one of the two components is randomized. Methods are mainly based on Plotkin et al. (2000) <doi:10.1006/jtbi.2000.2158> and Harms et al. (2001) <doi:10.1111/j.1365-2745.2001.00615.x>.
This package implements a three-step procedure in the spirit of Leffondre et al. (2004) to identify clusters of individual longitudinal trajectories. The procedure involves (1) computing a number of "measures of change" capturing various features of the trajectories; (2) using a Principal Component Analysis-based dimension reduction algorithm to select a subset of measures; and (3) using the k-medoids or k-means algorithm to identify clusters of trajectories.
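A compact base-R sketch of the three steps, assuming trajectories are stored as a numeric matrix 'traj' with one row per individual and one column per time point (the measures of change and the use of PC scores below are simplified stand-ins for the package's own choices):

    # Step 1: simple "measures of change" per trajectory (illustrative examples).
    measures <- data.frame(
      mean_level  = rowMeans(traj),
      overall_chg = traj[, ncol(traj)] - traj[, 1],
      variability = apply(traj, 1, sd)
    )

    # Step 2: PCA-based dimension reduction of the measures.
    pca    <- prcomp(scale(measures))
    scores <- pca$x[, 1:2]

    # Step 3: k-means on the reduced representation.
    clusters <- kmeans(scores, centers = 3)$cluster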
This package provides tools for 3D point cloud voxelisation, projection, geometrical and morphological description of trees (DBH, height, volume, crown diameter), analyses of temporal changes between different measurement times, distance-based clustering, and visualisation of 3D voxel clouds and 2D projections. Most analyses and algorithms provided in the package are based on the concept of space exploration and are described in Lecigne et al. (2018, <doi:10.1093/aob/mcx095>).
Similarity Weighted Nonnegative Embedding (SWNE) is a method for visualizing high dimensional datasets. SWNE uses Nonnegative Matrix Factorization to decompose datasets into latent factors, projects those factors onto 2 dimensions, and embeds samples and key features in 2 dimensions relative to the factors. SWNE can capture both the local and global dataset structure, and allows relevant features to be embedded directly onto the visualization, facilitating interpretation of the data.
It provides access to and information about the most important Brazilian economic time series - from the Getulio Vargas Foundation <http://portal.fgv.br/en>, the Central Bank of Brazil <http://www.bcb.gov.br> and the Brazilian Institute of Geography and Statistics <http://www.ibge.gov.br>. It also presents tools for managing, analysing (e.g. generating dynamic reports with a complete analysis of a series) and exporting these time series.
The Cuddy-Della Valle index gives the degree of instability present in the data while accommodating the effect of a trend. The adjusted R-squared value of the best-fitted model is chosen. The index is obtained by multiplying the coefficient of variation by the square root of one minus the adjusted R-squared value. This package has been developed using the concept of Shankar et al. (2022) <doi:10.3389/fsufs.2023.1208898>.
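A small base-R sketch of that computation, assuming 'y' is the series of interest and a simple linear time trend as the fitted model (the package itself may compare several candidate trend models):

    # Cuddy-Della Valle instability index (illustrative computation).
    t_idx  <- seq_along(y)
    fit    <- lm(y ~ t_idx)                 # trend model
    adj_r2 <- summary(fit)$adj.r.squared    # adjusted R-squared of the fitted trend

    cv   <- 100 * sd(y) / mean(y)           # coefficient of variation, in percent
    cdvi <- cv * sqrt(1 - adj_r2)           # Cuddy-Della Valle index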
This package provides a collection of functions and a jamovi module for the estimation approach to inferential statistics, the approach which emphasizes effect sizes, interval estimates, and meta-analysis. Nearly all functions are based on 'statpsych' and 'metafor'. This package is still under active development, and breaking changes are likely, especially with the plot and hypothesis test functions. Data sets are included for all examples from Cumming & Calin-Jageman (2024) <ISBN:9780367531508>.
Computes the penalized maximum likelihood estimates of factor loadings and unique variances for various tuning parameters. Pathwise coordinate descent along with an EM algorithm is used. This package also includes a new graphical tool that outputs a path diagram, goodness-of-fit indices, and model selection criteria for each regularization parameter. The user can change the regularization parameter by manipulating scrollbars, which is helpful for finding a suitable value of the regularization parameter.