An efficient Gibbs sampling algorithm is developed for Bayesian multivariate longitudinal data analysis, with a focus on selecting important elements of the generalized autoregressive matrix. It provides posterior samples and parameter estimates. In addition, estimates of several information criteria, such as the Akaike information criterion (AIC), Bayesian information criterion (BIC) and deviance information criterion (DIC), and of prediction accuracy measures, such as the marginal predictive likelihood (MPL) and the mean squared prediction error (MSPE), are provided for model selection.
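For orientation, the DIC mentioned above is the standard posterior summary of the deviance (this is the textbook definition, not a statement about this package's exact implementation):

    D(\theta) = -2 \log p(y \mid \theta), \qquad
    \mathrm{DIC} = \bar{D} + p_D, \qquad
    p_D = \bar{D} - D(\bar{\theta}),

where \bar{D} is the posterior mean deviance and \bar{\theta} the posterior mean of the parameters; smaller values indicate a preferred model.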
Racket is a general-purpose programming language in the Scheme family, with a large set of libraries and a compiler based on Chez Scheme. Racket is also a platform for language-oriented programming, from small domain-specific languages to complete language implementations.
The main Racket distribution comes with many bundled packages, including the DrRacket IDE, libraries for GUI and web programming, and implementations of languages such as Typed Racket, R5RS and R6RS Scheme, Algol 60, and Datalog.
Training datasets for iC10, which implements the classifier described in the paper "Genome-driven integrated classification of breast cancer validated in over 7,500 samples" (Ali HR et al., Genome Biology 2014). It uses copy number and/or expression data from breast cancer samples, trains a pamr classifier (Tibshirani et al.) with the available features, and predicts the iC10 group. Genomic annotation for the training dataset has been obtained from Mark Dunning's IlluminaHumanv3.db package.
Based on the standard DataFrame metaphor, DelayedDataFrame implements delayed operations through a lazyIndex slot, which saves the mapping indexes for each column of the DelayedDataFrame. Methods such as show, validity checking, [ and [[ subsetting, and rbind/cbind are implemented for DelayedDataFrame to operate on the lazyIndex. The listData slot stays untouched until a realization call, e.g. the DataFrame constructor or as.list(), is invoked.
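A minimal usage sketch of the laziness described above (assuming the Bioconductor DelayedDataFrame package with its DelayedDataFrame() constructor and lazyIndex() accessor; the column names and data are illustrative):

    library(DelayedDataFrame)

    ## Construct a DelayedDataFrame much like a DataFrame
    ddf <- DelayedDataFrame(letter = letters, number = seq_along(letters))

    ## Subsetting only updates the lazyIndex; listData is not touched yet
    sub <- ddf[1:5, ]
    lazyIndex(sub)   # mapping indexes kept for each column

    ## Realization: as.list() (or the DataFrame constructor) materializes
    ## the subset into ordinary columns
    as.list(sub)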
Several functions are provided for small area estimation at the area level using the hierarchical Bayesian (HB) method with panel data under a beta distribution for the variable of interest. This package also provides a dataset produced by data generation. The rjags package is employed to obtain parameter estimates. The model-based estimators are the HB estimators, which include the mean and the variation of the mean. For a reference, see Rao and Molina (2015, ISBN: 978-1-118-73578-7).
The goal of this package is to provide an easy-to-use, fast and scalable exhaustive search framework. Exhaustive feature selection typically requires a very large number of models to be fitted and evaluated, so execution speed and memory management are crucial. This package provides solutions for both: execution speed is optimized by using a multi-threaded C++ backend, and memory issues are avoided by storing only the best results during execution, thus keeping memory usage constant.
Pure set data visualization approaches are often limited in scalability due to the combinatorial explosion of distinct set families as the number of sets under investigation increases. hierarchicalSets applies a set-centric hierarchical clustering of the sets under investigation and uses this hierarchy as a basis for a range of scalable visual representations. hierarchicalSets is especially well suited for collections of sets that describe comparable entities, as it relies on the sets having a meaningful relational structure.
Starting from user-supplied institutional data, these scripts transform, aggregate, and reshape the information to produce key-value pair data files that can be uploaded to IPEDS (Integrated Postsecondary Education Data System) through its submission portal <https://surveys.nces.ed.gov/ipeds/>. Starting data specifications can be found in the vignettes. Final files are saved locally to a location of the user's choice. User-friendly, readable files can also be produced for data review and validation.
Provides a set of udev rules to allow using Android devices with tools such as adb and fastboot without root privileges. This package is intended to be added as a rule to the udev-service-type in your operating-system configuration. Additionally, an adbusers group must be defined and your user added to it.

Simply installing this package will not have any effect. It is meant to be passed to the udev service.
DEComplexDisease is designed to find the DEGs for complex diseases, which are characterized by heterogeneous genomic expression profiles. Unlike established DEG analysis tools, it does not assume that the patients of a complex disease share common DEGs. By applying a bi-clustering algorithm, DEComplexDisease finds the DEGs shared by as many patients as possible. Using the DEComplexDisease analysis results, users can find the patients affected by the same mechanism based on the shared signatures.
The significance of mean difference tests in clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. This package enables one to compute the necessary sample sizes for single-step (Bonferroni) and step-wise procedures (Holm and Hochberg). These three procedures control the q-generalized family-wise error rate (probability of making at least q false rejections). Sample size is computed (for these single-step and step-wise procedures) in such a way that the r-power (probability of rejecting at least r false null hypotheses, i.e. at least r significant endpoints among m) is above some given threshold, in the context of tests of difference of means for two groups of continuous endpoints (variables). Various types of correlation structure are considered. It is also possible to analyse data (i.e., actually test differences in means) when these are available. The case r equals 1 is treated in separate functions that were used in Lafaye de Micheaux et al. (2014) <doi:10.1080/10543406.2013.860156>.
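In symbols, restating the two quantities defined above (with V the number of false rejections and S the number of correctly rejected false null hypotheses among the m tests):

    q\text{-gFWER} = P(V \ge q), \qquad r\text{-power} = P(S \ge r),

and the required sample size is the smallest n for which the r-power exceeds the requested threshold while the q-gFWER is controlled.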
Fit Elastic Net, Lasso, and Ridge regression and do cross-validation in a fast way. The algorithm is built on Least Angle Regression by Bradley Efron, Trevor Hastie, Iain Johnstone et al. (2004) <doi:10.1214/009053604000000067> together with algorithms such as Givens rotation and forward/back substitution. In this way, many of the matrices to be computed are retained as triangular matrices, which eventually speeds up the computation. The fitting algorithm for Elastic Net is written in C++ using the Armadillo linear algebra library.
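For reference, the Elastic Net estimate solves the standard penalized least-squares problem (up to the package's own scaling and parameterization of the penalties; Lasso and Ridge are the special cases \lambda_2 = 0 and \lambda_1 = 0):

    \hat{\beta} = \arg\min_{\beta} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
      + \lambda_1 \lVert \beta \rVert_1 + \tfrac{\lambda_2}{2} \lVert \beta \rVert_2^2.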
Experiment objects such as the SummarizedExperiment or SingleCellExperiment are data containers for one or more matrix-like assays along with the associated row and column data. Often only a subset of the original data is needed for downstream analysis. For example, filtering out poor-quality samples will require excluding some columns before analysis. The ExperimentSubset object is a container that efficiently manages different subsets of the same data without having to make separate objects for each new subset.
Butterfly options trading strategies are represented here through their graphs. The graphic indicators, strategies, calculations, functions and all the discussions are for academic, research, and educational purposes only, should not be construed as investment advice, and come with absolutely no liability. Guy Cohen ("The Bible of Options Strategies (2nd ed.)", 2015, ISBN: 9780133964028). Zura Kakushadze, Juan A. Serur ("151 Trading Strategies", 2018, ISBN: 9783030027919). John C. Hull ("Options, Futures, and Other Derivatives (11th ed.)", 2022, ISBN: 9780136939979).
Calculates expected values, variance, different moments (kth moment, truncated mean), stop-loss, mean excess loss, Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) as well as some density and cumulative (survival) functions of continuous, discrete and compound distributions. This package also includes a visual Shiny component to enable students to visualize distributions and understand the impact of their parameters. This package is intended to expand the stats package so as to enable students to develop an intuition for probability.
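As a reminder of the two risk measures named above (standard definitions for a continuous loss X at confidence level \kappa):

    \mathrm{VaR}_{\kappa}(X) = F_X^{-1}(\kappa), \qquad
    \mathrm{TVaR}_{\kappa}(X) = E\!\left[X \mid X > \mathrm{VaR}_{\kappa}(X)\right]
      = \frac{1}{1-\kappa}\int_{\kappa}^{1}\mathrm{VaR}_{u}(X)\,du.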
This package provides a web-based graphical user interface (GUI) to perform techniques from the subset of spatial statistics known as geographically weighted (GW) models. It contains methods described by Brunsdon et al., 1996 <doi:10.1111/j.1538-4632.1996.tb00936.x>, Brunsdon et al., 2002 <doi:10.1016/s0198-9715(01)00009-6>, Harris et al., 2011 <doi:10.1080/13658816.2011.554838>, and Brunsdon et al., 2007 <doi:10.1111/j.1538-4632.2007.00709.x>.
Function library for the identification and separation of exponentially decaying signal components in continuous-wave optically stimulated luminescence measurements. A special emphasis is laid on luminescence dating with quartz, which is known for systematic errors due to signal components with unequal physical behaviour. This package also enables an easy-to-use signal decomposition of data sets imported and analysed with the R package 'Luminescence', including the optional automatic creation of HTML reports. Further information and tutorials can be found at <https://luminescence.de>.
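The decomposition described above corresponds to the usual first-order multi-exponential description of a CW-OSL curve (the number of components K and the symbols are illustrative, not taken from the package's documentation):

    I(t) = \sum_{k=1}^{K} n_k \lambda_k \, e^{-\lambda_k t},

where n_k is the initial signal contribution and \lambda_k the decay constant of component k; the task of the decomposition is to estimate these per-component parameters from the measured curve.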
An implementation of fast cluster-based permutation analysis (CPA) for densely-sampled time data developed in Maris & Oostenveld, 2007 <doi:10.1016/j.jneumeth.2007.03.024>. Supports (generalized, mixed-effects) regression models for the calculation of timewise statistics. Provides both a wholesale and a piecemeal interface to the CPA procedure with an emphasis on interpretability and diagnostics. Integrates the Julia libraries MixedModels.jl and GLM.jl for performance improvements, with additional functionalities for interfacing with Julia from R powered by the JuliaConnectoR package.
Includes four functions: RFactor_calc(), RFactor_est(), KFactor() and SoilLoss(). The rainfall erosivity factor can be calculated or estimated, and soil erodibility is estimated by the equation extracted from the monograph. Soil loss is estimated as the product of five factors (rainfall erosivity, soil erodibility, slope length and steepness, cover-management factor and support practice factor). Additional functions may be included in the future. This effort aims to advance research in soil and water conservation with fast and accurate results.
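The "product of five factors" mentioned above corresponds to the familiar (R)USLE form of the soil loss equation (stated here for orientation; the symbols are the conventional ones, not necessarily the package's argument names):

    A = R \cdot K \cdot LS \cdot C \cdot P,

where A is the estimated soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length and steepness factor, C the cover-management factor and P the support practice factor.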
Cronbach's alpha and McDonald's omega are widely used reliability or internal consistency measures in the social, behavioral and education sciences. Alpha is reported in nearly every study that involves measuring a construct through multiple test items. The package coefficientalpha calculates coefficient alpha and coefficient omega with missing data and non-normal data. Robust standard errors and confidence intervals are also provided. A test is also available for the tau-equivalence and homogeneity assumptions. Since version 0.5, bootstrap confidence intervals have been added.
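For reference, coefficient alpha for k items with item variances \sigma_i^2 and total-score variance \sigma_X^2 is defined as (standard formula, independent of this package's robust estimation details):

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right).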
Perform nonparametric Bayesian analysis using Dirichlet processes without the need to program the inference algorithms. Utilise included pre-built models or specify custom models and allow the dirichletprocess package to handle the Markov chain Monte Carlo sampling. Our Dirichlet process objects can act as building blocks for a variety of statistical models, including but not limited to density estimation, clustering and prior distributions in hierarchical models. See Teh, Y. W. (2011) <https://www.stats.ox.ac.uk/~teh/research/npbayes/Teh2010a.pdf>, among many other sources.
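A minimal density-estimation sketch of the workflow described above (assuming the exported DirichletProcessGaussian() and Fit() functions of dirichletprocess; the data and iteration count are purely illustrative):

    library(dirichletprocess)

    ## Dirichlet process mixture of Gaussians on rescaled data
    y  <- scale(faithful$waiting)
    dp <- DirichletProcessGaussian(y)

    ## Let the package handle the MCMC sampling
    dp <- Fit(dp, 1000)

    ## Inspect the fitted object (cluster allocations, posterior draws)
    plot(dp)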
Trading strategies for a high option volatility environment are represented here through their graphs. The graphic indicators, strategies, calculations, functions and all the discussions are for academic, research, and educational purposes only, should not be construed as investment advice, and come with absolutely no liability. Guy Cohen ("The Bible of Options Strategies (2nd ed.)", 2015, ISBN: 9780133964028). Zura Kakushadze, Juan A. Serur ("151 Trading Strategies", 2018, ISBN: 9783030027919). John C. Hull ("Options, Futures, and Other Derivatives (11th ed.)", 2022, ISBN: 9780136939979).
Create cellular automata from Wolfram rules. Allows the creation of Wolfram-style plots as well as animations. It is easy to create multiple plots, for example the output of a rule with different initial states, or the output of many different rules from the same state. The output of a cellular automaton is given as a matrix, making it easy to explore the possibility of predicting its time evolution using various statistical tools available in R. Wolfram S. (2002, ISBN:1579550088) "A New Kind of Science".
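To illustrate the rule encoding and the matrix output mentioned above, here is a self-contained base-R sketch (deliberately not using this package's own API) that evolves an elementary Wolfram rule, one row per time step:

    ## Minimal sketch: evolve an elementary Wolfram rule with wrap-around
    ## boundaries and return the history as a 0/1 matrix.
    run_wolfram <- function(rule, init, steps) {
      bits <- as.integer(intToBits(rule))[1:8]  # outputs for neighbourhoods 0..7
      n <- length(init)
      out <- matrix(0L, nrow = steps + 1, ncol = n)
      out[1, ] <- init
      for (t in seq_len(steps)) {
        prev  <- out[t, ]
        left  <- c(prev[n], prev[-n])            # left neighbours
        right <- c(prev[-1], prev[1])            # right neighbours
        idx   <- left * 4L + prev * 2L + right   # neighbourhood as 0..7
        out[t + 1, ] <- bits[idx + 1L]
      }
      out
    }

    ## Rule 110 from a single live cell; image() gives a Wolfram-style plot
    ca <- run_wolfram(110, init = c(rep(0L, 40), 1L, rep(0L, 40)), steps = 40)
    image(t(ca)[, nrow(ca):1], col = c("white", "black"), axes = FALSE)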
This package provides a set of functions that help to create plots based on Hilbert curves. Hilbert curves are used to map one-dimensional data into the 2D plane. The package provides a function that generates a 2D coordinate from an integer position. As a specific use case, the package provides a function that allows mapping a character column in a data frame into 2D space using 'ggplot2'. This allows visually comparing long lists of URLs, words, genes or other data that have a fixed order and position.
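To illustrate the integer-position-to-2D mapping that such plots rely on, here is a self-contained sketch of the classic Hilbert curve d2xy conversion in base R (the generic algorithm, not this package's own function names):

    ## Map position d along a Hilbert curve of order k (grid side 2^k) to (x, y).
    hilbert_d2xy <- function(k, d) {
      n <- 2^k
      x <- 0L; y <- 0L
      t <- d; s <- 1L
      while (s < n) {
        rx <- bitwAnd(1L, t %/% 2L)
        ry <- bitwAnd(1L, bitwXor(t, rx))
        if (ry == 0L) {                  # rotate/reflect the current quadrant
          if (rx == 1L) {
            x <- s - 1L - x
            y <- s - 1L - y
          }
          tmp <- x; x <- y; y <- tmp
        }
        x <- x + s * rx
        y <- y + s * ry
        t <- t %/% 4L
        s <- s * 2L
      }
      c(x = x, y = y)
    }

    ## Positions 0..(4^k - 1) traced in order follow the Hilbert curve
    xy <- t(sapply(0:63, function(d) hilbert_d2xy(3, d)))
    plot(xy, type = "l", asp = 1)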