_            _    _        _         _
      /\ \         /\ \ /\ \     /\_\      / /\
      \_\ \       /  \ \\ \ \   / / /     / /  \
      /\__ \     / /\ \ \\ \ \_/ / /     / / /\ \__
     / /_ \ \   / / /\ \ \\ \___/ /     / / /\ \___\
    / / /\ \ \ / / /  \ \_\\ \ \_/      \ \ \ \/___/
   / / /  \/_// / /   / / / \ \ \        \ \ \
  / / /      / / /   / / /   \ \ \   _    \ \ \
 / / /      / / /___/ / /     \ \ \ /_/\__/ / /
/_/ /      / / /____\/ /       \ \_\\ \/___/ /
\_\/       \/_________/         \/_/ \_____\/
r-biogas 1.64.0
Channel: guix-cran
Location: guix-cran/packages/b.scm (guix-cran packages b)
Home page: https://github.com/sashahafner/biogas/
Licenses: GPL 2
Synopsis: Process Biogas Data and Predict Biogas Production
Description:

This package provides functions for calculating biochemical methane potential (BMP) from laboratory measurements, along with other types of data processing and prediction useful for biogas research. Raw laboratory measurements from diverse methods (volumetric, manometric, gravimetric, gas density) can be processed to calculate BMP. Theoretical maximum BMP, methane, or biogas yield can be predicted from various measures of substrate composition. Molar mass and calculated oxygen demand (COD') can be determined from a chemical formula. Measured gas volume can be corrected for water vapor and to standard (or user-defined) temperature and pressure. Gas quantity can be converted between volume, mass, and moles. A function for planning BMP experiments can suggest inoculum and substrate quantities subject to multiple constraints and check for potential problems. Finally, a set of first-order models can be fit to measured methane production rate or cumulative yield in order to extract estimates of ultimate yield and kinetic constants. See Hafner et al. (2018) <doi:10.1016/j.softx.2018.06.005> for details. OBA is a web application that provides access to some of the package functionality: <https://biotransformers.shinyapps.io/oba1/>. The Standard BMP Methods website documents the calculations in detail: <https://www.dbfz.de/en/BMP>.
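
A minimal sketch of the prediction and gas-correction side of the package, assuming the functions predBg(), molMass(), and stdVol() with roughly the calls below (argument names and units are assumptions to verify against the package documentation):

    library(biogas)

    # Theoretical maximum methane yield of cellulose predicted from its empirical formula
    predBg("C6H10O5")                 # assumed: mL CH4 for 1 g of substrate

    # Molar mass from a chemical formula
    molMass("C6H10O5")                # g per mol

    # Correct a measured gas volume to standard temperature and pressure
    stdVol(100, temp = 35, pres = 1)  # assumed units: mL, deg C, atm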

r-fabmix 5.1
Propagated dependencies: r-rcpparmadillo@15.2.2-1 r-rcpp@1.1.0 r-rcolorbrewer@1.1-3 r-mvtnorm@1.3-3 r-mclust@6.1.2 r-mass@7.3-65 r-label-switching@1.8 r-ggplot2@4.0.1 r-foreach@1.5.2 r-doparallel@1.0.17 r-corrplot@0.95 r-coda@0.19-4.1
Channel: guix-cran
Location: guix-cran/packages/f.scm (guix-cran packages f)
Home page: https://github.com/mqbssppe/overfittingFABMix
Licenses: GPL 2
Synopsis: Overfitting Bayesian Mixtures of Factor Analyzers with Parsimonious Covariance and Unknown Number of Components
Description:

Model-based clustering of multivariate continuous data using Bayesian mixtures of factor analyzers (Papastamoulis (2019) <DOI:10.1007/s11222-019-09891-z>; (2018) <DOI:10.1016/j.csda.2018.03.007>). The number of clusters is estimated using overfitting mixture models (Rousseau and Mengersen (2011) <DOI:10.1111/j.1467-9868.2011.00781.x>): suitable prior assumptions ensure that asymptotically the extra components will have zero posterior weight, so inference is based on the "alive" components. A Gibbs sampler is implemented in order to (approximately) sample from the posterior distribution of the overfitting mixture. A prior parallel tempering scheme is also available, which allows running multiple parallel chains with different prior distributions on the mixture weights. These chains run in parallel and can swap states using a Metropolis-Hastings move. Eight different parameterizations give rise to parsimonious representations of the covariance per cluster (following McNicholas and Murphy (2008) <DOI:10.1007/s11222-008-9056-0>). The model parameterization and number of factors are selected according to the Bayesian Information Criterion. Identifiability issues related to label switching are dealt with by post-processing the simulated output with the Equivalence Classes Representatives algorithm (Papastamoulis and Iliopoulos (2010) <DOI:10.1198/jcgs.2010.09008>, Papastamoulis (2016) <DOI:10.18637/jss.v069.c01>).

r-korpus 0.13-8
Propagated dependencies: r-sylly@0.1-6 r-matrix@1.7-4 r-data-table@1.17.8
Channel: guix-cran
Location: guix-cran/packages/k.scm (guix-cran packages k)
Home page: https://reaktanz.de/?c=hacking&s=koRpus
Licenses: GPL 3+
Synopsis: Text Analysis with Emphasis on POS Tagging, Readability, and Lexical Diversity
Description:

This package provides a set of tools to analyze texts. It includes, amongst others, functions for automatic language detection, hyphenation, several indices of lexical diversity (e.g., type token ratio, HD-D/vocd-D, MTLD) and readability (e.g., Flesch, SMOG, LIX, Dale-Chall). Basic import functions for language corpora are also provided, to enable frequency analyses (supports Celex and Leipzig Corpora Collection file formats) and measures like tf-idf. Note: for full functionality a local installation of TreeTagger is recommended. It is also recommended not to load this package directly, but to load one of the available language support packages from the l10n repository <https://undocumeantit.github.io/repos/l10n/>. koRpus also includes a plugin for the R GUI and IDE RKWard, providing graphical dialogs for its basic features. The respective R package rkward cannot be installed directly from a repository, as it is a part of RKWard. To make full use of this feature, please install RKWard from <https://rkward.kde.org> (plugins are detected automatically). Due to some restrictions on CRAN, the full package sources are only available from the project homepage. To ask for help, report bugs, request features, or discuss the development of the package, please subscribe to the koRpus-dev mailing list (<https://korpusml.reaktanz.de>).
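
A short usage sketch, assuming the koRpus functions tokenize(), readability(), and lex.div() and the koRpus.lang.en support package behave roughly as below (exact arguments, and whether TreeTagger is required for a given index, should be checked against the documentation):

    library(koRpus)
    library(koRpus.lang.en)   # language support package from the l10n repository

    # Tokenize a plain-text file with the built-in tokenizer (no TreeTagger needed)
    tagged <- tokenize("sample.txt", lang = "en")

    # Readability indices and lexical diversity measures
    readability(tagged)       # e.g. Flesch, SMOG, LIX, Dale-Chall
    lex.div(tagged)           # e.g. TTR, HD-D/vocd-D, MTLD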

r-sk4fga 0.1.1
Propagated dependencies: r-rcpp@1.1.0
Channel: guix-cran
Location: guix-cran/packages/s.scm (guix-cran packages s)
Home page: https://github.com/tobyhayward13/SCI118UOA_ForensicGlassAnalysis
Licenses: GPL 2+
Synopsis: Scott-Knott for Forensic Glass Analysis
Description:

In forensics, it is common and effective practice to analyse glass fragments from the scene and from suspects to gain evidence for placing a suspect at the crime scene. This kind of analysis involves comparing the physical and chemical attributes of glass fragments found on the person and at the crime scene, and assessing the significance of the likeness they share. The package implements the Scott-Knott Modification 2 algorithm (SKM2) (Christopher M. Triggs, James M. Curran, John S. Buckleton and Kevan A.J. Walsh (1997) <doi:10.1016/S0379-0738(96)02037-3> "The grouping problem in forensic glass analysis: a divisive approach", Forensic Science International, 85(1), 1--14) for small-sample glass fragment analysis using the refractive index (ri) of a set of glass samples. It also includes an experimental multivariate analogue of the Scott-Knott algorithm for similar analysis of glass samples with multiple chemical concentration variables and multiple samples of the same item, testing against Hotelling's T^2 distribution (J.M. Curran, C.M. Triggs, J.R. Almirall, J.S. Buckleton and K.A.J. Walsh (1997) <doi:10.1016/S1355-0306(97)72197-X> "The interpretation of elemental composition measurements from forensic glass evidence", Science & Justice, 37(4), 241--244).

r-chillr 0.77
Propagated dependencies: r-xml@3.99-0.20 r-tidyr@1.3.1 r-stringr@1.6.0 r-scales@1.4.0 r-rmawgen@1.3.9.3 r-rlang@1.1.6 r-reshape2@1.4.5 r-readxl@1.4.5 r-rcurl@1.98-1.17 r-rcpp@1.1.0 r-raster@3.6-32 r-r-utils@2.13.0 r-purrr@1.2.0 r-progress@1.2.3 r-plyr@1.8.9 r-pls@2.8-5 r-patchwork@1.3.2 r-metr@0.18.3 r-magrittr@2.0.4 r-lubridate@1.9.4 r-jsonlite@2.0.0 r-httr@1.4.7 r-ggplot2@4.0.1 r-gensa@1.1.15 r-fields@17.1 r-ecmwfr@2.0.3 r-dplyr@1.1.4 r-assertthat@0.2.1
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://cran.r-project.org/package=chillR
Licenses: GPL 3
Synopsis: Statistical Methods for Phenology Analysis in Temperate Fruit Trees
Description:

The phenology of plants (i.e. the timing of their annual life phases) depends on climatic cues. For temperate trees and many other plants, spring phases, such as leaf emergence and flowering, have been found to result from the effects of both cool (chilling) conditions and heat. Fruit tree scientists (pomologists) have developed some metrics to quantify chilling and heat (e.g. see Luedeling (2012) <doi:10.1016/j.scienta.2012.07.011>). chillR contains functions for processing temperature records into chilling (Chilling Hours, Utah Chill Units and Chill Portions) and heat units (Growing Degree Hours). Regarding chilling metrics, Chill Portions are often considered the most promising, but they are difficult to calculate. This package makes it easy. chillR also contains procedures for conducting a PLS analysis relating phenological dates (e.g. bloom dates) to either mean temperatures or mean chill and heat accumulation rates, based on long-term weather and phenology records (Luedeling and Gassner (2012) <doi:10.1016/j.agrformet.2011.10.020>). As of version 0.65, it also includes functions for generating weather scenarios with a weather generator, for conducting climate change analyses for temperature-based climatic metrics and for plotting results from such analyses. Since version 0.70, chillR contains a function for interpolating hourly temperature records.
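
As a sketch of the basic chilling/heat workflow, assuming the chillR functions stack_hourly_temps() and chilling() and the bundled KA_weather example data set (names follow the package's usual examples; verify against the documentation):

    library(chillR)

    # Interpolate hourly temperatures from daily minimum/maximum records
    hourtemps <- stack_hourly_temps(KA_weather, latitude = 50.4)

    # Chilling Hours, Utah Chill Units, Chill Portions and Growing Degree Hours
    # accumulated between Julian days 305 and 60 (roughly November through February)
    chilling(hourtemps, Start_JDay = 305, End_JDay = 60)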

r-cotima 1.0.2
Propagated dependencies: r-zcurve@2.4.5 r-stringi@1.8.7 r-scholar@0.2.5 r-rpushbullet@0.3.5 r-rootsolve@1.8.2.4 r-psych@2.5.6 r-openxlsx@4.2.8.1 r-openmx@2.22.10 r-mbess@4.9.41 r-matrix@1.7-4 r-mass@7.3-65 r-lavaan@0.6-20 r-foreach@1.5.2 r-doparallel@1.0.17 r-ctsem@3.10.4 r-crayon@1.5.3 r-abind@1.4-8
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://github.com/CoTiMA/CoTiMA
Licenses: GPL 3
Synopsis: Continuous Time Meta-Analysis ('CoTiMA')
Description:

The CoTiMA package performs meta-analyses of correlation matrices of repeatedly measured variables taken from studies that used different time intervals. Different time intervals between measurement occasions pose problems for meta-analyses because the effects (e.g. cross-lagged effects) cannot simply be aggregated, for example, by means of common fixed or random effects analysis. However, continuous time math, which is applied in CoTiMA, can be used to extrapolate or intrapolate the results from all studies to any desired time lag. In this way, effects obtained in studies that used different time intervals can be meta-analyzed. CoTiMA fits models to empirical data using the structural equation model (SEM) package ctsem; the effects specified in a SEM are related to parameters that are not directly included in the model (i.e., continuous time parameters; together, they represent the continuous time structural equation model, CTSEM). Statistical model comparisons and significance tests are then performed on the continuous time parameter estimates. CoTiMA also allows analysis of publication bias (Egger's test, PET-PEESE estimates, zcurve analysis, etc.) and analysis of statistical power (post hoc power, required sample sizes). See Dormann, C., Guthier, C., & Cortina, J. M. (2019) <doi:10.1177/1094428119847277> and Guthier, C., Dormann, C., & Voelkle, M. C. (2020) <doi:10.1037/bul0000304>.

r-nevada 0.2.0
Propagated dependencies: r-withr@3.0.2 r-umap@0.2.10.0 r-tsne@0.1-3.1 r-tidyr@1.3.1 r-tibble@3.3.0 r-rlang@1.1.6 r-rgeomstats@0.0.1 r-rcpparmadillo@15.2.2-1 r-rcpp@1.1.0 r-purrr@1.2.0 r-magrittr@2.0.4 r-igraph@2.2.1 r-ggplot2@4.0.1 r-furrr@0.3.1 r-forcats@1.0.1 r-flipr@0.3.3 r-dplyr@1.1.4 r-cli@3.6.5
Channel: guix-cran
Location: guix-cran/packages/n.scm (guix-cran packages n)
Home page: https://astamm.github.io/nevada/
Licenses: GPL 3+
Synopsis: Network-Valued Data Analysis
Description:

This package provides a flexible statistical framework for network-valued data analysis. It leverages the complexity of the space of distributions on graphs by using the permutation framework for inference as implemented in the flipr package. Currently, only the two-sample testing problem is covered; generalization to k samples and regression will be added in the future. It is a 4-step procedure in which the user chooses a suitable representation of the networks, a suitable metric to embed the representation into a metric space, one or more test statistics to target specific aspects of the distributions to be compared, and a formula to compute the permutation p-value. Two types of inference are provided: a global test answering whether there is a difference between the distributions that generated the two samples, and a local test for localizing differences on the network structure. The latter is assumed to be shared by all networks of both samples. References: Lovato, I., Pini, A., Stamm, A., Vantini, S. (2020) "Model-free two-sample test for network-valued data" <doi:10.1016/j.csda.2019.106896>; Lovato, I., Pini, A., Stamm, A., Taquet, M., Vantini, S. (2021) "Multiscale null hypothesis testing for network-valued data: Analysis of brain networks of patients with autism" <doi:10.1111/rssc.12463>.
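
A hedged sketch of the two-sample test; the generator nvd() and the test function test2_global(), as well as their arguments, are assumptions based on the package documentation:

    library(nevada)

    # Two samples of 10 random graphs each (assumed generator and model names)
    x <- nvd(model = "gnp", n = 10)
    y <- nvd(model = "k_regular", n = 10)

    # Global permutation test comparing the two network-valued samples
    test2_global(x, y, representation = "adjacency")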

r-dmchmm 1.32.0
Propagated dependencies: r-summarizedexperiment@1.40.0 r-s4vectors@0.48.0 r-rtracklayer@1.70.0 r-multcomp@1.4-29 r-iranges@2.44.0 r-genomicranges@1.62.0 r-fdrtool@1.2.18 r-calibrate@1.7.7 r-biocparallel@1.44.0
Channel: guix-bioc
Location: guix-bioc/packages/d.scm (guix-bioc packages d)
Home page: https://bioconductor.org/packages/DMCHMM
Licenses: GPL 3
Synopsis: Differentially Methylated CpG using Hidden Markov Model
Description:

This package provides a pipeline for identifying differentially methylated CpG sites using Hidden Markov Model in bisulfite sequencing data. DNA methylation studies have enabled researchers to understand methylation patterns and their regulatory roles in biological processes and disease. However, only a limited number of statistical approaches have been developed to provide formal quantitative analysis. Specifically, a few available methods do identify differentially methylated CpG (DMC) sites or regions (DMR), but they suffer from limitations that arise mostly due to challenges inherent in bisulfite sequencing data. These challenges include: (1) that read-depths vary considerably among genomic positions and are often low; (2) both methylation and autocorrelation patterns change as regions change; and (3) CpG sites are distributed unevenly. Furthermore, there are several methodological limitations: almost none of these tools is capable of comparing multiple groups and/or working with missing values, and only a few allow continuous or multiple covariates. The last of these is of great interest among researchers, as the goal is often to find which regions of the genome are associated with several exposures and traits. To tackle these issues, we have developed an efficient DMC identification method based on Hidden Markov Models (HMMs) called “DMCHMM” which is a three-step approach (model selection, prediction, testing) aiming to address the aforementioned drawbacks.

r-qusage 2.44.0
Propagated dependencies: r-nlme@3.1-168 r-limma@3.66.0 r-fftw@1.0-9 r-emmeans@2.0.0 r-biobase@2.70.0
Channel: guix-bioc
Location: guix-bioc/packages/q.scm (guix-bioc packages q)
Home page: http://clip.med.yale.edu/qusage
Licenses: GPL 2+
Synopsis: qusage: Quantitative Set Analysis for Gene Expression
Description:

This package is an implementation of the Quantitative Set Analysis for Gene Expression (QuSAGE) method described in Yaari G. et al. (Nucl Acids Res, 2013). This is a novel Gene Set Enrichment-type test designed to provide a faster, more accurate, and easier to understand test for gene expression studies. qusage accounts for inter-gene correlations using the Variance Inflation Factor technique proposed by Wu et al. (Nucleic Acids Res, 2012). In addition, rather than simply evaluating the deviation from a null hypothesis with a single number (a P value), qusage quantifies gene set activity with a complete probability density function (PDF). From this PDF, P values and confidence intervals can be easily extracted. Preserving the PDF also allows for post-hoc analysis (e.g., pair-wise comparisons of gene set activity) while maintaining statistical traceability. Finally, while qusage is compatible with individual gene statistics from existing methods (e.g., LIMMA), a Welch-based method is implemented that is shown to improve specificity. The QuSAGE package also includes a mixed effects model implementation, as described in Turner JA et al. (BMC Bioinformatics, 2015), and a meta-analysis framework as described in Meng H, et al. (PLoS Comput Biol, 2019). For questions, contact Chris Bolen (cbolen1@gmail.com) or Steven Kleinstein (steven.kleinstein@yale.edu).
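
A minimal sketch of a QuSAGE comparison, assuming the main qusage() interface takes an expression matrix, sample labels, a contrast string, and a list of gene sets, roughly as in the package vignette (treat argument details as assumptions):

    library(qusage)

    # Toy expression matrix: 100 genes x 20 samples, two groups of 10
    eset   <- matrix(rnorm(100 * 20), nrow = 100,
                     dimnames = list(paste0("GENE", 1:100), paste0("S", 1:20)))
    labels <- c(rep("CTRL", 10), rep("TRT", 10))
    geneSets <- list(setA = paste0("GENE", 1:10))

    # Probability density of gene-set activity for the TRT-CTRL contrast
    qs <- qusage(eset, labels, "TRT-CTRL", geneSets)

    qsTable(qs)   # p-values and fold changes per gene set
    plot(qs)      # PDF of gene-set activity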

r-basket 0.10.11
Propagated dependencies: r-tidyr@1.3.1 r-tidygraph@1.3.1 r-tibble@3.3.0 r-rcolorbrewer@1.1-3 r-itertools@0.1-3 r-igraph@2.2.1 r-gridextra@2.3 r-ggraph@2.2.2 r-ggplot2@4.0.1 r-gensa@1.1.15 r-foreach@1.5.2 r-dplyr@1.1.4 r-crayon@1.5.3 r-cli@3.6.5
Channel: guix-cran
Location: guix-cran/packages/b.scm (guix-cran packages b)
Home page: https://github.com/kaneplusplus/basket
Licenses: LGPL 2.0
Synopsis: Basket Trial Analysis
Description:

Implementation of multisource exchangeability models for Bayesian analyses of prespecified subgroups arising in the context of basket trial design and monitoring. The R basket package facilitates implementation of the binary, symmetric multi-source exchangeability model (MEM) with posterior inference arising from both exact computation and Markov chain Monte Carlo sampling. Analysis output includes full posterior samples as well as posterior probabilities, highest posterior density (HPD) interval boundaries, effective sample sizes (ESS), mean and median estimates, posterior exchangeability probability matrices, and maximum a posteriori MEMs. In addition to providing "basketwise" analyses, the package includes similar calculations for "clusterwise" analyses, for which subgroups are combined into meta-baskets, or clusters, using graphical clustering algorithms that treat the posterior exchangeability probabilities as edge weights. In addition, plotting tools are provided to visualize basket and cluster densities as well as their exchangeability. References include Hyman, D.M., Puzanov, I., Subbiah, V., Faris, J.E., Chau, I., Blay, J.Y., Wolf, J., Raje, N.S., Diamond, E.L., Hollebecque, A. and Gervais, R. (2015) <doi:10.1056/NEJMoa1502309>; Hobbs, B.P. and Landin, R. (2018) <doi:10.1002/sim.7893>; Hobbs, B.P., Kane, M.J., Hong, D.S. and Landin, R. (2018) <doi:10.1093/annonc/mdy457>; and Kaizer, A.M., Koopmeiners, J.S. and Hobbs, B.P. (2017) <doi:10.1093/biostatistics/kxx031>.

r-lrstat 0.2.15
Propagated dependencies: r-shiny@1.11.1 r-rcpp@1.1.0 r-mvtnorm@1.3-3 r-lpsolve@5.6.23
Channel: guix-cran
Location: guix-cran/packages/l.scm (guix-cran packages l)
Home page: https://github.com/kaifenglu/lrstat
Licenses: GPL 2+
Synopsis: Power and Sample Size Calculation for Non-Proportional Hazards and Beyond
Description:

This package performs power and sample size calculation for non-proportional hazards models using the Fleming-Harrington family of weighted log-rank tests. The sequentially calculated log-rank test score statistics are assumed to have independent increments as characterized in Anastasios A. Tsiatis (1982) <doi:10.1080/01621459.1982.10477898>. The mean and variance of the log-rank test score statistics are calculated based on Kaifeng Lu (2021) <doi:10.1002/pst.2069>. The boundary crossing probabilities are calculated using the recursive integration algorithm described in Christopher Jennison and Bruce W. Turnbull (2000, ISBN:0849303168). The package can also be used for continuous, binary, and count data. For continuous data, it can handle missing data through the mixed model for repeated measures (MMRM). In crossover designs, it can estimate direct treatment effects while accounting for carryover effects. For binary data, it can design Simon's 2-stage, modified toxicity probability interval-2 (mTPI-2), and Bayesian optimal interval (BOIN) trials. For count data, it can design group sequential trials for negative binomial endpoints with censoring. Additionally, it facilitates group sequential equivalence trials for all supported data types. Moreover, it can design adaptive group sequential trials for changes in sample size, error spending function, and number and spacing of future looks. Finally, it offers various options for adjusted p-values, including graphical and gatekeeping procedures.

r-mlwrap 0.3.0
Propagated dependencies: r-yardstick@1.3.2 r-workflows@1.3.0 r-tune@2.0.1 r-tidyr@1.3.1 r-tibble@3.3.0 r-sensitivity@1.30.2 r-scales@1.4.0 r-rsample@1.3.1 r-rlang@1.1.6 r-recipes@1.3.1 r-r6@2.6.1 r-patchwork@1.3.2 r-parsnip@1.3.3 r-magrittr@2.0.4 r-innsight@0.3.2 r-glue@1.8.0 r-ggplot2@4.0.1 r-ggbeeswarm@0.7.2 r-fastshap@0.1.1 r-dplyr@1.1.4 r-dials@1.4.2 r-diagrammer@1.0.11 r-cli@3.6.5
Channel: guix-cran
Location: guix-cran/packages/m.scm (guix-cran packages m)
Home page: https://github.com/AlbertSesePsy/MLwrap
Licenses: GPL 3
Synopsis: Machine Learning Modelling for Everyone
Description:

This package provides a minimal library specifically designed to make the estimation of Machine Learning (ML) techniques as easy and accessible as possible, particularly within the framework of the Knowledge Discovery in Databases (KDD) process in data mining. The package provides essential tools to structure and execute each stage of a predictive or classification modeling workflow, aligning closely with the fundamental steps of the KDD methodology, from data selection and preparation, through model building and tuning, to the interpretation and evaluation of results using Sensitivity Analysis. The MLwrap workflow is organized into four core steps: preprocessing(), build_model(), fine_tuning(), and sensitivity_analysis(). These steps correspond, respectively, to data preparation and transformation, model construction, hyperparameter optimization, and sensitivity analysis. The package also includes global and pairwise interaction analysis based on Friedman's H-statistic to support a more detailed interpretation of complex feature relationships. The user can access comprehensive model evaluation results including fit assessment metrics, plots, predictions, and performance diagnostics for ML models implemented through Neural Networks, Random Forest, XGBoost (Extreme Gradient Boosting), and Support Vector Machines (SVM) algorithms. By streamlining these phases, MLwrap aims to simplify the implementation of ML techniques, allowing analysts and data scientists to focus on extracting actionable insights and meaningful patterns from large datasets, in line with the objectives of the KDD process.
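
The four core steps named above could be chained roughly as follows; only the four function names come from the description, while the data, formula, and tuning arguments are assumptions to check against the MLwrap documentation:

    library(MLwrap)

    # Toy data set; argument names below are assumptions
    df <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

    fit <- preprocessing(df, formula = y ~ x1 + x2, task = "regression")
    fit <- build_model(fit, model_name = "Random Forest")
    fit <- fine_tuning(fit, tuner = "Bayesian Optimization")
    fit <- sensitivity_analysis(fit)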

r-spatfd 0.0.1
Propagated dependencies: r-tidyr@1.3.1 r-sp@2.2-0 r-sf@1.0-23 r-reshape@0.8.10 r-proxy@0.4-27 r-plotly@4.11.0 r-mass@7.3-65 r-gstat@2.1-4 r-ggplot2@4.0.1 r-geor@1.9-6 r-fda-usc@2.2.0 r-fda@6.3.0 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/s.scm (guix-cran packages s)
Home page: https://cran.r-project.org/package=SpatFD
Licenses: GPL 3
Synopsis: Functional Geostatistics: Univariate and Multivariate Functional Spatial Prediction
Description:

Performs functional kriging, cokriging, optimal sampling and simulation for spatial prediction of functional data. The framework of spatial prediction, optimal sampling and simulation is extended from scalar to functional data. SpatFD is based on the Karhunen-Loève expansion, which allows the observed functions to be represented in terms of their empirical functional principal components. Based on this approach, the functional auto-covariances and cross-covariances required for spatial functional prediction and optimal sampling are completely determined by the sum of the spatial auto-covariances and cross-covariances of the respective score components. The package provides new data classes and functions for modeling the spatial dependence structure among curves. The spatial prediction of curves at unsampled locations can be carried out using two types of predictors, and both of them report the respective variances of the prediction error. In addition, there is a function for determining the spatial sampling configuration of locations that ensures minimum variance of the spatial functional prediction. There are also two functions for plotting predicted curves at each location and mapping the surface at each time point, respectively. References: Bohorquez, M., Giraldo, R., and Mateu, J. (2016) <doi:10.1007/s10260-015-0340-9>, Bohorquez, M., Giraldo, R., and Mateu, J. (2016) <doi:10.1007/s00477-016-1266-y>, Bohorquez, M., Giraldo, R. and Mateu, J. (2021) <doi:10.1002/9781119387916>.

r-bpvars 1.0
Propagated dependencies: r-tmvtnsim@0.1.4 r-rcpptn@0.2-2 r-rcppprogress@0.4.2 r-rcpparmadillo@15.2.2-1 r-rcpp@1.1.0 r-r6@2.6.1 r-generics@0.1.4 r-bsvars@3.2
Channel: guix-cran
Location: guix-cran/packages/b.scm (guix-cran packages b)
Home page: https://bsvars.org/bpvars/
Licenses: GPL 3+
Synopsis: Forecasting with Bayesian Panel Vector Autoregressions
Description:

This package provides Bayesian estimation and forecasting of dynamic panel data using Bayesian Panel Vector Autoregressions with hierarchical prior distributions. The models include country-specific VARs that share a global prior distribution, extending the model by Jarociński (2010) <doi:10.1002/jae.1082>. Under this prior's expected value, each country's system follows a global VAR with country-invariant parameters. Further flexibility is provided by the hierarchical prior structure that retains the Minnesota prior interpretation for the global VAR and features estimated prior covariance matrices, shrinkage, and persistence levels. Bayesian forecasting is developed for models including exogenous variables, allowing conditional forecasts given the future trajectories of some variables and restricted forecasts ensuring that rates are forecast to stay positive and less than 100. The package implements the model specification, estimation, and forecasting routines, facilitating coherent workflows and reproducibility. It also includes automated pseudo-out-of-sample forecasting and computation of forecasting performance measures. Beautiful plots, informative summary functions, and extensive documentation complement all this. Extraordinary computational speed is achieved by employing frontier econometric and numerical techniques and algorithms written in C++. The bpvars package is aligned, regarding objects, workflows, and code structure, with the R packages bsvars by Woźniak (2024) <doi:10.32614/CRAN.package.bsvars> and bsvarSIGNs by Wang & Woźniak (2025) <doi:10.32614/CRAN.package.bsvarSIGNs>, and together they constitute an integrated toolset. Copyright: 2025 International Labour Organization.

r-json64 0.1.3
Propagated dependencies: r-jsonlite@2.0.0
Channel: guix-cran
Location: guix-cran/packages/j.scm (guix-cran packages j)
Home page: https://cran.r-project.org/package=json64
Licenses: Expat
Synopsis: 'Base64' Encode/Decode Package with Support for JSON Output/Input and UTF-8
Description:

Encode/decode base64, with support for JSON format, using two functions: j_encode() and j_decode(). Base64 is a group of similar binary-to-text encoding schemes that represent binary data in an ASCII string format by translating it into a radix-64 representation. It is used when binary data needs to be stored and transferred over media designed to deal with textual data, ensuring that the data remains intact and unmodified during transport. <https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding> On the other side, JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy to read, write, parse and generate. It is based on a subset of the JavaScript Programming Language. JSON is a text format that is completely language independent but uses conventions familiar to programmers of the C family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. JSON structure is built around name:value pairs and ordered lists of values. <https://www.json.org> The first function, j_encode(), lets you transform a data.frame or list into a base64-encoded JSON (or JSON string). The j_decode() function takes a base64 string (possibly an encoded JSON) and transforms it into a data.frame (or list, depending on the JSON structure).
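
A short round-trip sketch with the two functions named above (exact behaviour of the arguments is an assumption based on the description):

    library(json64)

    df <- data.frame(x = 1:3, y = c("a", "b", "c"))

    enc <- j_encode(df)    # data.frame -> base64-encoded JSON string
    dec <- j_decode(enc)   # base64 string -> data.frame (or list, depending on the JSON)

    identical(dim(dec), dim(df))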

r-prmisc 0.0.3
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://github.com/m-Py/prmisc
Licenses: Expat
Synopsis: Miscellaneous Printing of Numeric and Statistical Output in R Markdown and Quarto Documents
Description:

Miscellaneous printing of numeric or statistical results in R Markdown or Quarto documents according to guidelines of the "Publication Manual" of the American Psychological Association (2020, ISBN: 978-1-4338-3215-4). These guidelines are usually referred to as APA style (<https://apastyle.apa.org/>) and include specific rules on the formatting of numbers and statistical test results. APA style has to be implemented when submitting scientific reports in a wide range of research fields, especially in the social sciences. The default output of numbers in the R console or R Markdown and Quarto documents does not meet the APA style requirements, and reformatting results manually can be cumbersome and error-prone. This package covers the automatic conversion of R objects to textual representations that meet the APA style requirements, which can be included in R Markdown or Quarto documents. It covers some basic statistical tests (t-test, ANOVA, correlation, chi-squared test, Wilcoxon test) as well as some basic number printing manipulations (formatting p-values, removing leading zeros for numbers that cannot be greater than one, and others). Other packages exist for formatting numbers and tests according to the APA style guidelines, such as papaja (<https://cran.r-project.org/package=papaja>) and apa (<https://cran.r-project.org/package=apa>), but they do not offer all of the convenience functionality included in prmisc. The vignette gives an overview of most of the functions included in the package.

r-ciuupi 1.2.3
Propagated dependencies: r-statmod@1.5.1 r-pracma@2.4.6 r-nloptr@2.2.1 r-functional@0.6
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://cran.r-project.org/package=ciuupi
Licenses: GPL 2
Synopsis: Confidence Intervals Utilizing Uncertain Prior Information
Description:

Computes a confidence interval for a specified linear combination of the regression parameters in a linear regression model with iid normal errors with known variance, when there is uncertain prior information that a distinct specified linear combination of the regression parameters takes a given value. This confidence interval, found by numerical nonlinear constrained optimization, has the required minimum coverage and utilizes this uncertain prior information through desirable expected length properties. This confidence interval has the following three practical applications. Firstly, if the error variance has been accurately estimated from previous data then it may be treated as being effectively known. Secondly, when the dimension of the response vector minus the dimension of the regression parameter vector is sufficiently large (say, greater than or equal to 30), replacing the assumed known value of the error variance by its usual estimator in the formula for the confidence interval yields an interval that has, to a very good approximation, the same coverage probability and expected length properties as when the error variance is known. Thirdly, some more complicated models can be approximated by the linear regression model with known error variance when certain unknown parameters are replaced by estimates. This confidence interval is described in Mainzer, R. and Kabaila, P. (2019) <doi:10.32614/RJ-2019-026>, and is a member of the family of confidence intervals proposed by Kabaila, P. and Giri, K. (2009) <doi:10.1016/j.jspi.2009.03.018>.
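
Stated compactly, in generic notation (a restatement of the description, not necessarily the notation used by the package):

    y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n), \qquad \sigma^2 \text{ known};
    \text{the package computes a CI for } \theta = a^{\top}\beta \text{ with minimum coverage } 1 - \alpha,
    \text{ given uncertain prior information that } \tau = c^{\top}\beta \text{ equals a specified value } t.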

r-obcost 0.1.0
Propagated dependencies: r-tidyr@1.3.1 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/o.scm (guix-cran packages o)
Home page: https://cran.r-project.org/package=obcost
Licenses: LGPL 2.0+
Synopsis: Obesity Cost Database
Description:

This database contains data relevant to the medical costs of obesity throughout the United States. The database, in the form of an R package, outputs data frames relevant to obesity costs, which clients can easily manipulate using different parameters, e.g. relative risks for each illness. This package contributes to parts of our published article titled "Modeling the Economic Cost of Obesity Risk and Its Relation to the Health Insurance Premium in the United States: A State Level Analysis". Please use the following citation for the article: Woods Thomas, Tatjana Miljkovic (2022) "Modeling the Economic Cost of Obesity Risk and Its Relation to the Health Insurance Premium in the United States: A State Level Analysis" <doi:10.3390/risks10100197>. The database is composed of the following main tables: 1. Relative_Risks: (constant) Relative risks for a given disease group with a risk factor of obesity; 2. Disease_Cost: (obesity_cost_disease) Supplementary output with all variables related to individual disease groups in a given state and year; 3. Full_Cost: (obesity_cost_full) Complete output with all variables used to make cost calculations, as well as cost calculations in a given state and year; 4. National_Summary: (obesity_cost_national_summary) National summary cost calculations in a given year. Three functions are included to assist users in calling and adjusting the mentioned tables: data_load(), data_produce(), and rel_risk_fun().
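
A hedged sketch using the three functions named above; only the function and table names come from the description, and every argument shown is a guess:

    library(obcost)

    # Load one of the bundled tables (assumed identifier)
    risks <- data_load("Relative_Risks")

    # Rebuild the cost tables from adjusted relative risks (assumed interface)
    costs <- data_produce(rel_risk_fun())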

r-simriv 1.0.7
Propagated dependencies: r-terra@1.8-86 r-mco@1.17
Channel: guix-cran
Location: guix-cran/packages/s.scm (guix-cran packages s)
Home page: https://www.r-project.org
Licenses: GPL 2+
Synopsis: Simulating Multistate Movements in River/Heterogeneous Landscapes
Description:

This package provides functions to generate and analyze spatially explicit, individual-based, multistate movements in rivers and in heterogeneous and homogeneous spaces. This is done by incorporating landscape bias on local behaviour, based on resistance rasters. Although originally conceived and designed to simulate trajectories of species constrained to linear habitats/dendritic ecological networks (e.g. river networks), the simulation algorithm is built to be highly flexible and can be applied to any (aquatic, semi-aquatic or terrestrial) organism, independently of the landscape in which it moves. Thus, the user will be able to use the package to simulate movements either in homogeneous landscapes, heterogeneous landscapes (e.g. a semi-aquatic animal moving mainly along rivers but also using the matrix), or even in highly contrasted landscapes (e.g. fish in a river network). The algorithm and its input parameters are the same for all cases, so that results are comparable. Simulated trajectories can then be used as mechanistic null models (Potts & Lewis 2014, <DOI:10.1098/rspb.2014.0231>) to test a variety of Movement Ecology hypotheses (Nathan et al. 2008, <DOI:10.1073/pnas.0800375105>), including landscape effects (e.g. resources, infrastructures) on animal movement and species site fidelity, or for predictive purposes (e.g. road mortality risk, dispersal/connectivity). The package should be relevant to explore a broad spectrum of ecological phenomena, such as those at the interface of animal behaviour, management, landscape and movement ecology, disease and invasive species spread, and population dynamics.
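
A minimal movement-simulation sketch, assuming the constructors species() and state.CRW() and the simulate() method roughly as used in the package vignette (treat the exact arguments as assumptions):

    library(SiMRiv)

    # A single-state correlated random walker (turning-angle correlation 0.98)
    walker <- species(state.CRW(0.98))

    # Simulate 10000 steps in a homogeneous landscape (no resistance raster supplied)
    traj <- simulate(walker, 10000)
    plot(traj[, 1:2], type = "l", asp = 1)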

r-cmfrec 3.5.1-3
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://github.com/david-cortes/cmfrec
Licenses: Expat
Synopsis: Collective Matrix Factorization for Recommender Systems
Description:

Collective matrix factorization (a.k.a. multi-view or multi-way factorization, Singh, Gordon, (2008) <doi:10.1145/1401890.1401969>) tries to approximate a (potentially very sparse or having many missing values) matrix X as the product of two low-dimensional matrices, optionally aided with secondary information matrices about rows and/or columns of X, which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing value imputation. Implements extensions of the original model (Cortes, (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted implicit-feedback model (Hu, Koren, Volinsky, (2008) <doi:10.1109/ICDM.2008.22>), the weighted-lambda-regularization model (Zhou, Wilkinson, Schreiber, Pan, (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with implicit features (Rendle, Zhang, Koren, (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating least squares procedures (Koren, Bell, Volinsky, (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate gradient solver (Takacs, Pilaszy, Tikk, (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate descent solver (Franc, Hlavac, Navara, (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
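
A hedged sketch of the collective model, assuming the main fitting function is CMF() and that topN() produces recommendations (names and arguments should be checked against the cmfrec documentation):

    library(cmfrec)

    # Sparse ratings-like matrix X (users x items), missing entries as NA
    set.seed(1)
    X <- matrix(sample(c(NA, 1:5), 50 * 20, replace = TRUE), nrow = 50)

    # Factorize X into k latent components (side information could be passed as extra matrices)
    model <- CMF(X, k = 5)

    # Top-5 items for user 1 (assumed helper)
    topN(model, user = 1, n = 5)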

r-pdynmc 0.9.12
Propagated dependencies: r-rdpack@2.6.4 r-optimx@2025-4.9 r-matrix@1.7-4 r-mass@7.3-65 r-data-table@1.17.8
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://github.com/markusfritsch/pdynmc
Licenses: GPL 2+
Synopsis: Moment Condition Based Estimation of Linear Dynamic Panel Data Models
Description:

Linear dynamic panel data modeling based on linear and nonlinear moment conditions as proposed by Holtz-Eakin, Newey, and Rosen (1988) <doi:10.2307/1913103>, Ahn and Schmidt (1995) <doi:10.1016/0304-4076(94)01641-C>, and Arellano and Bover (1995) <doi:10.1016/0304-4076(94)01642-D>. Estimation of the model parameters relies on the Generalized Method of Moments (GMM) and instrumental variables (IV) estimation, numerical optimization (when nonlinear moment conditions are employed) and the computation of closed form solutions (when estimation is based on linear moment conditions). One-step, two-step and iterated estimation is available. For inference and specification testing, Windmeijer (2005) <doi:10.1016/j.jeconom.2004.02.005> and doubly corrected standard errors (Hwang, Kang, Lee, 2021 <doi:10.1016/j.jeconom.2020.09.010>) are available. Additionally, serial correlation tests, tests for overidentification, and Wald tests are provided. Functions for visualizing panel data structures and modeling results obtained from GMM estimation are also available. The plot methods include functions to plot unbalanced panel structure, coefficient ranges and coefficient paths across GMM iterations (the latter is implemented according to the plot shown in Hansen and Lee, 2021 <doi:10.3982/ECTA16274>). For a more detailed description of the GMM-based functionality, please see Fritsch, Pua, Schnurbus (2021) <doi:10.32614/RJ-2021-035>. For more details on the IV-based estimation routines, see Fritsch, Pua, and Schnurbus (WP, 2024) and Han and Phillips (2010) <doi:10.1017/S026646660909063X>.

r-wrmisc 1.15.4
Propagated dependencies: r-mass@7.3-65
Channel: guix-cran
Location: guix-cran/packages/w.scm (guix-cran packages w)
Home page: https://cran.r-project.org/package=wrMisc
Licenses: GPL 3
Synopsis: Analyze Experimental High-Throughput (Omics) Data
Description:

This collection of diverse functions facilitates the efficient treatment and convenient analysis of experimental high-throughput (omics) data. Several functions address advanced object conversions, like manipulating lists of lists or lists of arrays, reorganizing lists into arrays or into separate vectors, merging multiple entries, etc. Another set of functions provides speed-optimized calculation of standard deviation (sd), coefficient of variance (CV) or standard error of the mean (SEM) for data in matrices, or means per line with respect to additional grouping (e.g. n groups of replicates). A group of functions facilitates dealing with non-redundant information, by indexing unique entries, adding counters to redundant ones or eliminating lines with respect to redundancy in a given reference column, etc. Help is provided to identify very closely matching numeric values, to generate (partial) distance matrices for very big data in a memory-efficient manner, or to reduce the complexity of large data-sets by combining very close values. Other functions help align a matrix or data.frame to a reference using partial matching, or mine an experimental setup to extract patterns of replicate samples. Since large experimental datasets often need additional filtering, adequate functions are provided. Convenient data normalization is supported in various modes; parameter estimation via permutations or bootstrap, as well as flexible testing of multiple pair-wise combinations using the framework of limma, is provided, too. Batch reading (or writing) of sets of files and combining data into arrays is also supported.

r-optecd 1.0.0
Channel: guix-cran
Location: guix-cran/packages/o.scm (guix-cran packages o)
Home page: https://cran.r-project.org/package=OPTeCD
Licenses: GPL 2+
Synopsis: Optimal Partial Tetra-Allele Cross Designs
Description:

Tetra-allele crosses, often referred to as four-way crosses, double crosses or four-line crosses, are mating designs in which every cross is obtained by mating among four inbred lines. A tetra-allele cross can be obtained by crossing the resultants of two unrelated diallel crosses. A common tetra-allele cross involving four inbred lines A, B, C and D can be symbolically represented as (A X B) X (C X D), (A, B, C, D), (A B C D), etc. Tetra-allele crosses can be broadly categorized as Complete Tetra-allele Crosses (CTaC) and Partial Tetra-allele Crosses (PTaC). Rawlings and Cockerham (1962) <doi:10.2307/2527461> first introduced and gave the method of analysis for tetra-allele cross hybrids using the analysis method of single cross hybrids under the assumption of no linkage. The set of all possible four-way matings between several genotypes (individuals, clones, homozygous lines, etc.) leads to a CTaC. If N inbred lines are involved in a CTaC, the total number of crosses is T = N*(N-1)*(N-2)*(N-3)/8. As the number of lines grows, the total number of crosses in the CTaC increases rapidly, making it almost impossible for the investigator to carry out the experiment with limited resource material. The solution lies in taking a fraction of the CTaC with certain underlying properties, known as a PTaC.
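
As a quick check of the count formula, in R:

    # Number of crosses in a complete tetra-allele cross (CTaC) with N inbred lines
    N <- 10
    N * (N - 1) * (N - 2) * (N - 3) / 8   # already 630 crosses for N = 10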

r-actfts 0.3.0
Propagated dependencies: r-xts@0.14.1 r-tseries@0.10-58 r-reactable@0.4.5 r-plotly@4.11.0 r-openxlsx@4.2.8.1 r-lifecycle@1.0.4 r-forecast@8.24.0
Channel: guix-cran
Location: guix-cran/packages/a.scm (guix-cran packages a)
Home page: https://github.com/SergioFinances/actfts
Licenses: Expat
Synopsis: Autocorrelation Tools Featured for Time Series
Description:

The actfts package provides tools for performing autocorrelation analysis of time series data. It includes functions to compute and visualize the autocorrelation function (ACF) and the partial autocorrelation function (PACF). Additionally, it performs the Dickey-Fuller, KPSS, and Phillips-Perron unit root tests to assess the stationarity of time series. Theoretical foundations are based on Box and Cox (1964) <doi:10.1111/j.2517-6161.1964.tb00553.x>, Box and Jenkins (1976) <isbn:978-0-8162-1234-2>, and Box and Pierce (1970) <doi:10.1080/01621459.1970.10481180>. Statistical methods are also drawn from Kolmogorov (1933) <doi:10.1007/BF00993594>, Kwiatkowski et al. (1992) <doi:10.1016/0304-4076(92)90104-Y>, and Ljung and Box (1978) <doi:10.1093/biomet/65.2.297>. The package integrates functions from forecast (Hyndman & Khandakar, 2008) <https://CRAN.R-project.org/package=forecast>, tseries (Trapletti & Hornik, 2020) <https://CRAN.R-project.org/package=tseries>, xts (Ryan & Ulrich, 2020) <https://CRAN.R-project.org/package=xts>, and stats (R Core Team, 2023) <https://stat.ethz.ch/R-manual/R-devel/library/stats/html/00Index.html>. Additionally, it provides visualization tools via plotly (Sievert, 2020) <https://CRAN.R-project.org/package=plotly> and reactable (Glaz, 2023) <https://CRAN.R-project.org/package=reactable>. The package also incorporates macroeconomic datasets from the U.S. Bureau of Economic Analysis: Disposable Personal Income (DPI) <https://fred.stlouisfed.org/series/DPI>, Gross Domestic Product (GDP) <https://fred.stlouisfed.org/series/GDP>, and Personal Consumption Expenditures (PCEC) <https://fred.stlouisfed.org/series/PCEC>.

Total results: 30423