_            _    _        _         _
      /\ \         /\ \ /\ \     /\_\      / /\
      \_\ \       /  \ \\ \ \   / / /     / /  \
      /\__ \     / /\ \ \\ \ \_/ / /     / / /\ \__
     / /_ \ \   / / /\ \ \\ \___/ /     / / /\ \___\
    / / /\ \ \ / / /  \ \_\\ \ \_/      \ \ \ \/___/
   / / /  \/_// / /   / / / \ \ \        \ \ \
  / / /      / / /   / / /   \ \ \   _    \ \ \
 / / /      / / /___/ / /     \ \ \ /_/\__/ / /
/_/ /      / / /____\/ /       \ \_\\ \/___/ /
\_\/       \/_________/         \/_/ \_____\/
r-pooldilutionr 1.0.0
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://cran.r-project.org/package=PoolDilutionR
Licenses: Expat
Synopsis: Calculate Gross Biogeochemical Flux Rates from Isotope Pool Dilution Data
Description:

Pool dilution is an isotope tracer technique wherein a biogeochemical pool is artificially enriched with its heavy isotopologue and the gross productive and consumptive fluxes of that pool are quantified by the change in pool size and isotopic composition over time. This package calculates gross production and consumption rates from closed-system isotopic pool dilution time series data. Pool size concentrations and heavy isotope (e.g., 15N) content are measured over time, and the model optimizes the production rate (P) and the first-order rate constant (k) by minimizing error in the model-predicted total pool size as well as the isotopic signature. The model weights information against the signal:noise ratio of the concentration and heavy-isotope signatures, using measurement precision as well as the magnitude of change over time. The calculations used here are based on von Fischer and Hedin (2002) <doi:10.1029/2001GB001448> with some modifications.
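
For intuition, the closed-system model has a closed form: with gross production P and first-order consumption constant k, dC/dt = P - kC gives C(t) = P/k + (C0 - P/k)exp(-kt). The base-R sketch below fits P and k to a concentration series by least squares; it is illustrative only (not the PoolDilutionR interface) and omits the package's isotopic-signature term and precision-based weighting.

    ## Illustrative only: fit P and k for dC/dt = P - k*C by least squares.
    conc_model <- function(t, C0, P, k) P / k + (C0 - P / k) * exp(-k * t)

    fit_pool <- function(time, conc) {
      sse <- function(par)
        sum((conc - conc_model(time, conc[1], par[1], par[2]))^2)
      optim(c(P = 1, k = 0.1), sse, method = "L-BFGS-B",
            lower = c(1e-6, 1e-6))$par
    }

    set.seed(1)
    t_obs <- 0:10
    c_obs <- conc_model(t_obs, C0 = 10, P = 2, k = 0.3) + rnorm(11, sd = 0.1)
    fit_pool(t_obs, c_obs)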

r-combat-enigma 1.1.1
Propagated dependencies: r-nlme@3.1-168 r-matrix@1.7-4 r-caret@7.0-1 r-car@3.1-3
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://cran.r-project.org/package=combat.enigma
Licenses: FSDG-compatible
Synopsis: Fit and Apply ComBat, LMM, or Prescaling Harmonization for ENIGMA and Other Multisite MRI Data
Description:

Fit and apply ComBat, linear mixed-effects models (LMM), or prescaling to harmonize magnetic resonance imaging (MRI) data from different sites. Briefly, these methods remove differences between sites caused by the use of different scanning devices, and LMM additionally tests linear hypotheses. As detailed in the manual, the original ComBat function was first modified for the harmonization of MRI data (Fortin et al. (2017) <doi:10.1016/j.neuroimage.2017.11.024>) and then modified again to create separate functions for fitting and applying the harmonization and to allow missing values and constant rows for its use within the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium (Radua et al. (2020) <doi:10.1016/j.neuroimage.2020.116956>); this package includes the latter version. LMM calls "lme" repeatedly, taking specific brain-imaging details into account. Finally, prescaling is a good option for fMRI, where different devices can have varying units of measurement.
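
As a rough illustration of the LMM route (a sketch only, not the package's interface; the variable names are hypothetical), a site term can be modelled as a random intercept with nlme's lme and linear hypotheses read off the fixed effects:

    ## Sketch: random-intercept site term, in the spirit of LMM harmonization.
    library(nlme)
    set.seed(1)
    d <- data.frame(age  = runif(120, 20, 80),
                    site = factor(rep(1:6, each = 20)))
    d$thickness <- 2.5 - 0.01 * d$age +
                   rep(rnorm(6, sd = 0.1), each = 20) + rnorm(120, sd = 0.1)
    m <- lme(thickness ~ age, random = ~ 1 | site, data = d)
    summary(m)$tTable   # fixed-effect estimates and tests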

r-bayessurvival 0.2.0
Propagated dependencies: r-survival@3.8-3 r-ggplot2@4.0.1
Channel: guix-cran
Location: guix-cran/packages/b.scm (guix-cran packages b)
Home page: https://cran.r-project.org/package=BayesSurvival
Licenses: GPL 3
Synopsis: Bayesian Survival Analysis for Right Censored Data
Description:

This package performs unadjusted Bayesian survival analysis for right censored time-to-event data. The main function, BayesSurv(), computes the posterior mean and a credible band for the survival function and for the cumulative hazard, as well as the posterior mean for the hazard, starting from a piecewise exponential (histogram) prior with Gamma distributed heights that are either independent, or have a Markovian dependence structure. A function, PlotBayesSurv(), is provided to easily create plots of the posterior means of the hazard, cumulative hazard and survival function, with a credible band accompanying the latter two. The priors and samplers are described in more detail in Castillo and Van der Pas (2020) "Multiscale Bayesian survival analysis" <arXiv:2005.02889>. In that paper it is also shown that the credible bands for the survival function and the cumulative hazard can be considered confidence bands (under mild conditions) and thus offer reliable uncertainty quantification.
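
For intuition, the independent-Gamma version of the piecewise exponential (histogram) prior is conjugate: within each time bin, the posterior of the hazard height is Gamma(a + events, b + time at risk). A base-R sketch of that update follows (not the BayesSurv() interface; bin width and hyperparameters are illustrative).

    ## Sketch: posterior mean hazard under a piecewise exponential prior
    ## with independent Gamma(a, b) heights (illustrative only).
    post_hazard <- function(time, status, breaks, a = 1, b = 1) {
      sapply(seq_len(length(breaks) - 1), function(j) {
        lo <- breaks[j]; hi <- breaks[j + 1]
        exposure <- sum(pmax(pmin(time, hi) - lo, 0))   # time at risk in bin j
        events   <- sum(status == 1 & time > lo & time <= hi)
        (a + events) / (b + exposure)                   # Gamma posterior mean
      })
    }

    set.seed(1)
    t_event <- rexp(200, 0.5); censor <- runif(200, 0, 4)
    time    <- pmin(t_event, censor)
    status  <- as.integer(t_event <= censor)
    post_hazard(time, status, breaks = seq(0, 4, by = 0.5))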

r-fairmaterials 0.4.2.1
Propagated dependencies: r-xml2@1.5.0 r-tidyr@1.3.1 r-stringr@1.6.0 r-readr@2.1.6 r-rdflib@0.2.9 r-jsonld@2.2.1 r-httr@1.4.7 r-dplyr@1.1.4 r-diagrammersvg@0.1 r-diagrammer@1.0.11
Channel: guix-cran
Location: guix-cran/packages/f.scm (guix-cran packages f)
Home page: https://cran.r-project.org/package=FAIRmaterials
Licenses: Modified BSD
Synopsis: Ontology Tools with Data FAIRification in Development
Description:

Translates several CSV files with ontological terms and corresponding data into RDF triples. These RDF triples are stored in OWL and JSON-LD files, facilitating data accessibility, interoperability, and knowledge unification. The triples are also visualized in a graph saved as an SVG. The input CSVs must be formatted with a template from a public Google Sheet; see README or vignette for more information. This tool is used by the SDLE Research Center at Case Western Reserve University to create and visualize material science ontologies, and it includes example ontologies to demonstrate its capabilities. This work was supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Numbers E-EE0009353 and DE-EE0009347, Department of Energy (National Nuclear Security Administration) under Award Number DE-NA0004104 and Contract number B647887, and U.S. National Science Foundation Award under Award Number 2133576.
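
The core transformation is CSV rows to RDF triples; a minimal sketch with rdflib (one of the dependencies) is below. The term URIs are placeholders, and this is not the package's template-driven interface.

    ## Sketch: one CSV row to an RDF triple, serialized as JSON-LD.
    library(rdflib)
    row <- data.frame(subject   = "http://example.org/sample/001",
                      predicate = "http://example.org/ontology/hasBandGap",
                      object    = "1.1")
    g <- rdf()
    rdf_add(g, subject = row$subject, predicate = row$predicate,
            object = row$object)
    rdf_serialize(g, doc = "triples.jsonld", format = "jsonld")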

r-growthcurveme 0.1.11
Propagated dependencies: r-viridis@0.6.5 r-tidyr@1.3.1 r-tibble@3.3.0 r-stringr@1.6.0 r-saemix@3.4 r-rlang@1.1.6 r-patchwork@1.3.2 r-moments@0.14.1 r-minpack-lm@1.2-4 r-magrittr@2.0.4 r-knitr@1.50 r-investr@1.4.2 r-ggplot2@4.0.1 r-flextable@0.9.10 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/g.scm (guix-cran packages g)
Home page: https://github.com/cancermodels-org/GrowthCurveME
Licenses: GPL 3+
Synopsis: Mixed-Effects Modeling for Growth Data
Description:

Simple and user-friendly wrappers to the saemix package for performing linear and non-linear mixed-effects regression modeling of growth data, accounting for clustering or longitudinal structure via repeated measurements. The package allows users to fit a variety of growth models, including linear, exponential, logistic, and Gompertz functions. For non-linear models, starting values are automatically calculated using initial least-squares estimates. The package includes functions for summarizing models, visualizing data and results, calculating doubling time and other key statistics, and generating model diagnostic plots and residual summary statistics. It also provides functions for generating publication-ready summary tables for reports. Additionally, users can fit linear and non-linear least-squares regression models if clustering is not applicable. The mixed-effects modeling methods in this package are based on Comets, Lavenu, and Lavielle (2017) <doi:10.18637/jss.v080.i03> as implemented in the saemix package. Please contact us at models@dfci.harvard.edu with any questions.
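
For a single cluster, the Gompertz option can be illustrated with base R's self-starting model on simulated data; GrowthCurveME itself adds the mixed-effects layer via saemix, so treat this as a sketch only.

    ## Sketch: Gompertz fit for one growth curve; SSgompertz parameterizes
    ## the curve as Asym * exp(-b2 * b3^time).
    set.seed(1)
    d <- data.frame(time = seq(0, 10, by = 0.5))
    d$size <- 100 * exp(-3 * 0.6^d$time) + rnorm(nrow(d), sd = 1)
    fit <- nls(size ~ SSgompertz(time, Asym, b2, b3), data = d)
    coef(fit)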

r-etasbootstrap 0.2.1
Propagated dependencies: r-spatstat-geom@3.6-1 r-mass@7.3-65 r-etas@0.7.2
Channel: guix-cran
Location: guix-cran/packages/e.scm (guix-cran packages e)
Home page: https://cran.r-project.org/package=ETASbootstrap
Licenses: Expat
Synopsis: Bootstrap Confidence Interval Estimation for 'ETAS' Model Parameters
Description:

The 2-D spatial and temporal Epidemic Type Aftershock Sequence ('ETAS') Model is widely used to decluster earthquake data catalogs. Usually, the calculation of standard errors of the ETAS model parameter estimates is based on the Hessian matrix derived from the log-likelihood function of the fitted model. However, when an ETAS model is fitted to a local data set over a limited or short time period, the standard errors based on the Hessian matrix may be inaccurate. It follows that the asymptotic confidence intervals for parameters may not always be reliable. As an alternative, this package allows for the construction of bootstrap confidence intervals based on empirical quantiles for the parameters of the 2-D spatial and temporal ETAS model. This version improves on Version 0.1.0 of the package by enabling the study space window (renamed 'study region') to be polygonal rather than merely rectangular. A Japan earthquake data catalog is used in a second example to illustrate this new feature.
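
The interval construction itself is the familiar empirical-quantile (percentile) bootstrap; a generic sketch is below, shown for a simple estimator rather than the package's catalog resampling and ETAS refitting.

    ## Generic percentile bootstrap: resample, re-estimate, take quantiles.
    set.seed(1)
    x <- rexp(100, rate = 2)
    boot_est <- replicate(2000, 1 / mean(sample(x, replace = TRUE)))
    quantile(boot_est, c(0.025, 0.975))   # 95% empirical-quantile interval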

font-open-relay 0-1.38ecb60
Channel: yewscion
Location: cdr255/fonts.scm (cdr255 fonts)
Home page: http://www.kreativekorp.com/software/fonts/index.shtml
Licenses: SIL OFL 1.1
Synopsis: Free and open source fonts from Kreative Software
Description:

Free and open source fonts from Kreative Software:

Constructium is a fork of SIL Gentium designed specifically to support constructed scripts as encoded in the Under-ConScript Unicode Registry. It is ideal for mixed Latin, Greek, Cyrillic, IPA, and conlang text in web sites and documents.

Fairfax is a 6x12 bitmap font for terminals, text editors, IDEs, etc. It supports many scripts and a large number of Unicode blocks as well as constructed scripts as encoded in the Under-ConScript Unicode Registry, pseudographics and semigraphics, and tons of private use characters. It has been superseded by Fairfax HD but is still maintained.

Fairfax HD is a halfwidth scalable monospace font for terminals, text editors, IDEs, etc. It supports many scripts and a large number of Unicode blocks as well as constructed scripts as encoded in the Under-ConScript Unicode Registry, pseudographics and semigraphics, and tons of private use characters.

Kreative Square is a fullwidth scalable monospace font designed specifically to support pseudographics, semigraphics, and private use characters.

r-phonfieldwork 0.0.17
Propagated dependencies: r-xml2@1.5.0 r-tuner@1.4.7 r-rmarkdown@2.30 r-readr@2.1.6 r-phontools@0.2-2.2 r-mime@0.13
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://CRAN.R-project.org/package=phonfieldwork
Licenses: GPL 2+
Synopsis: Linguistic Phonetic Fieldwork Tools
Description:

Many typical tasks have to be solved during phonetic research and experiments. These include creating a presentation that contains all stimuli, renaming and concatenating multiple sound files recorded during a session, automatic annotation in Praat TextGrids (one of the sound annotation standards provided by the Praat software, see Boersma & Weenink 2020 <https://www.fon.hum.uva.nl/praat/>), creating an html table with annotations and spectrograms, and converting between multiple formats (Praat TextGrid, ELAN, EXMARaLDA, Audacity, subtitle .srt, and FLEx flextext). All of these tasks can be solved by a mixture of different tools (any programming language has programs for automatic renaming, Praat contains scripts for concatenating and renaming files, etc.). phonfieldwork provides functionality that makes it easier to solve those tasks independently of any additional tools. You can also compare the functionality with other packages: rPraat <https://CRAN.R-project.org/package=rPraat>, textgRid <https://CRAN.R-project.org/package=textgRid>.
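
One of the listed tasks, concatenating the sound files of a session, can be sketched with tuneR (a dependency); the directory and file names are placeholders, and phonfieldwork's own helpers wrap this kind of step.

    ## Sketch: join all .wav files from one recording session.
    library(tuneR)
    files   <- list.files("session_01", pattern = "\\.wav$", full.names = TRUE)
    waves   <- lapply(files, readWave)
    session <- do.call(bind, waves)     # tuneR::bind joins Wave objects
    writeWave(session, "session_01_joined.wav")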

r-smoothedlasso 1.6
Propagated dependencies: r-rdpack@2.6.4 r-matrix@1.7-4
Channel: guix-cran
Location: guix-cran/packages/s.scm (guix-cran packages s)
Home page: https://cran.r-project.org/package=smoothedLasso
Licenses: GPL 2+
Synopsis: Framework to Smooth L1 Penalized Regression Operators using Nesterov Smoothing
Description:

We provide full functionality to smooth L1 penalized regression operators and to compute regression estimates thereof. For this, the objective function of a user-specified regression operator is first smoothed using Nesterov smoothing (see Y. Nesterov (2005) <doi:10.1007/s10107-004-0552-5>), resulting in a modified objective function with explicit gradients everywhere. The smoothed objective function and its gradient are minimized via BFGS, and the obtained minimizer is returned. Using Nesterov smoothing, the smoothed objective function can be made arbitrarily close to the original (unsmoothed) one. In particular, the Nesterov approach has the advantage that it comes with explicit accuracy bounds, both on the L1/L2 difference between the unsmoothed and smoothed objective functions and on their respective minimizers (see G. Hahn, S.M. Lutz, N. Laha, C. Lange (2020) <doi:10.1101/2020.09.17.301788>). A progressive smoothing approach is provided which iteratively smooths the objective function, resulting in more stable regression estimates. A function to perform cross validation for selection of the regularization parameter is provided.
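
One concrete choice of Nesterov smoothing (the squared-Euclidean proximity function) turns |x| into the Huber function, which is differentiable everywhere; the self-contained sketch below minimizes the resulting lasso objective with BFGS. It illustrates the idea only and is not the package's API, whose smoothing variants and accuracy bounds differ.

    ## Sketch: Huber-smoothed L1 penalty minimized with BFGS via optim().
    huber  <- function(b, mu) ifelse(abs(b) <= mu, b^2 / (2 * mu), abs(b) - mu / 2)
    dhuber <- function(b, mu) ifelse(abs(b) <= mu, b / mu, sign(b))

    smoothed_lasso <- function(X, y, lambda, mu = 1e-3) {
      f <- function(b) sum((y - X %*% b)^2) / 2 + lambda * sum(huber(b, mu))
      g <- function(b) drop(-t(X) %*% (y - X %*% b)) + lambda * dhuber(b, mu)
      optim(rep(0, ncol(X)), f, g, method = "BFGS")$par
    }

    set.seed(1)
    X <- matrix(rnorm(100 * 5), 100, 5)
    y <- X %*% c(2, 0, 0, -1, 0) + rnorm(100)
    round(smoothed_lasso(X, y, lambda = 20), 3)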

r-carbonpredict 2.0.0
Propagated dependencies: r-progress@1.2.3 r-networkd3@0.4.1 r-lmertest@3.1-3 r-htmlwidgets@1.6.4 r-htmltools@0.5.8.1 r-ggplot2@4.0.1 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://github.com/david-leake/carbonpredict
Licenses: Expat
Synopsis: Predict Carbon Emissions for UK SMEs
Description:

Predict Scope 1, 2 and 3 carbon emissions for UK Small and Medium-sized Enterprises (SMEs), using Standard Industrial Classification (SIC) codes and annual turnover data, as well as Scope 1 carbon emissions for UK farms. The carbonpredict package provides single and batch prediction, plotting, and workflow tools for carbon accounting and reporting. The package utilises pre-trained models, leveraging rich classified transaction data to accurately predict Scope 1, 2 and 3 carbon emissions for UK SMEs as well as to identify emissions hotspots. It also provides Scope 1 carbon emissions predictions for UK farms of types: Cereals ex. rice, Dairy, Mixed farming, Sheep and goats, Cattle & buffaloes, Poultry, Animal production and Support for crop production. The methodology used to produce the estimates in this package is fully detailed in the following peer-reviewed publication in the Journal of Industrial Ecology: Phillpotts, A., Owen, A., Norman, J., Trendl, A., Gathergood, J., Jobst, N., Leake, D. (2025) <doi:10.1111/jiec.70106> "Bridging the SME Reporting Gap: A New Model for Predicting Scope 1 and 2 Emissions".

r-latticedesign 4.0-1
Propagated dependencies: r-nloptr@2.2.1
Channel: guix-cran
Location: guix-cran/packages/l.scm (guix-cran packages l)
Home page: https://cran.r-project.org/package=LatticeDesign
Licenses: LGPL 2.1
Synopsis: Lattice-Based Space-Filling Designs
Description:

Lattice-based space-filling designs with fill or separation distance properties including interleaved lattice-based minimax distance designs proposed in Xu He (2017) <doi:10.1093/biomet/asx036>, interleaved lattice-based maximin distance designs proposed in Xu He (2018) <doi:10.1093/biomet/asy069>, interleaved lattice-based designs with low fill and high separation distance properties proposed in Xu He (2024) <doi:10.1137/23M156940X>, (sliced) rotated sphere packing designs proposed in Xu He (2017) <doi:10.1080/01621459.2016.1222289> and Xu He (2019) <doi:10.1080/00401706.2018.1458655>, densest packing-based maximum projections designs proposed in Xu He (2020) <doi:10.1093/biomet/asaa057> and Xu He (2018) <doi:10.48550/arXiv.1709.02062>, maximin distance designs for mixed continuous, ordinal, and binary variables proposed in Hui Lan and Xu He (2025) <doi:10.48550/arXiv.2507.23405>, and optimized and regularly repeated lattice-based Latin hypercube designs for large-scale computer experiments proposed in Xu He, Junpeng Gong, and Zhaohui Li (2025) <doi:10.48550/arXiv.2506.04582>.

r-precisetadhub 1.18.0
Propagated dependencies: r-experimenthub@3.0.0
Channel: guix-bioc
Location: guix-bioc/packages/p.scm (guix-bioc packages p)
Home page: https://github.com/dozmorovlab/preciseTADhub
Licenses: Expat
Synopsis: Pre-trained random forest models obtained using preciseTAD
Description:

An experiment data package to supplement the preciseTAD package, containing pre-trained models and the variable importances of each genomic annotation used to build the models, parsed into list objects and available in ExperimentHub. In total, preciseTADhub provides access to n=84 random forest classification models optimized to predict TAD/chromatin loop boundary regions and stored as .RDS files. The value n comes from considering l=2 cell lines (GM12878, K562), g=2 ground truth boundaries (Arrowhead, Peakachu), and c=21 autosomal chromosomes (CHR1, CHR2, ..., CHR22, omitting CHR9). Furthermore, each object is itself a two-item list containing: (1) the model object, and (2) the variable importances for CTCF, RAD21, SMC3, and ZNF143 used to predict boundary regions. Each model is trained via a "holdout" strategy, in which data from chromosomes CHR1, CHR2, ..., CHRi-1, CHRi+1, ..., CHR22 were used to build the model and the ith chromosome was reserved for testing. See https://doi.org/10.1101/2020.09.03.282186 for more detail on the model building strategy.
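
The models are retrieved like any other ExperimentHub resource; a minimal sketch follows ("EHxxxx" is a placeholder, not a real accession; query() lists the actual IDs).

    ## Sketch: pull one pre-trained model list from ExperimentHub.
    library(ExperimentHub)
    eh <- ExperimentHub()
    query(eh, "preciseTADhub")     # browse the 84 available .RDS records
    model_list <- eh[["EHxxxx"]]   # placeholder ID: (1) model, (2) importances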

r-betafunctions 1.9.0
Channel: guix-cran
Location: guix-cran/packages/b.scm (guix-cran packages b)
Home page: https://cran.r-project.org/package=betafunctions
Licenses: CC0
Synopsis: Functions for Working with Two- And Four-Parameter Beta Probability Distributions and Psychometric Analysis of Classifications
Description:

Package providing a number of functions for working with Two- and Four-Parameter Beta and closely related distributions (i.e., the Gamma, Binomial, and Beta-Binomial distributions). Includes, among other things:

  • d/p/q/r functions for Four-Parameter Beta distributions and Generalized "Binomial" (continuous) distributions, and d/p/r functions for Beta-Binomial distributions.

  • d/p/q/r functions for Two- and Four-Parameter Beta distributions parameterized in terms of their means and variances rather than their shape parameters.

  • Moment generating functions for Binomial distributions, Beta-Binomial distributions, and observed-value distributions.

  • Functions for estimating classification accuracy and consistency, making use of the Classical Test Theory based Livingston and Lewis (L&L) and Hanson and Brennan approaches.

A shiny app providing a GUI for the L&L approach when used for binary classifications is available; for the URL to the app, see the documentation for the LL.CA() function. Livingston and Lewis (1995) <doi:10.1111/j.1745-3984.1995.tb00462.x>. Lord (1965) <doi:10.1007/BF02289490>. Hanson (1991) <https://files.eric.ed.gov/fulltext/ED344945.pdf>.
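
The Four-Parameter Beta is a Two-Parameter Beta rescaled from (0, 1) to (l, u), so its density can be written directly in base R; the sketch below is a generic illustration, not the package's own d/p/q/r interface.

    ## Generic illustration: Four-Parameter Beta density via change of variables.
    dbeta4 <- function(x, shape1, shape2, l = 0, u = 1)
      dbeta((x - l) / (u - l), shape1, shape2) / (u - l)

    ## density of a Beta(2, 5) shape stretched onto the score range [10, 50]
    dbeta4(25, shape1 = 2, shape2 = 5, l = 10, u = 50)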

r-mcpmodgeneral 0.1-3
Propagated dependencies: r-mvtnorm@1.3-3 r-mass@7.3-65 r-dosefinding@1.4-1
Channel: guix-cran
Location: guix-cran/packages/m.scm (guix-cran packages m)
Home page: https://cran.r-project.org/package=MCPModGeneral
Licenses: GPL 3
Synopsis: Supplement to the 'DoseFinding' Package for the General Case
Description:

Analyzes non-normal data via the Multiple Comparison Procedures and Modeling approach (MCP-Mod). Many functions rely on the DoseFinding package. This package removes the need for the user to provide or calculate the mu vector and S matrix: the user typically supplies the data in raw form, and this package calculates the needed objects and passes them into the DoseFinding functions. If the user wishes to primarily use the functions provided in the DoseFinding package, a single function, prepareGen(), will provide mu and S. The package currently handles power analysis and the MCP-Mod procedure for negative binomial, Poisson, and binomial data. The MCP-Mod procedure can also be applied to survival data, but power analysis is not available. Bretz, F., Pinheiro, J. C., and Branson, M. (2005) <doi:10.1111/j.1541-0420.2005.00344.x>. Buckland, S. T., Burnham, K. P. and Augustin, N. H. (1997) <doi:10.2307/2533961>. Pinheiro, J. C., Bornkamp, B., Glimm, E. and Bretz, F. (2014) <doi:10.1002/sim.6052>.
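
In the generalized MCP-Mod setup of Pinheiro et al. (2014), the mu vector and S matrix are simply the dose-wise coefficient estimates and their covariance from a fitted GLM; the hand-rolled Poisson illustration below shows the idea (it is not prepareGen()'s actual interface).

    ## Generic illustration: mu and S for Poisson counts (log scale).
    set.seed(1)
    dose   <- rep(c(0, 0.5, 1, 2, 4), each = 20)
    counts <- rpois(length(dose), lambda = exp(0.5 + 0.3 * log(dose + 1)))
    fit <- glm(counts ~ factor(dose) + 0, family = poisson)
    mu <- coef(fit)   # dose-specific estimates
    S  <- vcov(fit)   # their covariance matrix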

r-missinghandle 0.1.1
Propagated dependencies: r-zoo@1.8-14 r-imputets@3.4 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/m.scm (guix-cran packages m)
Home page: https://cran.r-project.org/package=MissingHandle
Licenses: GPL 3
Synopsis: Handles Missing Dates and Data and Converts into Weekly and Monthly from Daily
Description:

Often, data are not available for every date: after 1st January, 2011 the next available observation may be 20th January, 2011, and so on. Available dates may also carry zero values. Gather all such data in different Excel sheets of a single Excel file, with each sheet containing two columns (the first one dates, the second one the data). After loading all the sheets into different elements of a list, this package fills the date gaps for all the sheets and marks all the corresponding values as zeros; it works on daily data. Finally, it combines all the filled results into one data frame (the first column is the date and the other columns are the corresponding values from your sheets), so the number of columns is the number of sheets plus one. Imputation is then performed. Daily to monthly and weekly conversion is also possible. More details can be found in Garai and others (2023) <doi:10.13140/RG.2.2.11977.42087>.
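
The gap-filling step amounts to merging each two-column sheet against a complete daily calendar; a base-R sketch of that idea follows (column names are illustrative, and the package additionally imputes and aggregates to weekly and monthly series).

    ## Sketch: fill missing calendar dates with zeros for one (date, value) sheet.
    sheet <- data.frame(date  = as.Date(c("2011-01-01", "2011-01-20")),
                        value = c(5, 0))
    calendar <- data.frame(date = seq(min(sheet$date), max(sheet$date), by = "day"))
    filled <- merge(calendar, sheet, by = "date", all.x = TRUE)
    filled$value[is.na(filled$value)] <- 0
    head(filled)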

r-ordinalforest 2.4-4
Propagated dependencies: r-verification@1.45 r-rcpp@1.1.0 r-nnet@7.3-20 r-combinat@0.0-8
Channel: guix-cran
Location: guix-cran/packages/o.scm (guix-cran packages o)
Home page: https://cran.r-project.org/package=ordinalForest
Licenses: GPL 2
Synopsis: Ordinal Forests: Prediction and Variable Ranking with Ordinal Target Variables
Description:

The ordinal forest (OF) method allows ordinal regression with high-dimensional and low-dimensional data. After having constructed an OF prediction rule using a training dataset, it can be used to predict the values of the ordinal target variable for new observations. Moreover, by means of the (permutation-based) variable importance measure of OF, it is also possible to rank the covariates with respect to their importance in the prediction of the values of the ordinal target variable. OF is presented in Hornung (2020). NOTE: Starting with package version 2.4, it is also possible to obtain class probability predictions in addition to the class point predictions. Moreover, the variable importance values can also be based on the class probability predictions. Preliminary results indicate that this might lead to a better discrimination between influential and non-influential covariates. The main functions of the package are: ordfor() (construction of OF) and predict.ordfor() (prediction of the target variable values of new observations). References: Hornung R. (2020) Ordinal Forests. Journal of Classification 37, 4-17. <doi:10.1007/s00357-018-9302-x>.

r-consensusopls 1.1.0
Propagated dependencies: r-reshape2@1.4.5
Channel: guix-cran
Location: guix-cran/packages/c.scm (guix-cran packages c)
Home page: https://cran.r-project.org/package=ConsensusOPLS
Licenses: GPL 3+
Synopsis: Consensus OPLS for Multi-Block Data Fusion
Description:

Merging data from multiple sources is a relevant approach for comprehensively evaluating complex systems. However, the inherent problems encountered when analyzing single tables are amplified with the generation of multi-block datasets, and finding the relationships between data layers of increasing complexity constitutes a challenging task. For that purpose, a generic methodology is proposed by combining the strength of established data analysis strategies, i.e. multi-block approaches and the Orthogonal Partial Least Squares (OPLS) framework, to provide an efficient tool for the fusion of data obtained from multiple sources. The package enables quick and efficient implementation of the consensus OPLS model for any horizontal multi-block data structure (observation-based matching). Moreover, it offers an interesting range of metrics and graphics to help determine the optimal number of components and check the validity of the model through permutation tests. Interpretation tools include score and loading plots, Variable Importance in Projection (VIP), a predict function for SHAP computation, and performance coefficients such as R2, Q2, and DQ2. J. Boccard and D.N. Rutledge (2013) <doi:10.1016/j.aca.2013.01.022>.

r-metabolicsurv 1.1.2
Propagated dependencies: r-tidyr@1.3.1 r-survminer@0.5.1 r-survival@3.8-3 r-superpc@1.12 r-rms@8.1-0 r-rdpack@2.6.4 r-pls@2.8-5 r-matrixstats@1.5.0 r-glmnet@4.1-10 r-ggplot2@4.0.1 r-dplyr@1.1.4
Channel: guix-cran
Location: guix-cran/packages/m.scm (guix-cran packages m)
Home page: https://github.com/OlajumokeEvangelina/MetabolicSurv
Licenses: GPL 3
Synopsis: Biomarker Validation Approach for Classification and Predicting Survival Using Metabolomics Signature
Description:

An approach to identify a metabolic biomarker signature from metabolomic data by discovering metabolites predictive of survival and classifying patients into risk groups. Classifiers are constructed as a linear combination of predictive/important metabolites, prognostic factors, and treatment effects if necessary. Several methods are implemented to reduce the metabolomics matrix, such as the principal component analysis of Wold Svante et al. (1987) <doi:10.1016/0169-7439(87)80084-9>, the LASSO method of Robert Tibshirani (1998) <doi:10.1002/(SICI)1097-0258(19970228)16:4%3C385::AID-SIM380%3E3.0.CO;2-3>, and the elastic net approach of Hui Zou and Trevor Hastie (2005) <doi:10.1111/j.1467-9868.2005.00503.x>. A sensitivity analysis on the quantile used for the classification can also be run to check how the classification groups change with the quantile specified. Large-scale cross validation can be performed in order to investigate the most frequently selected predictive metabolites and for internal validation. During the evaluation process, validation is assessed using the hazard ratio (HR) distribution of the test set, and inference is mainly based on resampling and permutation techniques.
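
The classifier construction reduces to a linear risk score followed by a quantile split; the sketch below uses the survival package (a dependency) with simulated data and an assumed coefficient vector, whereas MetabolicSurv estimates the signature itself.

    ## Sketch: linear metabolite risk score, median split, log-rank test.
    library(survival)
    set.seed(1)
    X     <- matrix(rnorm(100 * 10), 100, 10)   # metabolite matrix
    beta  <- c(1, -0.5, rep(0, 8))              # assumed signature weights
    score <- drop(X %*% beta)
    group <- factor(score > median(score), labels = c("low", "high"))
    time   <- rexp(100, rate = exp(0.5 * drop(scale(score))))
    status <- rbinom(100, 1, 0.8)
    survdiff(Surv(time, status) ~ group)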

r-taylorrussell 1.2.1
Propagated dependencies: r-shinywidgets@0.9.0 r-shiny@1.11.1 r-mvtnorm@1.3-3
Channel: guix-cran
Location: guix-cran/packages/t.scm (guix-cran packages t)
Home page: https://cran.r-project.org/package=TaylorRussell
Licenses: GPL 2+
Synopsis: Taylor-Russell Function for Multiple Predictors
Description:

The Taylor Russell model is a widely used method for assessing test validity in personnel selection tasks. The three functions in this package extend this model in a number of notable ways. TR() estimates test validity for a single selection test via the original Taylor Russell model. It extends this model by allowing users greater flexibility in argument choice. For example, users can specify any three of the four parameters (base rate, selection ratio, criterion validity, and positive predictive value) of the Taylor Russell model and estimate the remaining parameter (see the help file for examples). The TaylorRussell() function generalizes the original Taylor Russell model to allow for multiple selection tests (predictors). To our knowledge, this is the first generalization of the Taylor Russell model to allow for three or more selection tests (it is also the first to correctly handle models with two selection tests). TRDemo() is a shiny program for illustrating the underlying logic of the Taylor Russell model. Taylor, HC and Russell, JT (1939) "The relationship of validity coefficients to the practical effectiveness of tests in selection: Discussion and tables" <doi:10.1037/h0057079>.
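
The single-predictor case is a bivariate-normal calculation: given the base rate, selection ratio, and criterion validity, the positive predictive value is P(success | selected) under a correlation-r normal model. The sketch below uses mvtnorm (a dependency); the function name is illustrative and this is not the package's TR().

    ## Sketch: classical single-predictor Taylor-Russell PPV.
    library(mvtnorm)
    ppv_tr <- function(base_rate, selection_ratio, validity) {
      y_cut <- qnorm(1 - base_rate)        # criterion cutoff
      x_cut <- qnorm(1 - selection_ratio)  # predictor cutoff
      joint <- pmvnorm(lower = c(x_cut, y_cut), upper = c(Inf, Inf),
                       corr = matrix(c(1, validity, validity, 1), 2))
      as.numeric(joint) / selection_ratio  # P(success | selected)
    }
    ppv_tr(base_rate = 0.3, selection_ratio = 0.2, validity = 0.5)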

r-psbayesborrow 1.1.0
Propagated dependencies: r-survival@3.8-3 r-stanheaders@2.32.10 r-rstantools@2.5.0 r-rstan@2.32.7 r-rcppparallel@5.1.11-1 r-rcppeigen@0.3.4.0.2 r-rcpp@1.1.0 r-overlapping@2.2 r-optmatch@0.10.8 r-matchit@4.7.2 r-e1071@1.7-16 r-copula@1.1-6 r-boot@1.3-32 r-bh@1.87.0-1
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://cran.r-project.org/package=psBayesborrow
Licenses: Expat
Synopsis: Bayesian Information Borrowing with Propensity Score Matching
Description:

Hybrid control design is a way to borrow information from external controls to augment concurrent controls in a randomized controlled trial and is expected to overcome the feasibility issue when adequate randomized controlled trials cannot be conducted. A major challenge in the hybrid control design is its inability to eliminate a prior-data conflict caused by systematic imbalances in measured or unmeasured confounding factors between patients in the concurrent treatment/control group and external controls. To prevent the prior-data conflict, a combined use of propensity score matching and Bayesian commensurate prior has been proposed in the context of hybrid control design. The propensity score matching is first performed to guarantee the balance in baseline characteristics, and then the Bayesian commensurate prior is constructed while discounting the information based on the similarity in outcomes between the concurrent and external controls. psBayesborrow is a package to implement the propensity score matching and the Bayesian analysis with commensurate prior, as well as to conduct a simulation study to assess operating characteristics of the hybrid control design, where users can choose design parameters in flexible and straightforward ways depending on their own application.
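
The first stage, matching external controls to trial patients on the propensity score, can be sketched with MatchIt (a dependency); the variable names are hypothetical, and the Bayesian commensurate-prior analysis is the package's separate second stage.

    ## Sketch: 1:1 nearest-neighbour propensity score matching (stage one).
    library(MatchIt)
    set.seed(1)
    d <- data.frame(external = rbinom(300, 1, 0.5),
                    age  = rnorm(300, 60, 10),
                    ecog = rbinom(300, 1, 0.4))
    m <- matchit(external ~ age + ecog, data = d,
                 method = "nearest", ratio = 1)
    summary(m)               # covariate balance diagnostics
    matched <- match.data(m)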

r-harmonicmeanp 3.0.1
Propagated dependencies: r-fmstable@0.1-4
Channel: guix-cran
Location: guix-cran/packages/h.scm (guix-cran packages h)
Home page: https://cran.r-project.org/package=harmonicmeanp
Licenses: GPL 3
Synopsis: Harmonic Mean p-Values and Model Averaging by Mean Maximum Likelihood
Description:

The harmonic mean p-value (HMP) test combines p-values and corrects for multiple testing while controlling the strong-sense family-wise error rate. It is more powerful than common alternatives including Bonferroni and Simes procedures when combining large proportions of all the p-values, at the cost of slightly lower power when combining small proportions of all the p-values. It is more stringent than controlling the false discovery rate, and possesses theoretical robustness to positive correlations between tests and unequal weights. It is a multi-level test in the sense that a superset of one or more significant tests is certain to be significant and conversely when the superset is non-significant, the constituent tests are certain to be non-significant. It is based on MAMML (model averaging by mean maximum likelihood), a frequentist analogue to Bayesian model averaging, and is theoretically grounded in the generalized central limit theorem. For detailed examples type vignette("harmonicmeanp") after installation. Version 3.0 addresses errors in versions 1.0 and 2.0 that led function p.hmp to control the familywise error rate only in the weak sense, rather than the strong sense as intended.
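
The statistic itself is one line, the weighted harmonic mean of the p-values; the package's p.hmp() then supplies the calibrated multi-level test described above, which this raw statistic does not.

    ## Sketch: the harmonic mean p-value statistic (equal weights by default).
    hmp_stat <- function(p, w = rep(1 / length(p), length(p))) sum(w) / sum(w / p)
    hmp_stat(c(0.01, 0.2, 0.8, 0.04))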

r-naepirtparams 1.0.0
Channel: guix-cran
Location: guix-cran/packages/n.scm (guix-cran packages n)
Home page: https://cran.r-project.org/package=NAEPirtparams
Licenses: GPL 2
Synopsis: IRT Parameters for the National Assessment of Education Progress
Description:

This data package contains the Item Response Theory (IRT) parameters for the National Center for Education Statistics (NCES) items used on the National Assessment of Education Progress (NAEP) from 1990 to 2015. The values in these tables are used along with NAEP data to turn student item responses into scores and include information about item difficulty, discrimination, and guessing parameters for 3-parameter logit (3PL) items. Parameters for Generalized Partial Credit Model (GPCM) items are also included. The adjustments table contains the information regarding the treatment of items (e.g., deletion of an item or a collapsing of response categories), when these items did not appear to fit the item response models used to describe the NAEP data. Transformation constants change the score estimates that are obtained from the IRT scaling program to the NAEP reporting metric. Values from the years 2000 - 2013 were taken from the NCES website <https://nces.ed.gov/nationsreportcard/> and values from 1990 - 1998 and 2015 were extracted from their NAEP data files. All subtest names were reduced and homogenized to one word (e.g. "Reading to gain information" became "information"). The various subtest names for univariate transformation constants were all homogenized to "univariate".
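
As a reminder of how the tabled parameters are used, the 3PL item response function maps ability theta and an item's discrimination a, difficulty b, and guessing c to a probability of a correct response; the D = 1.7 scaling constant below is the conventional normal-ogive approximation and is assumed here rather than taken from NAEP documentation.

    ## 3PL item characteristic curve (D = 1.7 assumed).
    p_3pl <- function(theta, a, b, c, D = 1.7)
      c + (1 - c) / (1 + exp(-D * a * (theta - b)))

    p_3pl(theta = 0.5, a = 1.2, b = 0.1, c = 0.2)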

r-path-analysis 0.1
Propagated dependencies: r-pastecs@1.4.2 r-metan@1.19.0 r-mathjaxr@1.8-0 r-hmisc@5.2-4 r-gplots@3.2.0 r-diagrammer@1.0.11 r-corrr@0.4.5 r-corrplot@0.95 r-complexheatmap@2.26.0
Channel: guix-cran
Location: guix-cran/packages/p.scm (guix-cran packages p)
Home page: https://github.com/abeyran/Path.Analysis
Licenses: GPL 3
Synopsis: Path Coefficient Analysis
Description:

Facilitates the performance of several analyses, including simple and sequential path coefficient analysis, correlation estimation, and the drawing of correlograms, heatmaps, and path diagrams. When raw data that include one or more dependent variables along with one or more independent variables are available, path coefficient analysis can be conducted. It allows for testing direct effects, which can be a vital indicator in path coefficient analysis. The process of preparing the dataset is explained in detail in the vignette file "Path.Analysis_manual.Rmd". You can find this in the folders labelled "data" and "~/inst/extdata". Also see: 1) the lavaan package, 2) a sample of sequential path analysis in metan suggested by Olivoto and Lúcio (2020) <doi:10.1111/2041-210X.13384>, 3) the simple PATHSAS macro written in SAS by Cramer et al. (1999) <doi:10.1093/jhered/90.1.260>, and 4) the semPlot() function of OpenMx as initial tools for conducting path coefficient analyses and SEM (Structural Equation Modeling). To gain a comprehensive understanding of path coefficient analysis, both in theory and practice, see a Minitab macro developed by Arminian, A. in the paper by Arminian et al. (2008) <doi:10.1080/15427520802043182>.
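
At its core, simple path coefficient analysis solves the normal equations on the correlation scale: the direct effects of standardized predictors on the response are R_xx^{-1} r_xy. A base-R sketch of that computation follows (illustrative, not the package's interface).

    ## Sketch: direct path coefficients from the correlation structure.
    set.seed(1)
    X <- matrix(rnorm(200 * 3), 200, 3,
                dimnames = list(NULL, c("x1", "x2", "x3")))
    y <- X %*% c(0.5, 0.2, 0) + rnorm(200)
    direct <- solve(cor(X), cor(X, y))   # R_xx^{-1} r_xy
    direct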

ruby-chunky-png 1.4.0
Channel: guix
Location: gnu/packages/ruby-xyz.scm (gnu packages ruby-xyz)
Home page: https://github.com/wvanbergen/chunky_png/wiki
Licenses: Expat
Synopsis: Ruby library to handle PNG images
Description:

ChunkyPNG is a pure Ruby library that can read and write Portable Network Graphics (PNG) images without depending on an external image library. It tries to be memory efficient and reasonably fast. It has features such as:

  • Decoding support for any image that the PNG standard allows. This includes all standard color modes, all bit depths, all transparency, and interlacing and filtering options.

  • Encoding support for images of all color modes (true color, grayscale, and indexed) and transparency for all these color modes. The best color mode is chosen automatically, based on the number of colors used.

  • Read/write access to the image's pixels.

  • Read/write access to all image metadata that is stored in chunks.

  • Memory efficiency: fixnums are used (i.e. 4 or 8 bytes of memory per pixel, depending on the hardware).

  • Performance: ChunkyPNG is reasonably fast by Ruby standards, using only integer math and a highly optimized saving routine.

  • Interoperability with RMagick.

ChunkyPNG is vulnerable to decompression bombs and can run out of memory when loading a specifically crafted PNG file. This is hard to fix in pure Ruby. Deal with untrusted images in a separate process, e.g., by using fork or a background processing library.

Total results: 30423