Several functions are provided for dose-response (or concentration-response) characterization from omics data. DRomics is especially dedicated to omics data obtained using a typical dose-response design, favoring a large number of tested doses (or concentrations) rather than a large number of replicates (replicates are not required). DRomics provides functions 1) to check, normalize and/or transform data, 2) to select monotonic or biphasic significantly responding items (e.g. probes, metabolites), 3) to choose the best-fit model among a predefined family of monotonic and biphasic models to describe each selected item, and 4) to derive a benchmark dose or concentration and a typology of response from each fitted curve. In the current version, data are assumed to be single-channel microarray data in log2 scale, RNAseq data in raw counts, or already pretreated continuous omics data (such as metabolomic data) in log scale. In order to link responses across biological levels based on a common method, DRomics also handles apical data as long as they are continuous and follow a normal distribution for each dose or concentration, with a common standard error. For further details see Delignette-Muller et al (2023) <DOI:10.24072/pcjournal.325> and Larras et al (2018) <DOI:10.1021/acs.est.8b04752>.
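A minimal sketch of this four-step workflow for the RNAseq case, assuming the DRomics functions RNAseqdata(), itemselect(), drcfit() and bmdcalc() behave as documented (the file name below is a placeholder):

    library(DRomics)
    o <- RNAseqdata("counts.txt")                    # placeholder file of raw counts: import, check and normalize
    s <- itemselect(o, select.method = "quadratic")  # select significantly responding items
    f <- drcfit(s)                                   # choose the best-fit model for each selected item
    r <- bmdcalc(f, z = 1)                           # derive benchmark doses and the response typology
    head(r$res)                                      # one row per item: fitted model, BMD, trend, ...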
Constructs run sequences with minimum changes for half replicates of two-level factorial run orders. Experimenters can save time and resources by minimizing the number of level changes of each individual factor and therefore the total number of changes. The package consists of the function minimal_hrtlf(). The technique can be applied to any half replicate of a two-level factorial run order where the number of factors is greater than two. In Design of Experiments (DOE) theory, the two levels of a factor can be represented as integers, e.g. -1 for low and 1 for high. The user enters the total number of factors to be considered in the experiment, and minimal_hrtlf() provides the required run sequence for that number of factors. The output also gives the number of changes of each factor along with the total number of changes in the run sequence. Due to the restricted randomization, the minimally changed run sequences of half replicates of two-level factorial run orders will be affected by trend effects. The output therefore also provides the trend factor value of the run order. The trend factor value lies between 0 and 1; the higher the value, the smaller the influence of trend effects on the run order.
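A minimal usage sketch, assuming the package providing minimal_hrtlf() has been attached; the function is documented to take the number of factors as input:

    res <- minimal_hrtlf(5)   # run order for a half replicate of a 2^5 factorial
    res                       # run sequence, per-factor and total numbers of changes,
                              # and the trend factor of the run order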
This package provides a versatile implementation of various methods of Functional Data Analysis (FDA) and Empirical Dynamics. The core of the package is Functional Principal Component Analysis (FPCA), a key technique for functional data analysis, for sparsely or densely sampled random trajectories and time courses, via the Principal Analysis by Conditional Estimation (PACE) algorithm. This core algorithm yields covariance and mean functions, eigenfunctions and principal component scores, for both functional data and derivatives, and for both dense (functional) and sparse (longitudinal) sampling designs. For sparse designs, it provides fitted continuous trajectories with confidence bands, even for subjects with very few longitudinal observations. PACE is a viable and flexible alternative to random effects modeling of longitudinal data. There is also a Matlab version (PACE) that contains some methods not available in fdapace and vice versa. Updates to fdapace were supported by grants from NIH Echo and NSF DMS-1712864 and DMS-2014626. Please cite our package if you use it (you may run the command citation("fdapace") to get the citation format and bibtex entry). References: Wang, J.L., Chiou, J., Müller, H.G. (2016) <doi:10.1146/annurev-statistics-041715-033624>; Chen, K., Zhang, X., Petersen, A., Müller, H.G. (2017) <doi:10.1007/s12561-015-9137-5>.
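A hedged sketch of the core FPCA() call on simulated sparse longitudinal data; Ly and Lt are per-subject lists of measurements and time points, following the usage shown in the package vignette:

    library(fdapace)
    set.seed(1)
    n  <- 50
    Lt <- lapply(1:n, function(i) sort(runif(sample(2:6, 1))))                  # sparse time points per subject
    Ly <- lapply(Lt, function(t) sin(2 * pi * t) + rnorm(length(t), sd = 0.2))  # noisy observations
    fit <- FPCA(Ly, Lt)   # PACE: mean and covariance functions, eigenfunctions, scores
    plot(fit)             # diagnostic plot of the fitted FPCA object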
Download geospatial data available from several federated data sources (mainly sources maintained by the US Federal government). Currently, the package enables extraction from nine datasets: The National Elevation Dataset digital elevation models (<https://www.usgs.gov/3d-elevation-program> 1 and 1/3 arc-second; USGS); The National Hydrography Dataset (<https://www.usgs.gov/national-hydrography/national-hydrography-dataset>; USGS); The Soil Survey Geographic (SSURGO) database from the National Cooperative Soil Survey (<https://websoilsurvey.sc.egov.usda.gov/>; NCSS), which is led by the Natural Resources Conservation Service (NRCS) under the USDA; the Global Historical Climatology Network (<https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily>; GHCN), coordinated by National Climatic Data Center at NOAA; the Daymet gridded estimates of daily weather parameters for North America, version 4, available from the Oak Ridge National Laboratory's Distributed Active Archive Center (<https://daymet.ornl.gov/>; DAAC); the International Tree Ring Data Bank; the National Land Cover Database (<https://www.mrlc.gov/>; NLCD); the Cropland Data Layer from the National Agricultural Statistics Service (<https://www.nass.usda.gov/Research_and_Science/Cropland/SARS1a.php>; NASS); and the PAD-US dataset of protected area boundaries (<https://www.usgs.gov/programs/gap-analysis-project/science/pad-us-data-overview>; USGS).
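A hedged sketch of two of the downloaders, assuming the get_ned() and get_nlcd() interfaces described in the package documentation; the meve example polygon is assumed to ship with the package, and both calls download data from the respective services:

    library(FedData)
    template <- FedData::meve                                           # assumed example area (Mesa Verde NP)
    ned  <- get_ned(template = template, label = "meve")                # National Elevation Dataset DEM
    nlcd <- get_nlcd(template = template, label = "meve", year = 2019)  # National Land Cover Database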
According to a phenomenon known as "the wisdom of the crowds," combining point estimates from multiple judges often provides a more accurate aggregate estimate than using a point estimate from a single judge. However, if the judges use shared information in their estimates, the simple average will over-emphasize this common component at the expense of the judges' private information. Asa Palley & Ville Satopää (2021) "Boosting the Wisdom of Crowds Within a Single Judgment Problem: Selective Averaging Based on Peer Predictions" <https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=3504286> propose a procedure for calculating a weighted average of the judges' individual estimates such that the resulting aggregate estimate appropriately combines the judges' collective information within a single estimation problem. The authors use both simulation and data from six experimental studies to illustrate that the weighting procedure outperforms existing averaging-like methods, such as the equally weighted average, trimmed average, and median. This aggregate estimate -- known as "the knowledge-weighted estimate" -- takes as inputs a) judges' estimates of a continuous outcome (E) and b) predictions of others' average estimate of this outcome (P). In this R package, the function knowledge_weighted_estimate(E,P) implements the knowledge-weighted estimate. Its use is illustrated with a simple stylized example and on real-world experimental data.
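A simple stylized example of the knowledge-weighted estimate, using the E and P inputs described above (simulated here; the package must be attached first):

    set.seed(123)
    n <- 20
    truth <- 10
    E <- truth + rnorm(n, sd = 2)                      # judges' estimates of the outcome
    P <- 0.5 * E + 0.5 * mean(E) + rnorm(n, sd = 0.5)  # predictions of others' average estimate
    knowledge_weighted_estimate(E, P)                  # aggregate point estimate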
Exploits dynamical seasonal forecasts in order to provide information relevant to stakeholders at the seasonal timescale. The package contains process-based methods for forecast calibration, bias correction, statistical and stochastic downscaling, optimal forecast combination and multivariate verification, as well as basic and advanced tools to obtain tailored products. This package was developed in the context of the ERA4CS project MEDSCOPE and the H2020 S2S4E project, and includes contributions from the ArcticXchange project funded by EU-PolarNet 2. Implements methods described in Pérez-Zanón et al. (2022) <doi:10.5194/gmd-15-6115-2022>, Doblas-Reyes et al. (2005) <doi:10.1111/j.1600-0870.2005.00104.x>, Mishra et al. (2018) <doi:10.1007/s00382-018-4404-z>, Sanchez-Garcia et al. (2019) <doi:10.5194/asr-16-165-2019>, Straus et al. (2007) <doi:10.1175/JCLI4070.1>, Terzago et al. (2018) <doi:10.5194/nhess-18-2825-2018>, Torralba et al. (2017) <doi:10.1175/JAMC-D-16-0204.1>, D'Onofrio et al. (2014) <doi:10.1175/JHM-D-13-096.1>, Verfaillie et al. (2017) <doi:10.5194/gmd-10-4257-2017>, Van Schaeybroeck et al. (2019) <doi:10.1016/B978-0-12-812372-0.00010-8>, Yiou et al. (2013) <doi:10.1007/s00382-012-1626-3>.
Phylogenetic comparative methods represent models of continuous trait data associated with the tips of a phylogenetic tree. Examples of such models are Gaussian continuous time branching stochastic processes such as Brownian motion (BM) and Ornstein-Uhlenbeck (OU) processes, which regard the data at the tips of the tree as an observed (final) state of a Markov process starting from an initial state at the root and evolving along the branches of the tree. The PCMBase R package provides a general framework for manipulating such models. This framework consists of an application programming interface for specifying data and model parameters, and efficient algorithms for simulating trait evolution under a model and calculating the likelihood of model parameters for an assumed model and trait data. The package implements a growing collection of models, which currently includes BM, OU, BM/OU with jumps, two-speed OU, as well as mixed Gaussian models, in which different types of the above models can be associated with different branches of the tree. The PCMBase package is limited to trait simulation and likelihood calculation of (mixed) Gaussian phylogenetic models. The PCMFit package provides functionality for fitting these models to tree and trait data. The package web-site <https://venelin.github.io/PCMBase/> provides access to the documentation and other resources.
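A hedged sketch of the framework, assuming the PCM(), PCMSim() and PCMLik() interfaces and default parameter values described in the package vignettes:

    library(PCMBase)
    library(ape)
    set.seed(2)
    tree  <- ape::rtree(20)                      # random phylogeny with 20 tips
    model <- PCM("BM", k = 2)                    # 2-trait Brownian motion model (default parameters)
    X_all <- PCMSim(tree, model, X0 = c(0, 0))   # simulate trait values at all nodes of the tree
    X     <- X_all[, seq_along(tree$tip.label)]  # keep the tip values only
    PCMLik(X, tree, model)                       # log-likelihood of the model for these data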
Generalized meta-analysis is a technique for estimating parameters associated with a multiple regression model through meta-analysis of studies which may have information only on partial sets of the regressors. It estimates the effects of each variable while fully adjusting for all other variables that are measured in at least one of the studies. Using algebraic relationships between regression parameters in different dimensions, a set of moment equations is specified for estimating the parameters of a maximal model through information available on sets of parameter estimates from a series of reduced models available from the different studies. The specification of the equations requires a reference dataset to estimate the joint distribution of the covariates. These equations are solved using the generalized method of moments approach, with the optimal weighting of the equations taking into account uncertainty associated with estimates of the parameters of the reduced models. The proposed framework is implemented using an iteratively reweighted least squares algorithm for fitting generalized linear regression models. For more details about the method, please see the pre-print version of the manuscript on generalized meta-analysis by Prosenjit Kundu, Runlong Tang and Nilanjan Chatterjee (2018) <doi:10.1093/biomet/asz030>. The current version (0.2.0) is updated to address some of the stability issues in the previous version (0.1).
Estimation of the left-most informative set of gross returns (i.e., the informative set). The procedure to compute the informative set adjusts the method proposed by Mariani et al. (2022a) <doi:10.1007/s11205-020-02440-6> and Mariani et al. (2022b) <doi:10.1007/s10287-022-00422-2> to gross returns of financial assets. This is accomplished through an adaptive algorithm that identifies sub-groups of gross returns in each iteration by approximating their distribution with a sequence of two-component log-normal mixtures. These sub-groups emerge when a significant change in the distribution occurs below the median of the financial returns, with their boundary termed the "change point" of the mixture. The process concludes when no further change points are detected. The outcome encompasses the parameters of the leftmost mixture distributions and the change points of the analyzed financial time series. The functionalities of the INFOSET package include: (i) modelling the asset distribution and detecting the parameters which describe the left tail behaviour (infoset function), (ii) clustering, (iii) labelling of the financial series for predictive and classification purposes through a Left Risk measure based on the first change point (LR_cp function), and (iv) portfolio construction (ptf_construction function). The package also provides a specific function to construct rolling windows of different lengths and overlaps.
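A purely illustrative sketch of the functions named above; their exact signatures are assumptions, and gross_returns stands for a hypothetical numeric vector of gross returns of the analysed assets:

    out  <- infoset(gross_returns)           # leftmost mixture parameters and change points (assumed signature)
    risk <- LR_cp(gross_returns)             # labelling via the Left Risk measure (assumed signature)
    ptf  <- ptf_construction(gross_returns)  # portfolio construction step (assumed signature)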
Extracts features from biological sequences. It contains most features presented in related work and also includes features which have never been introduced before. It extracts numerous features from nucleotide and peptide sequences. Each feature converts the input sequences to discrete numbers so that they can be used as predictors in machine learning models. A great deal of information is hidden inside a sequence; using the package, users can convert biological sequences to discrete models based on chosen properties. References: iLearn Z. Chen et al. (2019) <DOI:10.1093/bib/bbz041>. iFeature Z. Chen et al. (2018) <DOI:10.1093/bioinformatics/bty140>. <https://CRAN.R-project.org/package=rDNAse>. PseKRAAC Y. Zuo et al. PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition (2017) <DOI:10.1093/bioinformatics/btw564>. iDNA6mA-PseKNC P. Feng et al. iDNA6mA-PseKNC: Identifying DNA N6-methyladenosine sites by incorporating nucleotide physicochemical properties into PseKNC (2019) <DOI:10.1016/j.ygeno.2018.01.005>. I. Dubchak et al. Prediction of protein folding class using global description of amino acid sequence (1995) <DOI:10.1073/pnas.92.19.8700>. W. Chen et al. Identification and analysis of the N6-methyladenosine in the Saccharomyces cerevisiae transcriptome (2015) <DOI:10.1038/srep13859>.
This package provides functions to access and download data from the Open Case Studies <https://www.opencasestudies.org/> repositories on GitHub <https://github.com/opencasestudies>. Different functions enable users to grab the data they need at different sections in the case study, as well as download the whole case study repository. All the user needs to do is input the name of the case study being worked on. The package relies on the httr::GET() function to access files through the GitHub API. The functions usethis::use_zip() and usethis::create_from_github() are used to clone and/or download the case study repositories. To cite an individual case study, please see the respective README file at <https://github.com/opencasestudies/>. <https://github.com/opencasestudies/ocs-bp-rural-and-urban-obesity> <https://github.com/opencasestudies/ocs-bp-air-pollution> <https://github.com/opencasestudies/ocs-bp-vaping-case-study> <https://github.com/opencasestudies/ocs-bp-opioid-rural-urban> <https://github.com/opencasestudies/ocs-bp-RTC-wrangling> <https://github.com/opencasestudies/ocs-bp-RTC-analysis> <https://github.com/opencasestudies/ocs-bp-youth-disconnection> <https://github.com/opencasestudies/ocs-bp-youth-mental-health> <https://github.com/opencasestudies/ocs-bp-school-shootings-dashboard> <https://github.com/opencasestudies/ocs-bp-co2-emissions> <https://github.com/opencasestudies/ocs-bp-diet>.
Identification, model fitting and estimation for time series with periodic structure. Additionally, procedures for simulation of periodic processes and real data sets are included. Hurd, H. L., Miamee, A. G. (2007) <doi:10.1002/9780470182833> Box, G. E. P., Jenkins, G. M., Reinsel, G. (1994) <doi:10.1111/jtsa.12194> Brockwell, P. J., Davis, R. A. (1991, ISBN:978-1-4419-0319-8) Bretz, F., Hothorn, T., Westfall, P. (2010, ISBN:9780429139543) Westfall, P. H., Young, S. S. (1993, ISBN:978-0-471-55761-6) Bloomfield, P., Hurd, H. L., Lund, R. (1994) <doi:10.1111/j.1467-9892.1994.tb00181.x> Dehay, D., Hurd, H. L. (1994, ISBN:0-7803-1023-3) Vecchia, A. (1985) <doi:10.1080/00401706.1985.10488076> Vecchia, A. (1985) <doi:10.1111/j.1752-1688.1985.tb00167.x> Jones, R., Brelsford, W. (1967) <doi:10.1093/biomet/54.3-4.403> Makagon, A. (1999) <https://www.math.uni.wroc.pl/~pms/files/19.2/Article/19.2.5.pdf> Sakai, H. (1989) <doi:10.1111/j.1467-9892.1991.tb00069.x> Gladyshev, E. G. (1961) <https://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=dan&paperid=24851> Ansley (1979) <doi:10.1093/biomet/66.1.59> Hurd, H. L., Gerr, N. L. (1991) <doi:10.1111/j.1467-9892.1991.tb00088.x>.
This package provides a set of functions to implement decision-making systems based on the W.A.S.P.A.S. method (Weighted Aggregated Sum Product Assessment), Chakraborty and Zavadskas (2012) <doi:10.5755/j01.eee.122.6.1810>. The package offers functions that analyze and validate the raw data, which must be entered in a determined format; extract specific vectors and matrices from this raw database; normalize the input data; calculate rankings by the intermediate methods; apply the lambda parameter for the main method; and a function that does everything at once. The package includes an example database called choppers, which shows how the input data should be organized so that everything works as recommended by the multiple-criteria decision methods that this package implements. The data consist of a set of alternatives, which will be ranked; a set of choice criteria; a matrix of values for each alternative-criterion pair; a vector of weights associated with the criteria, since certain criteria are considered more important than others; and a vector that defines each criterion as cost or benefit. This last vector determines the calculation formula, as there are criteria for which we want the highest possible value (e.g. durability) and others for which we want the lowest possible value (e.g. price).
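An illustration of the input structure just described (this builds the raw objects only; it does not call the package's own functions):

    alternatives <- c("Chopper A", "Chopper B", "Chopper C")
    criteria     <- c("price", "durability", "range")
    values       <- matrix(c(90, 7, 300,
                             80, 9, 280,
                             95, 8, 320),
                           nrow = 3, byrow = TRUE,
                           dimnames = list(alternatives, criteria))
    weights <- c(0.5, 0.3, 0.2)                 # importance of each criterion
    flags   <- c("cost", "benefit", "benefit")  # lower price is better; higher durability and range are better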
Analysis of Q methodology, used to identify distinct perspectives existing within a group. This methodology is used across social, health and environmental sciences to understand diversity of attitudes, discourses, or decision-making styles (for more information, see <https://qmethod.org/>). A single function runs the full analysis. Each step can be run separately using the corresponding functions: for automatic flagging of Q-sorts (manual flagging is optional), for statement scores, for distinguishing and consensus statements, and for general characteristics of the factors. The package allows the user to choose either principal components or centroid factor extraction, manual or automatic flagging, a number of mathematical methods for rotation (or none), and a number of correlation coefficients for the initial correlation matrix, among many other options. Additional functions are available to import and export data (from raw *.CSV, HTMLQ and FlashQ *.CSV, PQMethod *.DAT and easy-htmlq *.JSON files), to print and plot, to import raw data from individual *.CSV files, and to make printable cards. The package also offers functions to print Q cards and to generate Q distributions for study administration. See further details in the package documentation, and in the web pages below, which include a cookbook, guidelines for more advanced analysis (how to perform manual flagging or change the sign of factors), data management, and a graphical user interface (GUI) for online and offline use.
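A hedged sketch of the single-function analysis, assuming the qmethod() call and the lipset example dataset shipped with the package:

    library(qmethod)
    data(lipset)
    results <- qmethod(lipset[[1]], nfactors = 3, rotation = "varimax")  # full Q analysis in one call
    summary(results)                                                     # factor scores and characteristics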
For measurement instruments consisting of polytomously scored items, it is assumed that the psychological distances between the categories are equal. According to Muraki, this assumption must be tested; in the examination of this assumption, fit indices are obtained and evaluated. This package removes the need for this assumption. With this package, the converted scale values of all items in a measurement instrument can be calculated by estimating a category parameter set for each item. Thus, the calculations can be made without any need to use a common category parameter set. Through this package, the psychological distances of the items are scaled. Scaling a category parameter set for each item differentiates the scores obtained from the categories of each item. The total measurement instrument score of an individual can also be calculated according to this scaling of the item score categories. This package thereby allows the position of individuals on the construct measured by an instrument consisting of polytomously scored items to be revealed more accurately. In this way, the results obtained about individuals can be made more sensitive, and the differences between individuals can be revealed more accurately. It can also be argued that more accurate evidence can be obtained regarding the psychometric properties of the measurement instruments.
Allows computing and visualising convective parameters commonly used in the operational prediction of severe convective storms. The core algorithm is based on highly optimized C++ code linked into R via 'Rcpp'. The highly efficient engine allows deriving thermodynamic and kinematic parameters from large numerical datasets such as reanalyses or operational Numerical Weather Prediction models in a reasonable amount of time. The package has been developed since 2017 by research meteorologists specializing in severe thunderstorms. The most relevant methods used in the package are based on the following publications: Stipanuk (1973) <https://apps.dtic.mil/sti/pdfs/AD0769739.pdf>, McCann et al. (1994) <doi:10.1175/1520-0434(1994)009%3C0532:WNIFFM%3E2.0.CO;2>, Bunkers et al. (2000) <doi:10.1175/1520-0434(2000)015%3C0061:PSMUAN%3E2.0.CO;2>, Corfidi et al. (2003) <doi:10.1175/1520-0434(2003)018%3C0997:CPAMPF%3E2.0.CO;2>, Showalter (1953) <doi:10.1175/1520-0477-34.6.250>, Coffer et al. (2019) <doi:10.1175/WAF-D-19-0115.1>, Gropp and Davenport (2019) <doi:10.1175/WAF-D-17-0150.1>, Czernecki et al. (2019) <doi:10.1016/j.atmosres.2019.05.010>, Taszarek et al. (2020) <doi:10.1175/JCLI-D-20-0346.1>, Sherburn and Parker (2014) <doi:10.1175/WAF-D-13-00041.1>, Romanic et al. (2022) <doi:10.1016/j.wace.2022.100474>.
Framework to facilitate patient subtyping with similarity network fusion and meta clustering. The similarity network fusion (SNF) algorithm was introduced by Wang et al. (2014) in <doi:10.1038/nmeth.2810>. SNF is a data integration approach that can transform high-dimensional and diverse data types into a single similarity network suitable for clustering with minimal loss of information from each initial data source. The meta clustering approach was introduced by Caruana et al. (2006) in <doi:10.1109/ICDM.2006.103>. Meta clustering involves generating a wide range of cluster solutions by adjusting clustering hyperparameters, then clustering the solutions themselves into a manageable number of qualitatively similar solutions, and finally characterizing representative solutions to find ones that are best for the user's specific context. This package provides a framework to easily transform multi-modal data into a wide range of similarity network fusion-derived cluster solutions as well as to visualize, characterize, and validate those solutions. Core package functionality includes easy customization of distance metrics, clustering algorithms, and SNF hyperparameters to generate diverse clustering solutions; calculation and plotting of associations between features, between patients, and between cluster solutions; and standard cluster validation approaches including resampled measures of cluster stability, standard metrics of cluster quality, and label propagation to evaluate generalizability in unseen data. Associated vignettes guide the user through using the package to identify patient subtypes while adhering to best practices for unsupervised learning.
SCANVIS is a set of annotation-dependent tools for analyzing splice junctions and their read support as predetermined by an alignment tool of choice (for example, the STAR aligner). SCANVIS assesses each junction's relative read support (RRS) by relating it to the context of local split reads aligning to annotated transcripts. SCANVIS also annotates each splice junction by indicating whether the junction is supported by annotation or not, and if not, what type of junction it is (e.g. exon skipping, alternative 5' or 3' events, novel exons). Unannotated junctions are further annotated by indicating whether they induce a frame shift or not. SCANVIS includes a visualization function to generate static sashimi-style plots depicting relative read support and the number of split reads using arc thickness and arc heights, making it easy for users to spot well-supported junctions. These plots also clearly delineate unannotated junctions from annotated ones using designated color schemes, and users can also highlight splice junctions of choice. Variants and/or a read profile are also incorporated into the plot if the user supplies variants in bed format and/or the BAM file. One further feature of the visualization function is that users can submit multiple samples of a certain disease or cohort to generate a single plot - this occurs via a "merge" function wherein junction details over multiple samples are merged to generate a single sashimi plot, which is useful when contrasting cohorts (e.g. disease vs control).
To date, thousands of single nucleotide polymorphisms (SNPs) have been found to be associated with complex traits and diseases. However, the vast majority of these disease-associated SNPs lie in the non-coding part of the genome, and are likely to affect regulatory elements, such as enhancers and promoters, rather than function of a protein. Thus, to understand the molecular mechanisms underlying genetic traits and diseases, it becomes increasingly important to study the effect of a SNP on nearby molecular traits such as chromatin environment or transcription factor (TF) binding. Towards this aim, we developed SNPhood, a user-friendly Bioconductor R package to investigate and visualize the local neighborhood of a set of SNPs of interest for NGS data such as chromatin marks or transcription factor binding sites from ChIP-Seq or RNA-Seq experiments. SNPhood comprises a set of easy-to-use functions to extract, normalize and summarize reads for a genomic region, perform various data quality checks, normalize read counts using additional input files, and to cluster and visualize the regions according to the binding pattern. The regions around each SNP can be binned in a user-defined fashion to allow for analysis of very broad patterns as well as a detailed investigation of specific binding shapes. Furthermore, SNPhood supports the integration with genotype information to investigate and visualize genotype-specific binding patterns. Finally, SNPhood can be employed for determining, investigating, and visualizing allele-specific binding patterns around the SNPs of interest.
This package provides a user-friendly tool to fit Bayesian regression models. It can fit three types of Bayesian models using individual-level, summary-level, and individual plus pedigree-level (single-step) data for both Genomic prediction/selection (GS) and Genome-Wide Association Studies (GWAS). It was designed to estimate joint effects and genetic parameters for a complex trait, including: (1) fixed effects and coefficients of covariates, (2) environmental random effects and their corresponding variance, (3) genetic variance, (4) residual variance, (5) heritability, (6) genomic estimated breeding values (GEBV) for both genotyped and non-genotyped individuals, (7) SNP effect size, (8) phenotypic/genetic variance explained (PVE) for single or multiple SNPs, (9) posterior probability of association of the genomic window (WPPA), and (10) posterior inclusion probability (PIP). The functionality is not limited to the above; we will keep enriching the package with more features. References: Lilin Yin et al. (2025) <doi:10.18637/jss.v114.i06>; Meuwissen et al. (2001) <doi:10.1093/genetics/157.4.1819>; Gustavo et al. (2013) <doi:10.1534/genetics.112.143313>; Habier et al. (2011) <doi:10.1186/1471-2105-12-186>; Yi et al. (2008) <doi:10.1534/genetics.107.085589>; Zhou et al. (2013) <doi:10.1371/journal.pgen.1003264>; Moser et al. (2015) <doi:10.1371/journal.pgen.1004969>; Lloyd-Jones et al. (2019) <doi:10.1038/s41467-019-12653-0>; Henderson (1976) <doi:10.2307/2529339>; Fernando et al. (2014) <doi:10.1186/1297-9686-46-50>.
This package provides tools for transport planning with an emphasis on spatial transport data and non-motorized modes. The package was originally developed to support the 'Propensity to Cycle Tool', a publicly available strategic cycle network planning tool (Lovelace et al. 2017) <doi:10.5198/jtlu.2016.862>, but has since been extended to support public transport routing and accessibility analysis (Moreno-Monroy et al. 2017) <doi:10.1016/j.jtrangeo.2017.08.012> and routing with locally hosted routing engines such as OSRM (Lowans et al. 2023) <doi:10.1016/j.enconman.2023.117337>. The main functions are for creating and manipulating geographic "desire lines" from origin-destination (OD) data (building on the od package); calculating routes on the transport network locally and via interfaces to routing services such as <https://cyclestreets.net/> (Desjardins et al. 2021) <doi:10.1007/s11116-021-10197-1>; and calculating route segment attributes such as bearing. The package implements the travel flow aggregation method described in Morgan and Lovelace (2020) <doi:10.1177/2399808320942779> and the OD jittering method described in Lovelace et al. (2022) <doi:10.32866/001c.33873>. Further information on the package's aim and scope can be found in the vignettes and in a paper in the R Journal (Lovelace and Ellison 2018) <doi:10.32614/RJ-2018-053>, and in a paper outlining the landscape of open source software for geographic methods in transport planning (Lovelace, 2021) <doi:10.1007/s10109-020-00342-2>.
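A hedged sketch of building desire lines from OD data, assuming the od2line() interface and the flow and cents_sf example datasets shipped with the package:

    library(stplanr)
    desire_lines <- od2line(flow = flow, zones = cents_sf)  # OD pairs -> geographic desire lines
    plot(desire_lines["All"])                               # total flow on each desire line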
Hospitals, hospital systems, and even trauma systems that provide care to injured patients may not be aware of robust metrics that can help gauge the efficacy of their programs in saving the lives of injured patients. 'traumar' provides robust functions, driven by the academic literature, to automate the calculation of relevant metrics for individuals who wish to measure the performance of their trauma center or even a trauma system. 'traumar' also provides some helper functions for the data analysis journey. Users can refer to the following publications for descriptions of the methods used in 'traumar'. TRISS methodology, including probability of survival, and the W, M, and Z scores - Flora (1978) <doi:10.1097/00005373-197810000-00003>, Boyd et al. (1987, PMID:3106646), Llullaku et al. (2009) <doi:10.1186/1749-7922-4-2>, Singh et al. (2011) <doi:10.4103/0974-2700.86626>, Baker et al. (1974, PMID:4814394), and Champion et al. (1989) <doi:10.1097/00005373-198905000-00017>. For the Relative Mortality Metric, see Napoli et al. (2017) <doi:10.1080/24725579.2017.1325948>, Schroeder et al. (2019) <doi:10.1080/10903127.2018.1489021>, and Kassar et al. (2016) <doi:10.1177/00031348221093563>. For more information about methods to calculate over- and under-triage in trauma hospital populations and samples, please see the following publications - Peng & Xiang (2016) <doi:10.1016/j.ajem.2016.08.061>, Beam et al. (2022) <doi:10.23937/2474-3674/1510136>, Roden-Foreman et al. (2017) <doi:10.1097/JTN.0000000000000283>.
This package provides tools for exploring the topography of 3d triangle meshes. The functions were developed with dental surfaces in mind, but could be applied to any triangle mesh of class 'mesh3d'. More specifically, doolkit allows the user to isolate the border of a mesh, or a subpart of the mesh using the polygon networks method; crop a mesh; compute basic descriptors (elevation, orientation, footprint area); compute slope, angularity and relief index (Ungar and Williamson (2000) <https://palaeo-electronica.org/2000_1/gorilla/issue1_00.htm>; Boyer (2008) <doi:10.1016/j.jhevol.2008.08.002>), inclination and occlusal relief index or gamma (Guy et al. (2013) <doi:10.1371/journal.pone.0066142>), OPC (Evans et al. (2007) <doi:10.1038/nature05433>), OPCR (Wilson et al. (2012) <doi:10.1038/nature10880>), DNE (Bunn et al. (2011) <doi:10.1002/ajpa.21489>; Pampush et al. (2016) <doi:10.1007/s10914-016-9326-0>), form factor (Horton (1932) <doi:10.1029/TR013i001p00350>), basin elongation (Schum (1956) <doi:10.1130/0016-7606(1956)67[597:EODSAS]2.0.CO;2>), lemniscate ratio (Chorley et al. (1957) <doi:10.2475/ajs.255.2.138>), enamel-dentine distance (Guy et al. (2015) <doi:10.1371/journal.pone.0138802>; Thiery et al. (2017) <doi:10.3389/fphys.2017.00524>), absolute crown strength (Schwartz et al. (2020) <doi:10.1098/rsbl.2019.0671>), relief rate (Thiery et al. (2019) <doi:10.1002/ajpa.23916>) and area-relative curvature; draw cumulative profiles of a topographic variable; and map a variable over a 3d triangle mesh.
The R package bayespm implements Bayesian Statistical Process Control and Monitoring (SPC/M) methodology. These methods utilize available prior information and/or historical data, providing efficient online quality monitoring of a process, in terms of identifying moderate/large transient shifts (i.e., outliers) or persistent shifts of medium/small size in the process. These self-starting, sequentially updated tools can also run under complete absence of any prior information. The Predictive Control Charts (PCC) are introduced for the quality monitoring of data from any discrete or continuous distribution that is a member of the regular exponential family. The Predictive Ratio CUSUMs (PRC) are introduced for Binomial, Poisson and Normal data (a later version of the library will cover all the remaining distributions from the regular exponential family). The PCC targets transient process shifts of typically large size (a.k.a. outliers), while PRC focuses on detecting persistent (structural) shifts that might be of medium or even small size. Apart from monitoring, both PCC and PRC provide the sequentially updated posterior inference for the monitored parameter. Bourazas K., Kiagias D. and Tsiamyrtzis P. (2022) "Predictive Control Charts (PCC): A Bayesian approach in online monitoring of short runs" <doi:10.1080/00224065.2021.1916413>; Bourazas K., Sobas F. and Tsiamyrtzis P. (2023) "Predictive ratio CUSUM (PRC): A Bayesian approach in online change point detection of short runs" <doi:10.1080/00224065.2022.2161434>; Bourazas K., Sobas F. and Tsiamyrtzis P. (2023) "Design and properties of the predictive ratio cusum (PRC) control charts" <doi:10.1080/00224065.2022.2161435>.