The Model Disability Survey (MDS) <https://www.who.int/activities/collection-of-data-on-disability> is a World Health Organization (WHO) general population survey instrument to assess the distribution of disability within a country or region, grounded in the International Classification of Functioning, Disability and Health <https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health>. This package provides fit-for-purpose functions for calculating and presenting the results from this survey, as used by the WHO. The package primarily provides functions for implementing Rasch Analysis (see Andrich (2011) <doi:10.1586/erp.11.59>) to calculate a metric scale for disability.
An implementation of z-curves, a method for estimating expected discovery and replicability rates on the basis of the test statistics of published studies. The package provides functions for fitting the density, EM, and censored EM versions (Bartoš & Schimmack, 2022, <doi:10.15626/MP.2021.2720>; Schimmack & Bartoš, 2023, <doi:10.1371/journal.pone.0290084>), as well as the original density z-curve (Brunner & Schimmack, 2020, <doi:10.15626/MP.2018.874>). Furthermore, the package provides summary and plotting functions for the fitted z-curve objects. See the cited articles for more information about z-curves, expected discovery and replicability rates, validation studies, and limitations.
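A minimal sketch of such a fit, assuming the fitting function is `zcurve()` with a `method` argument and standard `summary()`/`plot()` methods (check the package documentation for the exact interface):

```r
library(zcurve)

set.seed(1)
z <- abs(rnorm(300, mean = 2.5))   # toy vector of z-statistics from published tests
fit <- zcurve(z, method = "EM")    # EM version; "density" selects the density z-curve
summary(fit)                       # expected discovery and replicability rates
plot(fit)
```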
This package implements `import()` and `export()` standard generics for importing and exporting biological data formats. `import()` supports whole-file as well as chunk-wise iterative import. The `import()` interface optionally provides a standard mechanism for 'lazy' access via `filter()` (on row or element-like components of the file resource), `select()` (on column-like components of the file resource) and `collect()`. The `import()` interface also optionally provides transparent access to remote (e.g., via https) as well as local resources. Developers can register a file extension, e.g., `.loom`, for dispatch from character-based URIs to specific `import()` / `export()` methods based on classes representing file types, e.g., `LoomFile()`.
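A sketch of the developer-facing pattern; the `CSVFile` class and constructor below are illustrative, and `BiocFile` plus the `resource()` accessor are assumed to be the base class and slot accessor provided by the package:

```r
library(BiocIO)

# Define a class representing ".csv" resources and a small constructor
setClass("CSVFile", contains = "BiocFile")
CSVFile <- function(resource) new("CSVFile", resource = resource)

# Register an import() method dispatching on that file-type class
setMethod("import", "CSVFile", function(con, format, text, ...) {
  read.csv(resource(con), ...)
})

# A character path wrapped in the file-type class can then be imported
df <- import(CSVFile("example.csv"))
```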
This package provides a programmatic interface to the Finnish Biodiversity Information Facility ('FinBIF') API (<https://api.laji.fi>). FinBIF aggregates Finnish biodiversity data from multiple sources in a single open access portal for researchers, citizen scientists, industry and government. FinBIF allows users of biodiversity information to find, access, combine and visualise data on Finnish plants, animals and microorganisms. The finbif package makes the publicly available data in FinBIF easily accessible to programmers. Biodiversity information is available on taxonomy and taxon occurrence. Occurrence data can be filtered by taxon, time, location and other variables. The data accessed are conveniently preformatted for subsequent analyses.
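For example, a basic occurrence query might look like the following sketch (assuming an API access token has been configured as described in the finbif documentation; the species name is illustrative):

```r
library(finbif)

# Fetch up to 100 occurrence records for a taxon; additional constraints
# (time, location, and other variables) can be supplied via the 'filter' argument
swans <- finbif_occurrence("Cygnus cygnus", n = 100)
head(swans)
```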
Estimation of generalized linear models with correlated/clustered observations by use of generalized estimating equations (GEE). See, e.g., Halekoh and Højsgaard (2005, <doi:10.18637/jss.v015.i02>) for details. Several types of clustering are supported, including exchangeable, AR1, M-dependent, and user-specified variance structures, among others. The model fitting computations are performed using modified code from the geeM package, while the interface and output objects have been written to resemble the geepack package. The package also contains additional tools for working with and inspecting results from the geepack package, e.g., a confint method for geeglm objects from geepack.
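A minimal sketch of the intended workflow, using geepack's `geeglm()` and its `dietox` example data; the `confint` method described above is assumed to become available once this package is loaded alongside geepack:

```r
library(geepack)

data(dietox, package = "geepack")
fit <- geeglm(Weight ~ Time, id = Pig, data = dietox,
              family = gaussian, corstr = "exchangeable")
summary(fit)
confint(fit)   # confidence intervals for the GEE coefficient estimates
```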
This package provides functions to prepare, visualize, and analyse diachronic network data on local political actors, with a particular focus on the development of local party systems and identification of actor groups. Formalizes and automates a continuity diagram method that has been previously applied in research on Czech local politics, e.g. Bubenicek and Kubalek (2010, ISSN:1803-8220), Kubalek and Bubenicek (2012, ISSN:1803-8220), and Cmejrek, Bubenicek, and Copik (2010, ISBN:978-80-247-3061-5). The package also includes several example datasets derived from Czech municipal elections, compiled from official election results, field research, and previously published case studies on Czech local politics.
This package provides a suite of tools for estimating, assessing model fit, simulating from, and visualizing location-dependent marked point processes characterized by regularity in the pattern. You provide a reference marked point process and a set of raster images containing location-specific covariates, and select the estimation algorithm and type of mark model. ldmppr estimates the process and mark models and allows you to check the appropriateness of the model using a variety of diagnostic tools. Once a satisfactory model fit is obtained, you can simulate from the model and visualize the results. Documentation for the package ldmppr is available in the form of a vignette.
The Moving Epidemic Method, created by T Vega and JE Lozano (2012, 2015) <doi:10.1111/j.1750-2659.2012.00422.x>, <doi:10.1111/irv.12330>, allows the weekly assessment of the epidemic and intensity status to help in routine respiratory infection surveillance in health systems. It allows the comparison of different epidemic indicators, timing, and shape with past epidemics and across different regions or countries with different surveillance systems, and it gives a measure of the performance of the method in terms of the sensitivity and specificity of the alert week. memapp is a web application created in the Shiny framework for the mem R package.
This package provides a toolbox for working with public opinion data from Argentina. It facilitates access to microdata and the calculation of indicators of the Trust in Government Index (Índice de Confianza en el Gobierno, ICG), prepared by Torcuato Di Tella University. Although the authors aim to document as much as possible in English, by its very nature Spanish is the package's main language.
The main attribute of PopVar is the prediction of genetic variance in bi-parental populations, from which the package derives its name. PopVar contains a set of functions that use phenotypic and genotypic data from a set of candidate parents to 1) predict the mean, genetic variance, and superior progeny value of all pairwise bi-parental crosses (or a defined subset of them), and 2) perform cross-validation to estimate genome-wide prediction accuracy of multiple statistical models. More details are available in Mohammadi, Tiede, and Smith (2015, <doi:10.2135/cropsci2015.01.0030>). A dataset think_barley.rda is included for reference and examples.
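A rough sketch of a call, assuming the main entry point is `pop.predict()` with genotype (`G.in`), phenotype (`y.in`), and genetic map (`map.in`) inputs as documented in the package manual; the objects below are placeholders for user-supplied data formatted per that manual (e.g., the included think_barley dataset):

```r
library(PopVar)

# 'geno', 'pheno', 'map', and 'crosses' are placeholders for objects
# formatted as described in the pop.predict() documentation
out <- pop.predict(G.in = geno, y.in = pheno, map.in = map,
                   crossing.table = crosses)
str(out)   # predicted means, genetic variances, and superior progeny values per cross
```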
The purpose of this package is to provide a lightweight and unified Future API for sequential and parallel processing of R expressions via futures. This package implements sequential, multicore, multisession, and cluster futures. With these, R expressions can be evaluated on the local machine, in parallel on a set of local machines, or distributed across a mix of local and remote machines. Extensions to this package implement additional backends for processing futures via compute cluster schedulers, etc. Because of its unified API, there is no need to modify any code in order to switch from sequential processing on the local machine to, say, distributed processing on a remote compute cluster.
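A minimal sketch of the unified API (the computation inside the future is a stand-in):

```r
library(future)

plan(multisession)          # parallel evaluation in background R sessions

f <- future({
  Sys.sleep(1)              # stand-in for an expensive computation
  mean(rnorm(1e6))
})
value(f)                    # blocks until the result is available

plan(sequential)            # switch strategy without changing any other code
```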
This package enables the detection of driver chromosomal imbalances, including loss of heterozygosity (LOH), from array comparative genomic hybridization (aCGH) data. VegaMC performs a joint segmentation of a dataset and uses a statistical framework to distinguish between driver and passenger mutations. VegaMC has been implemented so that it can be immediately integrated with the output produced by the PennCNV tool. In addition, VegaMC produces as output two web pages that allow rapid navigation between the detected regions and the altered genes. The web page that summarizes the altered genes links each gene to its respective Ensembl gene page.
Diagnostics for linear L1 regression (also known as LAD, Least Absolute Deviations), including estimation, confidence intervals, tests of hypotheses, measures of leverage, diagnostic methods for L1 regression, and special diagnostic graphs. The algorithms are based on Dielman (2005) <doi:10.1080/0094965042000223680>, Elian et al. (2000) <doi:10.1080/03610920008832518> and Dodge (1997) <doi:10.1006/jmva.1997.1666>. This package builds on the quantreg package, a well-established package for fitting quantile regression models. There are also tests to verify whether the errors have a Laplace distribution, based on the work of Puig and Stephens (2000) <doi:10.2307/1270952>.
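As a sketch of the underlying L1 (LAD) fit on which these diagnostics build, using quantreg directly with the built-in stackloss data:

```r
library(quantreg)

# L1 (LAD) regression corresponds to the 0.5 quantile
fit <- rq(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc.,
          tau = 0.5, data = stackloss)
summary(fit)   # coefficient estimates with confidence intervals
```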
This package performs emulation of dynamic simulators using Gaussian processes via a one-step-ahead approach. The package implements a flexible framework for approximating time-dependent outputs from computationally expensive dynamic systems. It is specifically designed for nonlinear dynamic systems where full simulations may be costly. The underlying Gaussian process model accounts for temporal dependency through the one-step-ahead formulation, allowing for accurate emulation of complex dynamics. Hyperparameters are estimated via maximum likelihood. For methodological details, see Heo (2025, <doi:10.48550/arXiv.2503.20250>) for the exact method, and Mohammadi, Challenor, and Goodfellow (2019, <doi:10.1016/j.csda.2019.05.006>) for the Monte Carlo method.
All datasets and functions required for the examples and exercises of the book "Data Science for Psychologists" (by Hansjoerg Neth, Konstanz University, 2025, <doi:10.5281/zenodo.7229812>), freely available at <https://bookdown.org/hneth/ds4psy/>. The book and corresponding courses introduce principles and methods of data science to students of psychology and other biological or social sciences. The ds4psy package primarily provides datasets, but also functions for data generation and manipulation (e.g., of text and time data) and graphics that are used in the book and its exercises. All functions included in ds4psy are designed to be explicit and instructive, rather than efficient or elegant.
Efficient algorithms for fitting the regularization path of linear regression, GLM, and Cox regression models with grouped penalties. This includes group selection methods such as group lasso, group MCP, and group SCAD as well as bi-level selection methods such as the group exponential lasso, the composite MCP, and the group bridge. For more information, see Breheny and Huang (2009) <doi:10.4310/sii.2009.v2.n3.a10>, Huang, Breheny, and Ma (2012) <doi:10.1214/12-sts392>, Breheny and Huang (2015) <doi:10.1007/s11222-013-9424-2>, and Breheny (2015) <doi:10.1111/biom.12300>, or visit the package homepage <https://pbreheny.github.io/grpreg/>.
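A minimal sketch using the Birthwt example data shipped with grpreg:

```r
library(grpreg)

data(Birthwt)
X <- Birthwt$X          # design matrix with grouped columns
y <- Birthwt$bwt        # continuous outcome (birth weight)
group <- Birthwt$group  # group membership of each column of X

fit <- grpreg(X, y, group, penalty = "grLasso")   # group lasso regularization path
plot(fit)
cvfit <- cv.grpreg(X, y, group, penalty = "grLasso")
coef(cvfit)             # coefficients at the lambda selected by cross-validation
```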
The social network literature features numerous methods for assigning value to paths as a function of their ties. gretel systematizes these approaches, casting them as instances of a generalized path value function indexed by a penalty parameter. The package also calculates probabilistic path value and identifies optimal paths in either value framework. Finally, proximity matrices that capture high-order connections overlooked in primitive adjacency sociomatrices can be generated in either framework. Novel methods are described in Buch (2019) <https://davidbuch.github.io/analyzing-networks-with-gretel.html>. More traditional methods are also implemented, as described in Yang and Knoke (2001) <doi:10.1016/S0378-8733(01)00043-0>.
Displays processing results of the GWR (Geographically Weighted Regression) method, displays maps, and shows the results of the Mixed GWR (Mixed Geographically Weighted Regression) model, which automatically selects global variables based on the variability between regions. The implementation follows Yasin and Purhadi (2012), "Mixed Geographically Weighted Regression Model (Case Study the Percentage of Poor Households in Mojokerto 2008)", European Journal of Scientific Research, 188-196, <https://www.researchgate.net/profile/Hasbi-Yasin-2/publication/289689583_Mixed_geographically_weighted_regression_model_case_study_The_percentage_of_poor_households_in_Mojokerto_2008/links/58e46aa40f7e9bbe9c94d641/Mixed-geographically-weighted-regression-model-case-study-The-percentage-of-poor-households-in-Mojokerto-2008.pdf>.
N>=3 methods are used to measure each of n items. The data are used to estimate simultaneously systematic error (bias) and random error (imprecision). Observed measurements for each method or device are assumed to be linear functions of the unknown true values and the errors are assumed normally distributed. Pairwise calibration curves and plots can be easily generated. Unlike the ncb.od function, the omx function builds a one-factor measurement error model using OpenMx and allows missing values, uses full information maximum likelihood to estimate parameters, and provides both likelihood-based and bootstrapped confidence intervals for all parameters, in addition to Wald-type intervals.
Compute bending energies, principal warps, partial warp scores, and the non-affine component of shape variation for 2D landmark configurations, as well as Mardia-Dryden distributions and self-similar distributions of landmarks, as described in Mitteroecker et al. (2020) <doi:10.1093/sysbio/syaa007>. Working examples are provided for decomposing shape variation into small-scale and large-scale components, and for decomposing total shape variation into outline and residual shape components. Two landmark datasets are provided that quantify skull morphology in humans and papionin primates, from Mitteroecker et al. (2020) <doi:10.5061/dryad.j6q573n8s> and Grunstra et al. (2020) <doi:10.5061/dryad.zkh189373>, respectively.
This package provides functions to estimate the size-controlled phenotypic integration index, a novel method by Torices & Méndez (2014) <doi:10.1086/676622> to solve problems due to individual size when estimating integration (namely, larger individuals have larger components, which will drive a correlation between components only due to resource availability that might obscure the observed measures of integration). In addition, the package also provides the classical estimation by Wagner (1984) <doi:10.1007/BF00275224>, bootstrapping and jackknife methods to calculate confidence intervals and a significance test for both integration indices. Further details can be found in Torices & Muñoz-Pajares <doi:10.3732/apps.1400104>.
In clinical trials, endpoints are sometimes evaluated with uncertainty. Adjudication is commonly adopted to ensure study integrity. We propose to use multiple imputation (MI), introduced by Rubin (1987) <doi:10.1002/9780470316696>, to incorporate these uncertainties when reasonable event probabilities are provided. In this package, the method has been applied to the Cox proportional hazards (PH) model, Kaplan-Meier (KM) estimation, and the log-rank test. Moreover, the weighted estimations discussed in Cook (2004) <doi:10.1016/S0197-2456(00)00053-2> are also implemented, with weights calculated from the event probabilities. In conclusion, this package can handle time-to-event analyses in which events are recorded with uncertainty, using several different methods.
cosmiq is a tool for the preprocessing of liquid- or gas-chromatography mass spectrometry (LCMS/GCMS) data with a focus on metabolomics or lipidomics applications. To improve the detection of low-abundance signals, cosmiq generates master maps of the m/z versus retention time (RT) space from all acquired runs before a peak detection algorithm is applied. The result is a more robust identification and quantification of low-intensity MS signals compared to conventional approaches where peak picking is performed in each LCMS/GCMS file separately. The cosmiq package builds on the xcmsSet object structure and can therefore be integrated well with the xcms package as an alternative preprocessing step.
Computerized tomography (CT) can be used to assess certain wood properties when wood disks or logs are scanned. Wood density profiles (i.e., variations of wood density from pith to bark) can yield important information for studies in forest resource assessment, wood quality, and dendrochronology. The first step consists of transforming grey values from the scan images into density values. The package then proposes a unique method to automatically locate the pith by combining an adapted Hough transform with a one-dimensional edge detector. Tree-ring profiles (average ring density, earlywood and latewood density, ring width, and percent latewood for each ring) are then obtained.