Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
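For example, here is a minimal Python sketch of a client for this endpoint (the base URL, the JSON response shape, and the exact pagination header names are assumptions, not part of the documented API):

    import requests

    # NOTE: placeholder assumption; replace with the host that serves this API.
    BASE_URL = "https://example.org"

    def search_packages(query, page=1, limit=20):
        """Query the package search endpoint and return the parsed body
        together with the response headers (which carry pagination info)."""
        response = requests.get(
            f"{BASE_URL}/api/packages",
            params={"search": query, "page": page, "limit": limit},
            timeout=10,
        )
        response.raise_for_status()
        # Pagination details (e.g. the number of pages) live in the response
        # headers; the exact header names depend on the server, so return
        # them all and let the caller inspect them.
        return response.json(), dict(response.headers)

    body, headers = search_packages("gcc@10")
    print(headers)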
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Conduct random forests-based meta-analysis, obtain partial dependence plots for metaforest and classic meta-analyses, and cross-validate and tune metaforest- and classic meta-analyses in conjunction with the caret package. A requirement of classic meta-analysis is that the studies being aggregated are conceptually similar, and ideally, close replications. However, in many fields, there is substantial heterogeneity between studies on the same topic. Classic meta-analysis lacks the power to assess more than a handful of univariate moderators. MetaForest, by contrast, has substantial power to explore heterogeneity in meta-analysis. It can identify important moderators from a larger set of potential candidates (Van Lissa, 2020). This is an appealing quality, because many meta-analyses have small sample sizes. Moreover, MetaForest yields a measure of variable importance which can be used to identify important moderators, and offers partial prediction plots to explore the shape of the marginal relationship between moderators and effect size.
Causal moderated mediation analysis using the methods proposed by Qin and Wang (2023) <doi:10.3758/s13428-023-02095-4>. Causal moderated mediation analysis is crucial for investigating how, for whom, and where a treatment is effective by assessing the heterogeneity of the mediation mechanism across individuals and contexts. This package enables researchers to estimate and test conditional and moderated mediation effects, assess their sensitivity to unmeasured pre-treatment confounding, and visualize the results. The package is built on the quasi-Bayesian Monte Carlo method because it performs relatively well with small samples and is the fastest to run. The package is applicable to a treatment of any scale, a binary or continuous mediator, a binary or continuous outcome, and one or more moderators of any scale.
This package contains the data sets for the first and second editions of the textbook "Mathematical Modeling and Applied Calculus" by Joel Kilty and Alex M. McAllister. The first edition of the book was published by Oxford University Press in 2018 with ISBN-13: 978-019882472. The second edition is expected to be published in January 2027.
This package produces a clean and neat Markdown log file and also provides an argument to include the function call inside the Markdown log.
Generates Muller plots from parental/genealogy/phylogeny information and population/abundance/frequency dynamics data. Muller plots combine information about the succession of different OTUs (genotypes, phenotypes, species, ...) with information about the dynamics of their abundances (populations or frequencies) over time. They are powerful and fascinating tools to visualize evolutionary dynamics. They may also be employed in the study of diversity and its dynamics, i.e., how diversity emerges and how it changes over time. They are called Muller plots in honor of Hermann Joseph Muller, who used them to explain his idea of Muller's ratchet (Muller, 1932, American Naturalist). A big difference between Muller plots and normal box plots of abundances is that a Muller plot depicts not only the relative abundances but also the succession of OTUs based on their genealogy/phylogeny/parental relation. In a Muller plot, the horizontal axis is time/generations and the vertical axis represents the relative abundances of OTUs at the corresponding times/generations. Different OTUs are usually shown as polygons of different colors, and each OTU originates somewhere in the middle of its parent's area in order to illustrate their succession in the evolutionary process. To generate a Muller plot one needs the genealogy/phylogeny/parental relation of OTUs and their abundances over time. The MullerPlot package provides the tools to generate Muller plots which clearly depict the origin of successors of OTUs.
Calculate a multivariate functional principal component analysis for data observed on different dimensional domains. The estimation algorithm relies on univariate basis expansions for each element of the multivariate functional data (Happ & Greven, 2018) <doi:10.1080/01621459.2016.1273115>. Multivariate and univariate functional data objects are represented by S4 classes for this type of data implemented in the package 'funData'. For more details on the general concepts of both packages and a case study, see Happ-Kurz (2020) <doi:10.18637/jss.v093.i05>.
Estimation of multivariate normal (MVN) and student-t data of arbitrary dimension where the pattern of missing data is monotone. See Pantaleo and Gramacy (2010) <doi:10.48550/arXiv.0907.2135>. Through the use of parsimonious/shrinkage regressions (plsr, pcr, lasso, ridge, etc.), where standard regressions fail, the package can handle a nearly arbitrary amount of missing data. The current version supports maximum likelihood inference and a full Bayesian approach employing scale-mixtures for Gibbs sampling. Monotone data augmentation extends this Bayesian approach to arbitrary missingness patterns. A fully functional standalone interface to the Bayesian lasso (from Park & Casella), Normal-Gamma (from Griffin & Brown), Horseshoe (from Carvalho, Polson, & Scott), and ridge regression with model selection via Reversible Jump, and student-t errors (from Geweke) is also provided.
Extract, transform and load MITRE standards. This package gives you an approach to cybersecurity data sets. All data sets are built at runtime by downloading raw data from MITRE public services. MITRE <https://www.mitre.org/> is a government-funded research organization based in Bedford and McLean. The current version includes the most-used standards as data frames. It also provides a list of nodes and edges with all relationships.
Fused lasso method to cluster and estimate regression coefficients of the same covariate across different data sets when a large number of independent data sets are combined. Package supports Gaussian, binomial, Poisson and Cox PH models.
Measures niche breadth and overlap of microbial taxa from large matrices. Niche breadth measurements include Levins niche breadth (Bn) index, Hurlbert's Bn and Feinsinger's proportional similarity (PS) index. (Feinsinger, P., Spears, E.E., Poole, R.W. (1981) <doi:10.2307/1936664>). Niche overlap measurements include Levin's Overlap (Ludwig, J.A. and Reynolds, J.F. (1988, ISBN:0471832359)) and a Jaccard similarity index of Feinsinger's PS values between taxa pairs, as Proportional Overlap.
Maximum likelihood estimation for generalized linear mixed models via Monte Carlo EM. For a description of the algorithm see Brian S. Caffo, Wolfgang Jank and Galin L. Jones (2005) <DOI:10.1111/j.1467-9868.2005.00499.x>.
An interface to the Microsoft 365 (formerly known as 'Office 365') suite of cloud services, building on the framework supplied by the AzureGraph package. Enables access from R to data stored in 'Teams', SharePoint Online and 'OneDrive', including the ability to list drive folder contents, upload and download files, send messages, and retrieve data lists. Also provides a full-featured Outlook email client, with the ability to send emails and manage emails and mail folders.
Multi-step adaptive elastic-net (MSAENet) algorithm for feature selection in high-dimensional regressions proposed in Xiao and Xu (2015) <DOI:10.1080/00949655.2015.1016944>, with support for multi-step adaptive MCP-net (MSAMNet) and multi-step adaptive SCAD-net (MSASNet) methods.
The Macroeconomics-at-Risk (MaR) approach is based on a two-step semi-parametric estimation procedure that allows forecasting the full conditional distribution of an economic variable at a given horizon, as a function of a set of factors. These density forecasts can then be used to produce coherent forecasts for any downside risk measure, e.g., value-at-risk, expected shortfall, or downside entropy. Initially introduced by Adrian et al. (2019) <doi:10.1257/aer.20161923> to reveal the vulnerability of economic growth to financial conditions, the MaR approach is currently used extensively by international financial institutions to provide Value-at-Risk (VaR) type forecasts for GDP growth (Growth-at-Risk) or inflation (Inflation-at-Risk). This package provides methods for estimating these models. Datasets for the US and the Eurozone are available to allow testing of the Adrian et al. (2019) model. This package constitutes a useful toolbox (data and functions) for private practitioners, scholars, and policymakers.
This package provides a generalization of the Synth package that is designed for data at a more granular level (e.g., micro-level). Provides functions to construct weights (including propensity score-type weights) and run analyses for synthetic control methods with micro- and meso-level data; see Robbins, Saunders, and Kilmer (2017) <doi:10.1080/01621459.2016.1213634> and Robbins and Davenport (2021) <doi:10.18637/jss.v097.i02>.
Fits multi-way component models via alternating least squares algorithms with optional constraints. Fit models include N-way Canonical Polyadic Decomposition, Individual Differences Scaling, Multiway Covariates Regression, Parallel Factor Analysis (1 and 2), Simultaneous Component Analysis, and Tucker Factor Analysis.
Uses recursive partitioning to create homogeneous subgroups based on structural equation models fit in 'Mplus', a stand-alone program developed by Muthen and Muthen.
Generate the optimal maximin distance, minimax distance (only for low dimensions), and maximum projection designs within the class of Latin hypercube designs efficiently for computer experiments. Generate Pareto front optimal designs for each pair of the three criteria, and for all three criteria, within the class of Latin hypercube designs efficiently. Provide criterion computing functions. References of this package can be found in Morris, M. D. and Mitchell, T. J. (1995) <doi:10.1016/0378-3758(94)00035-T>, Lu Lu, Christine M. Anderson-Cook and Timothy J. Robinson (2011) <doi:10.1198/Tech.2011.10087>, and Joseph, V. R., Gul, E., and Ba, S. (2015) <doi:10.1093/biomet/asv002>.
Runs a Shiny web application that merges raw qPCR fluorescence data with related metadata to visualize species presence/absence detection patterns and assess data quality. The application calculates threshold values from raw fluorescence data using a method based on the second derivative method, Luu-The et al. (2005) <doi:10.2144/05382RR05>, and utilizes the 'chipPCR' package by Rödiger, Burdukiewicz, & Schierack (2015) <doi:10.1093/bioinformatics/btv205> to calculate Cq values. The application can connect to a custom-developed MySQL database to populate the application's interface. The application allows users to interact with visualizations such as a dynamic map, amplification curves and standard curves, which allow for zooming and/or filtering. It also enables the generation of customized exportable reports based on filtered mapping data.
An implementation of the multilevel (also known as mixed or random effects) hidden Markov model using Bayesian estimation in R. The multilevel hidden Markov model (HMM) is a generalization of the well-known hidden Markov model, for the latter see Rabiner (1989) <doi:10.1109/5.18626>. The multilevel HMM is tailored to accommodate (intense) longitudinal data of multiple individuals simultaneously, see e.g., de Haan-Rietdijk et al. <doi:10.1080/00273171.2017.1370364>. Using a multilevel framework, we allow for heterogeneity in the model parameters (transition probability matrix and conditional distribution), while estimating one overall HMM. The model can be fitted on multivariate data with either a categorical, normal, or Poisson distribution, and include individual level covariates (allowing for e.g., group comparisons on model parameters). Parameters are estimated using Bayesian estimation utilizing the forward-backward recursion within a hybrid Metropolis within Gibbs sampler. Missing data (NA) in the dependent variables is accommodated assuming MAR. The package also includes various visualization options, a function to simulate data, and a function to obtain the most likely hidden state sequence for each individual using the Viterbi algorithm.
This package provides functions for simulating, estimating and forecasting stationary Vector Autoregressive (VAR) models for multiple subject data using the penalized multi-VAR framework in Fisher, Kim and Pipiras (2020) <arXiv:2007.05052>.
Train and make predictions from a multi-layer perceptron neural network with optional partial monotonicity constraints.
Fit and simulate mixtures of von Mises-Fisher distributions.
Modular implementation of Multiobjective Evolutionary Algorithms based on Decomposition (MOEA/D) [Zhang and Li (2007), <DOI:10.1109/TEVC.2007.892759>] for quick assembling and testing of new algorithmic components, as well as easy replication of published MOEA/D proposals. The full framework is documented in a paper published in the Journal of Statistical Software [<doi:10.18637/jss.v092.i06>].