Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
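For example, the endpoint can be queried from R with the httr package; the base URL below is a placeholder for whichever host serves this page:
  library(httr)
  # Placeholder base URL: replace with the host serving this search page.
  resp <- GET("https://example.org/api/packages",
              query = list(search = "hello", page = 1, limit = 20))
  results <- content(resp, as = "parsed")
  headers(resp)   # pagination info (e.g. total number of pages) arrives in the headers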
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Framework for the Item Response Theory analysis of dichotomous and ordinal polytomous outcomes under the assumption of multidimensionality and discreteness of the latent traits. The fitting algorithms allow for missing responses and for different item parameterizations and are based on the Expectation-Maximization paradigm. Since version 2.1, individual covariates affecting the class weights may be included.
This package provides a simple, early-stage implementation of the matrix profile, based on the paper by Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova, Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva, Abdullah Mueen, and Eamonn Keogh (2016) <DOI:10.1109/ICDM.2016.0179>. It calculates the all-pairs similarity for a given window size over time-series data.
This package provides constrained joint maximum likelihood estimation algorithms for item factor analysis (IFA) based on multidimensional item response theory models. So far, it provides functions for exploratory and confirmatory IFA based on the multidimensional two-parameter logistic (M2PL) model for binary response data. Compared with traditional estimation methods for IFA, the methods implemented in this package scale better to data with large numbers of respondents, items, and latent factors. Computation is accelerated via the OpenMP multiprocessing API. For more information, please refer to: 1. Chen, Y., Li, X., & Zhang, S. (2018). Joint Maximum Likelihood Estimation for High-Dimensional Exploratory Item Factor Analysis. Psychometrika, 1-23. <doi:10.1007/s11336-018-9646-5>; 2. Chen, Y., Li, X., & Zhang, S. (2019). Structured Latent Factor Analysis for Large-scale Data: Identifiability, Estimability, and Their Implications. Journal of the American Statistical Association, <doi:10.1080/01621459.2019.1635485>.
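A rough sketch on simulated binary responses, assuming the exploratory entry point is named mirtjml_expr(); treat the function name and arguments as assumptions:
  library(mirtjml)
  set.seed(1)
  N <- 500; J <- 20
  response <- matrix(rbinom(N * J, 1, 0.5), N, J)   # N respondents x J binary items
  fit <- mirtjml_expr(response, K = 3)              # exploratory IFA with 3 latent factors (assumed interface)
  str(fit)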
If results from a meta-GWAS are used for validation in one of the cohorts that was included in the meta-analysis, this will yield biased (i.e. too optimistic) results. The validation cohort needs to be independent from the meta-Genome-Wide Association Study (meta-GWAS) results. MetaSubtract analytically subtracts the results of the respective cohort from the meta-GWAS results, using the leave-one-out methodology, without having to redo the meta-analysis. It can handle different meta-analysis methods and takes into account whether single or double genomic control correction was applied to the original meta-analysis. It can be used for a whole GWAS, but also for a limited set of genetic markers. See for application: Nolte I.M. et al. (2017) <doi:10.1038/ejhg.2017.50>.
Identifying important factors from a large number of potentially important factors of a highly nonlinear and computationally expensive black box model is a difficult problem. Xiao, Joseph, and Ray (2022) <doi:10.1080/00401706.2022.2141897> proposed Maximum One-Factor-at-a-Time (MOFAT) designs for doing this. A MOFAT design can be viewed as an improvement to the random one-factor-at-a-time (OFAT) design proposed by Morris (1991) <doi:10.1080/00401706.1991.10484804>. The improvement is achieved by exploiting the connection between Morris screening designs and Monte Carlo-based Sobol designs, and optimizing the design using a space-filling criterion. This work is supported by a U.S. National Science Foundation (NSF) grant CMMI-1921646 <https://www.nsf.gov/awardsearch/showAward?AWD_ID=1921646>.
Computes case-wise and cluster-wise derivatives for mixed-effects models with respect to the fixed-effects parameters, random-effect (co)variances, and residual variance. This material is partially based on work supported by the National Science Foundation under Grant Number 1460719.
Generates Muller plots from parental/genealogy/phylogeny information and population/abundance/frequency dynamics data. Muller plots combine information about the succession of different OTUs (genotypes, phenotypes, species, ...) with information about the dynamics of their abundances (populations or frequencies) over time. They are powerful and fascinating tools for visualizing evolutionary dynamics. They may also be employed in the study of diversity and its dynamics, i.e. how diversity emerges and how it changes over time. They are called Muller plots in honor of Hermann Joseph Muller, who used them to explain his idea of Muller's ratchet (Muller, 1932, American Naturalist). A big difference between Muller plots and ordinary abundance plots is that a Muller plot depicts not only the relative abundances but also the succession of OTUs based on their genealogy/phylogeny/parental relation. In a Muller plot, the horizontal axis is time/generations and the vertical axis represents the relative abundances of OTUs at the corresponding times/generations. Different OTUs are usually shown as polygons of different colors, and each OTU originates somewhere in the middle of its parent's area to illustrate succession in the evolutionary process. To generate a Muller plot one needs the genealogy/phylogeny/parental relation of OTUs and their abundances over time. The MullerPlot package provides the tools to generate Muller plots that clearly depict the origin of successors of OTUs.
Requires a rooted phylogeny as input and creates a table of genera, their monophyly status, which taxa cause problems in monophyly, etc. Different information can be extracted from the output, and a plot function allows visualization of the results in a number of ways. See Schwery, O. & O'Meara, B.C. (2016) "MonoPhy: a simple R package to find and visualize monophyly issues." <doi:10.7717/peerj-cs.56>.
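A minimal sketch of the intended workflow, assuming genera are read from "Genus_species" tip labels and that AssessMonophyly() and GetSummaryMonophyly() are the relevant entry points (treat these names as assumptions):
  library(ape)
  library(MonoPhy)
  set.seed(1)
  tips <- c("GenusA_sp1", "GenusA_sp2", "GenusA_sp3", "GenusB_sp1",
            "GenusB_sp2", "GenusB_sp3", "GenusC_sp1", "GenusC_sp2")
  tree <- rtree(length(tips), tip.label = tips)   # rtree() returns a rooted random tree
  res <- AssessMonophyly(tree)                    # per-genus monophyly assessment (assumed interface)
  GetSummaryMonophyly(res)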
This package provides readers for easy and consistent importing of Mouse Genome Informatics (MGI) report files: <https://www.informatics.jax.org/downloads/reports/index.html>. These data are provided by Baldarelli RM, Smith CL, Ringwald M, Richardson JE, Bult CJ, Mouse Genome Informatics Group (2024) <doi:10.1093/genetics/iyae031>.
Offers practitioners and researchers a wholesale approach for deriving magnitude-based inferences from raw data. A major goal of mbir is to programmatically detect appropriate statistical tests to run in lieu of relying on practitioners to determine correct stepwise procedures independently.
Flexible and informed regression with Multiple Change Points. mcp can infer change points in means, variances, autocorrelation structure, and any combination of these, as well as the parameters of the segments in between. All parameters are estimated with uncertainty and prediction intervals are supported - also near the change points. mcp supports hypothesis testing via Savage-Dickey density ratio, posterior contrasts, and cross-validation. mcp is described in Lindeløv (submitted) <doi:10.31219/osf.io/fzqxv> and generalizes the approach described in Carlin, Gelfand, & Smith (1992) <doi:10.2307/2347570> and Stephens (1994) <doi:10.2307/2986119>.
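A minimal sketch of a two-segment model with one change point, assuming mcp's list-of-formulas interface and its bundled ex_demo example data:
  library(mcp)
  model <- list(
    response ~ 1,   # segment 1: intercept only
    ~ 0 + time      # segment 2: a slope beginning at the change point
  )
  fit <- mcp(model, data = ex_demo)   # ex_demo is an example dataset shipped with mcp
  summary(fit)                        # parameter estimates with uncertainty intervals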
Maximum a posteriori (MAP) estimation for topic models (i.e., Latent Dirichlet Allocation) in text analysis, as described in Taddy (2012), 'On estimation and selection for topic models'. Previous versions of this code were included as part of the textir package. If you want to take advantage of OpenMP parallelization, uncomment the relevant flags in src/MAKEVARS before compiling.
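A minimal sketch on a toy document-term count matrix, assuming topics() is the package's fitting function:
  library(maptpx)
  set.seed(1)
  counts <- matrix(rpois(20 * 50, lambda = 2), nrow = 20)   # 20 documents x 50 terms
  fit <- topics(counts, K = 5)                              # MAP fit of a 5-topic model
  summary(fit)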
The following methods are implemented to evaluate how sensitive the results of a meta-analysis are to potential bias, supporting Schwarzer et al. (2015) <DOI:10.1007/978-3-319-21416-0>, Chapter 5, 'Small-Study Effects in Meta-Analysis': - Copas selection model described in Copas & Shi (2001) <DOI:10.1177/096228020101000402>; - limit meta-analysis by Rücker et al. (2011) <DOI:10.1093/biostatistics/kxq046>; - upper bound for outcome reporting bias by Copas & Jackson (2004) <DOI:10.1111/j.0006-341X.2004.00161.x>; - imputation methods for missing binary data by Gamble & Hollis (2005) <DOI:10.1016/j.jclinepi.2004.09.013> and Higgins et al. (2008) <DOI:10.1177/1740774508091600>; - LFK index test and Doi plot by Furuya-Kanamori et al. (2018) <DOI:10.1097/XEB.0000000000000141>.
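A minimal sketch of the intended workflow, assuming the copas() and limitmeta() functions and a meta-analysis object built with the meta package's metabin():
  library(meta)
  library(metasens)
  data(Fleiss1993bin, package = "meta")
  m <- metabin(d.asp, n.asp, d.plac, n.plac, data = Fleiss1993bin,
               studlab = study, sm = "OR")
  copas(m)       # Copas selection model
  limitmeta(m)   # limit meta-analysis (Ruecker et al., 2011)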
Model fitting and simulation for Gaussian and logistic inner product MultiNeSS models for multiplex networks. The package implements a convex fitting algorithm with fully adaptive parameter tuning, including options for edge cross-validation. For more details see MacDonald et al. (2020).
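A rough sketch on a random symmetric array, assuming the fitting function is multiness_fit() taking an n x n x m adjacency array (function and argument names are assumptions):
  library(multiness)
  set.seed(1)
  n <- 30; m <- 3
  A <- array(rnorm(n * n * m), dim = c(n, n, m))
  for (k in 1:m) A[, , k] <- (A[, , k] + t(A[, , k])) / 2   # symmetric Gaussian layers
  fit <- multiness_fit(A, model = "gaussian", tuning = "adaptive")   # adaptive tuning (assumed interface)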
The monotone package contains a fast up-and-down-blocks implementation for the pool-adjacent-violators algorithm for simple linear ordered monotone regression, including two spin-off functions for unimodal and bivariate monotone regression (see <doi:10.18637/jss.v102.c01>).
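A minimal sketch, assuming monotone() is the package's main entry point:
  library(monotone)
  y <- c(8, 4, 8, 2, 2, 0, 8)
  monotone(y)   # non-decreasing fit from the pool-adjacent-violators algorithm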
Detect outlying observations in functional data sets based on the minimum regularized covariance trace (MRCT) estimator. Includes implementation of Oguamalam et al. (2023) <arXiv:2307.13509>.
Allows users to simulate matrix population models with particular characteristics based on aspects of life history such as mortality trajectories and fertility trajectories. Also allows the exploration of sampling error due to small sample size.
Often, data are not available for every date: after 1 January 2011, the next observation may not occur until 20 January 2011, and so on. Dates that are available may also hold zero values. Gather such daily data in separate sheets of a single Excel file, each sheet containing two columns (dates in the first, values in the second). After the sheets are loaded into the elements of a list, this package fills the date gaps in every sheet, marking the corresponding values as zeros, and combines the filled results into a single data frame (the first column is the date; the remaining columns hold the values from each sheet, so the data frame has one more column than there are sheets). Imputation is then performed. Conversion from daily to weekly or monthly frequency is also possible. More details can be found in Garai and others (2023) <doi:10.13140/RG.2.2.11977.42087>.
Data class for increased interoperability when working with spatio-temporal data, together with corresponding functions and methods (conversions, basic calculations, and basic data manipulation). The class distinguishes between spatial, temporal, and other dimensions to facilitate the development and interoperability of tools built for it. Additional features are name-based addressing of data and internal consistency checks (e.g. checking for the right data order in calculations).
Estimates the sample size needed to detect microbial contamination in a lot with a user-specified detection probability and user-specified analytical sensitivity. Various patterns of microbial contamination are accounted for: homogeneous (Poisson), heterogeneous (Poisson-Gamma), or localized (zero-inflated Poisson). Ida Jongenburger et al. (2010) <doi:10.1016/j.foodcont.2012.02.004> "Impact of microbial distributions on food safety". Leroy Simon (1963) <doi:10.1017/S0515036100001975> "Casualty Actuarial Society - The Negative Binomial and Poisson Distributions Compared".
Calculates the Most Probable Number (MPN) to quantify the concentration (density) of microbes in serial dilutions of a laboratory sample (described in Jarvis, 2010 <doi:10.1111/j.1365-2672.2010.04792.x>). Also calculates the Aerobic Plate Count (APC) for similar microbial enumeration experiments.
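A minimal sketch, assuming mpn() takes counts of positive tubes, total tubes, and inoculum amounts per dilution (treat the argument names as assumptions):
  library(MPN)
  # Ten tubes at each of three ten-fold dilutions (grams of sample per tube)
  mpn(positive = c(10, 5, 1), tubes = c(10, 10, 10),
      amount = c(0.1, 0.01, 0.001))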
Compute and select tuning parameters for the MRCE estimator proposed by Rothman, Levina, and Zhu (2010) <doi:10.1198/jcgs.2010.09188>. This estimator fits the multiple output linear regression model with a sparse estimator of the error precision matrix and a sparse estimator of the regression coefficient matrix.
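A rough sketch on simulated data, assuming mrce() is the fitting function with lam1/lam2 penalty arguments (argument and output names are assumptions):
  library(MRCE)
  set.seed(1)
  n <- 50; p <- 10; q <- 4
  X <- matrix(rnorm(n * p), n, p)
  B <- matrix(0, p, q); B[1:3, ] <- 1               # sparse true coefficient matrix
  Y <- X %*% B + matrix(rnorm(n * q), n, q)
  fit <- mrce(X = X, Y = Y, lam1 = 0.1, lam2 = 0.1, method = "single")
  fit$Bhat                                          # sparse coefficient estimate (assumed output slot)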
It is often challenging to strongly control the family-wise type-1 error rate in group-sequential trials with multiple endpoints (hypotheses). The inflation of the type-1 error rate comes from two sources: (S1) repeated testing of an individual hypothesis and (S2) simultaneous testing of multiple hypotheses. The MultiGroupSequential package is intended to help researchers tackle this challenge. The procedures provided include the sequential procedures described in Luo and Quan (2023) <doi:10.1080/19466315.2023.2191989> and the graphical procedure proposed by Maurer and Bretz (2013) <doi:10.1080/19466315.2013.807748>. Luo and Quan (2023) describe three procedures, implemented by the following functions: (1) seqgspgx() implements a sequential graphical procedure based on the group-sequential p-values; (2) seqgsphh() implements a sequential Hochberg/Hommel procedure based on the group-sequential p-values; and (3) seqqvalhh() implements a sequential Hochberg/Hommel procedure based on the q-values. In addition, seqmbgx() implements the sequential graphical procedure described in Maurer and Bretz (2013).
Data sets from the book "Multivariate Statistical Methods with R Applications" by H. Bulut (2018). The book was published in Turkish; its original title is "R Uygulamalari ile Cok Degiskenli Istatistiksel Yontemler".