Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
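For example, a minimal Python sketch of such a request (the base URL below is a placeholder for this site's address, and no particular header names are assumed; inspect the response headers yourself):

    import requests

    # Query the package search API for "hello", first page, 20 items per page.
    resp = requests.get(
        "https://example.org/api/packages",  # placeholder: replace with this site's base URL
        params={"search": "hello", "page": 1, "limit": 20},
        timeout=10,
    )
    resp.raise_for_status()

    # Pagination details arrive in the response headers; print them all
    # rather than assuming particular header names.
    for name, value in resp.headers.items():
        print(name, value)

    # The response body is the JSON list of matching packages.
    print(resp.json())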
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a method of clustering functional data using subregion information of the curves. It is intended to supplement the fda and fda.usc packages in functional data object clustering. It also facilitates the printing and plotting of the results in a tree format and limits the partitioning candidates to a specific set of subregions.
Fast and Accurate Randomized Singular Value Decomposition (RSVD) methods proposed in the PCAone paper by Li (2023) <https://genome.cshlp.org/content/33/9/1599>.
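As an illustration of the general idea only, here is a basic Halko-style randomized SVD sketch in Python/NumPy; it is not the PCAone implementation:

    import numpy as np

    def rsvd(A, k, oversample=10, n_iter=2, rng=None):
        """Basic randomized SVD: approximate the top-k singular triplets of A."""
        rng = np.random.default_rng(rng)
        m, n = A.shape
        # Random test matrix and sketch of the range of A.
        Omega = rng.standard_normal((n, k + oversample))
        Y = A @ Omega
        # A few power iterations sharpen the approximation for slowly decaying spectra.
        for _ in range(n_iter):
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)
        # Project A onto the captured subspace and take an exact SVD there.
        B = Q.T @ A
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]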
This package provides functions and graphics for projecting daily incidence based on past incidence, and estimates of the serial interval and reproduction number. Projections are based on a branching process using a Poisson-distributed number of new cases per day, similar to the model used for estimating R in EpiEstim or in earlyR, and described by Nouvellet et al. (2017) <doi:10.1016/j.epidem.2017.02.012>. The package provides the S3 class projections, which extends matrix, with accessors and additional helpers for handling, subsetting, merging, or adding these objects, as well as dedicated printing and plotting methods.
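A minimal sketch of that branching-process model (illustrative only, not the projections package API), where w is the serial-interval distribution and R the reproduction number:

    import numpy as np

    def project(past_incidence, w, R, n_days, rng=None):
        """Project daily incidence: each new day's count is Poisson with mean
        R times past incidence weighted by the serial interval w,
        where w[k] is the probability of a serial interval of k days (w[0] = 0)."""
        rng = np.random.default_rng(rng)
        incidence = list(past_incidence)
        for _ in range(n_days):
            t = len(incidence)
            lam = R * sum(incidence[t - k] * w[k]
                          for k in range(1, min(t, len(w) - 1) + 1))
            incidence.append(rng.poisson(lam))
        return incidence[len(past_incidence):]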
Bayesian supervised predictive classifiers, hypothesis testing, and parametric estimation under Partition Exchangeability are implemented. The two classifiers presented are the marginal classifier (which assumes test data are i.i.d.) and a more computationally costly but more accurate simultaneous classifier (which finds a labelling for the entire test dataset at once, based on simultaneous use of all the test data to predict each label). We also provide the Maximum Likelihood Estimation (MLE) of the only underlying parameter of the partition exchangeability generative model, as well as hypothesis testing statistics for equality of this parameter with a single value, an alternative, or multiple samples. We present functions to simulate sequences from the Ewens Sampling Formula as realisations of the Poisson-Dirichlet distribution, together with their respective probabilities.
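For intuition, sequences from the Ewens Sampling Formula can be simulated with the classic Chinese-restaurant construction; this is a generic hedged sketch in Python, not this package's interface, with theta as the single underlying dispersal parameter:

    import numpy as np

    def ewens_sample(n, theta, rng=None):
        """Simulate a random partition of n items under the Ewens Sampling Formula."""
        rng = np.random.default_rng(rng)
        counts = []  # block sizes of the partition so far
        for i in range(n):
            # Open a new block with probability theta / (theta + i),
            # otherwise join an existing block proportionally to its size.
            probs = np.array(counts + [theta], dtype=float) / (theta + i)
            choice = rng.choice(len(probs), p=probs)
            if choice == len(counts):
                counts.append(1)
            else:
                counts[choice] += 1
        return counts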
Computes pseudo-realizations from the posterior distribution of a Gaussian Process (GP) with the method described in Azzimonti et al. (2016) <doi:10.1137/141000749>. The realizations are obtained from simulations of the field at a few well-chosen points that minimize the expected distance in measure between the true excursion set of the field and the approximate one. Also implements an R interface for (the main function of) Distance Transform of sampled Functions (<https://cs.brown.edu/people/pfelzens/dt/index.html>).
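A generic sketch of drawing posterior GP realizations at a small set of points (plain conditional simulation in Python/NumPy; the paper's optimal choice of those points is not reproduced here, and the kernel is an assumed example):

    import numpy as np

    def sq_exp(A, B, length=1.0):
        """Example squared-exponential kernel between row vectors of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    def gp_posterior_draws(X_obs, y_obs, X_new, kernel=sq_exp, n_draws=5,
                           noise=1e-8, rng=None):
        """Draw realizations from the GP posterior at X_new given (X_obs, y_obs)."""
        rng = np.random.default_rng(rng)
        K_oo = kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
        K_no = kernel(X_new, X_obs)
        mean = K_no @ np.linalg.solve(K_oo, y_obs)
        cov = kernel(X_new, X_new) - K_no @ np.linalg.solve(K_oo, K_no.T)
        cov += 1e-10 * np.eye(len(X_new))  # numerical jitter
        return rng.multivariate_normal(mean, cov, size=n_draws)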
Generates design matrices for analysing real paired comparisons and derived paired-comparison data (Likert-type items/ratings or rankings) using a loglinear approach. Fits loglinear Bradley-Terry models (LLBT) exploiting an eliminate feature. Computes pattern models for paired comparisons, rankings, and ratings. Provides some treatment of missing values (MCAR and MNAR). Fits latent class (mixture) models for paired-comparison, rating, and ranking patterns using a non-parametric ML approach.
This package performs bivariate composite likelihood and full information maximum likelihood estimation for polytomous logit-normit (graded logistic) item response theory (IRT) models.
Large-scale phenotypic data processing is essential in research. Researchers need to eliminate outliers from the data in order to obtain true and reliable results. Best linear unbiased prediction (BLUP) is a standard method for estimating random effects of a mixed model. This method can be used to process phenotypic data under different conditions and is widely used in animal and plant breeding. The Phenotype package removes outliers from phenotypic data and performs best linear unbiased prediction (BLUP), helping researchers quickly complete phenotypic data analysis. See H. P. Piepho (2008) <doi:10.1007/s10681-007-9449-8>.
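To illustrate the BLUP idea in the simplest balanced one-way random-effects setting (a hedged Python sketch with known variance components, not the Phenotype package itself), the predicted effect of each group shrinks its group mean toward the overall mean:

    import numpy as np

    def blup_one_way(y_by_group, sigma2_u, sigma2_e):
        """BLUPs of group effects in a one-way random-effects model:
        u_i = shrink_i * (group mean - grand mean), with
        shrink_i = n_i * sigma2_u / (n_i * sigma2_u + sigma2_e)."""
        grand_mean = np.mean(np.concatenate(y_by_group))
        blups = []
        for y in y_by_group:
            n_i = len(y)
            shrink = n_i * sigma2_u / (n_i * sigma2_u + sigma2_e)
            blups.append(shrink * (np.mean(y) - grand_mean))
        return blups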
This package provides a data set and a function for exploring the Multiple Indicator Cluster Survey 2014 Household Listing questionnaire data for Punjab, Pakistan.
Permutation-based non-parametric analysis of CRISPR screen data. Details about this algorithm are given in the paper published in BMC Genomics, Jia et al. (2017) <doi:10.1186/s12864-017-3938-5>: "A permutation-based non-parametric analysis of CRISPR screen data". Please cite this paper if you use this algorithm in your work.
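The core permutation idea, in a generic hedged Python sketch (not the package's actual algorithm): compare an observed group difference against the distribution obtained by repeatedly shuffling group labels.

    import numpy as np

    def permutation_pvalue(x, y, n_perm=10000, rng=None):
        """Two-sided permutation p-value for a difference in means between x and y."""
        rng = np.random.default_rng(rng)
        pooled = np.concatenate([x, y])
        observed = abs(np.mean(x) - np.mean(y))
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = abs(np.mean(pooled[:len(x)]) - np.mean(pooled[len(x):]))
            count += diff >= observed
        # Add-one correction keeps the estimated p-value away from exactly zero.
        return (count + 1) / (n_perm + 1)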
Performant interactive scatterplot for ~ 1 million points. Zoom, pan, and pick points. Includes tooltips, labels, a grid overlay, legend, and coupled interactions across multiple plots.
Kernel density estimation on the polysphere, (hyper)sphere, and circle. Includes functions for density estimation, regression estimation, ridge estimation, bandwidth selection, kernels, samplers, and homogeneity tests. Companion package to García-Portugués and Meilán-Vila (2025) <doi:10.1080/01621459.2025.2521898> and García-Portugués and Meilán-Vila (2023) <doi:10.1007/978-3-031-32729-2_4>.
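For the circle case, a minimal von Mises kernel density sketch (illustrative Python only; the package's own estimators and bandwidth selectors are far more general):

    import numpy as np
    from scipy.special import i0

    def circular_kde(theta_grid, samples, kappa=20.0):
        """Kernel density estimate on the circle using a von Mises kernel
        with concentration kappa (larger kappa = smaller bandwidth)."""
        diffs = theta_grid[:, None] - samples[None, :]
        kernels = np.exp(kappa * np.cos(diffs)) / (2 * np.pi * i0(kappa))
        return kernels.mean(axis=1)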
Compute detailed and aggregated performance spectra for event data. The detailed performance spectrum describes the event data in terms of segments; the performance of each segment is measured and plotted for every occurrence of that segment over time and can be classified, e.g., relative to the overall population. The aggregated performance spectrum visualises the number of cases with a particular performance over time. Denisov, V., Fahland, D., & van der Aalst, W. M. P. (2018) <doi:10.1007/978-3-319-98648-7_9>.
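A hedged sketch of how such segments can be derived from an event log (illustrative Python/pandas, not the package's implementation): consecutive events within a case form a segment, and its performance is the elapsed time between them.

    import pandas as pd

    def detailed_segments(log):
        """log: DataFrame with columns case_id, activity, timestamp (datetime).
        Returns one row per segment occurrence with its duration."""
        log = log.sort_values(["case_id", "timestamp"])
        nxt = log.groupby("case_id").shift(-1)  # next event within the same case
        segments = pd.DataFrame({
            "case_id": log["case_id"],
            "segment": log["activity"] + " -> " + nxt["activity"],
            "start": log["timestamp"],
            "duration": nxt["timestamp"] - log["timestamp"],
        })
        return segments.dropna(subset=["segment"])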
The constructs used to study human psychology have many definitions and corresponding instructions for eliciting and coding qualitative data pertaining to construct content and for measuring the constructs. This plethora of definitions and instructions necessitates unequivocal reference to specific definitions and instructions in empirical and secondary research. This package implements a human- and machine-readable standard for specifying construct definitions and instructions for measurement and qualitative research based on YAML. This standard facilitates systematic, unequivocal reference to specific construct definitions and corresponding instructions in a decentralized manner (i.e. without requiring central curation; Peters (2020) <doi:10.31234/osf.io/xebhn>).
Bayesian hierarchical methods for pathway analysis of genomewide association data: Normal/Bayes factors and Sparse Normal/Adaptive lasso. The Frequentist Fisher's product method is included as well.
This package provides functions to compute and plot power levels, minimum detectable effect sizes, and minimum required sample sizes for the test of the overall average effect size in meta-analysis of dependent effect sizes.
Check a data frame for personal information, including names, location, disability status, and geo-coordinates.
The purpose of PH1XBAR is to build Phase I Shewhart control charts in R for the basic Shewhart, variance components, and ARMA models, for subgrouped and individual data. More details can be found in: Yao and Chakraborti (2020) <doi:10.1002/qre.2793>, Yao and Chakraborti (2021) <doi:10.1080/08982112.2021.1878220>, and Yao et al. (2023) <doi:10.1080/00224065.2022.2139783>.
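For the basic subgrouped case, the textbook Phase I X-bar limits look like the Python sketch below (a generic range-based illustration using the standard A2 constants, not PH1XBAR's corrected limits):

    import numpy as np

    # A2 constants for common subgroup sizes (standard control-chart tables).
    A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483}

    def xbar_chart_limits(subgroups):
        """subgroups: 2-D array, one row per subgroup. Returns (LCL, CL, UCL)."""
        subgroups = np.asarray(subgroups, dtype=float)
        n = subgroups.shape[1]  # subgroup size; must be a key of A2
        xbar_bar = subgroups.mean(axis=1).mean()
        r_bar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
        return (xbar_bar - A2[n] * r_bar, xbar_bar, xbar_bar + A2[n] * r_bar)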
The package name is derived from the French pour ridge; the package provides functionality for ridge-type estimation of a potpourri of models. Currently, this estimation concerns that of various Gaussian graphical models from different study designs. Among others, it considers the regular Gaussian graphical model and a mixture of such models. The porridge package implements the estimation of the former either i) from data with replicated observations, by penalized loglikelihood maximization using the regular ridge penalty on the parameters (van Wieringen, Chen, 2021), or ii) from non-replicated data, by means of either a ridge estimator with multiple shrinkage targets (as presented in van Wieringen et al. 2020, <doi:10.1016/j.jmva.2020.104621>) or the generalized ridge estimator that allows for the inclusion of both quantitative and qualitative prior information on the precision matrix via element-wise penalization and shrinkage (van Wieringen, 2019, <doi:10.1080/10618600.2019.1604374>). Additionally, the porridge package facilitates the ridge penalized estimation of a mixture of Gaussian graphical models (Aflakparast et al., 2018). On another note, the package also includes functionality for ridge-type estimation of the generalized linear model (as presented in van Wieringen, Binder, 2022, <doi:10.1080/10618600.2022.2035231>).
Weighted Deming regression, also known as "errors-in-variables" regression, is applied with suitable weights. Weights are modeled via a precision profile; functions are provided for both known and unknown precision-profile situations. The package provides tools for precision-profile weighted Deming (PWD) regression. It covers two settings: one where the precision profiles are known, either from external studies or from adequate replication of the X and Y readings, and one in which there is a plausible functional form for the precision profiles but the exact function must be estimated from the (generally singlicate) readings. The function set includes tools for: estimated standard errors (via jackknifing); standardized-residual analysis with regression diagnostics for normality, linearity, and constant variance; and an outlier analysis identifying significant outliers for closer investigation. Further information on mathematical derivations and applications can be found on arXiv: Hawkins and Kraker (2025) <doi:10.48550/arXiv.2508.02888>.
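For reference, plain (unweighted) Deming regression with a known error-variance ratio delta = Var(error in y) / Var(error in x) has a closed form, sketched below in Python; this is a hedged illustration of ordinary Deming regression, not the precision-profile-weighted estimator this package implements:

    import numpy as np

    def deming_fit(x, y, delta=1.0):
        """Unweighted Deming regression slope and intercept, given the ratio
        delta of the y-error variance to the x-error variance."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
        intercept = y.mean() - slope * x.mean()
        return slope, intercept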
Aims to utilize model-based clustering (unsupervised) for high-dimensional and ultra-large data, especially in a distributed manner. The code employs pbdMPI to perform an expectation-gathering-maximization algorithm for finite mixture Gaussian models. Unstructured dispersion matrices are assumed in the Gaussian models. By default, the implementation follows the single-program-multiple-data (SPMD) programming model. The code can be executed through pbdMPI and MPI implementations such as OpenMPI and MPICH. See the High Performance Statistical Computing website <https://snoweye.github.io/hpsc/> for more information, documents, and examples.
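The expectation-maximization core that this generalizes can be sketched as follows (a compact single-process Python/NumPy illustration; the distributed version gathers the weighted sufficient statistics across MPI ranks instead of summing them locally):

    import numpy as np
    from scipy.stats import multivariate_normal

    def em_gmm(X, k, n_iter=100, rng=None):
        """Fit a k-component Gaussian mixture with unstructured covariances by EM."""
        rng = np.random.default_rng(rng)
        n, d = X.shape
        means = X[rng.choice(n, k, replace=False)]
        covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
        weights = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each point.
            dens = np.column_stack([
                weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
                for j in range(k)
            ])
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted sufficient statistics (the quantities a
            # distributed implementation would gather across workers).
            nk = resp.sum(axis=0)
            weights = nk / n
            means = (resp.T @ X) / nk[:, None]
            for j in range(k):
                diff = X - means[j]
                covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
        return weights, means, covs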
This package implements fast, safe, and customizable assertion routines, which can be used in place of base::stopifnot().
Generates chronological and ordered p-plots for data vectors or vectors of p-values. The p-plot visualizes the evolution of the p-value of a significance test across the sampled data. It allows for assessing the consistency of the observed effects, for detecting the presence of potential moderator variables, and for estimating the influence of outlier values on the observed results. For non-significant findings, it can diagnose patterns indicative of underpowered study designs. The p-plot can thus either back the binary accept-vs-reject decision of common null-hypothesis significance tests, or it can qualify this decision and stimulate additional empirical work to arrive at more robust and replicable statistical inferences.
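The underlying idea can be sketched generically (illustrative Python using a one-sample t-test, not this package's plotting routines): recompute the p-value on the first n observations as n grows and inspect its trajectory.

    import numpy as np
    from scipy import stats

    def p_trajectory(x, popmean=0.0, start=3):
        """Chronological p-plot values: the one-sample t-test p-value computed
        on x[:n] for each n from `start` to len(x)."""
        x = np.asarray(x, float)
        return [(n, stats.ttest_1samp(x[:n], popmean).pvalue)
                for n in range(start, len(x) + 1)]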
This package provides functions for causal structure learning and causal inference using graphical models. The main algorithms for causal structure learning are PC (for observational data without hidden variables), FCI and RFCI (for observational data with hidden variables), and GIES (for a mix of data from observational studies (i.e. observational data) and data from experiments involving interventions (i.e. interventional data) without hidden variables). For causal inference the IDA algorithm, the Generalized Backdoor Criterion (GBC), the Generalized Adjustment Criterion (GAC) and some related functions are implemented. Functions for incorporating background knowledge are provided.
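To give a flavour of the PC skeleton phase, here is a heavily simplified, hedged Python sketch using Fisher-z partial-correlation tests; it omits the orientation rules and the optimizations of pcalg:

    import numpy as np
    from itertools import combinations
    from scipy.stats import norm

    def fisher_z_pvalue(corr, n, i, j, S):
        """p-value for the hypothesis that x_i and x_j are independent given x_S."""
        idx = [i, j] + list(S)
        P = np.linalg.inv(corr[np.ix_(idx, idx)])
        r = np.clip(-P[0, 1] / np.sqrt(P[0, 0] * P[1, 1]), -0.9999999, 0.9999999)
        z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
        return 2 * (1 - norm.cdf(abs(z)))

    def pc_skeleton(data, alpha=0.05, max_cond=2):
        """Estimate the undirected skeleton: drop edge i-j whenever some small
        conditioning set among i's neighbours renders i and j independent."""
        n, p = data.shape
        corr = np.corrcoef(data, rowvar=False)
        adj = {i: set(range(p)) - {i} for i in range(p)}
        for size in range(max_cond + 1):
            for i in range(p):
                for j in list(adj[i]):
                    others = adj[i] - {j}
                    if len(others) < size:
                        continue
                    for S in combinations(others, size):
                        if fisher_z_pvalue(corr, n, i, j, S) > alpha:
                            adj[i].discard(j)
                            adj[j].discard(i)
                            break
        return adj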