Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
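For example, the endpoint can be queried from R with the 'httr' package. The host below is a placeholder for this site's address, and the exact pagination header names are whichever ones the service returns:

    library(httr)
    # Placeholder host; substitute the actual address of this service.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)    # the matching packages
    headers(resp)    # pagination information from the response headers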
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains all of the functions necessary for the complete analysis of a continuous glucose monitoring (CGM) study and can be applied to data measured by various existing CGM devices such as 'FreeStyle Libre', 'Glutalor', 'Dexcom', and 'Medtronic'. It reads a series of data files, converts various time-stamp formats, handles missing values, calculates both regular and nonlinear statistics, conducts group comparisons, and displays results in a concise format. It also contains two features new to CGM analysis: the implementation of the strictly standardized mean difference and its class of effect sizes, and a new type of plot called the antenna plot. It corresponds to the article by Zhang XD (2018) <doi:10.1093/bioinformatics/btx826>, 'CGManalyzer: an R package for analyzing continuous glucose monitoring studies'.
This package provides a specialized tool for assessing contextual bandit algorithms, particularly those aimed at handling overdispersed and zero-inflated count data. It offers a simulated testing environment that includes various models, such as Poisson, Overdispersed Poisson, Zero-inflated Poisson, and Zero-inflated Overdispersed Poisson. The package can run five specific algorithms: Linear Thompson sampling with a log transformation on the outcome, Thompson sampling Poisson, Thompson sampling Negative Binomial, Thompson sampling Zero-inflated Poisson, and Thompson sampling Zero-inflated Negative Binomial. Additionally, it can generate regret plots to evaluate the performance of contextual bandit algorithms. The package is based on the algorithms by Liu et al. (2023) <arXiv:2311.14359>.
Significance tests are provided for canonical correlation analysis, including asymptotic tests and a Monte Carlo method.
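As an illustration of the Monte Carlo idea only (not this package's own interface), a permutation test for the first canonical correlation can be sketched with base R's cancor() on hypothetical data:

    set.seed(1)
    X <- matrix(rnorm(100 * 3), 100, 3)   # hypothetical data
    Y <- matrix(rnorm(100 * 2), 100, 2)
    obs  <- cancor(X, Y)$cor[1]           # observed first canonical correlation
    perm <- replicate(999, cancor(X, Y[sample(nrow(Y)), ])$cor[1])
    mean(c(obs, perm) >= obs)             # Monte Carlo p-value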
The reliability of assessment tools is a crucial aspect of monitoring student performance in various educational settings. It ensures that the assessment outcomes accurately reflect a student's true level of performance. However, when assessments are combined, determining composite reliability can be challenging, especially for naturalistic and unbalanced datasets in a nested design, as is often the case for Workplace-Based Assessments. This package estimates composite reliability in nested designs using multivariate generalizability theory and enhances the analysis of assessment data. The package allows for the inclusion of a weight per assessment type, produces extensive G- and D-study results with graphical interpretations, and offers options to find the set of weights that maximizes the composite reliability or minimizes the standard error of measurement (SEM).
Implementation of Clarke's distribution-free test of non-nested models. Currently supported model functions are: lm(), glm() (with 'binomial', 'poisson', and negative binomial links), polr() (from 'MASS'), clm() (from 'ordinal'), and multinom() (from 'nnet'). For more information on the test, see Clarke (2007) <doi:10.1093/pan/mpm004>.
This package provides a graphical user interface for simulating the effects of mergers, tariffs, and quotas under an assortment of different economic models. The interface is powered by the Shiny web application framework from 'RStudio'.
Connectome Predictive Modelling (CPM) (Shen et al. (2017) <doi:10.1038/nprot.2016.178>) is a method to predict individual differences in behaviour from brain functional connectivity. cpmr provides a simple yet efficient implementation of this method.
Generate balance tables and plots for covariates of groups preprocessed through matching, weighting, or subclassification, for example, using propensity scores. Includes integration with 'MatchIt', 'WeightIt', 'MatchThem', 'twang', 'Matching', 'optmatch', 'CBPS', 'ebal', 'cem', 'sbw', and 'designmatch' for assessing balance on the output of their preprocessing functions. Users can also specify data for balance assessment not generated through the above packages. Also included are methods for assessing balance in clustered or multiply imputed data sets or data sets with multi-category, continuous, or longitudinal treatments.
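A typical balance check might look like the sketch below, assuming the current 'MatchIt' and 'cobalt' interfaces and the lalonde data set shipped with 'MatchIt':

    library(MatchIt)
    library(cobalt)
    data("lalonde", package = "MatchIt")
    m <- matchit(treat ~ age + educ + married + re74 + re75,
                 data = lalonde, method = "nearest")
    bal.tab(m, un = TRUE)                 # balance before and after matching
    love.plot(m, stats = "mean.diffs")    # graphical balance summary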
Space-filling designs have a great impact on computer experiments. The most popular space-filling designs are Uniform designs (UDs), Latin hypercube designs (LHDs), etc. For further references, see McKay (1979) <DOI:10.1080/00401706.1979.10489755> and Fang (1980) <https://cir.nii.ac.jp/crid/1570291225616774784>. This package provides algorithms for generating efficient LHDs and UDs. The generated LHDs are efficient in the sense that they have low values of the Maxpro measure, the Phi_p value, and the Maximum Absolute Correlation (MAC), according to the weight given to each criterion. The generated UDs have good space-filling properties, as they always attain the lower bound of the discrete discrepancy measure. Some additional utility functions are also included.
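For illustration of the basic construction only (not this package's optimization criteria), a plain random Latin hypercube design with n runs in k factors can be generated in base R:

    # Each column places one point in each of n equal strata of (0, 1).
    random_lhd <- function(n, k) {
      sapply(seq_len(k), function(j) (sample(n) - runif(n)) / n)
    }
    X <- random_lhd(10, 3)   # a 10 x 3 Latin hypercube design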
Calculate a set of corrected test statistics for cases when samples are not independent, such as when classification accuracy values are obtained over resamples or through k-fold cross-validation, as proposed by Nadeau and Bengio (2003) <doi:10.1023/A:1024068626366> and presented in Bouckaert and Frank (2004) <doi:10.1007/978-3-540-24775-3_3>.
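As a sketch of the correction itself (the function name here is hypothetical, not necessarily this package's API): for J paired performance differences d from resamples with n_train training and n_test test observations, the corrected resampled t-test inflates the variance term by (1/J + n_test/n_train):

    corrected_t <- function(d, n_train, n_test) {
      J <- length(d)
      t_stat <- mean(d) / sqrt((1 / J + n_test / n_train) * var(d))
      p <- 2 * pt(-abs(t_stat), df = J - 1)     # two-sided p-value, J - 1 df
      c(statistic = t_stat, p.value = p)
    }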
Allows users to identify similar cases for qualitative case studies using statistical matching methods.
Estimation of gas transport properties (viscosity, diffusion, thermal conductivity) using Chapman-Enskog theory (Chapman and Larmor 1918, <doi:10.1098/rsta.1918.0005>) and of the second virial coefficient (Vargas et al. 2001, <doi:10.1016/s0378-4371(00)00362-9>) using the Lennard-Jones (12-6) potential. Corrections up to third order are taken into account for viscosity and thermal conductivity. It is also possible to calculate the binary diffusion coefficients of polar and non-polar gases in non-polar bath gases (Brown et al. 2011, <doi:10.1016/j.pecs.2010.12.001>). Sixteen collision integrals are calculated with four-digit accuracy over the reduced temperature range [0.3, 400] using an interpolation function of Kim and Monroe (2014, <doi:10.1016/j.jcp.2014.05.018>).
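As a rough sketch of the first-order result only (the package itself applies higher-order corrections and its own collision-integral interpolation), the first Chapman-Enskog approximation to the dilute-gas viscosity can be written as:

    # First-order Chapman-Enskog viscosity [Pa s].
    # m: molecular mass [kg], Tk: temperature [K],
    # sigma: Lennard-Jones diameter [m], omega22: reduced collision integral Omega*(2,2).
    eta_ce1 <- function(m, Tk, sigma, omega22) {
      kB <- 1.380649e-23                       # Boltzmann constant [J/K]
      (5 / 16) * sqrt(pi * m * kB * Tk) / (pi * sigma^2 * omega22)
    }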
Code for a variety of nonlinear conditional independence tests: Kernel conditional independence test (Zhang et al., UAI 2011, <arXiv:1202.3775>), Residual Prediction test (based on Shah and Buehlmann, <arXiv:1511.03334>), Invariant environment prediction, Invariant target prediction, Invariant residual distribution test, Invariant conditional quantile prediction (all from Heinze-Deml et al., <arXiv:1706.08576>).
Imports conversation transcripts into R, concatenates them into a single dataframe with event identifiers appended, cleans and formats the text, then yokes user-specified psycholinguistic database values to each word. ConversationAlign then computes alignment indices between two interlocutors across each transcript for more than 40 possible semantic, lexical, and affective dimensions. ConversationAlign also produces a summary table of analytics (e.g., token count, type-token ratio) describing your text corpus.
Convex Partition is a black-box optimisation algorithm for single-objective real-parameter functions. The basic principle is to progressively estimate and exploit a regression tree, similar to a CART (Classification and Regression Tree), of the objective function. For more details, see de Paz (2024) <doi:10.1007/978-3-031-62836-8_3> and Loh (2011) <doi:10.1002/widm.8>.
The data and metadata from Statistics Netherlands (<https://www.cbs.nl>) can be browsed and downloaded. The client uses the open data API of Statistics Netherlands.
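A minimal sketch, assuming the 'cbsodataR' client and its cbs_-prefixed functions (the table identifier below is a placeholder):

    library(cbsodataR)
    toc <- cbs_get_toc()               # browse the available tables
    d   <- cbs_get_data("00000ABC")    # download one table by its identifier (placeholder id)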
Calculate silhouette information for clusters on circular or linear data using fast algorithms. These algorithms run in linear time on sorted data, in contrast to the quadratic time required by the definition of the silhouette. When used together with the fast and optimal circular clustering method FOCC (Debnath & Song 2021) <doi:10.1109/TCBB.2021.3077573>, implemented in the R package 'OptCirClust', the circular silhouette can be maximized to find the optimal number of circular clusters; it can also be used to estimate the period of noisy periodical data.
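For contrast with the fast algorithms described above, the classic quadratic-time definition is available via the 'cluster' package; both target the same quantity on one-dimensional data:

    library(cluster)
    x   <- sort(c(rnorm(30, 0), rnorm(30, 5)))   # sorted linear (1-D) data
    cl  <- kmeans(x, centers = 2)$cluster
    sil <- silhouette(cl, dist(x))               # O(n^2), straight from the definition
    summary(sil)$avg.width                       # average silhouette width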
Supervised learning from a source distribution (with known segmentation into cell sub-populations) to fit a target distribution with unknown segmentation. It relies on regularized optimal transport to directly estimate the different cell population proportions from a biological sample characterized by flow cytometry measurements. It is based on the regularized Wasserstein metric to compare cytometry measurements from different samples, thus accounting for possible mis-alignment of a given cell population across samples (due to technical variability in the measurement technology). This supervised learning technique, based on the Wasserstein metric, is used to estimate an optimal re-weighting of class proportions in a mixture model. Details are presented in Freulon P, Bigot J and Hejblum BP (2023) <doi:10.1214/22-AOAS1660>.
Estimation of changepoints using an "S-curve" approximation. Formation of confidence intervals for changepoint locations and magnitudes. Both abrupt and gradual changes can be modeled.
Implementations of canonical associative learning models, with tools to run experiment simulations, estimate model parameters, and compare model representations. Experiments and results are represented using S4 classes and methods.
Biotechnology in spatial omics has advanced rapidly over the past few years, enhancing both throughput and resolution. However, existing annotation pipelines in spatial omics predominantly rely on clustering methods, lacking the flexibility to integrate extensive annotated information from single-cell RNA sequencing (scRNA-seq) due to discrepancies in spatial resolutions, species, or modalities. Here we introduce the CAESAR suite, an open-source software package that provides image-based spatial co-embedding of locations and genomic features. It uniquely transfers labels from an scRNA-seq reference, enabling the annotation of spatial omics datasets across different technologies, resolutions, species, and modalities, based on the conserved relationship between signature genes and cells/locations at an appropriate level of granularity. Notably, CAESAR enriches location-level pathways, allowing for the detection of gradual biological pathway activation within spatially defined domain types. More details are given in our paper, which is currently under submission; a full reference will be provided in future versions once the paper is published.
This package provides functions to work with data frames to prepare data for further analysis. The functions for imputation, encoding, partitioning, and other manipulations can produce log files to keep track of the process.
This package implements a basis-function (functional data analysis) framework for several techniques of multivariate analysis in a continuous-time setting. Specifically, it introduces continuous-time analogues of several classical techniques of multivariate analysis, such as principal component analysis, canonical correlation analysis, Fisher linear discriminant analysis, K-means clustering, and so on. Details are in Biplab Paul, Philip T. Reiss, Erjia Cui and Noemi Foa (2025) "Continuous-time multivariate analysis" <doi:10.1080/10618600.2024.2374570>.
Estimate the direct and indirect (mediation) effects of treatment on the outcome when intermediate variables (mediators) are compositional and high-dimensional. Sohn, M.B. and Li, H. (2017). Compositional Mediation Analysis for Microbiome Studies. (AOAS: In revision).