Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
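For example, a query can be issued from R roughly like this (a minimal sketch; the host name is a placeholder and the exact pagination header names are assumptions, since only the endpoint above is documented):

    library(httr)
    base_url <- "https://toys.example.org"   # placeholder; substitute this site's address
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))
    packages <- content(resp, as = "parsed")   # parsed JSON body with the matching packages
    headers(resp)                              # pagination information is carried in these headers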
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An R interface to version 0.3 of the ROPTLIB optimization library (see <https://www.math.fsu.edu/~whuang2/> for more information). Optimize real-valued functions over manifolds such as Stiefel, Grassmann, and Symmetric Positive Definite matrices. For details see Martin et al. (2020) <doi:10.18637/jss.v093.i01>. Note that the optional ldr package used in some of this package's examples can be obtained from either JSS <https://www.jstatsoft.org/index.php/jss/article/view/v061i03/2886> or from the CRAN archives <https://cran.r-project.org/src/contrib/Archive/ldr/ldr_1.3.3.tar.gz>.
This package provides functions to calculate the minimum and maximum possible values of Cronbach's alpha when item-level missing data are present. Cronbach's alpha (Cronbach, 1951 <doi:10.1007/BF02310555>) is one of the most widely used measures of internal consistency in the social, behavioral, and medical sciences (Bland & Altman, 1997 <doi:10.1136/bmj.314.7080.572>; Tavakol & Dennick, 2011 <doi:10.5116/ijme.4dfb.8dfd>). However, conventional implementations assume complete data, and listwise deletion is often applied when missingness occurs, which can lead to biased or overly optimistic reliability estimates (Enders, 2003 <doi:10.1037/1082-989X.8.3.322>). This package implements computational strategies including enumeration, Monte Carlo sampling, and optimization algorithms (e.g., Genetic Algorithm, Differential Evolution, Sequential Least Squares Programming) to obtain sharp lower and upper bounds of Cronbach's alpha under arbitrary missing data patterns. The approach is motivated by Manski's partial identification framework and pessimistic bounding ideas from optimization literature.
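For reference, a minimal sketch of the classical Cronbach's alpha on complete item-level data, i.e. the statistic whose attainable range this package bounds under missing data (illustrative only, not this package's interface):

    cronbach_alpha <- function(items) {
      k <- ncol(items)
      item_vars <- apply(items, 2, var)      # variance of each item
      total_var <- var(rowSums(items))       # variance of the total score
      (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    }
    set.seed(42)
    x <- matrix(rnorm(100 * 5), ncol = 5)    # 100 respondents, 5 items, no missing data
    cronbach_alpha(x)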
Generalized Egger tests for detecting publication bias in meta-analyses of diagnostic test accuracy (Noma (2020) <doi:10.1111/biom.13343>, Noma (2022) <doi:10.48550/arXiv.2209.07270>). These publication bias tests are generally more powerful than the conventional univariate publication bias tests and can incorporate correlation information between the outcome variables.
This package provides functions to compute and visualize movement-based kernel density estimates (MKDEs) for animal utilization distributions in 2 or 3 spatial dimensions.
Implementation of adaptive assessment procedures based on Knowledge Space Theory (KST, Doignon & Falmagne, 1999 <ISBN:9783540645016>) and Formal Psychological Assessment (FPA, Spoto, Stefanutti & Vidotto, 2010 <doi:10.3758/BRM.42.1.342>) frameworks. An adaptive assessment is a type of evaluation that adjusts the difficulty and nature of subsequent questions based on the test taker's responses to previous ones. The package contains functions to perform and simulate an adaptive assessment. Moreover, it is integrated with two Shiny interfaces, making it both accessible and user-friendly. The package has been partially funded by the European Union - NextGenerationEU and by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), Mission 4, Component 2, Investment 1.5, project "RAISE - Robotics and AI for Socio-economic Empowerment" (ECS00000035).
This package provides mailmerge methods for reading spreadsheets of addresses and other relevant information to create standardized but customizable letters. Provides a method for mapping US ZIP codes, including those of letter recipients. Provides a method for parsing and processing HTML code from online job postings of the American Political Science Association.
Quickly make tables of descriptive statistics (e.g., counts, means, confidence intervals) for continuous variables. This package is designed to work in a Tidyverse pipeline, and consideration has been given to getting results from R into Microsoft Word® with minimal pain.
Information on the centroids and geographical limits of the regions, departments, provinces, and districts of Peru.
Dichotomous responses having two categories can be analyzed with stats::glm() or lme4::glmer() using the family=binomial option. Unfortunately, polytomous responses with three or more unordered categories cannot be analyzed similarly because there is no analogous family=multinomial option. For between-subjects data, nnet::multinom() can address this need, but it cannot handle random factors and therefore cannot handle repeated measures. To address this gap, we transform nominal response data into counts for each categorical alternative. These counts are then analyzed using (mixed) Poisson regression as per Baker (1994) <doi:10.2307/2348134>. Omnibus analyses of variance can be run along with post hoc pairwise comparisons. For users wishing to analyze nominal responses from surveys or experiments, the functions in this package essentially act as though stats::glm() or lme4::glmer() provide a family=multinomial option.
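As an illustration of the counts-plus-Poisson idea for purely between-subjects data (the data and variable names below are invented for this sketch and are not this package's interface; the mixed-effects case follows the same pattern with lme4::glmer() and random effects):

    set.seed(1)
    d <- data.frame(
      condition = rep(c("A", "B"), each = 50),
      choice    = sample(c("x", "y", "z"), 100, replace = TRUE)
    )
    # Transform nominal responses into counts for each categorical alternative
    counts <- as.data.frame(table(condition = d$condition, choice = d$choice))
    # The condition:choice interaction carries the effect of condition on the chosen category
    m <- glm(Freq ~ condition * choice, family = poisson, data = counts)
    anova(m, test = "Chisq")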
This package provides an HTML widget rendering the Monaco editor. The Monaco editor is the code editor that powers VS Code. It is particularly well developed for JavaScript. In addition to the built-in features of the Monaco editor, the widget allows you to prettify code in multiple languages, to view the HTML rendering of Markdown code, and to view and resize SVG images.
This package provides functions for analyzing the association between one single response categorical variable (SRCV) and one multiple response categorical variable (MRCV), or between two or three MRCVs. A modified Pearson chi-square statistic can be used to test for marginal independence for the one or two MRCV case, or a more general loglinear modeling approach can be used to examine various other structures of association for the two or three MRCV case. Bootstrap- and asymptotic-based standardized residuals and model-predicted odds ratios are available, in addition to other descriptive information. Statistical methods implemented are described in Bilder et al. (2000) <doi:10.1080/03610910008813665>, Bilder and Loughin (2004) <doi:10.1111/j.0006-341X.2004.00147.x>, Bilder and Loughin (2007) <doi:10.1080/03610920600974419>, and Koziol and Bilder (2014) <https://journal.r-project.org/articles/RJ-2014-014/>.
Functions, data sets and examples for the book: Yves Croissant (2025) "Microeconometrics with R", Chapman and Hall/CRC The R Series <doi:10.1201/9781003100263>. The package includes a set of estimators for models used in microeconometrics, especially for count data and limited dependent variables. Test functions include score test, Hausman test, Vuong test, Sargan test and conditional moment test. A small subset of the data set used in the book is also included.
Bindings for hierarchical regression models for use with the parsnip package. Models include longitudinal generalized linear models (Liang and Zeger, 1986) <doi:10.1093/biomet/73.1.13>, and mixed-effect models (Pinheiro and Bates) <doi:10.1007/978-1-4419-0318-1_1>.
Estimation/multiple imputation programs for mixed categorical and continuous data.
This package provides a framework dedicated to multiple imputation for proteomics, as proposed by Marie Chion, Christine Carapito and Frederic Bertrand (2021) <doi:10.1371/journal.pcbi.1010420>.
This package provides functions for calculating the point and interval estimates of the natural indirect effect (NIE), total effect (TE), and mediation proportion (MP), based on the product approach. The package implements the methods considered in Cheng, Spiegelman, and Li (2021), "Estimating the natural indirect effect and the mediation proportion via the product method".
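As a rough sketch of the product approach on simulated continuous data (illustrative only; these are not this package's functions, which target the settings of the referenced paper):

    set.seed(1)
    n <- 500
    a <- rbinom(n, 1, 0.5)                 # exposure
    m <- 0.5 * a + rnorm(n)                # mediator
    y <- 0.3 * a + 0.7 * m + rnorm(n)      # outcome
    fit_m <- lm(m ~ a)                     # mediator model
    fit_y <- lm(y ~ a + m)                 # outcome model
    nie <- coef(fit_m)["a"] * coef(fit_y)["m"]   # natural indirect effect (product of coefficients)
    te  <- nie + coef(fit_y)["a"]                # total effect = indirect + direct
    c(NIE = unname(nie), TE = unname(te), MP = unname(nie / te))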
This package provides routines for multivariate measurement error correction. Includes procedures for linear, logistic and Cox regression models. Bootstrapped standard errors and confidence intervals can be obtained for corrected estimates.
Run multiple Large Language Model predictions against a table. The predictions run row-wise over a specified column. It works using a one-shot prompt, along with the current row's content. The prompt that is used will depend on the type of analysis needed.
This package implements the MRPC algorithm (the PC algorithm with the principle of Mendelian randomization) to infer causal graphs. It also contains functions to simulate data under a certain topology, to visualize a graph in different ways, and to compare graphs and quantify the differences. See Badsha and Fu (2019) <doi:10.3389/fgene.2019.00460>, Badsha, Martin and Fu (2021) <doi:10.3389/fgene.2021.651812>, and Kvamme and Badsha, et al. (2025) <doi:10.1093/genetics/iyaf064>.
Given a CSV file with titles and abstracts, the package creates a document-term matrix that is lemmatized and stemmed and can be used directly to train machine learning methods for automatic title-abstract screening in the preparation of a meta-analysis.
Facilitates creation and manipulation of metric graphs, such as street or river networks. Further facilitates operations and visualizations of data on metric graphs, and the creation of a large class of random fields and stochastic partial differential equations on such spaces. These random fields can be used for simulation, prediction and inference. In particular, linear mixed effects models including random field components can be fitted to data based on computationally efficient sparse matrix representations. Interfaces to the R packages INLA and inlabru are also provided, which facilitate working with Bayesian statistical models on metric graphs. The main references for the methods are Bolin, Simas and Wallin (2024) <doi:10.3150/23-BEJ1647>, Bolin, Kovacs, Kumar and Simas (2023) <doi:10.1090/mcom/3929> and Bolin, Simas and Wallin (2023) <doi:10.48550/arXiv.2304.03190> and <doi:10.48550/arXiv.2304.10372>.
Determines the number of quantitative assays needed for a sample of data using pooled testing methods, which include mini-pooling (MP), MP with algorithm (MPA), and marker-assisted MPA (mMPA). To estimate the number of assays needed, the package also provides a tool to conduct Monte Carlo (MC) simulations of the different orders in which the samples could be collected to form pools. Using MC avoids the dependence of the estimated number of assays on any specific ordering of the samples.
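A toy Monte Carlo sketch of simple mini-pooling, only to illustrate how the assay count can be averaged over random orderings of the samples (the pool size, cutoff, and pooling rule below are invented for illustration and are not this package's implementation):

    set.seed(123)
    viral_load <- rlnorm(300, meanlog = 3, sdlog = 2)   # simulated quantitative results
    cutoff     <- 1000                                   # illustrative threshold for a positive sample
    pool_size  <- 5
    assays_for <- function(values) {
      pools <- split(values, ceiling(seq_along(values) / pool_size))
      sum(vapply(pools, function(p) {
        # one assay for the pool; resolve members individually only if the diluted
        # pool measurement could hide a positive sample
        1 + if (mean(p) >= cutoff / pool_size) length(p) else 0
      }, numeric(1)))
    }
    # Average the assay count over random orderings of the same samples
    mean(replicate(200, assays_for(sample(viral_load))))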
Mica is a server application used to create data web portals for large-scale epidemiological studies or multiple-study consortia. Mica helps studies provide scientifically robust data visibility and a web presence without significant information technology effort. Mica provides a structured description of consortia, studies, annotated and searchable data dictionaries, and data access request management. This Mica client allows data extraction to be performed for reporting purposes.
Aggregates a set of trees with the same leaves to create a consensus tree. The trees are typically obtained via hierarchical clustering, hence the hclust format is used to encode both the aggregated trees and the final consensus tree. The method is exact and proven to be O(nq log(n)), where n is the number of individuals and q is the number of trees to aggregate.