Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
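For instance, the endpoint can be queried from a script. The following Python sketch is a rough illustration only: the example.org host is a placeholder for this site's address, and the response body is assumed to be JSON, which is not specified above.

import json
import urllib.parse
import urllib.request

# Placeholder host; substitute the address of this site.
base = "https://example.org/api/packages"
query = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})

with urllib.request.urlopen(f"{base}?{query}") as response:
    # Pagination details (number of pages, etc.) arrive in the response headers.
    print(dict(response.headers))
    # The body is assumed to be JSON; adjust if the API returns another format.
    results = json.load(response)

print(results)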
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
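For reference, a Guix channel entry has roughly the following shape; the name, URL, and branch below are placeholders for your own channel, and the exact layout expected by channels.scm in the toys repository may differ.

;; Hypothetical entry: replace the name, URL, and branch with your channel's details.
(channel
 (name 'my-channel)
 (url "https://example.org/my-channel.git")
 (branch "main"))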
Interface to interact with the modelling framework SIMPLACE and to parse the results of simulations.
Fits the regularization path of regression models (linear and logistic) with additively combined penalty terms. All possible combinations with Least Absolute Shrinkage and Selection Operator (LASSO), Smoothly Clipped Absolute Deviation (SCAD), Minimax Concave Penalty (MCP) and Exponential Penalty (EP) are supported. This includes Sparse Group LASSO (SGL), Sparse Group SCAD (SGS), Sparse Group MCP (SGM) and Sparse Group EP (SGE). For more information, see Buch, G., Schulz, A., Schmidtmann, I., Strauch, K., & Wild, P. S. (2024) <doi:10.1002/bimj.202200334>.
Tidies up the forecasting modeling and prediction workflow, extends the broom package with sw_tidy(), sw_glance(), sw_augment(), and sw_tidy_decomp() functions for various forecasting models, and enables converting forecast objects to "tidy" data frames with sw_sweep().
This package provides a collection of tools and functions to fit a variety of stochastic blockmodels (SBM). It currently supports Simple, Bipartite, Multipartite and Multiplex SBM (undirected or directed, with Bernoulli, Poisson or Gaussian emission laws on the edges, and possibly covariates for Simple and Bipartite SBM). See Léger (2016) <doi:10.48550/arXiv.1602.07587>, Barbillon et al. (2020) <doi:10.1111/rssa.12193> and Bar-Hen et al. (2020) <doi:10.48550/arXiv.1807.10138>.
Estimation and inference methods for large-scale mean and quantile regression models via stochastic (sub-)gradient descent (S-subGD) algorithms. The inference procedure handles cross-sectional data sequentially: (i) updating the parameter estimate with each incoming "new observation", (ii) aggregating it as a Polyak-Ruppert average, and (iii) computing an asymptotically pivotal statistic for inference through random scaling. The methodology used in the SGDinference package is described in detail in the following papers: (i) Lee, S., Liao, Y., Seo, M.H. and Shin, Y. (2022) <doi:10.1609/aaai.v36i7.20701> "Fast and robust online inference with stochastic gradient descent via random scaling". (ii) Lee, S., Liao, Y., Seo, M.H. and Shin, Y. (2023) <arXiv:2209.14502> "Fast Inference for Quantile Regression with Tens of Millions of Observations".
Efficient coordinate ascent algorithm for fitting regularization paths for linear models penalized by Spike-and-Slab LASSO of Rockova and George (2018) <doi:10.1080/01621459.2016.1260469>.
This package provides a set of functions that can be used to spatially thin species occurrence data. The resulting thinned data can be used in ecological modeling, such as ecological niche modeling.
This package provides utilities for conducting specification curve analyses (Simonsohn, Simmons & Nelson, 2020, <doi:10.1038/s41562-020-0912-z>) or multiverse analyses (Steegen, Tuerlinckx, Gelman & Vanpaemel, 2016, <doi:10.1177/1745691616658637>), including functions to set up, run, evaluate, and plot all specifications.
The straightforward filtering index (SFINX) identifies true positive protein interactions in a fast, user-friendly, and highly accurate way. It is not only useful for the filtering of affinity purification - mass spectrometry (AP-MS) data, but also for similar types of data resulting from other co-complex interactomics technologies, such as TAP-MS, Virotrap and BioID. SFINX can also be used via the website interface at <http://sfinx.ugent.be>.
Interface to the sigma.js graph visualization library, including animations, plugins, and Shiny proxies.
The sdrt() function is designed for estimating subspaces for Sufficient Dimension Reduction (SDR) in time series, with a specific focus on the Time Series Central Mean subspace (TS-CMS). The package employs the Fourier transformation method proposed by Samadi and De Alwis (2023) <doi:10.48550/arXiv.2312.02110> and the Nadaraya-Watson kernel smoother method proposed by Park et al. (2009) <doi:10.1198/jcgs.2009.08076> for estimating the TS-CMS. The package provides tools for estimating distances between subspaces and includes functions for selecting model parameters using the Fourier transformation method.
This package provides predictive accuracy tools to evaluate time-to-event survival models. This includes calculating the concordance probability estimate that incorporates the follow-up time for a particular study developed by Devlin, Gonen, Heller (2020)<doi:10.1007/s10985-020-09503-3>. It also evaluates the concordance probability estimate for nested Cox proportional hazards models using a projection-based approach by Heller and Devlin (under review).
This package provides a novel semi-supervised machine learning algorithm to predict phenotype event times using Electronic Health Record (EHR) data.
The SoundexBR package provides an algorithm for decoding names into phonetic codes, as pronounced in Portuguese. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling. The algorithm mainly encodes consonants; a vowel will not be encoded unless it is the first letter. The resulting soundex code is a four-character string composed of one letter followed by three numerical digits: the letter is the first letter of the name, and the digits encode the remaining consonants.
This package provides an imputation pipeline for single-cell RNA sequencing data. The scISR method uses a hypothesis-testing technique to identify zero-valued entries that are most likely affected by dropout events and estimates the dropout values using a subspace regression model (Tran et al. (2022) <DOI:10.1038/s41598-022-06500-4>).
Allows the user to estimate a vector logistic smooth transition autoregressive model via maximum log-likelihood or nonlinear least squares. It also allows testing for linearity in the multivariate framework against a vector logistic smooth transition autoregressive model with a single transition variable. The estimation method is discussed in Terasvirta and Yang (2014, <doi:10.1108/S0731-9053(2013)0000031008>). Also, realized covariances can be constructed from stock market prices or returns, as explained in Andersen et al. (2001, <doi:10.1016/S0304-405X(01)00055-1>).
Short and understandable commands that generate tabulated, formatted, and rounded survey estimates. Mostly a wrapper for the survey package (Lumley (2004) <doi:10.18637/jss.v009.i08> <https://CRAN.R-project.org/package=survey>) that identifies low-precision estimates using the National Center for Health Statistics (NCHS) presentation standards (Parker et al. (2017) <https://www.cdc.gov/nchs/data/series/sr_02/sr02_175.pdf>, Parker et al. (2023) <doi:10.15620/cdc:124368>).
Similarity regression, evaluating the probability of association between sets of ontological terms and a binary response vector. A no-association model is compared with one in which the log odds of a true response is linked to the semantic similarity between terms and a latent characteristic ontological profile; see 'Phenotype Similarity Regression for Identifying the Genetic Determinants of Rare Diseases', Greene et al. 2016 <doi:10.1016/j.ajhg.2016.01.008>.
Full text, in data frames containing one row per verse, of the Standard Works of The Church of Jesus Christ of Latter-day Saints (LDS). These are the Old Testament (KJV), the New Testament (KJV), the Book of Mormon, the Doctrine and Covenants, and the Pearl of Great Price.
Implementation of uniformity tests on the circle and (hyper)sphere. The main function of the package is unif_test(), which conveniently collects more than 35 tests for assessing uniformity on S^{p-1} = {x in R^p : ||x|| = 1}, p >= 2. The test statistics are implemented in the unif_stat() function, which allows computing several statistics for different samples within a single call, thus facilitating Monte Carlo experiments. Furthermore, the unif_stat_MC() function allows parallelizing them in a simple way. The asymptotic null distributions of the statistics are available through the function unif_stat_distr(). The core of sphunif is coded in C++ by relying on the Rcpp package. The package also provides several novel datasets and gives the replicability for the data applications/simulations in García-Portugués et al. (2021) <doi:10.1007/978-3-030-69944-4_12>, García-Portugués et al. (2023) <doi:10.3150/21-BEJ1454>, Fernández-de-Marcos and García-Portugués (2024) <doi:10.1016/j.spl.2024.110218>, and García-Portugués et al. (2025) <doi:10.1080/01621459.2025.2566414>.
This package provides a single, phenome-wide permutation of large-scale biobank data. When a large number of phenotypes are analyzed in parallel, a single permutation across all phenotypes followed by genetic association analyses of the permuted data enables estimation of false discovery rates (FDRs) across the phenome. These FDR estimates provide a significance criterion for interpreting genetic associations in a biobank context. For the basic permutation of unrelated samples, this package takes a sample-by-variable file with ID, genotypic covariates, phenotypic covariates, and phenotypes as input. For data with related samples, it also takes a file with sample pair-wise identity-by-descent information. The function outputs a permuted sample-by-variable file ready for genome-wide association analysis. See Annis et al. (2021) <doi:10.21203/rs.3.rs-873449/v1> for details.
Vignettes for the survival package. Split from the survival package since the vignettes were getting large. Also, since survival is a recommended package it cannot make use of other packages outside of base+recommended (e.g. 'rmarkdown').
This package provides a flexible framework combining variable screening and random projection techniques for fitting ensembles of predictive generalized linear models to high-dimensional data. Designed for extensibility, the package implements key techniques as S3 classes with user-friendly constructors, enabling easy integration and development of new procedures for high-dimensional applications. For more details see Parzer et al. (2024a) <doi:10.48550/arXiv.2312.00130> and Parzer et al. (2024b) <doi:10.48550/arXiv.2410.00971>.
The Structural Topic and Sentiment-Discourse (STS) model allows researchers to estimate topic models with document-level metadata that determines both topic prevalence and sentiment-discourse. The sentiment-discourse is modeled as a document-level latent variable for each topic that modulates the word frequency within a topic. These latent topic sentiment-discourse variables are controlled by the document-level metadata. The STS model can be useful for regression analysis with text data in addition to topic modeling's traditional use of descriptive analysis. The method was developed in Chen and Mankad (2024) <doi:10.1287/mnsc.2022.00261>.