Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
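For example, the endpoint can be queried from a script. The sketch below uses Python's requests library; the base URL is a placeholder (replace it with the host serving this page), and the exact shape of the JSON body and pagination header names are not documented here.

    import requests

    # Placeholder base URL; substitute the actual host of this search page.
    BASE_URL = "https://example.org"

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
        timeout=10,
    )
    resp.raise_for_status()

    print(resp.json())      # matching packages
    print(resp.headers)     # pagination details (e.g. number of pages) come back in the headers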
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Easily talk to Google's BigQuery Storage API from R (<https://cloud.google.com/bigquery/docs/reference/storage/rpc>).
Fits the Cox model via stochastic gradient descent. This implementation avoids the computational instability of the standard Cox model when dealing with large datasets. Furthermore, it scales to large datasets that do not fit in memory. It also handles large sparse datasets using a proximal stochastic gradient descent algorithm. For more details about the method, please see Aliasghar Tarkhan and Noah Simon (2020) <arXiv:2003.00116v2>.
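For reference, the objective that such a stochastic gradient scheme targets is the standard Cox partial log-likelihood (generic notation, not the package's own):

    \ell(\beta) = \sum_{i:\,\delta_i = 1} \Bigl[ x_i^\top \beta - \log \sum_{j:\, t_j \ge t_i} \exp\bigl(x_j^\top \beta\bigr) \Bigr], \qquad \beta \leftarrow \beta + \eta\, \nabla \ell_i(\beta),

where \delta_i is the event indicator, the inner sum runs over the risk set at time t_i, \ell_i is the contribution of a single event (or mini-batch), and \eta is the step size.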
This package provides a lossless compressed data format that uses a combination of the LZ77 algorithm and Huffman coding <https://www.rfc-editor.org/rfc/rfc7932>. Brotli is similar in speed to deflate (gzip) but offers denser compression.
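As a quick illustration of the format itself (using the Brotli bindings for Python, not this R package's interface), a round trip looks like:

    import brotli  # official Brotli bindings: pip install Brotli

    data = b"hello, brotli! " * 1000
    compressed = brotli.compress(data)            # LZ77 + Huffman coding per RFC 7932
    assert brotli.decompress(compressed) == data
    print(f"{len(data)} bytes -> {len(compressed)} bytes")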
This package creates networks from RNA sequences, extracts characteristics of these networks using a threshold methodology, and uses them to classify the sequences into classes. Four data sets are included in the BASiNET package, "sequences", "sequences2", "sequences-predict" and "sequences2-predict", with 11, 10, 11 and 11 sequences respectively. These sequences were taken from the data set used in the article by Li, Zhang and Zhou (2014) <doi:10.1186/1471-2105-15-311> and are used to run the examples. BASiNET was published in Nucleic Acids Research (Ito, Eric; Katahira, Isaque; Vicente, Fábio; Pereira, Felipe; Lopes, Fabrício, 2018) <doi:10.1093/nar/gky462>.
When samples contain missing data, are small, or are suspected of bias, estimation of scale reliability may not be trustworthy. A recommended solution for this common problem is Bayesian model estimation. Bayesian methods rely on user-specified information from historical data or researcher intuition to estimate the parameters more accurately. This package provides a user-friendly interface for estimating test reliability. Here, reliability is modeled as a beta-distributed random variable with shape parameters alpha = true score variance and beta = error variance (Tanzer & Harlow, 2020) <doi:10.1080/00273171.2020.1854082>.
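With that parameterization, the point estimate of reliability is simply the mean of the beta distribution, which reproduces the classical ratio of true score variance to total variance (a standard identity, not package-specific notation):

    \rho \sim \mathrm{Beta}(\alpha, \beta), \qquad \mathbb{E}[\rho] = \frac{\alpha}{\alpha + \beta} = \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{true}} + \sigma^2_{\text{error}}}.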
This package implements the Beta Kernel Process (BKP) for nonparametric modeling of spatially varying binomial probabilities, together with its extension, the Dirichlet Kernel Process (DKP), for categorical or multinomial data. The package provides functions for model fitting, predictive inference with uncertainty quantification, posterior simulation, and visualization in one- and two-dimensional input spaces. Multiple kernel functions (Gaussian, Matern 5/2, and Matern 3/2) are supported, with hyperparameters optimized through multi-start gradient-based search. For more details, see Zhao, Qing, and Xu (2025) <doi:10.48550/arXiv.2508.10447>.
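One way to picture the basic idea is kernel-weighted beta-binomial smoothing: observed binomial counts contribute to the Beta shape parameters in proportion to a kernel weight. This is a simplified sketch in Python/NumPy under that assumption, not necessarily the exact BKP formulation from the paper, and none of the names below come from the package.

    import numpy as np

    def gaussian_kernel(x, xi, lengthscale=0.2):
        # Gaussian kernel weight between prediction point x and design points xi.
        return np.exp(-0.5 * ((x - xi) / lengthscale) ** 2)

    def kernel_beta_posterior(x, X, successes, trials, a0=1.0, b0=1.0):
        # Each observation contributes to the Beta shape parameters in proportion
        # to its kernel weight; the posterior mean is a/(a+b), and the full Beta
        # distribution provides uncertainty bands.
        w = gaussian_kernel(x, X)
        a = a0 + np.sum(w * successes)
        b = b0 + np.sum(w * (trials - successes))
        return a, b

    rng = np.random.default_rng(0)
    X = np.linspace(0.0, 1.0, 10)                       # 1-D design points
    trials = np.full(10, 20)
    successes = rng.binomial(trials, 0.3 + 0.4 * X)     # spatially varying probability
    print(kernel_beta_posterior(0.5, X, successes, trials))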
This package implements Bayesian dynamic factor analysis with Stan. Dynamic factor analysis is a dimension reduction tool for multivariate time series. bayesdfa extends conventional dynamic factor models in several ways. First, extreme events may be estimated in the latent trend by modeling process error with a Student-t distribution. Second, alternative constraints (including proportions) are allowed. Third, the estimated dynamic factors can be analyzed with hidden Markov models to evaluate support for latent regimes.
This package provides a set of Boolean operators which accept integers of any size, in any base from 2 to 36, including 2's complement format, and perform actions like "AND", "OR", "NOT", "SHIFTR/L", etc. The output can be in any specified base. A direct base-to-base converter is included.
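The building blocks are ordinary integer bitwise operators combined with base conversion; purely as an illustration of the concept (this is Python, not the package's interface):

    a = int("ff", 16)            # parse "ff" as base 16 -> 255
    b = int("1010", 2)           # parse "1010" as base 2 -> 10

    print(bin(a & b), bin(a | b))                      # AND, OR
    print(bin(~a & 0xFF))                              # NOT, masked to 8 bits (2's complement width)
    print(format(a << 2, "x"), format(a >> 2, "x"))    # left/right shifts, output in base 16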
This package contains Bayesian implementations of the Mixed-Effects Accelerated Failure Time (MEAFT) models for censored data. The data can be not only right-censored but also interval-censored, doubly-interval-censored or misclassified interval-censored. The methods implemented in the package have been published in Komárek and Lesaffre (2006, Stat. Modelling) <doi:10.1191/1471082X06st107oa>, Komárek, Lesaffre and Legrand (2007, Stat. in Medicine) <doi:10.1002/sim.3083>, Komárek and Lesaffre (2007, Stat. Sinica) <https://www3.stat.sinica.edu.tw/statistica/oldpdf/A17n27.pdf>, Komárek and Lesaffre (2008, JASA) <doi:10.1198/016214507000000563>, and García-Zattera, Jara and Komárek (2016, Biometrics) <doi:10.1111/biom.12424>.
This package provides a light-weight object-oriented system with Python-like syntax which supports multiple inheritance and incorporates a Python-like method resolution order.
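For comparison, the Python behaviour being emulated (multiple inheritance resolved by the C3 method resolution order) looks like this in Python itself:

    class A:
        def who(self):
            return "A"

    class B(A):
        def who(self):
            return "B"

    class C(A):
        def who(self):
            return "C"

    class D(B, C):     # multiple inheritance: D inherits from both B and C
        pass

    print(D().who())                              # "B" -- first match along the MRO
    print([cls.__name__ for cls in D.__mro__])    # ['D', 'B', 'C', 'A', 'object']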
Create a hierarchical acoustic event species classifier out of multiple call type detectors as described in Rankin et al. (2017) <doi:10.1111/mms.12381>.
Set of functions to perform various bootstrap unit root tests for individual time series (including the augmented Dickey-Fuller test and union tests), multiple time series and panel data; see Smeekes and Wilms (2023) <doi:10.18637/jss.v106.i12>, Palm, Smeekes and Urbain (2008) <doi:10.1111/j.1467-9892.2007.00565.x>, Palm, Smeekes and Urbain (2011) <doi:10.1016/j.jeconom.2010.11.010>, Moon and Perron (2012) <doi:10.1016/j.jeconom.2012.01.008>, Smeekes and Taylor (2012) <doi:10.1017/S0266466611000387> and Smeekes (2015) <doi:10.1111/jtsa.12110> for key references.
These functions provide a convenient interface for downloading data from the U.S. Bureau of Labor Statistics <https://www.bls.gov>. The functions in this package utilize flat files produced by the Bureau of Labor Statistics, which contain full series history. These files include employment, unemployment, wages, prices, industry and occupational data at a national, state, and sub-state level, depending on the series. Individual functions are included for those programs which have data available at the state level. The core functions provide direct access to the Current Employment Statistics (CES) <https://www.bls.gov/ces/>, Local Area Unemployment Statistics (LAUS) <https://www.bls.gov/lau/>, Occupational Employment and Wage Statistics (OEWS) <https://www.bls.gov/oes/> and Alternative Measures of Labor Underutilization (SALT) <https://www.bls.gov/lau/stalt.htm> data produced by the Bureau of Labor Statistics.
Fork-safe, raw access to the Amazon Web Services (AWS) SDK via the boto3 Python module, and convenient helper functions to query the Simple Storage Service (S3) and Key Management Service (KMS), with partial support for IAM, the Systems Manager Parameter Store and Secrets Manager.
Compose and send out responsive HTML email messages that render perfectly across a range of email clients and device sizes. Helper functions let the user insert embedded images, web link buttons, and ggplot2 plot objects into the message body. Messages can be sent through an SMTP server, through the Posit Connect service, or through the Mailgun API service <https://www.mailgun.com/>.
The mixed model for repeated measures (MMRM) is a popular model for longitudinal clinical trial data with continuous endpoints, and brms is a powerful and versatile package for fitting Bayesian regression models. The brms.mmrm R package leverages brms to run MMRMs, and it supports a simplified interface to reduce difficulty and align with the best practices of the life sciences. References: Bürkner (2017) <doi:10.18637/jss.v080.i01>, Mallinckrodt (2008) <doi:10.1177/009286150804200402>.
Extend the bigmemory package with various analytics. Functions bigkmeans and binit may also be used with native R objects. For tapply-like functions, the bigtabulate package may also be helpful. For linear algebra support, see bigalgebra. For mutex (locking) support for advanced shared-memory usage, see synchronicity.
Gibbs sampling for Bayesian spatial blind source separation (BSP-BSS). BSP-BSS is designed for spatially dependent signals in high-dimensional and large-scale data, such as neuroimaging. The method models the expectation of the observed images as a linear mixture of multiple sparse and piece-wise smooth latent source signals, and constructs a Bayesian nonparametric prior by thresholding Gaussian processes. Details can be found in our paper: Wu, B., Guo, Y., & Kang, J. (2024). Bayesian spatial blind source separation via the thresholded Gaussian process. Journal of the American Statistical Association, 119(545), 422-433.
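In symbols, the generative model sketched above is (generic notation, not the package's own):

    X = A\,S + E, \qquad X \in \mathbb{R}^{n \times p},\; A \in \mathbb{R}^{n \times q},\; S \in \mathbb{R}^{q \times p},

where each row of the source matrix S receives a sparse, piece-wise smooth prior obtained by thresholding a Gaussian process over the spatial domain, and E is observation noise.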
Binomial Haar-Fisz transforms for Gaussianization as in Nunes and Nason (2009).
This package provides a set of R functions and data sets for the book "Understanding Computational Bayesian Statistics." This book was written by Bill (WM) Bolstad and published in 2009 by John Wiley & Sons (ISBN 978-0470046098).
Propagate uncertainty from several estimates when combining these estimates via a function. This is done by using the parametric bootstrap to simulate values from the distribution of each estimate and build up an empirical distribution of the combined parameter. Finally, either the percentile method or the highest density interval is used to derive a confidence interval for the combined parameter with the desired coverage. Gaussian copulas are used when parameters are assumed to be dependent/correlated. References: Davison and Hinkley (1997, ISBN:0-521-57471-4) for the parametric bootstrap and percentile method, Gelman et al. (2014, ISBN:978-1-4398-4095-5) for the highest density interval, and Stockdale et al. (2020) <doi:10.1016/j.jhep.2020.04.008> for an example of combining conditional prevalences.
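A minimal sketch of the core idea in Python/NumPy (assuming independent, normally distributed estimates and the percentile method; the package itself also offers highest density intervals and Gaussian copulas for dependent parameters):

    import numpy as np

    rng = np.random.default_rng(1)

    # Two estimates with their standard errors (illustrative values only).
    estimates = np.array([0.30, 0.15])
    std_errors = np.array([0.05, 0.03])

    # Parametric bootstrap: simulate each estimate, then combine via a function.
    draws = rng.normal(estimates, std_errors, size=(10_000, 2))
    combined = draws[:, 0] * draws[:, 1]          # example combining function

    # Percentile-method 95% confidence interval for the combined parameter.
    lower, upper = np.percentile(combined, [2.5, 97.5])
    print(combined.mean(), lower, upper)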
This package provides a set of user-friendly functions designed to fill gaps in existing introductory biostatistics R tools, making it easier for newcomers to perform basic biostatistical analyses without needing advanced programming skills. The methods implemented in this package are based on the following works: Connor (1987) <doi:10.2307/2531961>; Fleiss, Levin, & Paik (2013, ISBN:978-1-118-62561-3); Levin & Chen (1999) <doi:10.1080/00031305.1999.10474431>; McNemar (1947) <doi:10.1007/BF02295996>.
Modelling of population growth under static and dynamic environmental conditions. Includes functions for model fitting and making predictions under isothermal and dynamic conditions. The methods (algorithms & models) are based on predictive microbiology (see Perez-Rodriguez and Valero (2012, ISBN:978-1-4614-5519-6)).
Create a blended curve from two survival curves, which is particularly useful for survival extrapolation in health technology assessment. The main idea is to mix a flexible model that fits the observed data well with a parametric model that encodes assumptions about long-term survival. The two curves are blended into a single survival curve that is identical to the first model over the range of observed times and gradually approaches the parametric model over the extrapolation period based on a given weight function. This approach allows for the inclusion of external information, such as data from registries or expert opinion, to guide long-term extrapolations, especially when dealing with immature trial data. See Che et al. (2022) <doi:10.1177/0272989X221134545>.
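As a rough illustration of the blending idea (a simplified sketch with a linear weight over the extrapolation window, not the exact weight function or blending scale used by Che et al. 2022):

    import numpy as np

    def blend(t, S_obs, S_ext, t_min, t_max):
        # The weight rises from 0 to 1 across [t_min, t_max]: before t_min the blend
        # equals the observed-data model, after t_max it equals the external model.
        w = np.clip((t - t_min) / (t_max - t_min), 0.0, 1.0)
        return (1 - w) * S_obs(t) + w * S_ext(t)

    # Toy example: flexible "observed" fit vs. a more optimistic external model.
    S_obs = lambda t: np.exp(-0.10 * t)
    S_ext = lambda t: np.exp(-0.05 * t)
    t = np.linspace(0, 60, 7)
    print(blend(t, S_obs, S_ext, t_min=24, t_max=48))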