Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
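For example, the same query can be issued from R with the httr package. This is only a sketch: the host name toys.whereis.social is an assumption and is not part of the API description above.

    library(httr)
    # Query the package search API (host name assumed for illustration only).
    resp <- GET("https://toys.whereis.social/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp, as = "parsed")   # the matching packages
    headers(resp)                  # pagination information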
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a user-friendly way to create patient-level prediction models using the Observational Medical Outcomes Partnership Common Data Model. Given a cohort of interest and an outcome of interest, the package can use data in the Common Data Model to build a large set of features. These features can then be used to fit a predictive model with a number of machine learning algorithms. This is further described in Reps (2017) <doi:10.1093/jamia/ocy032>.
This package produces odds ratio analyses with comprehensive reporting tools. Generates plots, summary tables, and diagnostic checks for logistic regression models fitted with glm() using the binomial family. Provides visualisation methods, formatted reporting tables via 'gt', and tools to assess logistic regression model assumptions.
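As a minimal base-R illustration of the kind of model the package consumes (the package's own reporting functions are not shown and the data are simulated):

    # A logistic regression fitted with glm() and the binomial family;
    # such a fit is the input the package reports on.
    set.seed(42)
    d <- data.frame(x = rnorm(200))
    d$y <- rbinom(200, size = 1, prob = plogis(-0.5 + 1.2 * d$x))
    fit <- glm(y ~ x, family = binomial, data = d)
    exp(coef(fit))   # coefficients expressed as odds ratios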
An R package for polygenic trait analysis.
This package provides an object type and associated tools for storing and wrangling panel data. Implements several methods for creating regression models that take advantage of the unique aspects of panel data. Among other capabilities, automates the "within-between" (also known as "between-within" and "hybrid") panel regression specification that combines the desirable aspects of both fixed effects and random effects econometric models and fits them as multilevel models (Allison, 2009 <doi:10.4135/9781412993869.d33>; Bell & Jones, 2015 <doi:10.1017/psrm.2014.7>). These models can also be estimated via generalized estimating equations (GEE; McNeish, 2019 <doi:10.1080/00273171.2019.1602504>), and Bayesian estimation is (optionally) supported via 'Stan'. Supports estimation of asymmetric effects models via first differences (Allison, 2019 <doi:10.1177/2378023119826441>) as well as a generalized linear model extension thereof using GEE.
This package implements the methods proposed by Olley, G.S. and Pakes, A. (1996) <doi:10.2307/2171831>, Levinsohn, J. and Petrin, A. (2003) <doi:10.1111/1467-937X.00246>, Ackerberg, D.A., Caves, K. and Frazer, G. (2015) <doi:10.3982/ECTA13408>, and Wooldridge, J.M. (2009) <doi:10.1016/j.econlet.2009.04.026> for structural productivity estimation.
Various functions for computing pseudo-observations for censored data regression. Computes pseudo-observations for modeling competing risks based on the cumulative incidence function, the survival function based on the restricted mean, and the survival function based on the Kaplan-Meier estimator; see Klein et al. (2008) <doi:10.1016/j.cmpb.2007.11.017>.
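For intuition, the jackknife pseudo-observation for observation i is n*theta_hat - (n-1)*theta_hat(-i), where theta_hat(-i) is the estimate with observation i left out. The base-R sketch below (not this package's interface) illustrates this for the sample mean, where the pseudo-observations reduce to the original data points:

    x <- c(2.3, 4.1, 3.7, 5.0, 2.9)
    n <- length(x)
    theta_full <- mean(x)
    theta_loo  <- sapply(seq_len(n), function(i) mean(x[-i]))  # leave-one-out estimates
    pseudo     <- n * theta_full - (n - 1) * theta_loo
    pseudo   # for the mean, these equal the original observations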
This package provides access to a high-performance random sampler for the Polya-Gamma distribution, usable either from R or via C++ headers for Rcpp or RcppArmadillo.
This package provides tools for retrieving and analyzing air quality data from PurpleAir sensors through their API. Functions enable downloading historical measurements, accessing sensor metadata, and managing API request limitations through chunked data retrieval. For more information about the PurpleAir API, see <https://api.purpleair.com/>.
We fit causal models using proxies and implement the two-stage proximal least squares estimator of E.J. Tchetgen Tchetgen, A. Ying, Y. Cui, X. Shi, and W. Miao (2020), An Introduction to Proximal Causal Learning, <arXiv:2009.10982>.
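A rough conceptual sketch of the two-stage idea in base R, with simulated data and assumed variable roles (A treatment, Y outcome, Z a treatment-confounding proxy, W an outcome-confounding proxy); this is not the package's interface, see the referenced paper for the exact estimator:

    set.seed(1)
    n <- 5000
    U <- rnorm(n)                         # unmeasured confounder
    Z <- U + rnorm(n)                     # treatment-confounding proxy
    W <- U + rnorm(n)                     # outcome-confounding proxy
    A <- 0.8 * U + rnorm(n)               # treatment, confounded by U
    Y <- 1.0 * A + 1.5 * U + rnorm(n)     # true causal effect of A is 1.0
    coef(lm(Y ~ A))["A"]                  # naive OLS is biased by U
    stage1 <- lm(W ~ A + Z)               # stage 1: predict the outcome proxy
    stage2 <- lm(Y ~ A + fitted(stage1))  # stage 2: adjust using the prediction
    coef(stage2)["A"]                     # close to the true effect of 1.0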
An implementation of prediction intervals for random-effects meta-analysis: Higgins et al. (2009) <doi:10.1111/j.1467-985X.2008.00552.x>, Partlett and Riley (2017) <doi:10.1002/sim.7140>, and Nagashima et al. (2019) <doi:10.1177/0962280218773520>, <arXiv:1804.01054>.
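For reference, the Higgins-type interval for the effect in a new study is mu_hat +/- t_{k-2} * sqrt(tau2 + se^2). A small base-R illustration with made-up inputs (not the package's interface):

    mu_hat <- 0.35   # pooled estimate
    se_mu  <- 0.10   # standard error of the pooled estimate
    tau2   <- 0.04   # estimated between-study variance
    k      <- 12     # number of studies
    half   <- qt(0.975, df = k - 2) * sqrt(tau2 + se_mu^2)
    c(lower = mu_hat - half, upper = mu_hat + half)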
This package provides functions to get prediction intervals and prediction points of future observations from any continuous distribution.
Enables computation of epidemiological statistics, including those where counts or mortality rates of the reference population are used. Currently supported: excess hazard models (Dickman, Sloggett, Hills, and Hakulinen (2012) <doi:10.1002/sim.1597>), rates, mean survival times, relative/net survival (in particular the Ederer II (Ederer and Heise (1959)) and Pohar Perme (Pohar Perme, Stare, and Esteve (2012) <doi:10.1111/j.1541-0420.2011.01640.x>) estimators), and standardized incidence and mortality ratios, all of which can easily be adjusted for covariates such as age. Fast splitting and aggregation of Lexis objects (from the package 'Epi') and other computations are achieved using 'data.table'.
This package provides a comprehensive collection of tools for creating, manipulating and visualising pedigrees and genetic marker data. Pedigrees can be read from text files or created on the fly with built-in functions. A range of utilities enable modifications like adding or removing individuals, breaking loops, and merging pedigrees. An online tool for creating pedigrees interactively, based on 'pedtools', is available at <https://magnusdv.shinyapps.io/quickped>. 'pedtools' is the hub of the 'pedsuite', a collection of packages for pedigree analysis. A detailed presentation of the pedsuite is given in the book Pedigree Analysis in R (Vigeland, 2021, ISBN:9780128244302).
Interactively annotate base R graphics plots with freehand drawing, symbols (points, lines, arrows, rectangles, circles, ellipses), and text. This is useful for teaching (for example, to visually explain certain plot elements) and for creating quick sketches.
This package produces power spectral density estimates through iterative refinement of the optimal number of sine-tapers at each frequency. This optimization procedure is based on the method of Riedel and Sidorenko (1995), which minimizes the Mean Square Error (sum of variance and bias) at each frequency, but modified for computational stability. The same procedure can now be used to calculate the cross spectrum (multivariate analyses).
This package provides functions and data sets for the text Probability and Statistics with R.
Calculates profile repeatability for replicate stress response curves, or similar time-series data. Profile repeatability is an individual repeatability metric that uses the variances at each timepoint, the maximum variance, the number of crossings (lines that cross over each other), and the number of replicates to compute the repeatability score. For more information see Reed et al. (2019) <doi:10.1016/j.ygcen.2018.09.015>.
Enables the removal of training data from fitted R models while retaining predict functionality. The purged models are more portable as their memory footprints do not scale with the training sample size.
This package provides a multiway method to decompose a tensor (array) of any order, as a generalisation of SVD that also supports non-identity metrics and penalisations. A 2-way SVD with these extensions is also available. The package also includes some other multiway methods: PCAn (Tucker-n) and PARAFAC/CANDECOMP with these extensions.
This package provides quasi-Newton methods to minimize partially separable functions. The methods are largely described by Nocedal and Wright (2006) <doi:10.1007/978-0-387-40065-5>.
An implementation of the "Design Analysis" proposed by Gelman and Carlin (2014) <doi:10.1177/1745691614551642>. It combines the evaluation of Power-Analysis with other inferential-risks as Type-M error (i.e. Magnitude) and Type-S error (i.e. Sign). See also Altoè et al. (2020) <doi:10.3389/fpsyg.2019.02893> and Bertoldo et al. (2020) <doi:10.31234/osf.io/q9f86>.
An implementation of prediction intervals for overdispersed count data, for overdispersed binomial data and for linear random effects models.
When working with big data sets, RAM conservation is critically important. However, it is not always enough to just monitor the size of the objects created. So-called "copy-on-modify" behavior, characteristic of R, means that some expressions or functions may require an unexpectedly large amount of RAM overhead. For example, replacing a single value in a matrix can duplicate that matrix in the back-end, making the operation require twice as much RAM as the matrix itself occupies. This package makes it easy to monitor the total and peak RAM used so that developers can quickly identify and eliminate RAM-hungry code.
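The copy-on-modify behavior itself can be observed with base R's tracemem(), which is unrelated to this package's API (tracemem() requires R built with memory profiling, the default for CRAN builds):

    m  <- matrix(0, nrow = 1000, ncol = 1000)  # about 8 MB of doubles
    m2 <- m                                    # second binding to the same data
    tracemem(m)                                # report whenever m is duplicated
    m[1, 1] <- 1                               # triggers a full copy so m2 stays unchanged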
Estimates (and controls for) phylogenetic signal through phylogenetic eigenvectors regression (PVR) and phylogenetic signal-representation (PSR) curve, along with some plot utilities.