Enter your query into the form above. You can search for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
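For example, from R (the base URL below is a placeholder; substitute this site's own address):

    library(curl)  # curl_fetch_memory() and parse_headers() from the curl package

    base <- "https://example.org"  # placeholder base URL
    res  <- curl_fetch_memory(paste0(base, "/api/packages?search=hello&page=1&limit=20"))
    parse_headers(res$headers)     # pagination info (number of pages, etc.)
    cat(rawToChar(res$content))    # the matching packages as JSON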
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Power analysis for A/B testing. The calculations are based on Welch's unequal-variances t-test, which is generally preferred over Student's t-test when the sample sizes and variances of the two groups are unequal, as is frequently the case in A/B testing. In such situations Student's t-test gives biased results because it uses the pooled standard deviation, which Welch's t-test does not.
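As a rough illustration (a base R simulation sketch, not this package's interface), power under Welch's test can be estimated like this:

    # Estimate power for unequal group sizes and variances; t.test() in base R
    # performs Welch's test by default (var.equal = FALSE).
    set.seed(1)
    mean(replicate(5000, {
      x <- rnorm(100, mean = 0.0, sd = 1)  # control arm
      y <- rnorm(150, mean = 0.3, sd = 2)  # treatment arm, larger variance
      t.test(x, y)$p.value < 0.05
    }))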
An embedded proximal interior point quadratic programming solver, which can solve dense and sparse quadratic programs, described in Schwan, Jiang, Kuhn, and Jones (2023) <doi:10.48550/arXiv.2304.00290>. Combining an infeasible interior point method with the proximal method of multipliers, the algorithm can handle ill-conditioned convex quadratic programming problems without requiring linear independence of the constraints. The solver is written in header-only C++14, leveraging the Eigen library for vectorized linear algebra. For small dense problems, vectorized instructions and cache locality can be exploited more efficiently. Allocation-free problem updates and re-solves are also provided.
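A hedged sketch of a tiny dense problem, assuming the R interface exposes a solve_piqp() function for programs of the form minimize 0.5 x'Px + c'x subject to Ax = b and Gx <= h:

    library(piqp)

    P <- matrix(c(6, 0, 0, 4), 2, 2)     # quadratic cost (positive definite)
    c <- c(-1, -4)                       # linear cost
    A <- matrix(c(1, -2), 1, 2); b <- 1  # equality constraint: x1 - 2*x2 = 1
    G <- diag(2); h <- c(1, 1)           # inequalities: x1 <= 1, x2 <= 1

    res <- solve_piqp(P, c, A, b, G, h)  # assumed entry point; see the manual
    res$x                                # the minimizer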
XKCD described a supposedly "bad" colormap that it called a "Painbow" (see <https://xkcd.com/2537/>). But simple tests demonstrate that under some circumstances the colormap can perform very well: people can find information with it that is difficult to detect with the ggplot2 default and even with supposedly "good" colormaps like viridis. This library lets you use the Painbow in your own ggplot graphs.
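A minimal sketch, assuming the package exports a scale_fill_painbow() ggplot2 scale:

    library(ggplot2)
    library(painbow)

    ggplot(faithfuld, aes(waiting, eruptions, fill = density)) +
      geom_raster() +
      scale_fill_painbow()   # assumed scale name; see the package index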
Genotyping arrays enable the direct measurement of an individual's genotype at thousands of markers. plinkQC facilitates genotype quality control for genetic association studies as described by Anderson and colleagues (2010) <doi:10.1038/nprot.2010.116>. It makes PLINK basic statistics (e.g. missing genotyping rates per individual, allele frequencies per genetic marker) and relationship functions accessible from R, and generates a per-individual and per-marker quality control report. Individuals and markers that fail the quality control can subsequently be removed to generate a new, clean dataset. Removal of individuals based on relationship status is optimised to retain as many individuals as possible in the study. Additionally, there is a trained classifier to predict the genomic ancestry of human samples.
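A hedged sketch of the workflow (argument names follow the package vignette, but check your version):

    library(plinkQC)

    indir <- "data"; qcdir <- "qc"; name <- "study"  # PLINK files data/study.bed/.bim/.fam

    fail_individuals <- perIndividualQC(indir = indir, qcdir = qcdir, name = name,
                                        path2plink = "plink")
    fail_markers     <- perMarkerQC(indir = indir, qcdir = qcdir, name = name,
                                    path2plink = "plink")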
R API for Pathling, a tool for querying and transforming electronic health record data represented using the Fast Healthcare Interoperability Resources (FHIR) standard; see <https://pathling.csiro.au/docs>.
This package provides tools for retrieving and analyzing air quality data from PurpleAir sensors through their API. Functions enable downloading historical measurements, accessing sensor metadata, and managing API request limitations through chunked data retrieval. For more information about the PurpleAir API, see <https://api.purpleair.com/>.
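The package wraps the REST API linked above; as a point of reference, a raw request from R looks roughly like this (not this package's interface):

    library(httr)

    resp <- GET("https://api.purpleair.com/v1/sensors",
                query = list(fields = "name,pm2.5_atm"),
                add_headers(`X-API-Key` = Sys.getenv("PURPLEAIR_API_KEY")))
    content(resp)   # parsed JSON of sensor metadata and measurements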
An implementation of prediction intervals for random-effects meta-analysis: Higgins et al. (2009) <doi:10.1111/j.1467-985X.2008.00552.x>, Partlett and Riley (2017) <doi:10.1002/sim.7140>, and Nagashima et al. (2019) <doi:10.1177/0962280218773520>, <arXiv:1804.01054>.
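A hedged sketch, assuming the package's pima() function takes study effect estimates and their standard errors (exact arguments may differ by version):

    library(pimeta)

    y  <- c(0.10, 0.30, 0.35, 0.39, 0.31)  # study effect estimates
    se <- c(0.15, 0.22, 0.12, 0.24, 0.18)  # their standard errors
    pima(y, se)                            # prediction interval for a new study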
The spatial interpolation of genetic distances between samples is based on a modified kriging method that accepts a genetic distance matrix and generates a map of the probability of lineage presence. This package also offers tools to generate a map of potential contact zones between groups, with user-defined thresholds in the tree to account for old and recent divergence. Additionally, it has functions for inverse distance weighting (IDW) interpolation using genetic data and midpoints.
This package contains functions for data preparation, prediction of transition probabilities, estimation of semi-parametric regression models, and nonparametric estimation of other quantities. See Meira-Machado and Roca-Pardiñas (2011) <doi:10.18637/jss.v038.i03>.
The algorithm combines the most predictive variable, such as the count of the main International Classification of Diseases (ICD) codes, with other Electronic Health Record (EHR) features (e.g. health utilization and processed clinical note data) to obtain a score for accurate risk prediction and disease classification. In particular, it normalizes the surrogate to resemble a Gaussian mixture and leverages the remaining features through random corruption denoising. Background and details about the method can be found in Yu et al. (2018) <doi:10.1093/jamia/ocx111>.
Collection of tutorials for working with Positron. Covers scripts, Quarto documents, Git, GitHub, and Quarto websites. Makes extensive use of the tools in the tutorial.helpers package.
This package provides functions to prepare rankings data and fit the Plackett-Luce model jointly attributed to Plackett (1975) <doi:10.2307/2346567> and Luce (1959, ISBN:0486441369). The standard Plackett-Luce model is generalized to accommodate ties of any order in the ranking. Partial rankings, in which only a subset of items are ranked in each ranking, are also accommodated in the implementation. Disconnected/weakly connected networks implied by the rankings may be handled by adding pseudo-rankings with a hypothetical item. Optionally, a multivariate normal prior may be set on the log-worth parameters and ranker reliabilities may be incorporated as proposed by Raman and Joachims (2014) <doi:10.1145/2623330.2623654>. Maximum a posteriori estimation is used when priors are set. Methods are provided to estimate standard errors or quasi-standard errors for inference as well as to fit Plackett-Luce trees. See the package website or vignette for further details.
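A minimal sketch of the rankings interface: rows are rankings, columns are items, 0 marks an unranked item (a partial ranking), and repeated values encode ties:

    library(PlackettLuce)

    R <- matrix(c(1, 2, 0, 3,
                  4, 1, 2, 3,
                  2, 1, 1, 0,   # items B and C tie for first place
                  1, 2, 3, 0),
                nrow = 4, byrow = TRUE,
                dimnames = list(NULL, c("A", "B", "C", "D")))
    mod <- PlackettLuce(as.rankings(R))
    coef(mod)   # log-worth estimates for each item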
This package performs bivariate composite likelihood and full information maximum likelihood estimation for polytomous logit-normit (graded logistic) item response theory (IRT) models.
Evaluation of the pdf and the cdf of the univariate, noncentral, p-generalized normal distribution. Sampling from the univariate, noncentral, p-generalized normal distribution using either the p-generalized polar method, the p-generalized rejecting polar method, the Monty Python method, the Ziggurat method or the method of Nardon and Pianca. The package also includes routines for the simulation of the bivariate, p-generalized uniform distribution and the simulation of the corresponding angular distribution.
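A hedged sketch of the d/p/r functions (argument order follows the package manual; treat it as an assumption):

    library(pgnorm)

    dpgnorm(0, p = 3)            # pdf of the p-generalized normal at 0
    ppgnorm(1.5, p = 3)          # cdf at 1.5
    x <- rpgnorm(1000, p = 3)    # sampling via the default method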
Extends ggplot2 to help replace points in a scatter plot with pie-chart glyphs showing the relative proportions of different categories. The pie glyphs are independent of the axes and plot dimensions, to prevent distortions when the plot dimensions are changed.
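A minimal sketch, assuming geom_pie_glyph() takes a slices argument naming the proportion columns (as in the package README):

    library(ggplot2)
    library(PieGlyph)

    df <- data.frame(x = runif(10), y = runif(10),
                     A = runif(10), B = runif(10), C = runif(10))

    ggplot(df, aes(x = x, y = y)) +
      geom_pie_glyph(slices = c("A", "B", "C"))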
This package provides classes and methods for modelling and simulation of periodically correlated (PC) and periodically integrated time series, and for computing theoretical periodic autocovariances and related properties of PC autoregressive moving average models. It implements some original methods, including those of Boshnakov & Iqelan (2009) <doi:10.1111/j.1467-9892.2009.00617.x> and Boshnakov (1996) <doi:10.1111/j.1467-9892.1996.tb00281.x>.
This package implements the data processing described in <doi:10.1126/sciadv.abk3283> to align modern differentially private data with the formatting of older US Census data releases. The primary goal is to read in Census Privacy-Protected Microdata Files (PPMF) in a reproducible way. This includes tools for aggregating to relevant levels of geography by creating geographic identifiers that match the US Census Bureau's numbering. Additionally, there are tools for grouping race numeric identifiers into categories consistent with OMB (Office of Management and Budget) classifications. Functions exist for downloading and linking to existing sources of privacy-protected microdata.
Computes the minimum sample size required for the external validation of an existing multivariable prediction model using the criteria proposed by Archer (2020) <doi:10.1002/sim.8766> and Riley (2021) <doi:10.1002/sim.9025>.
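A hedged sketch; the argument names below are illustrative assumptions, so check the package manual for the exact interface:

    library(pmvalsampsize)

    pmvalsampsize(type = "b",           # binary outcome (assumed flag)
                  prevalence = 0.2,     # anticipated outcome prevalence
                  cstatistic = 0.8,     # anticipated c-statistic of the model
                  lpnormal = c(-1, 1))  # assumed normal linear-predictor distribution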
This package provides methods to detect genetic markers involved in biological adaptation. pcadapt provides statistical tools for outlier detection based on Principal Component Analysis. It implements the method described in Luu (2016) <DOI:10.1111/1755-0998.12592> and later revised in Privé (2020) <DOI:10.1093/molbev/msaa053>.
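A minimal sketch following the package vignette:

    library(pcadapt)

    path <- system.file("extdata", "geno3pops.bed", package = "pcadapt")
    x    <- read.pcadapt(path, type = "bed")
    res  <- pcadapt(x, K = 2)    # K = number of principal components retained
    head(res$pvalues)            # small p-values flag candidate outlier markers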
This package provides a dataset containing properties for chemical elements. Helper functions are also provided to access some atomic properties.
This work extends the state space model for Poisson count data, the Poisson-Gamma model, towards a semiparametric specification. As in generalized additive models (GAMs), cubic splines are used for covariate smoothing. The semiparametric models are fitted by an iterative process that combines likelihood maximization with the backfitting algorithm.
Figures rendered on graphics devices are usually rescaled to fit pre-determined device dimensions. plotscale implements the reverse: desired plot dimensions are specified and device dimensions are calculated to accommodate marginal material, giving consistent proportions for plot elements. Default methods support grid graphics such as lattice and ggplot. See "example('devsize')" and "vignette('plotscale')".
Log-multiplicative association models (LMA) are models for cross-classifications of categorical variables where interactions are represented by products of category scale values and an association parameter. Maximum likelihood estimation (MLE) fails for moderate to large numbers of categorical variables. The pleLMA package overcomes this limitation of MLE by using pseudo-likelihood estimation to fit the models to small or large cross-classifications of dichotomous or multi-category variables. Originally proposed by Besag (1974, <doi:10.1111/j.2517-6161.1974.tb00999.x>), pseudo-likelihood estimation takes large complex models and breaks them down into smaller ones. Rather than maximizing the likelihood of the joint distribution of all the variables, a pseudo-likelihood function, which is the product of likelihoods from conditional distributions, is maximized. LMA models can be derived from a number of different frameworks, including (but not limited to) graphical models and uni-dimensional and multi-dimensional item response theory models. More details about the models and estimation can be found in the vignette.
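In symbols, a sketch of the general form for observations i = 1..N on variables j = 1..J with parameters theta:

    % Pseudo-likelihood (Besag 1974): each conditional treats the
    % remaining variables x_{i,-j} as fixed.
    \mathrm{PL}(\theta) = \prod_{i=1}^{N} \prod_{j=1}^{J}
      p\left(x_{ij} \mid x_{i,-j}; \theta\right)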
Access the data of the Catalogue of the Timber Forest Species of the Peruvian Amazon: Vásquez Martínez, R., & Rojas Gonzáles, R.D.P. (2022) <doi:10.21704/rfp.v37i3.1956>.