Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
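For example, the same query can be issued programmatically. The sketch below uses R with the httr package; https://example.org is a placeholder for this site's base URL, not the real host.

library(httr)

# Search for packages matching "hello", 20 results per page.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

content(resp)   # matching packages for the requested page
headers(resp)   # pagination details (e.g. number of pages) from the response headers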
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for modeling and forecasting time series data. Forecasting is based on the innovations algorithm. A description of the innovations algorithm can be found in the textbook "Introduction to Time Series and Forecasting" by Peter J. Brockwell and Richard A. Davis.
The general workflow of most imputation methods is quite similar. The aim of this package is to provide parts of this general workflow to make the implementation of imputation methods easier. The heart of an imputation method is normally the model used. These models can be defined using the parsnip package or customized specifications. The rest of an imputation method consists of more technical specifications, e.g. which columns and rows should be used for imputation and in which order. These technical specifications can be set inside the imputation functions.
Simulate an inhomogeneous self-exciting process (IHSEP), or Hawkes process, with a given (possibly time-varying) baseline intensity and an excitation function. Calculate the likelihood of an IHSEP with given baseline intensity and excitation functions for an (increasing) sequence of event times. Calculate the point process residuals (integral transforms of the original event times). Calculate the mean intensity process.
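In the usual formulation (a standard definition, not quoted from the package documentation), the conditional intensity of such a self-exciting process with baseline intensity \nu and excitation function g is

\lambda(t) = \nu(t) + \sum_{t_i < t} g(t - t_i),

where the t_i are the event times observed before t.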
This package implements Individual Conditional Expectation (ICE) plots, a tool for visualizing the model estimated by any supervised learning algorithm. ICE plots refine Friedman's partial dependence plot by graphing the functional relationship between the predicted response and a covariate of interest for individual observations. Specifically, ICE plots highlight the variation in the fitted values across the range of a covariate of interest, suggesting where and to what extent such variation may exist.
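In the usual notation (a standard definition rather than this package's own), the ICE curve for observation i traces the fitted model \hat{f} as the covariate of interest x_S varies while that observation's remaining covariates x_C^{(i)} are held fixed:

\hat{f}^{(i)}(x_S) = \hat{f}(x_S, x_C^{(i)}).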
Fits generalised linear models via the iteratively reweighted least squares (IRLS) algorithm. The functions perform logistic, Poisson and Gamma regression (ISBN:9780412317606), either for a single model or for many regression models in a column-wise fashion.
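For context, the textbook IRLS step (standard GLM theory, not notation specific to this package) repeatedly solves a weighted least-squares problem on a working response z:

\beta^{(t+1)} = (X^{\top} W^{(t)} X)^{-1} X^{\top} W^{(t)} z^{(t)}, \qquad z^{(t)} = X\beta^{(t)} + (y - \mu^{(t)})\,\frac{\partial \eta}{\partial \mu},

where W^{(t)} is a diagonal weight matrix determined by the variance function and link derivative at the current fit.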
Simple plotting function(s) for exploratory data analysis with flexible options allowing for easy plot customisation. The goal is to make it easy for beginners to start exploring a dataset through simple R function calls, as well as provide a similar interface to summary statistics and inference information. Includes functionality to generate interactive HTML-driven graphs. Used by iNZight, a graphical user interface providing easy exploration and visualisation of data for students of statistics, available in both desktop and online versions.
This package provides a suite of functions for conducting and interpreting analysis of statistical interaction in regression models that was formerly part of the jtools package. Functionality includes visualization of two- and three-way interactions among continuous and/or categorical variables as well as calculation of "simple slopes" and Johnson-Neyman intervals (see e.g., Bauer & Curran, 2005 <doi:10.1207/s15327906mbr4003_5>). These capabilities are implemented for generalized linear models in addition to the standard linear regression context.
Allows an interactive assessment of the timing of interim analyses. The algorithm simulates both the recruitment and treatment/event phase of a clinical trial based on the interim package.
An implementation to reconstruct individual patient data from Kaplan-Meier (K-M) survival curves, visualize and assess the accuracy of the reconstruction, and then perform secondary analysis on the reconstructed data. The package includes a simple function to extract the coordinates from published K-M curves, developed based on Poisot T.'s digitize package (2011) <doi:10.32614/RJ-2011-004>. For more complex and tangled graphs, digitizing software such as DigitizeIt (for Mac or Windows) or ScanIt (for Windows) can be used to obtain the coordinates. Additional information should also be incorporated to increase the accuracy, such as the numbers of patients at risk (often reported at 5-10 time points under the x-axis of the K-M graph), the total number of patients, and the total number of events. The package implements the modified iterative K-M estimation algorithm (modified-iKM), which improves upon the approach proposed by Guyot (2012) <doi:10.1186/1471-2288-12-9> with some modifications.
The Dynamic Time Warping (DTW) distance measure for time series allows non-linear alignments of time series to match similar patterns in time series of different lengths and/or different speeds. IncDTW is characterized by (1) the incremental calculation of DTW (reduces runtime complexity to a linear level for updating the DTW distance) - especially for live data streams or subsequence matching, (2) the vector-based implementation of DTW, which is faster because no matrices are allocated (reduces the space complexity from a quadratic to a linear level in the number of observations) - for all runtime-intensive DTW computations, (3) the subsequence matching algorithm runDTW, which efficiently finds the k-NN to a query pattern in a long time series, and (4) a C++ core. For details about DTW see the original paper "Dynamic programming algorithm optimization for spoken word recognition" by Sakoe and Chiba (1978) <DOI:10.1109/TASSP.1978.1163055>. For details about this package, Dynamic Time Warping and Incremental Dynamic Time Warping please see "IncDTW: An R Package for Incremental Calculation of Dynamic Time Warping" by Leodolter et al. (2021) <doi:10.18637/jss.v099.i09>.
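For reference, the classic DTW recurrence (standard textbook form, independent of this package's implementation) fills a cost matrix

D(i, j) = d(x_i, y_j) + \min\{\, D(i-1, j),\; D(i, j-1),\; D(i-1, j-1) \,\},

so that D(n, m) is the DTW distance between series x_{1:n} and y_{1:m}; incremental calculation exploits the fact that appending new observations only requires filling the newly added rows or columns.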
The functions compute the double-entry intraclass correlation, which is an index of profile similarity (Furr, 2010; McCrae, 2008). The double-entry intraclass correlation is a more precise index of the agreement of two empirically observed profiles than the often-used intraclass correlation (McCrae, 2008). Profiles comprising correlations are automatically transformed according to the Fisher z-transformation before the double-entry intraclass correlation is calculated. If the profiles comprise scores such as sum scores from various personality scales, it is recommended to standardize each individual score prior to computation of the double-entry intraclass correlation (McCrae, 2008). See Furr (2010) <doi:10.1080/00223890903379134> or McCrae (2008) <doi:10.1080/00223890701845104> for details.
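For reference, the Fisher z-transformation applied to each correlation r in a profile is the standard

z = \operatorname{artanh}(r) = \tfrac{1}{2}\,\ln\!\frac{1 + r}{1 - r}.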
An option is a financial derivative, and its pricing is an important problem in practice. Stock price processes are represented as geometric Brownian motion [Black (1973) <doi:10.1086/260062>] or jump diffusion processes [Kou (2002) <doi:10.1287/mnsc.48.8.1086.166>]. In this package, algorithms and visualizations are implemented using the Monte Carlo method in order to calculate European option prices for three models: geometric Brownian motion, jump diffusion processes, and a model in which jumps among companies affect each other.
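As a reminder of the underlying idea (the standard risk-neutral Monte Carlo setup, not the package's exact notation), under geometric Brownian motion the terminal stock price can be simulated directly and a European call price estimated by discounting the average payoff:

S_T^{(k)} = S_0 \exp\!\left((r - \tfrac{1}{2}\sigma^2)T + \sigma\sqrt{T}\,Z_k\right), \quad Z_k \sim N(0,1), \qquad \hat{C} = e^{-rT}\,\frac{1}{N}\sum_{k=1}^{N} \max(S_T^{(k)} - K,\, 0).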
Fits the joint model proposed by Henderson and colleagues (2000) <doi:10.1093/biostatistics/1.4.465>, but extended to the case of multiple continuous longitudinal measures. The time-to-event data is modelled using a Cox proportional hazards regression model with time-varying covariates. The multiple longitudinal outcomes are modelled using a multivariate version of the Laird and Ware linear mixed model. The association is captured by a multivariate latent Gaussian process. The model is estimated using a Monte Carlo Expectation Maximization algorithm. This project was funded by the Medical Research Council (Grant number MR/M013227/1).
Joint analysis and imputation of incomplete data in the Bayesian framework, using (generalized) linear (mixed) models and extensions thereof, survival models, or joint models for longitudinal and survival data, as described in Erler, Rizopoulos and Lesaffre (2021) <doi:10.18637/jss.v100.i20>. Incomplete covariates, if present, are automatically imputed. The package performs some preprocessing of the data and creates a JAGS model, which will then automatically be passed to JAGS <https://mcmc-jags.sourceforge.io/> with the help of the rjags package.
Computes the Jackknife Mutual Information (JMI) between two random vectors and provides the p-value for dependence tests. See Zeng, X., Xia, Y. and Tong, H. (2018) <doi:10.1073/pnas.1715593115>.
This package contains functions for fitting a joinpoint proportional hazards model to relative survival or cause-specific survival data, including estimates of joinpoint years at which survival trends have changed and trend measures in the hazard and cumulative survival scale. See Yu et al.(2009) <doi:10.1111/j.1467-985X.2009.00580.x>.
The age is estimated by calculating the Dirichlet Normal Energy (DNE) on the whole auricular surface and the apex of the auricular surface. It involves four estimation methods: principal component discriminant analysis (PCQDA), principal component logistic regression analysis (PCLR), principal component regression analysis with Southeast Asian data (A_PCR), and principal component regression analysis with multipopulation data (M_PCR). The package is created with the data from the Louis Lopes Collection in Lisbon, the 21st Century Identified Human Remains Collection in Coimbra, the CAL Milano Cemetery Skeletal Collection in Milan, and the skeletal collection at Khon Kaen University (KKU) Human Skeletal Research Centre (HSRC), housed in the Department of Anatomy in the Faculty of Medicine at KKU in Khon Kaen.
Fits univariate and joint N-mixture models for data on two unmarked site-associated species. Includes functions to estimate latent abundances through empirical Bayes methods.
This package provides statistical methods for auditing as implemented in JASP for Audit (Derks et al., 2021 <doi:10.21105/joss.02733>). First, the package makes it easy for an auditor to plan a statistical sample, select the sample from the population, and evaluate the misstatement in the sample compliant with international auditing standards. Second, the package provides statistical methods for auditing data, including tests of digit distributions and repeated values. Finally, the package includes methods for auditing algorithms on the aspect of fairness and bias. Next to classical statistical methodology, the package implements Bayesian equivalents of these methods whose statistical underpinnings are described in Derks et al. (2021) <doi:10.1111/ijau.12240>, Derks et al. (2024) <doi:10.2308/AJPT-2021-086>, Derks et al. (2022) <doi:10.31234/osf.io/8nf3e>, Derks et al. (2024) <doi:10.31234/osf.io/tgq5z>, and Derks et al. (2025) <doi:10.31234/osf.io/b8tu2>.
Metaprogramming utilities for converting R regression model formulae to equivalents in Julia <doi:10.1137/141000671>, via modifications to the abstract syntax tree. Supports translations in zero correlation random effects syntax, protection of expressions to be evaluated as-is, interaction terms, and more. Accepts strings or R formula objects and returns modified R formula objects where possible (or a modified string, if not a valid formula in R).
Set of common functions used for manipulating colors, detecting and interacting with RStudio, modeling, formatting, determining users' operating system, feature scaling, and more!
Simply and efficiently simulates (i) variants from reference genomes and (ii) reads from both Illumina <https://www.illumina.com/> and Pacific Biosciences (PacBio) <https://www.pacb.com/> platforms. It can either read reference genomes from FASTA files or simulate new ones. Genomic variants can be simulated using summary statistics, phylogenies, Variant Call Format (VCF) files, and coalescent simulations, the latter of which can include selection, recombination, and demographic fluctuations. jackalope can simulate single, paired-end, or mate-pair Illumina reads, as well as PacBio reads. These simulations include sequencing errors, mapping qualities, multiplexing, and optical/polymerase chain reaction (PCR) duplicates. Simulating Illumina sequencing is based on ART by Huang et al. (2012) <doi:10.1093/bioinformatics/btr708>. PacBio sequencing simulation is based on SimLoRD by Stöcker et al. (2016) <doi:10.1093/bioinformatics/btw286>. All outputs can be written to standard file formats.
The function jskm() creates a publication-quality Kaplan-Meier plot with an at-risk table below it. svyjskm() provides a plot for the weighted Kaplan-Meier estimator.
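A minimal usage sketch, assuming jskm() accepts a survfit object and a logical table argument for the at-risk table (check ?jskm for the actual interface):

library(survival)
library(jskm)

# Fit a Kaplan-Meier curve on the colon dataset shipped with survival,
# then draw it with an at-risk table underneath (table argument assumed).
fit <- survfit(Surv(time, status) ~ rx, data = colon)
jskm(fit, table = TRUE)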
The goal of jetty is to execute R functions and code snippets in an isolated R subprocess within a Docker container and return the evaluated results to the local R session. jetty can install necessary packages at runtime and seamlessly propagate errors and outputs from the Docker subprocess back to the main session. jetty is primarily designed for sandboxed testing and quick execution of example code.