Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
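For example, a minimal Python sketch of calling this endpoint with the requests library (the base URL is a placeholder, and the JSON body and exact pagination header names are assumptions; inspect the real response to see what your deployment returns):

import requests

# Query the package search API (endpoint path taken from the example above).
resp = requests.get(
    "https://example.org/api/packages",   # placeholder host; substitute the real one
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

packages = resp.json()                    # assumed JSON body with the matching packages
print(len(packages), "results on this page")

# Pagination details live in the response headers; the header names are not
# documented here, so list them all and pick out the relevant ones.
for name, value in resp.headers.items():
    print(name, "=", value)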
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Accuracy metrics are commonly used to assess the discriminating ability of diagnostic tests or biomarkers. Among them, metrics based on the ROC framework are particularly popular. When classification involves subclasses, the CompClassMetrics package includes functions that provide point estimates and confidence intervals, as well as true values when a parametric setting is known. For more details see Nan and Tian (2025) <doi:10.1177/09622802251343600>, Nan and Tian (2023) <doi:10.1002/sim.9908>, Feng and Tian (2020) <doi:10.1177/0962280220938077> and Wang et al. (2016) <doi:10.1002/sim.6843>.
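As a reminder of the core quantity, here is a minimal sketch of the empirical ROC AUC (the probability that a randomly chosen case scores higher than a randomly chosen control). It illustrates only the basic point estimate, not the subclass-aware estimators or confidence intervals provided by CompClassMetrics:

import numpy as np

def empirical_auc(cases, controls):
    """Empirical AUC: P(case score > control score), ties counted as 1/2."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    greater = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (greater + 0.5 * ties) / (cases.size * controls.size)

# Toy biomarker values for diseased and healthy subjects.
print(empirical_auc([2.1, 3.5, 4.0], [1.0, 2.0, 2.5]))   # 8/9, about 0.889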
Data manipulation for Coupled Model Intercomparison Project Phase 6 (CMIP6) hydroclimatic data. The files are archived in the Federated Research Data Repository (FRDR) (Rajulapati et al., 2024, <doi:10.20383/103.0829>). The data set is described in Abdelmoaty et al. (2025, <doi:10.1038/s41597-025-04396-z>).
This small library contains a series of simple tools for constructing and manipulating confounded and fractional factorial designs.
Changing the name of an existing R package is an annoying but common task, especially in the early stages of package development. This package (mostly) automates this task.
Utilities to make your clinical collaborations easier, if not fun. Contains functions for designing studies, such as Simon two-stage and group sequential designs, and for data analysis, such as the Jonckheere-Terpstra test and estimation of survival quantiles.
Predicts anticancer peptides using random forests trained on n-gram encoded peptides. The implemented algorithm can be accessed from both the command line and a Shiny-based GUI. The CancerGram model is too large for CRAN and has to be downloaded separately from the repository: <https://github.com/BioGenies/CancerGramModel>. For more information see: Burdukiewicz et al. (2020) <doi:10.3390/pharmaceutics12111045>.
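For illustration, a minimal sketch of n-gram counting on an amino-acid sequence (a generic encoding, not necessarily the exact feature set CancerGram feeds to its random forests):

from collections import Counter

def ngram_counts(sequence, n=2):
    """Count overlapping n-grams (here: amino-acid pairs) in a peptide sequence."""
    return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))

# Toy peptide; the resulting bigram counts would serve as classifier features.
print(ngram_counts("KWKLFKKIEK", n=2))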
This package provides methods of computerized adaptive testing for survey researchers. See Montgomery and Rossiter (2020) <doi:10.1093/jssam/smz027>. Includes functionality for fitting data with the classic item response methods, including the latent trait model, the Birnbaum three-parameter model, the graded response model, and the generalized partial credit model. Additionally, it includes several ability-parameter estimation and item-selection routines. During item selection, all calculations are done in compiled C++ code.
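For reference, the Birnbaum three-parameter logistic model gives the probability of a positive response as a function of ability theta; a small sketch of the standard formula (not this package's code):

import math

def p_correct_3pl(theta, a, b, c):
    """Birnbaum 3PL: discrimination a, difficulty b, guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A respondent of average ability answering a moderately discriminating item.
print(p_correct_3pl(theta=0.0, a=1.2, b=-0.5, c=0.2))   # about 0.72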
Estimation of changepoints using an "S-curve" approximation. Formation of confidence intervals for changepoint locations and magnitudes. Both abrupt and gradual changes can be modeled.
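As a generic illustration of the idea (not this package's estimator), one can fit a logistic S-curve to a series and read the changepoint location off the curve's midpoint; the sketch below assumes scipy is available:

import numpy as np
from scipy.optimize import curve_fit

def s_curve(x, lower, upper, x0, k):
    """Logistic S-curve: gradual change from `lower` to `upper`, centred at x0."""
    return lower + (upper - lower) / (1.0 + np.exp(-k * (x - x0)))

x = np.arange(100, dtype=float)
y = s_curve(x, 0.0, 5.0, 60.0, 0.3) + np.random.normal(0, 0.3, size=x.size)

params, _ = curve_fit(s_curve, x, y, p0=[y.min(), y.max(), x.mean(), 0.1])
print("estimated changepoint location:", params[2])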
This package provides a Bayesian approach to using predictive probability in an ANOVA construct with a continuous normal response, when threshold values must be obtained for the question of interest to be evaluated as successful (Sieck and Christensen (2021) <doi:10.1002/qre.2802>). The Bayesian Mission Mean (BMM) is used to evaluate a question of interest (that is, a mean that randomly selects a combination of factor levels based on their probability of occurring, instead of averaging over the factor levels as in the grand mean). Under this construct, in contrast to a Gibbs sampler (or Metropolis-within-Gibbs sampler), a two-stage sampling method is required. The nested sampler determines the conditional posterior distribution of the model parameters, given Y, and the outside sampler determines the marginal posterior distribution of Y (also commonly called the predictive distribution for Y). This approach provides a sample from the joint posterior distribution of Y and the model parameters, while also accounting for the threshold value that must be obtained in order for the question of interest to be evaluated as successful.
Robust regression methods for compositional data. The distribution of the estimates can be approximated with various bootstrap methods. These bootstrap methods are available for the compositional as well as for standard robust regression estimates. This allows for direct comparison between them.
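For orientation, a plain nonparametric case-resampling bootstrap of regression coefficients (ordinary least squares here, not the robust compositional estimators this package implements):

import numpy as np

rng = np.random.default_rng(1)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Resample cases with replacement and refit to approximate the coefficient distribution.
boot = np.array([ols(X[idx], y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(1000))])
print("bootstrap standard error of the slope:", boot[:, 1].std(ddof=1))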
Create correlation (or partial correlation) matrices. Correlation matrices are formatted with significance stars based on user preferences. Matrices of coefficients, p-values, and numbers of pairwise observations are returned. Send the resulting formatted matrices to the clipboard to be pasted into Excel and other programs. A plot method allows users to visualize correlation matrices created with 'corx'.
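A minimal Python analogue of the formatting idea, using scipy's pearsonr (the star thresholds are assumptions, and this is not corx's own output format):

import numpy as np
from scipy.stats import pearsonr

def star(p):
    """Significance stars at conventional (assumed) thresholds."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else ""

rng = np.random.default_rng(0)
data = {"x": rng.normal(size=50)}
data["y"] = data["x"] * 0.6 + rng.normal(size=50)
data["z"] = rng.normal(size=50)

names = list(data)
for a in names:
    for b in names:
        if a < b:
            r, p = pearsonr(data[a], data[b])
            print(f"{a}-{b}: {r:.2f}{star(p)}")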
To optimize clinical trial designs and data analysis methods consistently through trial simulation, we need to simulate multivariate mixed-type virtual patient data independent of the designs and analysis methods under evaluation. To make the outcome of optimization more realistic, relevant empirical patient-level data should be utilized when available. However, a few problems arise in simulating trials based on small empirical data sets, where the underlying marginal distributions and their dependence structure cannot be understood or verified thoroughly due to the limited sample size. To resolve this issue, we use the copula invariance property, which can generate the joint distribution without making a strong parametric assumption. The function copula.sim can generate virtual patient data with optional data validation methods based on energy distance and ball divergence measurement. The function compare.copula.sim can compare the marginal means and covariance of the simulated data. To simulate patient-level data from a hypothetical treatment arm that would perform differently from the observed data, the function new.arm.copula.sim can be used to generate new multivariate data with the same dependence structure as the original data but with a shifted mean vector.
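As a rough illustration of the copula idea only, here is a plain Gaussian-copula resampling sketch (not the package's copula-invariance procedure or its copula.sim interface):

import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_sim(data, n_new, rng):
    """Simulate rows with roughly the margins and dependence of `data` (n x p array)."""
    n, p = data.shape
    # 1. Map each margin to normal scores via ranks.
    z = norm.ppf(np.apply_along_axis(rankdata, 0, data) / (n + 1))
    # 2. Estimate the dependence on the normal scale and sample from it.
    z_new = rng.multivariate_normal(np.zeros(p), np.corrcoef(z, rowvar=False), size=n_new)
    # 3. Map back to the observed margins through empirical quantiles.
    u_new = norm.cdf(z_new)
    return np.column_stack([np.quantile(data[:, j], u_new[:, j]) for j in range(p)])

rng = np.random.default_rng(2)
observed = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=60)
simulated = gaussian_copula_sim(observed, n_new=200, rng=rng)
print(np.corrcoef(simulated, rowvar=False))   # dependence close to the observed data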
Computes a confidence interval for a specified linear combination of the regression parameters in a linear regression model with iid normal errors with unknown variance, when there is uncertain prior information that a distinct specified linear combination of the regression parameters takes a specified value. This confidence interval, found by numerical nonlinear constrained optimization, has the required minimum coverage and utilizes this uncertain prior information through desirable expected length properties. This confidence interval is proposed by Kabaila, P. and Giri, K. (2009) <doi:10.1016/j.jspi.2009.03.018>.
Various statistical methods for survival analysis in comparing survival curves between two groups, including overall hypothesis tests described in Li et al. (2015) <doi:10.1371/journal.pone.0116774> and Huang et al. (2020) <doi:10.1080/03610918.2020.1753075>, fixed-point tests in Klein et al. (2007) <doi:10.1002/sim.2864>, short-term tests, and long-term tests in Logan et al. (2008) <doi:10.1111/j.1541-0420.2007.00975.x>. Some commonly used descriptive statistics and plots are also included.
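For reference, a bare-bones two-group log-rank test, the classic overall comparison of survival curves (a textbook sketch, not the package's implementations of the tests cited above):

import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """Two-group log-rank test; `event` is 1 for an observed event, 0 for censoring."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)

time  = [5, 8, 12, 14, 20, 21, 25, 30]
event = [1, 1, 1,  0,  1,  1,  0,  1]
group = [0, 0, 0,  0,  1,  1,  1,  1]
print(logrank(time, event, group))   # (statistic, p-value)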
There are several non-functional-form-based interaction tests for testing interaction in unreplicated two-way layouts. However, no single test can detect all patterns of possible interaction, and each test is sensitive to a particular pattern of interaction. This package combines six non-functional-form-based interaction tests for testing additivity. These six tests were proposed by Boik (1993) <doi:10.1080/02664769300000004>, Piepho (1994), Kharrati-Kopaei and Sadooghi-Alvandi (2007) <doi:10.1080/03610920701386851>, Franck et al. (2013) <doi:10.1016/j.csda.2013.05.002>, Malik et al. (2016) <doi:10.1080/03610918.2013.870196> and Kharrati-Kopaei and Miller (2016) <doi:10.1080/00949655.2015.1057821>. The p-values of these six tests are combined by Bonferroni, Sidak, Jacobi polynomial expansion, and Gaussian copula methods to provide researchers with a testing approach that leverages many existing methods to detect disparate forms of non-additivity. This package is based on the following published paper: Shenavari and Kharrati-Kopaei (2018), "A Method for Testing Additivity in Unreplicated Two-Way Layouts Based on Combining Multiple Interaction Tests". In addition, several sentences in the help files and descriptions were copied from that paper.
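Two of the simpler combination rules mentioned above can be written down directly; a small sketch of the Bonferroni and Sidak combinations of the minimum p-value (the Jacobi polynomial and Gaussian copula combinations in the package are more involved):

def bonferroni_combined(pvalues):
    """Bonferroni combination: k times the smallest p-value, capped at 1."""
    return min(1.0, len(pvalues) * min(pvalues))

def sidak_combined(pvalues):
    """Sidak combination of the smallest p-value."""
    return 1.0 - (1.0 - min(pvalues)) ** len(pvalues)

ps = [0.012, 0.20, 0.35, 0.08, 0.51, 0.04]   # p-values from six interaction tests
print(bonferroni_combined(ps), sidak_combined(ps))   # 0.072 and about 0.070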
This package provides a convenient interface for making requests directly to the Civis Platform API <https://www.civisanalytics.com/platform>. Full documentation is available at <https://civisanalytics.github.io/civis-r/>.
Allows users to input their data, the segmentation, and the function used for the segmentation (with additional arguments); the package then calculates the influence of the data on the changepoint locations, see Wilms et al. (2022) <doi:10.1080/10618600.2021.2000873>. Currently this can only be used with the changepoint package functions to identify changes, but we plan to extend this. There are options for different types of graphics to assess the influence.
Calculation of distances, shortest paths and isochrones on weighted graphs using several variants of Dijkstra's algorithm. Implemented algorithms are unidirectional Dijkstra (Dijkstra, E. W. (1959) <doi:10.1007/BF01386390>), bidirectional Dijkstra (Goldberg, Andrew & Fonseca F. Werneck, Renato (2005) <https://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%20path%20algorithms.pdf>), A* search (P. E. Hart, N. J. Nilsson and B. Raphael (1968) <doi:10.1109/TSSC.1968.300136>), new bidirectional A* (Pijls & Post (2009) <https://repub.eur.nl/pub/16100/ei2009-10.pdf>), contraction hierarchies (R. Geisberger, P. Sanders, D. Schultes and D. Delling (2008) <doi:10.1007/978-3-540-68552-4_24>), and PHAST (D. Delling, A. Goldberg, A. Nowatzyk, R. Werneck (2011) <doi:10.1016/j.jpdc.2012.02.007>). Algorithms for solving the traffic assignment problem are All-or-Nothing assignment, the Method of Successive Averages, the Frank-Wolfe algorithm (M. Fukushima (1984) <doi:10.1016/0191-2615(84)90029-8>), the Conjugate and Bi-Conjugate Frank-Wolfe algorithms (M. Mitradjieva, P. O. Lindberg (2012) <doi:10.1287/trsc.1120.0409>), and Algorithm-B (R. B. Dial (2006) <doi:10.1016/j.trb.2006.02.008>).
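For orientation, a textbook unidirectional Dijkstra on a weighted graph (the package's compiled implementations and the other variants listed above are far more elaborate):

import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`; graph maps node -> list of (neighbour, weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0), ("d", 4.0)], "c": [("d", 1.0)]}
print(dijkstra(graph, "a"))   # {'a': 0.0, 'b': 2.0, 'c': 3.0, 'd': 4.0}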
This package provides a new method for interpretable characterization of heterogeneous treatment effects in terms of decision rules, via an extensive exploration of heterogeneity patterns with an ensemble-of-trees approach that enforces high stability in the discovery. It relies on a two-stage pseudo-outcome regression and is supported by theoretical convergence guarantees. Bargagli-Stoffi, F. J., Cadei, R., Lee, K., & Dominici, F. (2023) "Causal Rule Ensemble: Interpretable Discovery and Inference of Heterogeneous Treatment Effects", arXiv preprint <doi:10.48550/arXiv.2009.09036>.
This package provides a collection of ergonomic large language model assistants designed to help you complete repetitive, hard-to-automate tasks quickly. After selecting some code, press the keyboard shortcut you've chosen to trigger the package app, select an assistant, and watch your chore be carried out. While the package ships with a number of chore helpers for R package development, users can create custom helpers just by writing some instructions in a markdown file.
Create, edit, and remove cron jobs on your Unix-alike system. The package provides a set of easy-to-use wrappers to 'crontab'. It also provides an RStudio add-in to easily launch and schedule your scripts.
Collection of indices and tools relating to clinical research that aid epidemiological cohort studies or retrospective chart reviews with big data. All indices and tools take commonly used lab values, patient demographics, and clinical measurements to compute various risk and predictive values for survival or further classification/stratification. References to the original literature and validation are contained in each function's documentation. Includes all commonly available calculators found online.
Supports quantitative research in scientometrics and bibliometrics. Provides various tools for preprocessing bibliographic data retrieved, e.g., from Elsevier's Scopus, computing bibliometric impact of individuals, or modelling phenomena encountered in the social sciences. This package is deprecated; see agop instead.
Computes solutions for linear and logistic regression models with potentially high-dimensional categorical predictors. This is done by applying a nonconvex penalty (SCOPE) and computing solutions in an efficient path-wise fashion. The scaling of the solution paths is selected automatically. Includes functionality for selecting the tuning parameter lambda by k-fold cross-validation and for early termination based on information criteria. Solutions are computed by cyclical block-coordinate descent, iterating an innovative dynamic programming algorithm to compute exact solutions for each block.
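As a generic illustration of selecting lambda by k-fold cross-validation (the fit_model argument and the ridge stand-in below are hypothetical placeholders; the actual SCOPE path algorithm is what the package implements in compiled code):

import numpy as np

def kfold_cv_lambda(X, y, lambdas, fit_model, k=5, rng=None):
    """Pick the lambda with the lowest mean held-out squared error over k folds.

    fit_model(X_train, y_train, lam) is a hypothetical fitter returning a predict function.
    """
    rng = rng or np.random.default_rng(0)
    folds = np.array_split(rng.permutation(len(y)), k)
    errors = np.zeros(len(lambdas))
    for i, lam in enumerate(lambdas):
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
            predict = fit_model(X[train_idx], y[train_idx], lam)
            errors[i] += np.mean((y[test_idx] - predict(X[test_idx])) ** 2)
    return lambdas[int(np.argmin(errors))]

# Toy fitter: ridge regression, standing in for the (hypothetical) SCOPE fit.
def ridge_fit(X, y, lam):
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda X_new: X_new @ beta

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X @ np.array([1, 0, 0, 2, 0]) + rng.normal(size=100)
print(kfold_cv_lambda(X, y, [0.01, 0.1, 1.0, 10.0], ridge_fit))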