Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
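For example, a minimal sketch of calling this endpoint from Python using only the standard library. The base URL below is a placeholder for this site's address, and the response body is assumed to be JSON; only the documented search, page and limit parameters are used.

import json
import urllib.parse
import urllib.request

# Placeholder: substitute this site's actual base URL.
BASE_URL = "https://example.org"

params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
    # Pagination details (e.g. the number of pages) arrive in the response headers.
    print(dict(resp.headers))
    packages = json.load(resp)   # assumes a JSON body

print(packages)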
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a program for Bayesian analysis of univariate normal mixtures with an unknown number of components, following the approach of Richardson and Green (1997) <doi:10.1111/1467-9868.00095>. This makes use of reversible jump Markov chain Monte Carlo methods that are capable of jumping between the parameter sub-spaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution.
This package performs combination tests and sample size calculation for fixed designs with survival endpoints, under either proportional or non-proportional hazards. The combination tests include the maximum weighted log-rank test and the projection test. The sample size calculation procedure is very flexible, allowing for a user-defined hazard ratio function and accounting for various trial conditions such as staggered entry and drop-out. The sample size calculation also applies to various cure models, such as the proportional hazards cure model and cure models with (random) delayed treatment effects. A trial simulation function is also provided to facilitate empirical power calculations. The references for the projection test and the maximum weighted log-rank test include Brendel et al. (2014) <doi:10.1111/sjos.12059> and Cheng and He (2021) <arXiv:2110.03833>. The references for sample size calculation under proportional hazards include Schoenfeld (1981) <doi:10.1093/biomet/68.1.316> and Freedman (1982) <doi:10.1002/sim.4780010204>. The references for calculation under non-proportional hazards include Lakatos (1988) <doi:10.2307/2531910> and Cheng and He (2023) <doi:10.1002/bimj.202100403>.
This package implements likelihood inference based on higher-order approximations for nonlinear models with possibly non-constant variance.
This package provides functions for nonlinear time series analysis. It permits the computation of the most commonly used nonlinear statistics/algorithms, including generalized correlation dimension, information dimension, largest Lyapunov exponent, sample entropy and Recurrence Quantification Analysis (RQA), among others. Basic routines for surrogate data testing are also included. Part of this work was based on the book "Nonlinear Time Series Analysis" by Holger Kantz and Thomas Schreiber (ISBN: 9780521529020).
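As one concrete illustration of the statistics listed above, here is a plain-Python sketch of sample entropy (SampEn, in Richman and Moorman's formulation). It is a generic implementation for illustration, not this package's code.

import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -log of the conditional probability that sequences
    close for m points remain close for m + 1 points."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # common default tolerance

    def matches(length):
        # Templates start at i = 0, ..., n - m - 1, so the m and m + 1
        # counts are based on the same set of starting points.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        total = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to later templates only (no self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(d <= r))
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(1)
print(sample_entropy(rng.standard_normal(500)))                  # white noise: higher
print(sample_entropy(np.sin(np.linspace(0, 40 * np.pi, 500))))   # regular signal: lower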
Data sets and nonlinear regression models dedicated to predictive microbiology.
Nonparametric tests for clustered data in pre-post intervention designs, as documented in Cui and Harrar (2021) <doi:10.1002/bimj.201900310> and Harrar and Cui (2022) <doi:10.1016/j.jspi.2022.05.009>. Beyond the main test results described in the reference papers, this package provides a function to calculate sample size allocations for the input long-format data set, a function for adjusted/unadjusted confidence interval calculations, and functions to visualize the distribution of the data across intervention groups over time together with the adjusted/unadjusted confidence intervals.
Stochastic collapsed variational inference on the mixed-membership stochastic blockmodel for networks, incorporating node-level predictors of mixed-membership vectors, as well as dyad-level predictors. For networks observed over time, the model defines a hidden Markov process that allows the effects of node-level predictors to evolve in discrete, historical periods. In addition, the package offers a variety of utilities for exploring results of estimation, including tools for conducting posterior predictive checks of goodness-of-fit and several plotting functions. The package implements methods described in Olivella, Pratt and Imai (2019), 'Dynamic Stochastic Blockmodel Regression for Social Networks: Application to International Conflicts', available at <https://www.santiagoolivella.info/pdfs/socnet.pdf>.
Setup, run and analyze NetLogo (<https://www.netlogo.org>) model simulations in R. nlrx experiments use a structure similar to NetLogo's BehaviorSpace experiments. However, nlrx offers more flexibility and additional tools for running and analyzing complex simulation designs and sensitivity analyses. The user defines all the information that is needed in an intuitive framework, using class objects. Experiments are submitted from R to NetLogo via XML files that are written dynamically, based on specifications defined by the user. By nesting model calls in future environments, large simulation designs with many runs can be executed in parallel. This also enables simulating NetLogo experiments on remote high-performance computing machines. In order to use this package, Java and NetLogo (>= 5.3.1) need to be available on the executing system.
Calculate Overall Survival or Recurrence-Free Survival for breast cancer patients using NHS Predict. The time interval for the estimation can be set up to 15 years, with a default of 10. Incremental therapy benefits are estimated for hormone therapy, chemotherapy, trastuzumab, and bisphosphonates. An additional function, suited for SCAN audits, offers a more user-friendly version of the code with fewer inputs, but requires correctly standardised inputs. This work is not affiliated with the development of NHS Predict and its underlying statistical model. Details on NHS Predict can be found at <doi:10.1186/bcr2464>, and the web version of NHS Predict at <https://breast.predict.nhs.uk/>. A small dataset of 50 fictional patient observations is provided for the purpose of running examples with the two main functions, and an additional dataset is provided for running examples with the dedicated SCAN function.
Essentials for PK/PD (pharmacokinetics/pharmacodynamics) such as area under the curve, (geometric) coefficient of variation, and other calculations that are not part of base R. This is not a noncompartmental analysis (NCA) package.
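Two of the quantities mentioned can be sketched in a few lines of plain Python. This is an illustration of the standard formulas (linear trapezoidal AUC, and geometric CV = sqrt(exp(variance of log x) - 1)), not this package's API.

import numpy as np

def auc_trapezoid(time, conc):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    t = np.asarray(time, float)
    c = np.asarray(conc, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * (t[1:] - t[:-1])))

def geometric_cv(x):
    """Geometric coefficient of variation: sqrt(exp(sample variance of log(x)) - 1)."""
    log_x = np.log(np.asarray(x, float))
    return float(np.sqrt(np.exp(np.var(log_x, ddof=1)) - 1.0))

# Hypothetical sparse concentration-time profile.
t = [0, 0.5, 1, 2, 4, 8, 12]
c = [0, 4.1, 6.0, 5.2, 3.1, 1.2, 0.4]
print(auc_trapezoid(t, c))
print(geometric_cv([4.1, 6.0, 5.2, 3.1]))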
Fits non-homogeneous Markov multistate models and misclassification-type hidden Markov models in continuous time to intermittently observed data. Implements the methods in Titman (2011) <doi:10.1111/j.1541-0420.2010.01550.x>. Uses direct numerical solution of the Kolmogorov forward equations to calculate the transition probabilities.
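For illustration only (not this package's internals): the transition probability matrix P(t0, t) of a continuous-time Markov process with generator Q(t) solves the Kolmogorov forward equation dP/dt = P Q(t) with P(t0, t0) = I, which can be integrated numerically. A minimal sketch with a hypothetical two-state, time-dependent generator:

import numpy as np
from scipy.integrate import solve_ivp

def Q(t):
    """Hypothetical time-dependent generator of a two-state process."""
    rate_01 = 0.5 * np.exp(-0.1 * t)   # transition intensity 0 -> 1
    rate_10 = 0.2                      # transition intensity 1 -> 0
    return np.array([[-rate_01, rate_01],
                     [rate_10, -rate_10]])

def forward(t, p_flat):
    # Kolmogorov forward equation dP/dt = P(t) Q(t), flattened for the ODE solver.
    P = p_flat.reshape(2, 2)
    return (P @ Q(t)).ravel()

t0, t1 = 0.0, 5.0
sol = solve_ivp(forward, (t0, t1), np.eye(2).ravel(), rtol=1e-8)
P = sol.y[:, -1].reshape(2, 2)
print(P)                # transition probabilities P(t0, t1)
print(P.sum(axis=1))    # each row sums to 1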
Robust nonparametric bootstrap and permutation tests for goodness of fit, distribution equivalence, location, correlation, and regression problems, as described in Helwig (2019a) <doi:10.1002/wics.1457> and Helwig (2019b) <doi:10.1016/j.neuroimage.2019.116030>. Univariate and multivariate tests are supported. For each problem, exact tests and Monte Carlo approximations are available. Five different nonparametric bootstrap confidence intervals are implemented. Parallel computing is implemented via the parallel package.
The purpose of this library is to call different optimization solvers (such as Gonzalez Rodriguez et al. (2022) <doi:10.1007/s10898-022-01229-w>, Tawarmalani and Sahinidis (2005) <doi:10.1007/s10107-005-0581-8>, and Byrd et al. (2006) <doi:10.1007/0-387-30065-1_4>) to solve problems given by a standard nl file.
This package provides functions to calculate estimates of intrinsic and extrinsic noise from the two-reporter single-cell experiment, as in Elowitz, M. B., A. J. Levine, E. D. Siggia, and P. S. Swain (2002), Stochastic gene expression in a single cell, Science, 297, 1183-1186. The functions implement multiple estimators developed for unbiasedness or minimum Mean Squared Error (MSE) in Fu, A. Q. and Pachter, L. (2016), Estimating intrinsic and extrinsic noise from single-cell gene expression measurements, Statistical Applications in Genetics and Molecular Biology, 15(6), 447-471.
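For reference, the classic moment estimators from Elowitz et al. (2002) can be sketched as below; this is a generic illustration, not necessarily the refined unbiased or minimum-MSE estimators of Fu and Pachter implemented by the package.

import numpy as np

def noise_decomposition(c1, c2):
    """Elowitz et al. (2002) intrinsic/extrinsic/total noise (squared CVs)
    from paired two-reporter measurements across cells."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    m1, m2 = c1.mean(), c2.mean()
    eta_int2 = np.mean((c1 - c2) ** 2) / (2.0 * m1 * m2)
    eta_ext2 = (np.mean(c1 * c2) - m1 * m2) / (m1 * m2)
    return {"intrinsic": eta_int2, "extrinsic": eta_ext2,
            "total": eta_int2 + eta_ext2}

# Simulated example: a shared extrinsic factor plus reporter-specific noise.
rng = np.random.default_rng(0)
extrinsic = rng.lognormal(0.0, 0.3, size=1000)
c1 = extrinsic * rng.lognormal(0.0, 0.2, size=1000)
c2 = extrinsic * rng.lognormal(0.0, 0.2, size=1000)
print(noise_decomposition(c1, c2))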
This package implements the non-asymptotically valid and asymptotically exact confidence intervals in two cases: estimation of the mean, and estimation of (a linear combination of) the coefficients in a linear regression model, following (Derumigny, Girard and Guyonvarch, 2025) <doi:10.48550/arXiv.2507.16776>.
Allele frequency databases for 50 forensic short tandem repeat (STR) markers, covering Norway and several broader regional populations: Europe, Africa, South America, West Asia, Middle Asia, and East Asia. Developed and maintained for use at the Department of Forensic Sciences, Oslo, Norway.
Empirical statistical analysis, visualization and simulation of diffusion and contagion processes on networks. The package implements algorithms for calculating network diffusion statistics such as transmission rate, hazard rates, exposure models, network threshold levels, infectiousness (contagion), and susceptibility. The package is inspired by work published in Valente et al. (2015) <DOI:10.1016/j.socscimed.2015.10.001>; Valente (1995) <ISBN:9781881303213>; Myers (2000) <DOI:10.1086/303110>; Iyengar et al. (2011) <DOI:10.1287/mksc.1100.0566>; and Burt (1987) <DOI:10.1086/228667>; among others.
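As an example of one such statistic, the basic network exposure model gives each node the fraction of its (weighted) ties that have already adopted. A minimal numpy sketch, not this package's own API:

import numpy as np

def exposure(adjacency, adopted):
    """Network exposure: share of each node's neighbors that have adopted.
    `adjacency` is an n x n (possibly weighted) matrix, `adopted` a 0/1 vector."""
    adjacency = np.asarray(adjacency, float)
    adopted = np.asarray(adopted, float)
    weighted_contacts = adjacency @ adopted   # adopters among each node's neighbors
    degree = adjacency.sum(axis=1)            # total (weighted) ties per node
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(degree > 0, weighted_contacts / degree, 0.0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
y = np.array([1, 0, 1, 0])   # who has adopted so far
print(exposure(A, y))        # e.g. [0.5, 0.667, 0.5, 0.0]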
Different inference procedures have been proposed in the literature to correct for the selection bias that non-random selection mechanisms can introduce. One class of methods applies a statistical model to predict the units not in the sample (super-population modeling). Other studies use calibration or Statistical Matching (statistically matching nonprobability and probability samples). To date, the most relevant methods are based on weighting by Propensity Score Adjustment (PSA). The Propensity Score Adjustment method was originally developed to construct weights by estimating response probabilities and using them in Horvitz-Thompson type estimators. This method is usually applied by combining a non-probability sample with a reference sample to construct propensity models for the non-probability sample. Calibration can be used afterwards to add information from auxiliary variables. Propensity scores in PSA are usually estimated using logistic regression models; machine learning classification algorithms can be used as alternatives to logistic regression for estimating propensities. The package NonProbEst implements some of these methods and thus provides a wide range of options for working with data coming from a non-probabilistic sample.
This package provides a comprehensive toolkit for calculating and visualizing Nitrogen Use Efficiency (NUE) indicators in agricultural research. The package implements 23 parameters categorized into fertilizer-based, plant-based, soil-based, isotope-based, ecology-based, and system-based indicators based on Congreves et al. (2021) <doi:10.3389/fpls.2021.637108>. Key features include vectorized calculations for paired-plot experimental designs, batch processing capabilities for handling large datasets, and built-in visualization tools using ggplot2. Designed to streamline the workflow from raw agronomic data to publication-ready metrics and plots.
Design and analysis of flexible platform trials with non-concurrent controls. Functions for data generation, analysis, visualization and running simulation studies are provided. The implemented analysis methods are described in: Bofill Roig et al. (2022) <doi:10.1186/s12874-022-01683-w>, Saville et al. (2022) <doi:10.1177/17407745221112013> and Schmidli et al. (2014) <doi:10.1111/biom.12242>.
Network trees recursively partition the data with respect to covariates. Two network tree algorithms are available: model-based trees based on a multivariate normal model and nonparametric trees based on covariance structures. After partitioning, correlation-based networks (psychometric networks) can be fit on the partitioned data. For details see Jones, Mair, Simon, & Zeileis (2020) <doi:10.1007/s11336-020-09731-4>.
Variational Expectation-Maximization algorithm to fit the noisy stochastic block model to an observed dense graph and to perform node clustering. The package also provides a graph inference procedure to recover the underlying binary graph, which comes with control of the false discovery rate. The method is described in the article "Powerful graph inference with false discovery rate control" by T. Rebafka, E. Roquain, F. Villers (2020) <arXiv:1907.10176>.
Computes and plots the boundary between night and day.
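A hedged sketch of one standard way to compute such a boundary (not necessarily this package's method): on the terminator the Sun's altitude is zero, so for solar declination dec and hour angle H, tan(latitude) = -cos(H) / tan(dec).

import numpy as np

def terminator_latitude(longitude_deg, subsolar_lon_deg, declination_deg):
    """Latitude of the day/night boundary at each longitude, from the
    zero-altitude condition sin(lat)sin(dec) + cos(lat)cos(dec)cos(H) = 0."""
    H = np.radians(np.asarray(longitude_deg, float) - subsolar_lon_deg)  # hour angle
    dec = np.radians(declination_deg)
    return np.degrees(np.arctan2(-np.cos(H), np.tan(dec)))

lons = np.linspace(-180, 180, 7)
# Example: declination ~23.4 degrees (June solstice), subsolar point at 0 degrees longitude.
print(terminator_latitude(lons, 0.0, 23.4))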
Derives the most frequent hierarchies along with their probability of occurrence. One can also define complex hierarchy criteria and calculate their probability. Methodology based on Papakonstantinou et al. (2021) <DOI:10.21203/rs.3.rs-858140/v1>.