Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
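For example, the endpoint can be queried from R with the httr package. This is a minimal sketch: the host name below is only a placeholder for this site's address, and the exact pagination header names are not listed here, so inspect the returned headers yourself.

```r
# Minimal sketch using the httr package; replace the placeholder host with
# this site's actual address.
library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

results <- content(resp, as = "parsed")  # decoded JSON body with the matching packages
headers(resp)                            # pagination information is returned here
```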
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a suite of empirical Bayes methods for use in pharmacovigilance. It contains various model-fitting and post-processing functions. For more details see Tan et al. (2025) <doi:10.48550/arXiv.2502.09816>, <doi:10.48550/arXiv.2512.01057>; Koenker and Mizera (2014) <doi:10.1080/01621459.2013.869224>; Efron (2016) <doi:10.1093/biomet/asv068>.
An implementation of the Partition of Variation (POV) method as developed by Dr. Thomas A. Little <https://thomasalittleconsulting.com> in 1993 for the analysis of semiconductor data in hard drive manufacturing. POV is based on sequential sums of squares and is an exact method that explains all observed variation. It quantifies both between-factor and within-factor variation effects and can quantify the influence of both continuous and categorical factors.
Cluster analysis via nonparametric density estimation is performed. Operationally, the kernel method is used throughout to estimate the density. Diagnostic methods for evaluating the quality of the clustering are available. The package also includes a routine to estimate the probability density function obtained by the kernel method, given a set of data of arbitrary dimension.
This package provides functions to aid in micro- and macroeconomic analysis and the handling of price and currency data. It includes extraction of relevant inflation and exchange rate data from the World Bank API, data cleaning/parsing, and standardisation. Inflation adjustment calculations follow Principles of Macroeconomics by Gregory Mankiw et al. (2014). Current and historical end-of-day exchange rates for 171 currencies come from the European Central Bank Statistical Data Warehouse (2020).
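As an illustration of the underlying arithmetic (a generic sketch, not this package's own interface), adjusting a nominal amount for inflation simply scales it by the ratio of the price indices:

```r
# Generic illustration of inflation adjustment, not this package's API:
# re-express a nominal amount at the price level of a reference period.
adjust_for_inflation <- function(amount, cpi_from, cpi_to) {
  amount * (cpi_to / cpi_from)
}

adjust_for_inflation(100, cpi_from = 96.5, cpi_to = 108.3)  # about 112.2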
This package creates a non-negative low-rank approximate factorization of a sparse counts matrix by maximizing Poisson likelihood with L1/L2 regularization (e.g. for implicit-feedback recommender systems or bag-of-words-based topic modeling) (Cortes, (2018) <arXiv:1811.01908>), which usually leads to very sparse user and item factors (over 90% zero-valued). Similar to hierarchical Poisson factorization (HPF), but follows an optimization-based approach with regularization instead of a hierarchical prior, and is fit through gradient-based methods instead of variational inference.
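For intuition, the kind of objective being maximized can be written down directly. This sketch is only illustrative; the package's actual optimizer and parameterization may differ.

```r
# Regularized Poisson log-likelihood for non-negative factor matrices A (rows)
# and B (columns); illustrative only, not the package's internal code.
poisson_objective <- function(X, A, B, l1 = 0, l2 = 0) {
  L <- A %*% t(B)                # predicted Poisson rates for each count
  loglik <- sum(X * log(L) - L)  # Poisson log-likelihood up to a constant
  loglik - l1 * (sum(abs(A)) + sum(abs(B))) - l2 * (sum(A^2) + sum(B^2))
}
```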
We provide inference for personalized medicine models. Namely, we answer two questions: (1) how much better does a purported personalized recommendation engine for treatments perform than a business-as-usual approach, and (2) is that difference statistically significant?
Using Electronic Health Records (EHR) is difficult because the true characteristics of the patient are often not available. Instead, we can retrieve the International Classification of Diseases codes related to the disease of interest, or count the occurrences of Unified Medical Language System concepts. Neither is the true phenotype, which requires chart review to identify; however, chart review is time-consuming and costly. PheVis is an algorithm that performs phenotyping (i.e. identifying a characteristic) at the visit level in an unsupervised fashion. It can be used for chronic or acute diseases. An example of how to use PheVis is available in the vignette. Basically there are two functions to be used: `train_phevis()`, which trains the algorithm, and `test_phevis()`, which gets the predicted probabilities. The detailed method is described in a preprint by Ferté et al. (2020) <doi:10.1101/2020.06.15.20131458>.
Following Sommer (2022) <https://mediatum.ub.tum.de/1658240>, portfolio-level risk estimates (e.g. Value at Risk, Expected Shortfall) are obtained by modeling each asset univariately with an ARMA-GARCH model and their cross dependence with a vine copula model, in a rolling-window fashion. One can also condition on variables/time series at certain quantile levels to stress test the risk measure estimates.
This package provides functions that support a broad range of common tasks in physical activity research, including but not limited to creation of Bland-Altman plots (<doi:10.1136/bmj.313.7049.106>), metabolic calculations such as basal metabolic rate predictions (<https://europepmc.org/article/med/4044297/reloa>), demographic calculations such as age-for-body-mass-index percentile (<https://www.cdc.gov/growthcharts/cdc_charts.htm>), and analysis of bout detection algorithm performance (<https://pubmed.ncbi.nlm.nih.gov/34258524/>).
This package provides randomization using permutation for applications. To provide a Quality Control (QC) check, QC samples can be randomized within strata. A second function allows samples to be "switched" to meet set requirements and performs a certain amount of minimization on these switches. The functions are flexible, letting users specify the stratum size and the number of QC samples per stratum. The randomization meets the following requirements: • QC sample requirements: QC samples are not adjacent, and QC samples from the same mother must follow certain patterns. • Matched sample sets must be within a single stratum, and next to each other.
Utilities for multiple hypothesis testing and companion datasets from "Probability and Statistics for Economics and Business: An Introduction Using R" by Jason Abrevaya (MIT Press, under contract).
Homogeneity tests of the coefficients in panel data. Currently, only the Hsiao test for determining coefficient homogeneity between the panel data individuals is implemented, as described in Hsiao (2022), "Analysis of Panel Data" (<doi:10.1017/9781009057745>).
A central decision in a parametric regression is how to specify the relation between a dependent variable and each explanatory variable. This package provides a semi-parametric tool for comparing different transformations of an explanatory variable in a parametric regression. The functions are relevant in situations where you would use a Box-Cox or Box-Tidwell transformation. In contrast to the classic power transformations, the methods in this package allow for theoretically driven user input and the possibility to compare with a non-parametric transformation.
This package provides functions for phenological data preprocessing, modelling and result handling. For more information, please refer to Lange et al. (2016) <doi:10.1007/s00484-016-1161-8>.
Markov chain Monte Carlo diagnostic plots. The purpose of the package is to combine existing tools from the coda and lattice packages, and make it easy to adjust graphical details.
Fits successive Lasso models for several blocks of (omics) data with different priorities and takes the predicted values as an offset for the next block. Also offers options to deal with block-wise missingness in multi-omics data.
This package implements partition-assisted clustering and multiple alignments of networks. It 1) utilizes partition-assisted clustering to find robust and accurate clusters and 2) discovers coherent relationships of clusters across multiple samples. It is particularly useful for analyzing single-cell data sets. Please see Li et al. (2017) <doi:10.1371/journal.pcbi.1005875> for a detailed description of the method.
The perturbR() function incrementally perturbs network edges (using the rewireR function) and compares the resulting community detection solutions from the rewired networks with the solution found for the original network. These comparisons aid in understanding the stability of the original solution. The package requires symmetric, weighted (specifically, count) matrices/networks.
This package performs partial verification bias (PVB) correction for binary diagnostic tests, where PVB arises from selective patient verification in diagnostic accuracy studies. Supports correction of important accuracy measures -- sensitivity, specificity, positive predictive values and negative predictive value -- under missing-at-random and missing-not-at-random missing data mechanisms. Available methods and references are "Begg and Greenes methods" in Alonzo & Pepe (2005) <doi:10.1111/j.1467-9876.2005.00477.x> and deGroot et al. (2011) <doi:10.1016/j.annepidem.2010.10.004>; "Multiple imputation" in Harel & Zhou (2006) <doi:10.1002/sim.2494>, "EM-based logistic regression" in Kosinski & Barnhart (2003) <doi:10.1111/1541-0420.00019>; "Inverse probability weighting" in Alonzo & Pepe (2005) <doi:10.1111/j.1467-9876.2005.00477.x>; "Inverse probability bootstrap sampling" in Nahorniak et al. (2015) <doi:10.1371/journal.pone.0131765> and Arifin & Yusof (2022) <doi:10.3390/diagnostics12112839>; "Scaled inverse probability resampling methods" in Arifin & Yusof (2025) <doi:10.1371/journal.pone.0321440>.
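For intuition, the Begg and Greenes correction is essentially an application of Bayes' theorem: under verification that is missing at random given the test result, disease probabilities are estimated from the verified subset and re-weighted by the observed test-result frequencies. A rough sketch follows; the package's own functions and argument names will differ.

```r
# Illustrative sketch of the Begg & Greenes idea; not this package's interface.
# n_pos, n_neg: all tested patients with positive / negative test results.
# v_pos_d1, v_pos_d0: verified test-positive patients with / without disease.
# v_neg_d1, v_neg_d0: verified test-negative patients with / without disease.
begg_greenes <- function(n_pos, n_neg, v_pos_d1, v_pos_d0, v_neg_d1, v_neg_d0) {
  p_tpos <- n_pos / (n_pos + n_neg)             # P(T+) from everyone tested
  p_d_tpos <- v_pos_d1 / (v_pos_d1 + v_pos_d0)  # P(D+ | T+) from verified subset
  p_d_tneg <- v_neg_d1 / (v_neg_d1 + v_neg_d0)  # P(D+ | T-) from verified subset
  se <- p_d_tpos * p_tpos /
    (p_d_tpos * p_tpos + p_d_tneg * (1 - p_tpos))               # corrected sensitivity
  sp <- (1 - p_d_tneg) * (1 - p_tpos) /
    ((1 - p_d_tneg) * (1 - p_tpos) + (1 - p_d_tpos) * p_tpos)   # corrected specificity
  c(sensitivity = se, specificity = sp)
}
```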
This package provides a collection of tools to facilitate standardized analysis and graphical procedures when using the National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) and other PRO measurements.
Computes the Danish Pesticide Load Indicator as described in Kudsk et al. (2018) <doi:10.1016/j.landusepol.2017.11.010> and Moehring et al. (2019) <doi:10.1016/j.scitotenv.2018.07.287> for pesticide use data. Additionally offers the possibility to directly link pesticide use data to pesticide properties given access to the Pesticide properties database (Lewis et al., 2016) <doi:10.1080/10807039.2015.1133242>.
This package performs Bayesian variable selection under normal linear models, with the model parameters following as prior distribution either the power-expected-posterior (PEP) prior or the intrinsic prior (a special case of the former) (Fouskakis and Ntzoufras (2022) <doi:10.1214/21-BA1288>, Fouskakis and Ntzoufras (2020) <doi:10.3390/econometrics8020017>). The prior distribution on the model space is either the uniform over all models or the uniform on model dimension (a special case of the beta-binomial prior). The selection is performed either by a full enumeration and evaluation of all possible models or by using the Markov Chain Monte Carlo Model Composition (MC3) algorithm (Madigan and York (1995) <doi:10.2307/1403615>). Complementary functions for hypothesis testing, estimation and prediction under Bayesian model averaging, as well as for plotting and printing the results, are also provided. The results can be compared to the ones obtained under other well-known priors on model parameters and model spaces.
This package provides a reliable and flexible toolbox to score patient-reported outcome (PRO), Quality of Life (QOL), and other psychometric measures. The guiding philosophy is that scoring errors can be eliminated by using a limited number of well-tested, well-behaved functions to score PRO-like measures. The workhorse of the package is the scoreScale function, which can be used to score most single-scale measures. It can reverse code items that need to be reversed before scoring and pro-rate scores for missing item data. Currently, three different types of scores can be output: summed item scores, mean item scores, and scores scaled to range from 0 to 100. The PROscorerTools functions can be used to write new functions that score more complex measures. In fact, PROscorerTools functions are the building blocks of the scoring functions in the PROscorer package (which is a repository of functions that score specific commonly-used instruments). Users are encouraged to use PROscorerTools to write scoring functions for their favorite PRO-like instruments, and to submit these functions for inclusion in PROscorer (a tutorial vignette will be added soon). The long-term vision for the PROscorerTools and PROscorer packages is to provide an easy-to-use system to facilitate the incorporation of PRO measures into research studies in a scientifically rigorous and reproducible manner. These packages and their vignettes are intended to help establish and promote "best practices" for scoring and describing PRO-like measures in research.
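As a rough illustration of the usual scoring steps (reverse-coding, pro-rating over answered items, and 0-100 scaling), here is a generic sketch; the function and argument names are hypothetical and are not PROscorerTools' actual interface.

```r
# Generic single-scale scoring sketch; not PROscorerTools' actual functions.
score_0_100 <- function(items, min_item, max_item, reverse = FALSE) {
  if (reverse) items <- (max_item + min_item) - items  # reverse-code if needed
  m <- mean(items, na.rm = TRUE)                       # pro-rate over answered items
  100 * (m - min_item) / (max_item - min_item)         # rescale to the 0-100 range
}

score_0_100(c(1, 3, NA, 4), min_item = 1, max_item = 5)  # about 41.7
```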
Includes a collection of functions presented in "Measuring stability in ecological systems without static equilibria" by Clark et al. (2022) <doi:10.1002/ecs2.4328> in Ecosphere. These can be used to estimate the parameters of a stochastic state space model (i.e. a model where a time series is observed with error). The goal of this package is to estimate the variability around a deterministic process, both in terms of observation error - i.e. variability due to imperfect observations that does not influence system state - and in terms of process noise - i.e. stochastic variation in the actual state of the process. Unlike classical methods for estimating variability, this package does not necessarily assume that the deterministic state is fixed (i.e. a fixed-point equilibrium), meaning that variability around a dynamic trajectory can be estimated (e.g. stochastic fluctuations during predator-prey dynamics).
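To make the distinction between observation error and process noise concrete, here is a toy simulation of such a state-space model in which the latent state evolves with process noise and is observed with error; this is purely illustrative and not this package's interface.

```r
# Toy state-space simulation; illustrative only, not this package's functions.
set.seed(1)
n <- 100
x <- numeric(n)
x[1] <- 1
for (t in 2:n) {
  x[t] <- 0.8 * x[t - 1] + rnorm(1, sd = 0.2)  # process noise: affects the true state
}
y <- x + rnorm(n, sd = 0.5)                    # observation error: does not affect the state
```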