Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
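For example, here is a minimal sketch of calling this endpoint from R using the httr and jsonlite packages; the base URL is a placeholder and the exact pagination header names may differ, so inspect the live response headers:

library(httr)
library(jsonlite)

base_url <- "https://example.org"  # placeholder; use the site's actual host
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
stop_for_status(resp)

packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)  # pagination information is returned here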
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides high-level modeling functions to define and train models using the torch R package. Models include linear, logistic, and multinomial regression as well as multilayer perceptrons.
General-purpose MCMC and SMC samplers, as well as plot and diagnostic functions for Bayesian statistics, with a particular focus on calibrating complex system models. Implemented samplers include various Metropolis MCMC variants (including adaptive and/or delayed rejection MH), the T-walk, two differential evolution MCMCs, two DREAM MCMCs, and a sequential Monte Carlo (SMC) particle filter.
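As a purely illustrative aside (not the package's own samplers), the core of a random-walk Metropolis sampler can be sketched in a few lines of base R; the toy target density and proposal scale below are arbitrary assumptions:

set.seed(42)
log_post <- function(theta) dnorm(theta, mean = 2, sd = 1, log = TRUE)  # toy target

n_iter <- 5000
chain <- numeric(n_iter)
theta <- 0  # starting value
for (i in 1:n_iter) {
  proposal <- theta + rnorm(1, sd = 0.5)  # random-walk proposal
  # accept with probability min(1, posterior ratio)
  if (log(runif(1)) < log_post(proposal) - log_post(theta)) theta <- proposal
  chain[i] <- theta
}
mean(chain)  # should be close to 2 for this toy target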
Developed for the following tasks: simulating data, computing the maximum likelihood (ML) estimator, computing the Fisher information matrix, computing goodness-of-fit measures, and correcting the bias of the ML estimator for a wide range of distributions fitted to units placed on progressive type-I interval censoring and progressive type-II censoring plans. Implements the method of Cox and Snell (1968) <doi:10.1111/j.2517-6161.1968.tb00724.x> and a bootstrap method for computing the bias-corrected maximum likelihood estimator.
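To illustrate the general idea of ML estimation and the observed Fisher information (ignoring the censoring schemes this package actually handles), here is a rough base-R sketch for a complete Weibull sample; the data and starting values are made up:

set.seed(7)
x <- rweibull(100, shape = 2, scale = 3)  # complete (uncensored) toy sample

# Negative log-likelihood, parameterized on the log scale to keep parameters positive.
negll <- function(logpar) {
  shape <- exp(logpar[1]); scale <- exp(logpar[2])
  -sum(dweibull(x, shape = shape, scale = scale, log = TRUE))
}

fit <- optim(c(0, 0), negll, hessian = TRUE)
exp(fit$par)   # maximum likelihood estimates of shape and scale
fit$hessian    # observed Fisher information (on the log-parameter scale)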
This package provides the estimation algorithm to perform the demand estimation described in Berry, Levinsohn and Pakes (1995) <doi:10.2307/2171802>. The routine uses analytic gradients and offers a large number of implemented integration methods and optimization routines.
Binomial Haar-Fisz transforms for Gaussianization as in Nunes and Nason (2009).
Provides streamlined tools for retrieving sequences from NCBI, performing sequence alignments (pairwise and multiple), and building phylogenetic trees. Implements the Needleman-Wunsch algorithm for global alignment (Needleman & Wunsch (1970) <doi:10.1016/0022-2836(70)90057-4>), Smith-Waterman for local alignment (Smith & Waterman (1981) <doi:10.1016/0022-2836(81)90087-5>), and Neighbor-Joining for tree construction (Saitou & Nei (1987) <doi:10.1093/oxfordjournals.molbev.a040454>).
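For readers unfamiliar with the global-alignment recurrence mentioned above, here is a minimal Needleman-Wunsch score computation in base R; the match/mismatch/gap values are arbitrary assumptions and this is not the package's own implementation:

needleman_wunsch <- function(a, b, match = 1, mismatch = -1, gap = -2) {
  a <- strsplit(a, "")[[1]]
  b <- strsplit(b, "")[[1]]
  n <- length(a); m <- length(b)
  score <- matrix(0, n + 1, m + 1)
  score[, 1] <- gap * 0:n  # leading gaps in b
  score[1, ] <- gap * 0:m  # leading gaps in a
  for (i in 2:(n + 1)) {
    for (j in 2:(m + 1)) {
      s <- if (a[i - 1] == b[j - 1]) match else mismatch
      score[i, j] <- max(score[i - 1, j - 1] + s,  # (mis)match
                         score[i - 1, j] + gap,    # gap in b
                         score[i, j - 1] + gap)    # gap in a
    }
  }
  score[n + 1, m + 1]  # optimal global alignment score
}

needleman_wunsch("GATTACA", "GCATGCU")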
Implementation of multisource exchangeability models for Bayesian analyses of prespecified subgroups arising in the context of basket trial design and monitoring. The R basket package facilitates implementation of the binary, symmetric multi-source exchangeability model (MEM) with posterior inference arising from both exact computation and Markov chain Monte Carlo sampling. Analysis output includes full posterior samples as well as posterior probabilities, highest posterior density (HPD) interval boundaries, effective sample sizes (ESS), mean and median estimates, posterior exchangeability probability matrices, and maximum a posteriori MEMs. In addition to providing "basketwise" analyses, the package includes similar calculations for "clusterwise" analyses, for which subgroups are combined into meta-baskets, or clusters, using graphical clustering algorithms that treat the posterior exchangeability probabilities as edge weights. In addition, plotting tools are provided to visualize basket and cluster densities as well as their exchangeability. References include Hyman, D.M., Puzanov, I., Subbiah, V., Faris, J.E., Chau, I., Blay, J.Y., Wolf, J., Raje, N.S., Diamond, E.L., Hollebecque, A. and Gervais, R. (2015) <doi:10.1056/NEJMoa1502309>; Hobbs, B.P. and Landin, R. (2018) <doi:10.1002/sim.7893>; Hobbs, B.P., Kane, M.J., Hong, D.S. and Landin, R. (2018) <doi:10.1093/annonc/mdy457>; and Kaizer, A.M., Koopmeiners, J.S. and Hobbs, B.P. (2017) <doi:10.1093/biostatistics/kxx031>.
Generates synthetic data distributions to enable testing various modelling techniques in ways that real data does not allow. Noise can be added in a controlled manner such that the data seem realistic. This methodology is generic and therefore benefits both academic and industrial research.
This package contains some commonly used categorical variable encoders, such as LabelEncoder and OneHotEncoder. Inspired by the encoders implemented in the Python sklearn.preprocessing package (see <http://scikit-learn.org/stable/modules/preprocessing.html>).
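As a rough base-R illustration of what these two kinds of encoders do (not the package's own interface):

x <- factor(c("red", "green", "blue", "green"))

as.integer(x)          # label encoding: one integer code per level
model.matrix(~ x - 1)  # one-hot encoding: one indicator column per level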
This package implements a likelihood-based hypothesis testing approach for assessing causal mediation. Described in Millstein, Chen, and Breton (2016) <doi:10.1093/bioinformatics/btw135>, it can be used to test for mediation of a known causal association between a DNA variant (the instrumental variable) and a clinical outcome or phenotype by gene expression or DNA methylation (the potential mediator). Another example would be testing mediation of the effect of a drug on a clinical outcome by the molecular target. The hypothesis test generates a p-value or permutation-based FDR value with confidence intervals to quantify uncertainty in the causal inference. The outcome can be represented by either a continuous or binary variable, the potential mediator is continuous, and the instrumental variable can be continuous or binary and is not limited to a single variable but may be a design matrix representing multiple variables.
Splits data into Gaussian-type clusters using the Cross-Entropy Clustering (CEC) method. This method allows for the simultaneous use of various types of Gaussian mixture models, for removing unnecessary clusters, and for discovering new clusters by splitting existing ones. CEC is based on the work of Spurek, P. and Tabor, J. (2014) <doi:10.1016/j.patcog.2014.03.006>.
This package provides a collection of functions for modeling fissile material operations in nuclear facilities, based on Zywiec et al. (2021) <doi:10.1016/j.ress.2020.107322>.
This package provides a collection of functions for exploratory chemometrics of 2D spectroscopic data sets such as COSY (correlated spectroscopy) and HSQC (heteronuclear single quantum coherence) 2D NMR (nuclear magnetic resonance) spectra. ChemoSpec2D deploys methods aimed primarily at classification of samples and the identification of spectral features which are important in distinguishing samples from each other. Each 2D spectrum (a matrix) is treated as the unit of observation, and thus the physical sample in the spectrometer corresponds to the sample from a statistical perspective. In addition to chemometric tools, a few tools are provided for plotting 2D spectra, but these are not intended to replace the functionality typically available on the spectrometer. ChemoSpec2D takes many of its cues from ChemoSpec and tries to create consistent graphical output and to be very user friendly.
This package provides analytical methods for analyzing CRISPR screen data at different levels of gene expression. Multi-component normal mixture models and EM algorithms are used for modeling.
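Not this package's code, but a minimal sketch of the EM idea for a two-component normal mixture in base R, with simulated data and arbitrary starting values:

set.seed(1)
x <- c(rnorm(200, 0, 1), rnorm(100, 4, 1))  # toy two-component data

pi1 <- 0.5; mu <- c(-1, 1); sigma <- c(1, 1)  # initial guesses
for (iter in 1:100) {
  # E-step: posterior probability that each observation comes from component 1
  d1 <- pi1 * dnorm(x, mu[1], sigma[1])
  d2 <- (1 - pi1) * dnorm(x, mu[2], sigma[2])
  g <- d1 / (d1 + d2)
  # M-step: update mixing weight, means, and standard deviations
  pi1 <- mean(g)
  mu <- c(weighted.mean(x, g), weighted.mean(x, 1 - g))
  sigma <- c(sqrt(weighted.mean((x - mu[1])^2, g)),
             sqrt(weighted.mean((x - mu[2])^2, 1 - g)))
}
c(pi1 = pi1, mu1 = mu[1], mu2 = mu[2])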
Simple functions for plotting linear calibration functions and estimating standard errors for measurements according to the Handbook of Chemometrics and Qualimetrics: Part A by Massart et al. (1997). There are also functions for estimating the limit of detection (LOD) and limit of quantification (LOQ). The functions work on model objects from (optionally weighted) linear regression (lm) or robust linear regression (rlm from the MASS package).
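A base-R sketch of the underlying idea: fit a calibration line with lm() and derive approximate LOD/LOQ values from the residual standard error and slope. The 3.3 and 10 factors are a common rule of thumb and the data are invented, so this is not necessarily the package's exact formula:

conc   <- c(0, 1, 2, 5, 10, 20)                  # calibration standards
signal <- c(0.02, 0.11, 0.21, 0.52, 1.02, 2.05)  # measured responses

fit <- lm(signal ~ conc)         # ordinary (unweighted) calibration line
s <- summary(fit)$sigma          # residual standard error
slope <- coef(fit)[["conc"]]

c(LOD = 3.3 * s / slope, LOQ = 10 * s / slope)
plot(conc, signal); abline(fit)  # simple calibration plot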
The main function calculates confidence intervals (CI) for Mixed Models, utilizing both classical estimators from the lmer() function in the lme4 package and robust estimators from the rlmer() function in the robustlmm package, as well as the varComprob() function in the robustvarComp package. Three methods are available: the classical Wald method, the wild bootstrap, and the parametric bootstrap. Bootstrap methods offer flexibility in obtaining lower and upper bounds through percentile or BCa methods. More details are given in Mason, F., Cantoni, E., & Ghisletta, P. (2021) <doi:10.5964/meth.6607> and Mason, F., Cantoni, E., & Ghisletta, P. (2024) <doi:10.1037/met0000643>.
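For orientation, the classical side of this can be reproduced directly with lme4, shown here on its built-in sleepstudy data as a generic illustration rather than via the package's own interface:

library(lme4)

fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

confint(fit, method = "Wald")             # Wald CIs (fixed effects; variance rows are NA)
confint(fit, method = "boot", nsim = 200) # parametric-bootstrap CIs (slower)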
Every research team has its own script for calculating hemodynamic indices. This package makes it possible to load a long-format data frame, add periods of interest (trigger periods), and delete artifacts with deleter files.
Coalescent simulators can rapidly simulate biological sequences evolving according to a given model of evolution. You can use this package to specify such models, to conduct the simulations and to calculate additional statistics from the results (Staab, Metzler, 2016 <doi:10.1093/bioinformatics/btw098>). It relies on existing simulators for doing the simulation, and currently supports the programs ms, msms and scrm. It also supports finite-sites mutation models by combining the simulators with the program seq-gen. Coala provides functions for calculating certain summary statistics, which can also be applied to actual biological data. One possibility to import data is through the PopGenome package (<https://github.com/pievos101/PopGenome>).
Stacking rings are tools used to stack pottery in a kiln. A relatively large group of stacking rings was found in the area of the sanctuary of Dionysos in Miletus in the 1970s. Measurements and additional information are gathered in this package and made available for use by other researchers. The data, along with their archaeological context and analysis, have been published in "Archäologischer Anzeiger" (2020/1, <doi:10.34780/aa.v0i1.1014>).
Given a collection of intervals with integer start and end positions, find recurrently targeted regions and estimate the significance of the findings. Randomization is implemented by parallel methods, either using local host machines or by submitting grid engine jobs.
Discover causality for bivariate categorical data. This package aims to enable users to discover causality for bivariate observational categorical data. See Ni, Y. (2022), "Bivariate Causal Discovery for Categorical Data via Classification with Optimal Label Permutation", Advances in Neural Information Processing Systems 35 (in press) <arXiv:2209.08579>.
Cellular cooperation compromises the plating efficiency-based analysis of clonogenic survival data. This tool provides functions that enable a robust analysis of colony formation assay (CFA) data in the presence or absence of cellular cooperation. The implemented method has been described in Brix et al. (2020): Brix, N., Samaga, D., Hennel, R. et al., "The clonogenic assay: robustness of plating efficiency-based analysis is strongly compromised by cellular cooperation", Radiat Oncol 15, 248 (2020) <doi:10.1186/s13014-020-01697-y>. Power regression for parameter estimation, calculation of survival fractions, uncertainty analysis and plotting functions are provided.
Common API for filtering data stored in different data models. Provides multiple filter types and reproducible R code. Works standalone or with shinyCohortBuilder as the GUI for interactive Shiny apps.
Utility functions that allow checking the basic validity of a function argument or any other value, including generating an error and assigning a default in a single line of code. The main purpose of the package is to provide simple and easily readable argument checking to improve code robustness.