Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
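For example, a minimal sketch in R using the httr package; the base URL is a placeholder, and the parsed shape of the JSON body is an assumption:

    # Query the package search API (base URL is a placeholder).
    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)
    headers(resp)                 # pagination lives in the response headers
    content(resp, as = "parsed")  # JSON body parsed into an R list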
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Distance sampling (<doi:10.1007/978-3-319-19219-2>) is a field survey and analytical method that estimates the density and abundance of survey targets (e.g., animals) when detection probability declines with observation distance. Distance sampling is popular in ecology, especially when survey targets are observed from aerial platforms (e.g., airplane or drone), surface vessels (e.g., boat or truck), or along walking transects. Analysis involves fitting smooth (parametric) curves to histograms of observation distances and using those functions to adjust density estimates for missed targets. Routines included here fit curves to observation distance histograms and estimate the effective sampling area, the density of targets in surveyed areas, and the abundance of targets in a surrounding study area. Confidence interval estimation uses built-in bootstrap resampling. Help files are extensive and have been vetted by multiple authors. Many tutorials are available on the package's website (URL below).
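A hedged sketch of the typical workflow, assuming this is the Rdistance package; the function names (dfuncEstim, abundEstim) and the bundled sparrow datasets are assumptions and may differ across versions:

    library(Rdistance)
    # Fit a half-normal detection function to observation distances.
    dfunc <- dfuncEstim(formula = dist ~ 1,
                        detectionData = sparrowDetectionData,
                        likelihood = "halfnorm")
    # Estimate density and abundance with bootstrap confidence intervals.
    abundEstim(dfunc, detectionData = sparrowDetectionData,
               siteData = sparrowSiteData, area = 10000, R = 500)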
'SaTScan' (TM) <https://www.satscan.org> is software for finding regions in time, space, or time-space that have excess risk, based on scan statistics, and uses Monte Carlo hypothesis testing to generate p-values for these regions. The rsatscan package provides functions for writing R data frames in 'SaTScan'-readable formats, for setting 'SaTScan' parameters, for running 'SaTScan' in the OS, and for reading the files that 'SaTScan' creates.
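A hedged sketch of that round trip; the function names (ss.options, write.ss.prm, write.cas, satscan) follow the package's vignette and are assumptions here, and SaTScan itself must be installed separately:

    library(rsatscan)
    td <- tempdir()
    ss.options(reset = TRUE)
    ss.options(list(CaseFile = "mydata.cas"))
    write.ss.prm(td, "mydata")          # parameter file
    write.cas(mycases, td, "mydata")    # mycases: a data frame of cases
    result <- satscan(td, "mydata",
                      sslocation = "C:/Program Files/SaTScan")
    summary(result)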
Computes confidence intervals for binomial or Poisson rates and their differences or ratios. These include the rate (or risk) difference ('RD'), the rate ratio (or relative risk, 'RR') for binomial proportions or Poisson rates, and the odds ratio ('OR', binomial only). Also provides confidence intervals for RD, RR, or OR for paired binomial data, and estimation of a proportion from clustered binomial data. Includes skewness-corrected asymptotic score ('SCAS') methods, which were developed in Laud (2017) <doi:10.1002/pst.1813> from Miettinen and Nurminen (1985) <doi:10.1002/sim.4780040211> and Gart and Nam (1988) <doi:10.2307/2531848>, and in Laud (2025, under review) for paired proportions. The same score produces hypothesis tests that are improved versions of the non-inferiority test for binomial RD and RR by Farrington and Manning (1990) <doi:10.1002/sim.4780091208>, or a generalisation of the McNemar test for paired data. The package also includes MOVER methods (Method Of Variance Estimates Recovery) for all contrasts, derived from the Newcombe method but with options to use equal-tailed intervals in place of the Wilson score method, and generalised for Bayesian applications incorporating prior information. So-called exact methods for strictly conservative coverage are approximated using continuity adjustments, and the amount of adjustment can be selected to avoid over-conservative coverage. Also includes methods for stratified calculations (e.g. meta-analysis), either with a fixed effect assumption (matching the CMH test) or incorporating stratum heterogeneity.
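A hedged sketch, assuming this is the ratesci package; scoreci() and its arguments are assumptions based on the package's documented interface:

    library(ratesci)
    # SCAS confidence interval for the rate difference between two
    # binomial proportions: 5/56 events versus 0/29.
    scoreci(x1 = 5, n1 = 56, x2 = 0, n2 = 29,
            distrib = "bin", contrast = "RD", skew = TRUE)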
Assist in reproducible retrospective (ex-post) harmonization of data, particularly individual-level survey data, by providing tools for organizing metadata, standardizing the coding of variables, variable names, and value labels (including missing values), and documenting data transformations, with the help of comprehensive S3 classes.
This package contains utilities for the analysis of Michaelian kinetic data. Besides the classical linearization methods (Lineweaver-Burk, Eadie-Hofstee, Hanes-Woolf and Eisenthal-Cornish-Bowden), features include the ability to carry out weighted regression analysis that, in most cases, substantially improves the estimation of kinetic parameters (Aledo (2021) <doi:10.1002/bmb.21522>). To avoid data transformations and the potential biases they introduce, the package also offers functions to fit data directly to the Michaelis-Menten equation, using either ([S], v) or (time, [S]) data. Utilities to simulate substrate progress curves (making use of the Lambert W function) are also provided. The package is accompanied by vignettes that aim to orient the user in choosing the most suitable method for estimating the kinetic parameters of a Michaelian enzyme.
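As a generic base-R illustration (not the package's own API), ([S], v) data can be fitted directly to the Michaelis-Menten equation with nls():

    S <- c(1, 2, 5, 10, 20, 50)             # substrate concentrations
    v <- c(2.1, 3.8, 6.9, 9.2, 11.0, 12.3)  # observed initial rates
    fit <- nls(v ~ Vmax * S / (Km + S), start = list(Vmax = 13, Km = 5))
    coef(fit)                               # estimated Vmax and Km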
This package provides a platform-independent basic-statistics GUI (graphical user interface) for R, based on the tcltk package.
Tool providing access to Nikolaus Hansen's Java version of CMAEvolutionStrategy. CMA-ES is the Covariance Matrix Adaptation Evolution Strategy; see <https://www.lri.fr/~hansen/cmaes_inmatlab.html#java>.
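A hedged sketch, assuming this is the rCMA package (Java and rJava required); the cmaNew/cmaInit/cmaOptimDP calls are assumptions based on the package's documented workflow:

    library(rCMA)
    fitFunc <- function(x) sum(x^2)   # sphere function to minimize
    cma <- cmaNew()
    cmaInit(cma, dimension = 2, initialX = 1.5)
    res <- cmaOptimDP(cma, fitFunc)
    str(res)                          # inspect the returned solution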
Insert/extract text "reminders" into/from function source code comments or as the "comment" attribute of any object. The former can be handy in development as reminders of, e.g., argument requirements, expected objects in the calling environment, required options settings, etc. The latter can be used to provide information about the object and as simple manual "tooltips" for users, among other things.
We utilize approximate Bayesian machinery to fit two-level conjugate hierarchical models to overdispersed Gaussian, Poisson, and Binomial data, and evaluate whether the resulting approximate Bayesian interval estimates for random effects meet the nominal confidence levels via frequency coverage evaluation. The data that Rgbp assumes comprise an observed sufficient statistic for each random effect, such as an average or a proportion for each group, without population-level data. The approximate Bayesian tool, equipped with the adjustment for density maximization, produces approximate point and interval estimates for model parameters, including the second-level variance component, regression coefficients, and random effects. For Binomial data, the package provides an option to produce posterior samples of all the model parameters via the acceptance-rejection method. The package provides a quick way to evaluate coverage rates of the resulting Bayesian interval estimates for random effects via parametric bootstrapping, which we call frequency method checking.
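A hedged sketch; gbp() and coverage() are assumptions based on Rgbp's documented interface:

    library(Rgbp)
    y  <- c(10.1, 9.8, 10.4, 11.0, 9.5)  # observed group means
    se <- c(0.5, 0.4, 0.6, 0.5, 0.7)     # their standard errors
    g <- gbp(y, se, model = "gaussian")  # approximate Bayesian fit
    summary(g)
    coverage(g, nsim = 100)              # frequency method checking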
Flexible statistical modelling using a modular framework for regression, in which groups of transformations are composed together and act on probability distributions.
This package implements the rquery piped Codd-style query algebra using 'data.table'. This allows for a high-speed, in-memory implementation of Codd-style data manipulation tools.
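A minimal sketch of the idiom, assuming this is the rqdatatable package; local_td() and extend() come from rquery (loaded with it), and %.>% is wrapr's dot pipe:

    library(rqdatatable)
    d <- data.frame(x = 1:5, g = c("a", "a", "b", "b", "b"))
    ops <- local_td(d) %.>%   # describe the pipeline...
      extend(., x2 := x * 2)
    d %.>% ops                # ...then execute it via data.table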
'OpenRefine' (formerly 'Google Refine') is popular, open-source data cleaning software. This package enables users to programmatically trigger data transfer between R and 'OpenRefine'. Available functionality includes project import, export, and deletion.
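A hedged sketch, assuming this is the rrefine package; the function names and the project.name argument are assumptions from its documentation, and a local OpenRefine instance is assumed to be running at its default address:

    library(rrefine)
    refine_upload(file = "dirty.csv", project.name = "cleanup")
    cleaned <- refine_export(project.name = "cleanup")  # back into R
    refine_delete(project.name = "cleanup")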
This package provides a pure R implementation of the median cut algorithm. It extracts the dominant colors from an image and turns them into a scale for use in plots, or just for fun!
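Independent of this package's API, a compact pure-R sketch of median cut itself, where px is an n x 3 matrix of RGB values in [0, 1], one row per pixel:

    median_cut <- function(px, n_colors = 4) {
      boxes <- list(px)
      while (length(boxes) < n_colors) {
        # Split the box whose widest channel has the greatest range...
        rng <- vapply(boxes,
                      function(b) max(apply(b, 2, function(v) diff(range(v)))),
                      numeric(1))
        i  <- which.max(rng)
        b  <- boxes[[i]]
        ch <- which.max(apply(b, 2, function(v) diff(range(v))))
        m  <- median(b[, ch])   # ...at the median of that channel.
        boxes <- c(boxes[-i],
                   list(b[b[, ch] <= m, , drop = FALSE],
                        b[b[, ch] >  m, , drop = FALSE]))
      }
      t(vapply(boxes, colMeans, numeric(3)))  # mean color of each box
    }
    median_cut(matrix(runif(300), ncol = 3))  # 100 random "pixels"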
Download and parse public files released by B3 and convert them into useful formats and data structures common to data analysis practitioners.
Restricted cubic splines (RCS) are used to explore the shape of an association ("U", inverted "U", or "L" shaped) and to test for linearity or non-linearity based on Cox, logistic, linear, or quasi-Poisson regression, with automatic output of RCS figures. The rcssci package can automatically draw RCS graphics with "OR", "HR", "RR", or beta on the Y-axis. The restricted cubic splines methods are based on Suli Huang (2022) <doi:10.1016/j.ecoenv.2022.113183>, Amit Kaura (2019) <doi:10.1136/bmj.l6055>, and Harrell Jr (2015, ISBN: 978-3-319-19424-0 (print), 978-3-319-19425-7 (online)).
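Not rcssci's own interface: a generic illustration of the underlying idea using the rms package directly, fitting a Cox model with a restricted cubic spline and plotting the predicted hazard-ratio shape:

    library(rms)
    library(survival)
    df <- data.frame(time = rexp(200), status = rbinom(200, 1, 0.7),
                     x = rnorm(200))
    dd <- datadist(df); options(datadist = "dd")
    fit <- cph(Surv(time, status) ~ rcs(x, 4), data = df)
    plot(Predict(fit, x, fun = exp))   # HR across the range of x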
The Gene Ontology (GO) Consortium <https://geneontology.org/> organizes genes into hierarchical categories based on biological process (BP), molecular function (MF), and cellular component (CC, i.e., subcellular localization). Tools such as GoMiner (see Zeeberg, B.R., Feng, W., Wang, G. et al. (2003) <doi:10.1186/gb-2003-4-4-r28>) can leverage GO to perform ontological analysis of microarray and proteomics studies, typically generating a list of significant functional categories. Significance is traditionally determined by randomizing the input gene list to compute the false discovery rate (FDR) of the enrichment p-value for each category. We explore here the novel alternative of randomizing the GO database rather than the gene list.
IUCN Red List (<https://api.iucnredlist.org/>) client. The IUCN Red List is a global list of threatened and endangered species. Functions cover all of the Red List API routes. An API key is required.
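A hedged sketch, assuming this is the rredlist package; rl_search() is one of its documented routes in earlier versions and is an assumption here, with the key supplied via an environment variable:

    library(rredlist)
    Sys.setenv(IUCN_REDLIST_KEY = "your-api-key")  # placeholder key
    rl_search("Fratercula arctica")                # look up a species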
Collection of models and analysis methods used in regional and urban economics and (quantitative) economic geography, e.g. measures of inequality, regional disparities and convergence, regional specialization as well as accessibility and spatial interaction models.
Estimates Pareto-optimal solutions for personnel selection with 3 objectives using the Normal Boundary Intersection (NBI) algorithm introduced by Das and Dennis (1998) <doi:10.1137/S1052623496307510>. Takes predictor intercorrelations and predictor-objective relations as input and generates a series of solutions containing predictor weights as output. Accepts between 3 and 10 selection predictors. At most 2 of the objectives can be adverse impact objectives. Partially modeled after De Corte's (2006) TROFSS Fortran program <https://users.ugent.be/~wdecorte/trofss.pdf> and updated from the ParetoR package described in Song et al. (2017) <doi:10.1037/apl0000240>. For details, see Study 3 of Zhang et al. (2023).
Includes the Resourcecode hindcast database (see <https://resourcecode.ifremer.fr>) configuration data: node locations for both the sea-state parameters and the spectra data, and examples of time series of 1D and 2D surface elevation variance spectral density.
Uses an indirect method based on truncated quantile-quantile plots to estimate reference limits from routine laboratory data: Georg Hoffmann and colleagues (2024) <doi:10.3390/jcm13154397>. The principle of the method was developed by Robert G. Hoffmann (1963) <doi:10.1001/jama.1963.03060110068020> and modified by Georg Hoffmann and colleagues (2015) <doi:10.1515/labmed-2015-0104>, and by Frank Klawonn and colleagues (2020) <doi:10.1515/labmed-2020-0005> and (2022) <doi:10.1007/978-3-031-15509-3_31>.
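A hedged sketch; both the package name (assumed to be reflimR) and the reflim() interface are assumptions based on the method's accompanying software:

    library(reflimR)
    x <- rnorm(1000, mean = 140, sd = 4)  # simulated routine sodium values
    reflim(x)                             # estimated reference limits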
This is an extension of the regression-based causal mediation analysis first proposed by Valeri and VanderWeele (2013) <doi:10.1037/a0031034> and Valeri and VanderWeele (2015) <doi:10.1097/EDE.0000000000000253>. It supports including effect measure modification by covariates (treatment-covariate and mediator-covariate product terms in the mediator and outcome regression models), as proposed by Li et al. (2023) <doi:10.1097/EDE.0000000000001643>. It also accommodates the original SAS macro and the PROC CAUSALMED procedure in SAS when there is no effect measure modification. Linear and logistic models are supported for the mediator model. Linear, logistic, loglinear, Poisson, negative binomial, Cox, and accelerated failure time (exponential and Weibull) models are supported for the outcome model.
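A hedged sketch, assuming this is the regmedint package; the argument names follow its documented interface but should be checked against the package manual (mydata is a hypothetical data frame):

    library(regmedint)
    fit <- regmedint(data = mydata,
                     yvar = "y", avar = "a", mvar = "m",
                     cvar = c("c1", "c2"),
                     a0 = 0, a1 = 1, m_cde = 1,
                     c_cond = c(0.5, 0.5),
                     mreg = "logistic", yreg = "logistic")
    summary(fit)   # CDE, NDE, NIE, and total effect estimates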
Yandex 'ClickHouse' (<https://clickhouse.com/>) is a high-performance relational column-store database enabling big-data exploration and analytics that scales to petabytes of data. Methods are provided for working with 'ClickHouse' databases via DBI methods and 'dplyr'/'dbplyr' idioms.
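A hedged sketch; the RClickhouse driver constructor is an assumption, while dbConnect()/dbGetQuery() are standard DBI generics:

    library(DBI)
    con <- dbConnect(RClickhouse::clickhouse(),
                     host = "localhost", port = 9000)
    dbGetQuery(con, "SELECT version()")
    dbDisconnect(con)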
This package provides a system for describing and manipulating the many models that are generated in causal inference and data analysis projects, based on the causal theory and criteria of Austin Bradford Hill (1965) <doi:10.1177/003591576505800503>. This system includes the addition of formal attributes that modify base R objects, including terms and formulas, with a focus on variable roles in the "do-calculus" of modeling, as described in Pearl (2010) <doi:10.2202/1557-4679.1203>. For example, the definitions of exposure, outcome, and interaction are implicit in the roles variables take in a formula. These premises allow for a more fluent modeling approach that focuses on variable relationships and assessing effect modification, as described by VanderWeele and Robins (2007) <doi:10.1097/EDE.0b013e318127181b>. The essential goal is to help contextualize formulas and models in causality-oriented workflows.