Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
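For example, here is a minimal sketch of calling this endpoint from R, assuming the curl and jsonlite packages; the host name below is a placeholder for this site's address:

    library(jsonlite)
    library(curl)

    base <- "https://example.org"    # placeholder: replace with this site's address
    url  <- paste0(base, "/api/packages?search=hello&page=1&limit=20")

    resp     <- curl_fetch_memory(url)              # fetch body and headers
    packages <- fromJSON(rawToChar(resp$content))   # parse the JSON package list
    headers  <- parse_headers(resp$headers)         # pagination info is in here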
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for working with code lists and vectors of codes. These are an alternative to factor that keeps track of both the codes and the labels. Methods allow for transforming between codes and labels. Hierarchical code lists are also supported.
In meta-regression, studies sometimes have multiple effects that are correlated, so cluster-robust standard errors must be computed. However, since the clusters are unbalanced, the wild bootstrap is suggested. See Oczkowski E. and Doucouliagos H. (2015). "Wine prices and quality ratings: a meta-regression analysis". American Journal of Agricultural Economics, 97(1): 103--121. <doi:10.1093/ajae/aau057> and Cameron A. C., Gelbach J. B. and Miller D. L. (2008). "Bootstrap-based improvements for inference with clustered errors". The Review of Economics and Statistics, 90(3): 414--427. <doi:10.1162/rest.90.3.414>.
Processes survey data and displays estimation results, along with the relative standard error and the number of samples, in a table. It also uses a t-distribution approach to compute confidence intervals, similar to SPSS (Statistical Package for the Social Sciences) software.
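As a rough illustration of the quantities involved (a generic base R sketch, not this package's interface), the relative standard error and a t-based 95% confidence interval for a mean could be computed as:

    x   <- c(12.1, 9.8, 11.4, 10.7, 13.2)               # example sample
    n   <- length(x)
    m   <- mean(x)
    se  <- sd(x) / sqrt(n)
    rse <- 100 * se / m                                  # relative standard error (%)
    ci  <- m + c(-1, 1) * qt(0.975, df = n - 1) * se     # 95% t-based confidence interval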
This package implements a specific form of segmented linear regression with two independent variables. The visualization of that function looks like a quarter segment of a cowbell, giving the package its name. The package has been specifically constructed for the case where the minimum and maximum values of the dependent and the two independent variables are known a priori, which is usually the case when those values are derived from Likert scales.
Draws causal hypergraph plots from models output by configurational comparative methods such as Coincidence Analysis (CNA) or Qualitative Comparative Analysis (QCA).
Column Text Format (CTF) is a new tabular data format designed for simplicity and performance. CTF is the simplest column store you can imagine: plain text files for each column in a table, and a metadata file. The underlying plain text means the data is human readable and familiar to programmers, unlike specialized binary formats. CTF is faster than row oriented formats like CSV when loading a subset of the columns in a table. This package provides functions to read and write CTF data from R.
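As a sketch of the idea (the file names and layout below are hypothetical, not the package's actual on-disk format), loading a subset of columns from such a layout amounts to reading only the matching plain text files:

    # Hypothetical CTF-like layout: one plain text file per column plus a metadata file.
    #   sales/price.txt  sales/quantity.txt  sales/date.txt  sales/metadata
    wanted      <- c("price", "quantity")     # read only the columns we need
    cols        <- lapply(file.path("sales", paste0(wanted, ".txt")), readLines)
    names(cols) <- wanted
    df <- as.data.frame(lapply(cols, type.convert, as.is = TRUE))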
Calculate the distribution of costs for the installation of an elevator based on the different distribution rules.
cgAUC calculates the AUC-type measure of Obuchowski (2006) when the gold standard is continuous, and finds the optimal linear combination of variables with respect to this measure.
Implementation of estimators for inferring the mean of censored cost data, including the BT estimator from Bang and Tsiatis (2000) <doi:10.1093/biomet/87.2.329> and the ZT estimator from Zhao and Tian (2001) <doi:10.1111/j.0006-341X.2001.01002.x>.
This package estimates crop water demand. As an example, data from the TerraClimate dataset (<https://www.climatologylab.org/terraclimate.html>), calibrated with automatic weather stations of the National Meteorological Institute of Brazil, are available at a coarse spatial resolution for estimating crop water demand. The user also has the option to download the variables directly from the TerraClimate repository with the download.terraclimate function and access the original TerraClimate products. If the user believes it is necessary to calibrate the variables, there is another function to do so. Lastly, the crop water demand estimation in this package can be run for the whole Brazilian territory with the TerraClimate dataset.
A haplotype is a combination of SNPs (Single Nucleotide Polymorphisms) within a QTL (Quantitative Trait Locus). clusterhap groups together all individuals of a population with the same haplotype. Each group contains individuals with the same allele at each SNP, with or without missing data. Thus, clusterhap groups individuals that, if imputed, have a non-zero probability of having the same alleles over the entire SNP sequence. Moreover, clusterhap calculates that probability from relative frequencies.
This package contains the prepared data that is needed for the shiny application examples in the canvasXpress package. This package also includes datasets used for automated testthat tests. Scotto L, Narayan G, Nandula SV, Arias-Pulido H et al. (2008) <doi:10.1002/gcc.20577>. Davis S, Meltzer PS (2007) <doi:10.1093/bioinformatics/btm254>.
This tool performs pairwise correlation analysis and estimates causality. In particular, it is useful for detecting metabolites that would be altered by gut bacteria.
We implement causal decomposition analysis using methods proposed by Park, Lee, and Qin (2022) and Park, Kang, and Lee (2023), which provide researchers with multiple-mediator imputation, single-mediator imputation, and product-of-coefficients regression approaches to estimate the initial disparity, disparity reduction, and disparity remaining (<doi:10.1177/00491241211067516>; <doi:10.1177/00811750231183711>). We also implement sensitivity analysis for causal decomposition using R-squared values as sensitivity parameters (Park, Kang, Lee, and Ma, 2023 <doi:10.1515/jci-2022-0031>). Finally, we include individualized causal decomposition and sensitivity analyses proposed by Park, Kang, and Lee (2025+) <doi:10.48550/arXiv.2506.19010>.
The number of bird or bat fatalities from collisions with buildings, towers or wind energy turbines can be estimated based on carcass searches and experimentally assessed carcass persistence times and searcher efficiency. Functions for estimating the probability that a bird or bat that died is found by a searcher are provided. Further functions calculate the posterior distribution of the number of fatalities based on the number of carcasses found and the estimated detection probability.
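As a minimal illustration of the last step (a generic sketch assuming a flat prior, not this package's estimator), the posterior for the number of fatalities N given C found carcasses and detection probability p is proportional to the binomial likelihood:

    C    <- 4         # carcasses found
    p    <- 0.3       # estimated probability that a fatality is detected
    N    <- C:200     # grid of possible fatality counts (flat prior assumed)
    post <- dbinom(C, size = N, prob = p)    # likelihood of finding C out of N
    post <- post / sum(post)                 # normalise to a posterior over the grid
    sum(N * post)                            # posterior mean number of fatalities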
Constrained ordinary least squares is performed. One constraint is that all beta coefficients (including the constant) must be non-negative: they can be either 0 or strictly positive. Another constraint is that the sum of the beta coefficients equals a constant. References: Hansen, B. E. (2022). Econometrics, Princeton University Press. <ISBN:9780691235899>.
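A minimal sketch of this kind of constrained least squares, using the quadprog solver rather than this package's own interface (the data and constraint value are made up):

    library(quadprog)

    set.seed(1)
    n <- 50
    X <- cbind(1, matrix(rnorm(n * 2), n, 2))   # intercept plus two predictors
    y <- X %*% c(0.2, 0.5, 0.3) + rnorm(n, sd = 0.1)

    k    <- ncol(X)
    s    <- 1                                   # required sum of the coefficients
    Dmat <- crossprod(X)                        # X'X
    dvec <- drop(crossprod(X, y))               # X'y
    Amat <- cbind(rep(1, k), diag(k))           # sum-to-s constraint, then beta >= 0
    bvec <- c(s, rep(0, k))
    fit  <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
    fit$solution                                # constrained coefficient estimates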
This package implements higher order likelihood-based inference for logistic and loglinear models.
Wrangle country data more effectively and quickly. This package contains functions to easily identify and convert country names, download country information, merge country data from different sources, and make quick world maps.
Retrieve cancer screening data for cervical, breast and colorectal cancers from the Kenya Health Information System <https://hiskenya.org> in a consistent way.
An implementation of efficiency first conformal prediction (EFCP) and validity first conformal prediction (VFCP) that demonstrates both validity (coverage guarantee) and efficiency (width guarantee). To learn how to use it, check the vignettes for a quick tutorial. The package is based on the work by Yang Y. and Kuchibhotla A. (2021) <arxiv:2104.13871>.
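For orientation, here is a generic split conformal prediction sketch in base R; it illustrates the coverage idea only and is not the EFCP or VFCP procedure from the package:

    set.seed(1)
    n   <- 200
    x   <- runif(n)
    y   <- sin(2 * pi * x) + rnorm(n, sd = 0.2)
    tr  <- sample(n, n / 2)                                     # split: training / calibration
    fit <- lm(y ~ x, data = data.frame(x = x[tr], y = y[tr]))
    res <- abs(y[-tr] - predict(fit, data.frame(x = x[-tr])))   # calibration residuals
    alpha <- 0.1
    m     <- length(res)
    q     <- quantile(res, probs = min(1, ceiling((m + 1) * (1 - alpha)) / m))
    predict(fit, data.frame(x = 0.5)) + c(-1, 1) * q            # conformal interval at x = 0.5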
Clean, decompose and aggregate univariate time series following the procedure "Cyclic/trend decomposition using bin interpolation" and the Logbox method for flagging outliers, both detailed in Ritter, F.: Technical note: A procedure to clean, decompose, and aggregate time series, Hydrol. Earth Syst. Sci., 27, 349--361, <doi:10.5194/hess-27-349-2023>, 2023.
This package provides functions for identifying, fitting, and applying continuous-space, continuous-time stochastic-process movement models to animal tracking data. The package is described in Calabrese et al (2016) <doi:10.1111/2041-210X.12559>, with models and methods based on those introduced and detailed in Fleming & Calabrese et al (2014) <doi:10.1086/675504>, Fleming et al (2014) <doi:10.1111/2041-210X.12176>, Fleming et al (2015) <doi:10.1103/PhysRevE.91.032107>, Fleming et al (2015) <doi:10.1890/14-2010.1>, Fleming et al (2016) <doi:10.1890/15-1607>, Péron & Fleming et al (2016) <doi:10.1186/s40462-016-0084-7>, Fleming & Calabrese (2017) <doi:10.1111/2041-210X.12673>, Péron et al (2017) <doi:10.1002/ecm.1260>, Fleming et al (2017) <doi:10.1016/j.ecoinf.2017.04.008>, Fleming et al (2018) <doi:10.1002/eap.1704>, Winner & Noonan et al (2018) <doi:10.1111/2041-210X.13027>, Fleming et al (2019) <doi:10.1111/2041-210X.13270>, Noonan & Fleming et al (2019) <doi:10.1186/s40462-019-0177-1>, Fleming et al (2020) <doi:10.1101/2020.06.12.130195>, Noonan et al (2021) <doi:10.1111/2041-210X.13597>, Fleming et al (2022) <doi:10.1111/2041-210X.13815>, Silva et al (2022) <doi:10.1111/2041-210X.13786>, Alston & Fleming et al (2023) <doi:10.1111/2041-210X.14025>.
Fits a constrained regression model for an ordinal response with ordinal predictors and possibly others, Espinosa and Hennig (2019) <DOI:10.1007/s11222-018-9842-2>. The parameter estimates associated with an ordinal predictor are constrained to be monotonic. If a monotonicity direction (isotonic or antitonic) is not specified for an ordinal predictor by the user, then one of the available methods will either establish it or drop the monotonicity assumption. Two monotonicity tests are also available to test the null hypothesis of monotonicity over a set of parameters associated with an ordinal predictor.
Various statistical methods for survival analysis in comparing survival curves between two groups, including overall hypothesis tests described in Li et al. (2015) <doi:10.1371/journal.pone.0116774> and Huang et al. (2020) <doi:10.1080/03610918.2020.1753075>, fixed-point tests in Klein et al. (2007) <doi:10.1002/sim.2864>, short-term tests, and long-term tests in Logan et al. (2008) <doi:10.1111/j.1541-0420.2007.00975.x>. Some commonly used descriptive statistics and plots are also included.