Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
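For example, in R with the httr package (a minimal sketch; the host shown is a placeholder for this site's actual address):

    library(httr)
    # substitute this site's host for the placeholder below
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination details, e.g. the total number of pages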
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains functions that can determine whether a time series is second-order stationary or not (and hence provide evidence of local stationarity). Given two non-stationary series (i.e. locally stationary series), this package can then discover time-varying linear combinations that are second-order stationary. Cardinali, A. and Nason, G.P. (2013) <doi:10.18637/jss.v055.i01>.
Frequentist confidence analysis answers the question: How confident are we in a particular treatment effect? This package calculates the frequentist confidence in a treatment effect of interest given observed data, and returns the family of confidence curves associated with that data.
Computing elliptical joint confidence regions at a specified confidence level. It provides the flexibility to estimate either classical or robust confidence regions, which can be visualized in 2D or 3D plots. The classical approach assumes normality and uses the mean and covariance matrix to define the confidence regions. Alternatively, the robustified version employs estimators like minimum covariance determinant (MCD) and M-estimator, making them less sensitive to outliers and departures from normality. Furthermore, the functions allow users to group the dataset based on categorical variables and estimate separate confidence regions for each group. This capability is particularly useful for exploring potential differences or similarities across subgroups within a dataset. Varmuza and Filzmoser (2009, ISBN:978-1-4200-5947-2). Johnson and Wichern (2007, ISBN:0-13-187715-1). Raymaekers and Rousseeuw (2019) <DOI:10.1080/00401706.2019.1677270>.
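As a minimal sketch of the classical approach described above (base R only, not this package's interface), a 95% joint confidence ellipse can be traced from the sample mean and covariance matrix under a normality assumption:

    # bivariate sample: two measurements for one species
    x <- as.matrix(iris[iris$Species == "setosa",
                        c("Sepal.Length", "Sepal.Width")])
    ctr <- colMeans(x)                       # classical location estimate
    S   <- cov(x)                            # classical covariance estimate
    r   <- sqrt(qchisq(0.95, df = 2))        # chi-square radius in 2 dimensions
    theta   <- seq(0, 2 * pi, length.out = 200)
    circle  <- r * cbind(cos(theta), sin(theta))
    ellipse <- t(ctr + t(circle %*% chol(S)))  # map the circle through the covariance
    plot(x); lines(ellipse)

The robustified version would simply swap in an MCD or M-estimator for colMeans and cov.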
Different methods to conduct causal inference for multiple treatments with a binary outcome, including regression adjustment, vector matching, Bayesian additive regression trees, targeted maximum likelihood and inverse probability of treatment weighting using different generalized propensity score models such as multinomial logistic regression, generalized boosted models and super learner. For more details, see the paper by Hu et al. <doi:10.1177/0962280220921909>.
Regression splines that handle a mix of continuous and categorical (discrete) data often encountered in applied settings. We would like to gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC, <https://www.nserc-crsng.gc.ca>), the Social Sciences and Humanities Research Council of Canada (SSHRC, <https://www.sshrc-crsh.gc.ca>), and the Shared Hierarchical Academic Research Computing Network (SHARCNET, <https://www.sharcnet.ca>). We would also like to acknowledge the contributions of the GNU GSL authors. In particular, we adapt the GNU GSL B-spline routine gsl_bspline.c, adding automated support for quantile knots (in addition to uniform knots), providing missing functionality for derivatives, and extending the splines beyond their endpoints.
When taking online surveys, participants sometimes respond to items without regard to their content. These types of responses, referred to as careless or insufficient effort responding, constitute significant problems for data quality, leading to distortions in data analysis and hypothesis testing, such as spurious correlations. The R package careless provides solutions designed to detect such careless / insufficient effort responses by allowing easy calculation of indices proposed in the literature. It currently supports the calculation of longstring, even-odd consistency, psychometric synonyms/antonyms, Mahalanobis distance, and intra-individual response variability (also termed inter-item standard deviation). For a review of these methods, see Curran (2016) <doi:10.1016/j.jesp.2015.07.006>.
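For instance (a sketch based on the function names I believe careless exports; check the package documentation to confirm arguments and defaults):

    library(careless)
    set.seed(1)
    # 50 simulated respondents answering 10 five-point Likert items
    responses <- as.data.frame(matrix(sample(1:5, 500, replace = TRUE),
                                      nrow = 50))
    longstring(responses)  # longest run of identical answers per respondent
    irv(responses)         # intra-individual response variability
    mahad(responses)       # Mahalanobis distance to flag multivariate outliers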
This package provides methods for the import/export and automated analysis of concept maps and concept landscapes (sets of concept maps).
Functions, data and code for Hilbe, J.M. 2011. Negative Binomial Regression, 2nd Edition (Cambridge University Press) and Hilbe, J.M. 2014. Modeling Count Data (Cambridge University Press).
Supplies higher-order coordinatized data specification and fluid transform operators that include pivot and anti-pivot as special cases. The methodology is described in Zumel (2018), "Fluid data reshaping with cdata", <https://winvector.github.io/FluidData/FluidDataReshapingWithCdata.html>, <DOI:10.5281/zenodo.1173299>. This package introduces the idea of explicit control table specification of data transforms. Works on in-memory data or on remote data using rquery and SQL database interfaces.
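A sketch of the pivot and anti-pivot special cases, with argument names as I recall them from the cdata documentation (verify against the package before use):

    library(cdata)
    d <- data.frame(id = c(1, 2), test = c(0.5, 0.9), retest = c(0.6, 0.8))
    # anti-pivot: move the two measurement columns into key/value rows
    long <- unpivot_to_blocks(d,
                              nameForNewKeyColumn = "condition",
                              nameForNewValueColumn = "score",
                              columnsToTakeFrom = c("test", "retest"))
    # pivot: invert the transform back to one row per id
    pivot_to_rowrecs(long,
                     columnToTakeKeysFrom = "condition",
                     columnToTakeValuesFrom = "score",
                     rowKeyColumns = "id")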
This package provides functions to access data from public RESTful APIs including Nager.Date, the World Bank API, and the REST Countries API, retrieving real-time or historical data related to China, such as holidays, economic indicators, and international demographic and geopolitical indicators. Additionally, the package includes one of the largest curated collections of open datasets focused on China and Hong Kong, covering topics such as air quality, demographics, input-output tables, epidemiology, political structure, names, and social indicators. The package supports reproducible research and teaching by integrating reliable international APIs and structured datasets from public, academic, and government sources. For more information on the APIs, see: Nager.Date <https://date.nager.at/Api>, World Bank API <https://datahelpdesk.worldbank.org/knowledgebase/articles/889392>, and REST Countries API <https://restcountries.com/>.
This package provides a convenient set of wrapper functions to install pharmacometric packages and Shiny applications developed by Certara PMX and Integrated Drug Development (iDD). The functions ensure the successful installation of packages from non-standard repositories.
This package implements a new robust principal component analysis algorithm that relies upon the Cauchy distribution. The algorithm is suitable for high-dimensional data even if the sample size is less than the number of variables. The methodology is described in this paper: Fayomi A., Pantazis Y., Tsagris M. and Wood A.T.A. (2024). "Cauchy robust principal component analysis with applications to high-dimensional data sets". Statistics and Computing, 34: 26. <doi:10.1007/s11222-023-10328-x>.
Terrestrial maps with simplified topologies for Census Divisions, Agricultural Regions, Economic Regions, Federal Electoral Divisions and Provinces.
This package provides correlation-based penalty estimators for both linear and logistic regression models by implementing a new regularization method that incorporates correlation structures within the data. This method encourages a grouping effect where strongly correlated predictors tend to be in or out of the model together. See Tutz and Ulbricht (2009) <doi:10.1007/s11222-008-9088-5> and Algamal and Lee (2015) <doi:10.1016/j.eswa.2015.08.016>.
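The penalty, as commonly stated in Tutz and Ulbricht (2009) and sketched below in plain R (not this package's code), shows the grouping effect directly: as the correlation between two predictors approaches 1, their coefficients are pushed toward equality.

    # correlation-based penalty: beta is the coefficient vector,
    # rho the predictor correlation matrix
    corr_penalty <- function(beta, rho) {
      p <- length(beta)
      pen <- 0
      for (i in 1:(p - 1)) {
        for (j in (i + 1):p) {
          pen <- pen + (beta[i] - beta[j])^2 / (1 - rho[i, j]) +
                       (beta[i] + beta[j])^2 / (1 + rho[i, j])
        }
      }
      pen
    }
    # strongly correlated pair: equal coefficients are much cheaper
    rho <- matrix(c(1, 0.95, 0.95, 1), 2)
    corr_penalty(c(1,  1), rho)   # small penalty
    corr_penalty(c(1, -1), rho)   # large penalty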
This package implements the general template for collaborative targeted maximum likelihood estimation. It also provides several commonly used C-TMLE instantiations, like the vanilla/scalable variable-selection C-TMLE (Ju et al. (2017) <doi:10.1177/0962280217729845>) and the glmnet-C-TMLE algorithm (Ju et al. (2017) <arXiv:1706.10029>).
This package provides a set of functions for conducting cognitive diagnostic computerized adaptive testing applications (Chen, 2009 <DOI:10.1007/s11336-009-9123-2>). It includes different item selection rules such as the global discrimination index (Kaplan, de la Torre, and Barrada (2015) <DOI:10.1177/0146621614554650>) and the nonparametric selection method (Chang, Chiu, and Tsai (2019) <DOI:10.1177/0146621618813113>), as well as several stopping rules. Functions for generating item banks and responses are also provided. To guide item bank calibration, model comparison at the item level can be conducted using the two-step likelihood ratio test statistic by Sorrel, de la Torre, Abad and Olea (2017) <DOI:10.1027/1614-2241/a000131>.
Fits a constrained regression model for an ordinal response with ordinal predictors and possibly others, Espinosa and Hennig (2019) <DOI:10.1007/s11222-018-9842-2>. The parameter estimates associated with an ordinal predictor are constrained to be monotonic. If a monotonicity direction (isotonic or antitonic) is not specified for an ordinal predictor by the user, then one of the available methods will either establish it or drop the monotonicity assumption. Two monotonicity tests are also available to test the null hypothesis of monotonicity over a set of parameters associated with an ordinal predictor.
Explore calcium (Ca) and phosphate (Pi) homeostasis with two novel Shiny apps, building upon a previously published mathematical model written in C to ensure efficient computations. The underlying model is accessible here: <https://pubmed.ncbi.nlm.nih.gov/28747359/>. The first application explores the fundamentals of Ca-Pi homeostasis, while the second provides interactive case studies for in-depth exploration of the topic, thereby seeking to foster student engagement and an integrative understanding of Ca-Pi regulation.
Analyze data from next-generation sequencing experiments on genomic samples. CLONETv2 offers a set of functions to compute allele-specific copy number and clonality from segmented data and SNP position pileups. The package also computes the clonality of single nucleotide variants given read counts at mutated positions. The package has been developed at the laboratory of Computational and Functional Oncology, Department of CIBIO, University of Trento (Italy), under the supervision of Prof. Francesca Demichelis. References: Prandi et al. (2014) <doi:10.1186/s13059-014-0439-6>; Carreira et al. (2014) <doi:10.1126/scitranslmed.3009448>; Romanel et al. (2015) <doi:10.1126/scitranslmed.aac9511>.
In meta-regression, studies sometimes report multiple correlated effects, so cluster-robust standard errors must be computed. However, when the clusters are unbalanced, the wild bootstrap is suggested. See Oczkowski E. and Doucouliagos H. (2015). "Wine prices and quality ratings: a meta-regression analysis". American Journal of Agricultural Economics, 97(1): 103--121. <doi:10.1093/ajae/aau057> and Cameron A. C., Gelbach J. B. and Miller D. L. (2008). "Bootstrap-based improvements for inference with clustered errors". The Review of Economics and Statistics, 90(3): 414--427. <doi:10.1162/rest.90.3.414>.
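A generic sketch of the wild cluster bootstrap idea (Rademacher weights drawn once per cluster), illustrating the technique rather than reproducing this package's implementation:

    wild_cluster_boot_slope <- function(y, x, cluster, B = 999) {
      fit  <- lm(y ~ x)
      yhat <- fitted(fit)
      res  <- resid(fit)
      cl   <- as.factor(cluster)
      replicate(B, {
        # one random sign per cluster, applied to every residual in it
        w <- sample(c(-1, 1), nlevels(cl), replace = TRUE)[cl]
        coef(lm(yhat + w * res ~ x))[2]  # slope under the perturbed outcome
      })
    }

The empirical quantiles of the returned slopes then give cluster-robust confidence bounds.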
P-values and no/lowest observed (adverse) effect concentration values derived from the closure principle computational approach test (Lehmann, R. et al. (2015) <doi:10.1007/s00477-015-1079-4>) are provided. The package contains functions to generate intersection hypotheses according to the closure principle (Bretz, F., Hothorn, T., Westfall, P. (2010) <doi:10.1201/9781420010909>), an implementation of the computational approach test (Ching-Hui, C., Nabendu, P., Jyh-Jiuan, L. (2010) <doi:10.1080/03610918.2010.508860>) and the combination of both, that is, the closure principle computational approach test.
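For intuition, the closure principle forms every intersection of the elementary hypotheses; a small sketch enumerating them for three treatment groups:

    hyps <- c("H1", "H2", "H3")
    # all non-empty intersections: H1, H2, H3, H1 & H2, ..., H1 & H2 & H3
    unlist(lapply(seq_along(hyps), function(k)
      combn(hyps, k, FUN = paste, collapse = " & ")))

Each intersection is then tested (here, with the computational approach test), and an elementary hypothesis is rejected only if every intersection containing it is rejected.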
Curates biological sequences in bulk, quickly, without errors, and without an internet connection. Curation is performed by aligning the forward and/or reverse primers, or the ends of cloning vectors, with the sequences to be cleaned. After the alignment, new subsequences are generated that are free of the biological fragments not desired by the user. Pozzi et al. (2020) <doi:10.1007/s00438-020-01671-z>.
The cito package provides a user-friendly interface for training and interpreting deep neural networks (DNNs). cito simplifies the fitting of DNNs by supporting the familiar formula syntax and hyperparameter tuning under cross-validation, and it helps to detect and handle convergence problems. DNNs can be trained on CPUs, GPUs, and macOS GPUs. In addition, cito has many downstream functionalities, such as various explainable AI (xAI) metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates) to interpret trained DNNs. cito optionally provides confidence intervals (and p-values) for all xAI metrics and predictions. At the same time, cito is computationally efficient because it is based on the deep learning framework torch. The torch package is native to R, so no Python installation or other API is required for this package.
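A sketch of the formula interface as I understand cito's dnn() (argument names and defaults may differ between versions):

    library(cito)
    nn <- dnn(Sepal.Length ~ ., data = iris, epochs = 100)
    summary(nn)               # model summary; recent versions report importance
    predict(nn, iris[1:3, ])  # predictions for new observations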
This package provides a series of wrapper functions to implement the 10 maximum likelihood models of animal orientation described by Schnute and Groot (1992) <DOI:10.1016/S0003-3472(05)80068-5>. The functions also include the ability to use different optimizer methods and calculate various model selection metrics (i.e., AIC, AICc, BIC). The ability to perform variants of the Hermans-Rasson test and Pycke test is also included as described in Landler et al. (2019) <DOI:10.1186/s12898-019-0246-8>. The latest version also includes a new method to calculate circular-circular and circular-linear distance correlations.
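As a related illustration, the simplest of these unimodal orientation models (a von Mises distribution) can be fit by maximum likelihood with the circular package; this uses a different tool than this package's own wrappers:

    library(circular)
    set.seed(1)
    angles <- rvonmises(n = 50, mu = circular(pi / 4), kappa = 3)
    mle.vonmises(angles)  # maximum likelihood estimates of mu and kappa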