Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
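For example, a minimal Python sketch of calling this endpoint (the base URL is a placeholder for wherever the service is hosted, and the exact pagination header names are not specified here):

    import requests

    BASE_URL = "https://example.org"  # placeholder: use the actual host of this service

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination information is returned in the response headers;
    # inspect them to find the page count and related fields.
    print(dict(resp.headers))
    print(resp.json())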
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Higher Criticism (HC) test between two frequency tables. The test is based on an adaptation of the Tukey-Donoho-Jin HC statistic to frequency tables, as described in Kipnis (2019) <arXiv:1911.01208>.
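For orientation, the HC statistic itself is easy to compute from a set of per-category p-values. A minimal Python sketch follows; the construction of the p-values for comparing two tables, described in Kipnis (2019), is omitted, and this is not this package's own interface:

    import numpy as np

    def higher_criticism(pvals, gamma=0.2):
        """Donoho-Jin HC statistic over per-category p-values.

        Assumes the p-values lie strictly in (0, 1); gamma restricts
        the search to the smallest gamma * n p-values, as is conventional.
        """
        p = np.sort(np.asarray(pvals, dtype=float))
        n = len(p)
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
        k = max(1, int(gamma * n))
        return hc[:k].max()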
The strength of evidence provided by epidemiological and observational studies is inherently limited by the potential for unmeasured confounding. We focus on three key quantities: (1) the observed bound of the confidence interval closest to the null; (2) the relationship between an unmeasured confounder and the outcome, for example a plausible residual effect size for an unmeasured continuous or binary confounder; and (3) the relationship between an unmeasured confounder and the exposure, for example a realistic mean difference or prevalence difference for this hypothetical confounder between exposure groups. Building on the methods put forth by Cornfield et al. (1959), Bross (1966), Schlesselman (1978), Rosenbaum & Rubin (1983), Lin et al. (1998), Lash et al. (2009), Rosenbaum (1986), Cinelli & Hazlett (2020), VanderWeele & Ding (2017), and Ding & VanderWeele (2016), we can use these quantities to assess how an unmeasured confounder may tip our result to insignificance.
Theme and colour palettes for The Globe and Mail's graphics. Includes colour and fill scale functions, colour palette helpers and a Globe-styled ggplot2 theme object.
This package implements a tic-tac-toe game to play in the console, with either human or AI players. AI players at various levels are trained through the Q-learning algorithm.
Generic methods for parameter tuning of classification algorithms using multiple scoring functions (Muessel et al. (2012), <doi:10.18637/jss.v046.i05>).
This package performs various statistical transformations: Box-Cox and Log (Box and Cox, 1964) <doi:10.1111/j.2517-6161.1964.tb00553.x>, Glog (Durbin et al., 2002) <doi:10.1093/bioinformatics/18.suppl_1.S105>, Neglog (Whittaker et al., 2005) <doi:10.1111/j.1467-9876.2005.00520.x>, Reciprocal (Tukey, 1957), Log Shift (Feng et al., 2016) <doi:10.1002/sta4.104>, Bickel-Docksum (Bickel and Doksum, 1981) <doi:10.1080/01621459.1981.10477649>, Yeo-Johnson (Yeo and Johnson, 2000) <doi:10.1093/biomet/87.4.954>, Square Root (Medina et al., 2019), Manly (Manly, 1976) <doi:10.2307/2988129>, Modulus (John and Draper, 1980) <doi:10.2307/2986305>, Dual (Yang, 2006) <doi:10.1016/j.econlet.2006.01.011>, Gpower (Kelmansky et al., 2013) <doi:10.1515/sagmb-2012-0030>. It also provides graphical approaches and assesses the success of the transformation via tests and plots.
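As a concrete illustration of the first of these, the Box-Cox family maps a positive response y to (y^lambda - 1)/lambda, with log(y) as the lambda = 0 limit. A minimal Python sketch using scipy's implementation (not this package's own interface):

    import numpy as np
    from scipy import stats

    # Skewed, strictly positive toy data (Box-Cox requires positive input).
    y = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=500)

    # scipy estimates lambda by maximum likelihood and applies the transform.
    y_transformed, lam = stats.boxcox(y)
    print(f"estimated lambda: {lam:.3f}")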
An integrated suite of tools for creating, maintaining, and reusing FAIR (Findable, Accessible, Interoperable, Reusable) theories. Designed to support transparent and collaborative theory development, the package enables users to formalize theories, track changes with version control, assess pre-empirical coherence, and derive testable hypotheses. Aligning with open science principles and workflows, theorytools facilitates the systematic improvement of theoretical frameworks and enhances their discoverability and usability.
Streamline the process of accessing fundamental financial data from the United States Securities and Exchange Commission's ('SEC') Electronic Data Gathering, Analysis, and Retrieval system ('EDGAR') API <https://www.sec.gov/edgar/sec-api-documentation>, transforming it into a tidy, analysis-ready format.
This package provides functions for the computationally efficient simulation of dynamic networks estimated with the statistical framework of temporal exponential random graph models, implemented in the tergm package.
Topological data analytic methods in machine learning rely on vectorizations of the persistence diagrams that encode persistent homology, as surveyed by Ali et al. (2022) <doi:10.48550/arXiv.2212.09703>. Persistent homology can be computed using TDA and ripserr and vectorized using TDAvec. The Tidymodels package collection modularizes machine learning in R for straightforward extensibility; see Kuhn & Silge (2022, ISBN:978-1-4920-9644-3). These recipe steps and dials tuners make efficient algorithms for computing and vectorizing persistence diagrams available for Tidymodels workflows.
This package provides customizable 3D tree models (as OBJ files) for use in data visualization. Includes both planar and solid tree models, various crown types (columnar, oval, palm, pyramidal, rounded, spreading, vase, weeping), and options to change the diameter, height, and color of the tree's crown and trunk.
This package provides functions for the retrieval, manipulation, and visualization of geospatial data, with an aim towards producing 3D landscape visualizations in the Unity 3D rendering engine. Functions are also provided for retrieving elevation data and base map tiles from the USGS National Map <https://apps.nationalmap.gov/services/>.
Two stage curvature identification with machine learning for causal inference in settings when instrumental variable regression is not suitable because of potentially invalid instrumental variables. Based on Guo and Buehlmann (2022) "Two Stage Curvature Identification with Machine Learning: Causal Inference with Possibly Invalid Instrumental Variables" <doi:10.48550/arXiv.2203.12808>. The vignette is available in Carl, Emmenegger, Bühlmann and Guo (2025) "TSCI: Two Stage Curvature Identification for Causal Inference with Invalid Instruments in R" <doi:10.18637/jss.v114.i07>.
This application provides exploratory and confirmatory factor analysis, classical test theory, unidimensional and multidimensional item response theory, and continuous item response model analysis through an interactive shiny interface. In addition, it offers rich functionality for visualizing and downloading results: figures, tables, and analysis reports can all be downloaded via the interface.
This package implements nonlinear autoregressive (AR) time series models. For univariate series, a non-parametric approach is available through additive nonlinear AR. Parametric modeling and testing for regime-switching dynamics are available when the transition is either direct (TAR: threshold AR) or smooth (STAR: smooth transition AR, LSTAR). For multivariate series, one can estimate a range of TVAR or threshold cointegration TVECM models with two or three regimes. Tests can be conducted for TVAR as well as for TVECM (Hansen and Seo 2002 and Seo 2006).
Agglomerative hierarchical clustering with a bespoke distance measure based on medication similarities in the Anatomical Therapeutic Chemical Classification System, medication timing and medication amount or dosage. Tools for summarizing, illustrating and manipulating the cluster objects are also available.
Regression models for temporal process responses with time-varying coefficients.
This package provides a two-stage regression method that can be used when various input data types are correlated, for example gene expression and methylation in drug response prediction. In the first stage it uses the upstream features (such as methylation) to predict the response variable (such as drug response), and in the second stage it uses the downstream features (such as gene expression) to predict the residuals of the first stage. In our manuscript (Aben et al., 2016, <doi:10.1093/bioinformatics/btw449>), we show that using TANDEM prevents the model from being dominated by gene expression and that the features selected by TANDEM are more interpretable.
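The two-stage idea itself is simple to sketch outside of R. The following Python sketch uses scikit-learn lasso regressions as stand-ins on toy data; it illustrates the approach only and is not the TANDEM package's interface:

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n = 200
    X_upstream = rng.normal(size=(n, 50))    # e.g. methylation features
    X_downstream = rng.normal(size=(n, 50))  # e.g. gene expression features
    y = X_upstream[:, 0] + X_downstream[:, 0] + rng.normal(size=n)  # toy response

    # Stage 1: explain the response with upstream features only.
    stage1 = LassoCV(cv=5).fit(X_upstream, y)
    residuals = y - stage1.predict(X_upstream)

    # Stage 2: let downstream features explain only what remains,
    # so they cannot dominate the combined model.
    stage2 = LassoCV(cv=5).fit(X_downstream, residuals)

    y_hat = stage1.predict(X_upstream) + stage2.predict(X_downstream)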
This package creates a local Lightning Memory-Mapped Database ('LMDB') of many commonly used taxonomic authorities and provides functions that can quickly query this data. Supported taxonomic authorities include the Integrated Taxonomic Information System ('ITIS'), National Center for Biotechnology Information ('NCBI'), Global Biodiversity Information Facility ('GBIF'), Catalogue of Life ('COL'), and Open Tree Taxonomy ('OTT'). Name and identifier resolution using LMDB can be hundreds of times faster than either relational databases or internet-based queries. Precise data provenance information for data derived from naming providers is also included.
Data collected on movement behavior is often in the form of time-stamped latitude/longitude coordinates sampled from the underlying movement behavior. These data can be compressed into a set of segments via the Top-Down Time Ratio Segmentation method described in Meratnia and de By (2004) <doi:10.1007/978-3-540-24741-8_44> which, with some loss of information, can both reduce the size of the data and provide corrective smoothing mechanisms to help reduce the impact of measurement error. It is an improvement on the well-known Douglas-Peucker segmentation algorithm: rather than operating on perpendicular distances alone, Top-Down Time Ratio segmentation allows for disparate sampling time intervals by calculating the distance between locations and segments with respect to time. Provided a trajectory with timestamps, tdtr() returns a set of straight-line segments that can represent the full trajectory. McCool, Lugtig, and Schouten (2022) <doi:10.1007/s11116-022-10328-2> describe this method as implemented here in more detail.
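The top-down idea is straightforward: measure how far each observed point lies from where the straight start-to-end segment would place it at that timestamp, and split at the worst point until the error is within tolerance. A minimal Python sketch follows; it is simplified relative to the package's tdtr() implementation, and the helper name and tolerance handling are assumptions:

    import numpy as np

    def td_tr(points, times, tol):
        """Top-Down Time Ratio segmentation (simplified sketch).

        points: (n, 2) array of coordinates; times: (n,) timestamps,
        assumed strictly increasing. Returns the indices of the points
        kept as segment endpoints.
        """
        n = len(times)
        if n <= 2:
            return list(range(n))
        # Where the start->end segment "should" be at each timestamp,
        # interpolating linearly by the time ratio.
        ratio = (times - times[0]) / (times[-1] - times[0])
        synced = points[0] + ratio[:, None] * (points[-1] - points[0])
        err = np.linalg.norm(points - synced, axis=1)
        k = int(err.argmax())
        if err[k] <= tol:
            return [0, n - 1]
        # Split at the worst point and recurse on each half.
        left = td_tr(points[: k + 1], times[: k + 1], tol)
        right = td_tr(points[k:], times[k:], tol)
        return left + [k + i for i in right[1:]]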
This package provides a toolkit for working with TOML files in R while preserving formatting, comments, and structure. tomledit enables serialization of R objects such as lists, data.frames, numeric, logical, and date vectors.
Package designed for working with vectors and lists of vectors, mainly for turning them into other indexed data structures.
An object model for source text and translations. Find and extract translatable strings. Provide translations and seamlessly retrieve them at runtime.
Displays processing time in a clear and structured way. One function supports iterative workflows by predicting and showing the total time required, while another reports the time taken for individual steps within a process.