Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
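For example, here is a minimal sketch in R of how a client might call this endpoint. The host below is a placeholder (substitute the actual service address), and the jsonlite package is assumed to be available:

    library(jsonlite)   # fromJSON() parses the JSON response body

    base_url <- "https://example.org"   # placeholder: use the real service host
    url <- paste0(base_url, "/api/packages?search=hello&page=1&limit=20")

    # Fetch one page of search results.
    results <- fromJSON(url)

    # Pagination details live in the response headers, so inspect those
    # separately; curlGetHeaders() is part of base R.
    print(curlGetHeaders(url))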
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Most skill estimators used in the video game industry can neither obtain reliable initial estimates nor guarantee comparability between estimates that are distant in time. TrueSkill Through Time solves both problems by modeling the entire history of activities as a single Bayesian network, allowing information to propagate correctly throughout the system. The algorithm requires only a few iterations to converge, so millions of observations can be analyzed on a low-end computer. Landfried G, Mocskos E (2025). "TrueSkill Through Time: Reliable Initial Skill Estimates and Historical Comparability with Julia, Python, and R." <doi:10.18637/jss.v112.i06>. The core ideas implemented in this project were developed by Dangauthier P, Herbrich R, Minka T, Graepel T (2007). "TrueSkill Through Time: Revisiting the History of Chess."
Calculates total survey error (TSE) for one or more surveys, using both scale-dependent and scale-independent metrics. The package works directly from the data set, with no hand calculations required: just upload a properly structured data set (see TESTIND and its documentation), input the column names (see the function documentation), and run the functions. For more on TSE, see: Weisberg, Herbert (2005, ISBN:0-226-89128-3); Biemer, Paul (2010) <doi:10.1093/poq/nfq058>; Biemer, Paul et al. (2017, ISBN:9781119041672); and others.
Simple tabulation should be dead simple. This package is an opinionated approach to easy tabulations that also provides exact numbers and allows for re-usability. This is achieved by providing tabulations as data.frames with columns for values, optional variable names, frequency counts both including and excluding NAs, and percentages both including and excluding NAs. Values are also automatically sorted in decreasing order of frequency count to allow for fast skimming of the most important information.
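As a rough base-R sketch of what such a tabulation looks like (this illustrates the output shape only, not this package's API):

    x <- c("a", "b", "a", NA, "a", "b")

    counts <- table(x, useNA = "always")
    tab <- data.frame(value = names(counts),   # the NA bucket keeps value NA
                      n     = as.integer(counts))
    tab$pct_incl_na <- tab$n / sum(tab$n)              # share including NAs
    valid <- !is.na(tab$value)
    tab$pct_excl_na <- ifelse(valid, tab$n / sum(tab$n[valid]), NA)
    tab[order(-tab$n), ]                               # most frequent first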
This package implements a method for identifying subgroups with superior response relative to the overall sample.
Pure R implementation of Apache Thrift. This library doesn't require any code generation. To learn more about Thrift go to <https://thrift.apache.org>.
Interactively gate points on a scatter plot. Interactively drawn gates are recorded and can be applied programmatically to reproduce results exactly. Programmatic gating is based on the package gatepoints by Wajid Jawaid.
This package provides several confidence interval and testing procedures, based on either semiparametric (using event-specific win ratios) or nonparametric measures, including the ratio of integrated cumulative hazard (RICH) and the ratio of integrated transformed cumulative hazard (RITCH), for treatment effect inference with terminal and non-terminal events under competing risks. The semiparametric results were developed in Yang et al. (2022 <doi:10.1002/sim.9266>), and the nonparametric results were developed in Yang (2025 <doi:10.1002/sim.70205>). For comparison, results for the win ratio (Finkelstein and Schoenfeld 1999 <doi:10.1002/(SICI)1097-0258(19990615)18:11%3C1341::AID-SIM129%3E3.0.CO;2-7>; Pocock et al. 2012 <doi:10.1093/eurheartj/ehr352>; Bebu and Lachin 2016 <doi:10.1093/biostatistics/kxv032>) are included. The package also supports univariate survival analysis with a single event. Effect size estimates and confidence intervals are obtained for each event type, and several testing procedures are implemented for the global null hypothesis of no treatment effect on either terminal or non-terminal events. Furthermore, a test of the proportional hazards assumption, under which the event-specific win ratios converge to hazard ratios, and a test of equal hazard ratios are provided. For summarizing the treatment effect across all events, confidence intervals for linear combinations of the event-specific win ratios, RICH, or RITCH are available using pre-determined or data-driven weights. Asymptotic properties of these inference procedures are discussed in Yang et al. (2022 <doi:10.1002/sim.9266>) and Yang (2025 <doi:10.1002/sim.70205>).
Interface to 'TensorFlow Probability', a Python library built on TensorFlow that makes it easy to combine probabilistic models and deep learning on modern hardware ('TPU', 'GPU'). 'TensorFlow Probability' includes a wide selection of probability distributions and bijectors, probabilistic layers, variational inference, Markov chain Monte Carlo, and optimizers such as Nelder-Mead, BFGS, and SGLD.
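A minimal sketch of the interface, assuming a working TensorFlow and TensorFlow Probability installation (the tfd_* names follow the package's documented convention for distribution constructors and methods):

    library(tfprobability)

    # Construct a standard normal distribution, draw five samples,
    # and evaluate their log-density.
    d <- tfd_normal(loc = 0, scale = 1)
    s <- tfd_sample(d, 5)
    tfd_log_prob(d, s)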
This package provides functions for assigning taxonomy to NCBI accession numbers and taxon IDs based on NCBI's accession2taxid and taxdump files. This package allows the user to download NCBI data dumps and create a local database for fast and local taxonomic assignment.
Simulation of random vectors from truncated multivariate normal and t distributions based on the algorithms proposed by Yifang Li and Sujit K. Ghosh (2015) <doi:10.1080/15598608.2014.996690>.
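For intuition only, the naive alternative is rejection sampling, sketched below with the mvtnorm package; efficient algorithms such as Li and Ghosh's exist precisely because this approach degrades badly when the truncation region has low probability:

    library(mvtnorm)   # rmvnorm() draws multivariate normal samples

    # Target: a bivariate normal truncated to the positive quadrant.
    sigma <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
    draws <- rmvnorm(10000, mean = c(0, 0), sigma = sigma)

    # Keep only the draws that fall inside the truncation region.
    keep <- draws[draws[, 1] >= 0 & draws[, 2] >= 0, ]
    head(keep)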
Generate a palette of tints, shades or both from a single colour.
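The underlying idea can be sketched in base R (this is not this package's API): interpolate from the colour toward white for tints and toward black for shades.

    base_col <- "#1B9E77"

    # colorRampPalette() returns an interpolating function; calling it
    # with n yields n colours along the ramp, endpoints included.
    tints  <- colorRampPalette(c(base_col, "white"))(5)   # lighter variants
    shades <- colorRampPalette(c(base_col, "black"))(5)   # darker variants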
Tables, Listings, and Graphs (TLG) library for common outputs used in clinical trials.
Uniform random samples from simple manifolds, sometimes with noise, are commonly used to test topological data analytic (TDA) tools. This package includes samplers powered by two techniques: analytic volume-preserving parameterizations, as employed by Arvo (1995) <doi:10.1145/218380.218500>, and rejection sampling, as employed by Diaconis, Holmes, and Shahshahani (2013) <doi:10.1214/12-IMSCOLL1006>.
Enables the analysis of spectroscopy data such as infrared ('IR'), Raman, and nuclear magnetic resonance ('NMR') using the tidy data framework from the 'tidyverse'. The tidyspec package provides functions for data transformation, normalization, baseline correction, smoothing, derivatives, and both interactive and static visualization. It promotes structured, reproducible workflows for spectral data exploration and preprocessing. Implemented methods include Savitzky and Golay (1964) "Smoothing and Differentiation of Data by Simplified Least Squares Procedures" <doi:10.1021/ac60214a047>, Sternberg (1983) "Biomedical Image Processing" <https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1654163>, Zimmermann and Kohler (1996) "Baseline correction using the rolling ball algorithm" <doi:10.1016/0168-583X(95)00908-6>, Beattie and Esmonde-White (2021) "Exploration of Principal Component Analysis: Deriving Principal Component Analysis Visually Using Spectra" <doi:10.1177/0003702820987847>, Wickham et al. (2019) "Welcome to the tidyverse" <doi:10.21105/joss.01686>, and Kuhn, Wickham and Hvitfeldt (2024) "recipes: Preprocessing and Feature Engineering Steps for Modeling" <https://CRAN.R-project.org/package=recipes>.
This package provides geomorphology-based hydrological modelling for transferring streamflow measurements from gauged to ungauged catchments. Inverse modelling is used to estimate net rainfall from streamflow measurements following Boudhraâ et al. (2018) <doi:10.1080/02626667.2018.1425801>. The resulting net rainfall is then estimated for the ungauged catchments by spatial interpolation in order to finally simulate streamflow following de Lavenne et al. (2016) <doi:10.1002/2016WR018716>.
This package provides a tm Source to create corpora from articles exported from the Europresse content provider as HTML files. It can read both text content and metadata (including source, date, title, author, and pages).
Manager of tick-by-tick transaction data that performs cleaning, aggregation and import in an efficient and fast way. The package engine, written in C++, exploits the zlib and gzstream libraries to handle gzipped data without needing to uncompress them. Cleaning and aggregation are performed according to Brownlees and Gallo (2006) <DOI:10.1016/j.csda.2006.09.030>. Currently, TAQMNGR processes raw data from WRDS (Wharton Research Data Service, <https://wrds-web.wharton.upenn.edu/wrds/>).
This package provides threshold sweep methods for Qualitative Comparative Analysis (QCA). Implements Condition Threshold Sweep-Single (CTS-S), Condition Threshold Sweep-Multiple (CTS-M), Outcome Threshold Sweep (OTS), and Dual Threshold Sweep (DTS) for systematic exploration of threshold calibration effects on crisp-set QCA results. These methods extend traditional robustness approaches by treating threshold variation as an exploratory tool for discovering causal structures. Built on top of the QCA package by Dusa (2019) <doi:10.1007/978-3-319-75668-4>, with function arguments following QCA conventions. Based on set-theoretic methods by Ragin (2008) <doi:10.7208/chicago/9780226702797.001.0001> and established robustness protocols by Rubinson et al. (2019) <doi:10.1177/00491241211036158>.
This package converts structured data from tables into XML format using predefined templates, ensuring consistency and flexibility; this makes it ideal for data exchange, reporting, and automated workflows.
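As a rough sketch of the table-to-XML idea using the xml2 package (this illustrates the general transformation, not the template mechanism described above):

    library(xml2)

    df <- data.frame(id = 1:2, name = c("alpha", "beta"))

    # One <row> element per table row, one child element per column.
    doc <- xml_new_root("rows")
    for (i in seq_len(nrow(df))) {
      row <- xml_add_child(doc, "row")
      for (col in names(df)) {
        cell <- xml_add_child(row, col)
        xml_text(cell) <- as.character(df[[col]][i])
      }
    }
    cat(as.character(doc))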
We provide a toolbox to estimate the time delay between the brightness time series of gravitationally lensed quasar images via Bayesian and profile likelihood approaches. The model is based on a state-space representation for irregularly observed time series data generated from a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian method adopts scientifically motivated hyper-prior distributions and a Metropolis-Hastings within Gibbs sampler, producing posterior samples of the model parameters, including the time delay. A profile likelihood of the time delay is a simple approximation to its marginal posterior distribution. The Bayesian and profile likelihood approaches complement each other, producing almost identical results; the Bayesian approach is more principled, but the profile likelihood is easier to implement. Version 1.0.9 adds functionality for estimating the time delay between doubly-lensed light curves observed in two bands. See also Tak et al. (2017) <doi:10.1214/17-AOAS1027>, Tak et al. (2018) <doi:10.1080/10618600.2017.1415911>, and Hu and Tak (2020) <arXiv:2005.08049>.
This package contains summary data on gene expression in normal human tissues from the Human Protein Atlas for use with the Tissue-Adjusted Pathway Analysis of cancer (TPAC) method. Frost, H. Robert (2023) "Tissue-adjusted pathway analysis of cancer (TPAC)" <doi:10.1101/2022.03.17.484779>.
The Tanaka method enhances the representation of topography on a map using shaded contour lines. In this simplified implementation of the method, north-west white contours represent illuminated topography and south-east black contours represent shaded topography. See Tanaka (1950) <doi:10.2307/211219>.
Calculates topic-specific diagnostics (e.g. mean token length, exclusivity) for Latent Dirichlet Allocation and Correlated Topic Models fit using the topicmodels package. For more details, see Chapter 12 in Airoldi et al. (2014, ISBN:9781466504080), Mimno et al. (2011, ISBN:9781937284114, pp. 262-272), and Bischof et al. (2014) <arXiv:1206.4631v1>.
This package detects linear trend changes in univariate time series by implementing the bottom-up unbalanced wavelet transformation proposed by H. Maeng and P. Fryzlewicz (2023). The estimated number and locations of the change-points are returned along with the piecewise-linear estimator of the signal.