Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
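A minimal sketch of calling this endpoint from Python with the requests library. The host name and the JSON decoding of the body are assumptions; only the path, the query parameters, and the fact that pagination arrives in headers are documented above, so the exact header names are discovered by printing them.

    import requests

    # Query the package search API (host is a placeholder -- substitute the real one).
    resp = requests.get(
        "https://example.org/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    results = resp.json()  # assumed to be a JSON body of matching packages

    # Pagination details arrive in the response headers; their exact names are
    # not documented above, so print them all to find the fields in use.
    for name, value in resp.headers.items():
        print(name, value)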
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements a semiparametric estimator for the odds ratio model with a censored, time-lagged, ordered categorical outcome in a randomized clinical trial, incorporating baseline and time-dependent information. Tsiatis AA, Davidian M, Holloway ST (2023) <doi:10.1111/biom.13603>.
Enables the analysis of spectroscopy data such as infrared ('IR'), Raman, and nuclear magnetic resonance ('NMR') using the tidy data framework from the 'tidyverse'. The tidyspec package provides functions for data transformation, normalization, baseline correction, smoothing, derivatives, and both interactive and static visualization. It promotes structured, reproducible workflows for spectral data exploration and preprocessing. Implemented methods include Savitzky and Golay (1964) "Smoothing and Differentiation of Data by Simplified Least Squares Procedures" <doi:10.1021/ac60214a047>, Sternberg (1983) "Biomedical Image Processing" <https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1654163>, Zimmermann and Kohler (1996) "Baseline correction using the rolling ball algorithm" <doi:10.1016/0168-583X(95)00908-6>, Beattie and Esmonde-White (2021) "Exploration of Principal Component Analysis: Deriving Principal Component Analysis Visually Using Spectra" <doi:10.1177/0003702820987847>, Wickham et al. (2019) "Welcome to the tidyverse" <doi:10.21105/joss.01686>, and Kuhn, Wickham and Hvitfeldt (2024) "recipes: Preprocessing and Feature Engineering Steps for Modeling" <https://CRAN.R-project.org/package=recipes>.
This package provides a 'tufte'-alike style for 'rmarkdown'. A modern take on the Tufte design for pdf and html vignettes, building on the 'tufte' package with additional contributions from the 'knitr' and 'ggtufte' packages, and also acknowledging the key influence of 'envisioned css'.
Interface to 'TensorFlow IO', Datasets and filesystem extensions maintained by 'TensorFlow SIG-IO' <https://github.com/tensorflow/community/blob/master/sigs/io/CHARTER.md>.
Computation of t-year survival probabilities and t-year risks with right censored survival data. The Kaplan-Meier estimator is used to provide estimates for data without competing risks and the Aalen-Johansen estimator is used when there are competing risks. Confidence intervals and p-values are obtained using either usual Wald-type inference or empirical likelihood inference, as described in Thomas and Grunkemeier (1975) <doi:10.1080/01621459.1975.10480315> and Blanche (2020) <doi:10.1007/s10985-018-09458-6>. Functions for both one-sample and two-sample inference are provided. Unlike Wald-type inference, empirical likelihood inference always leads to consistent conclusions, in terms of statistical significance, when comparing two risks (or survival probabilities) via either a ratio or a difference.
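For orientation, the Kaplan-Meier estimator used here has the standard product-limit form (a textbook identity, not anything package-specific), with the t-year risk given by its complement:

    \hat{S}(t) = \prod_{i : t_i \le t} \left( 1 - \frac{d_i}{n_i} \right), \qquad \hat{R}(t) = 1 - \hat{S}(t)

where the t_i are the observed event times, d_i is the number of events at t_i, and n_i is the number at risk just before t_i.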
This package implements a likelihood ratio test and two pairwise standardized mean difference tests for testing equality of means against tree ordered alternatives in one-way ANOVA. The null hypothesis assumes all group means are equal, while the alternative assumes the control mean is less than or equal to each treatment mean with at least one strict inequality. Inputs are a list of numeric vectors (groups) and a significance level; outputs include the test statistic, critical value, and decision. Methods described in "Testing Against Tree Ordered Alternatives in One-way ANOVA" <doi:10.48550/arXiv.2507.17229>.
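In symbols, writing \mu_0 for the control mean and \mu_1, \ldots, \mu_k for the treatment means, the tree ordered hypotheses described above are:

    H_0 : \mu_0 = \mu_1 = \cdots = \mu_k \quad \text{vs.} \quad H_1 : \mu_0 \le \mu_i \ \text{for all } i, \ \text{with at least one strict inequality}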
Unleash the power of time-series data visualization with ease using our package. Designed with simplicity in mind, it offers three key features through the shiny package output. The first tab shows time-series charts with forecasts, allowing users to visualize trends and changes effortlessly. The second displays averages per country, presented in tables with accompanying sparklines for a quick and attractive overview of the data. The last tab presents a customizable world map colored by user-defined variables for any chosen number of countries, offering an advanced visual approach to understanding geographical data distributions. This package operates with just a few simple arguments, enabling users to conduct sophisticated analyses without the need for complex programming skills. Transform your time-series data analysis experience with our user-friendly tool.
Efficient estimation of the population-level causal effects of stochastic interventions on a continuous-valued exposure. Both one-step and targeted minimum loss estimators are implemented for the counterfactual mean value of an outcome of interest under an additive modified treatment policy, a stochastic intervention that may depend on the natural value of the exposure. To accommodate settings with outcome-dependent two-phase sampling, procedures incorporating inverse probability of censoring weighting are provided to facilitate the construction of inefficient and efficient one-step and targeted minimum loss estimators. The causal parameter and its estimation were first described by Díaz and van der Laan (2013) <doi:10.1111/j.1541-0420.2011.01685.x>, while the multiply robust estimation procedure and its application to data from two-phase sampling designs are detailed in NS Hejazi, MJ van der Laan, HE Janes, PB Gilbert, and DC Benkeser (2020) <doi:10.1111/biom.13375>. The software package implementation is described in NS Hejazi and DC Benkeser (2020) <doi:10.21105/joss.02447>. Estimation of nuisance parameters may be enhanced through the Super Learner ensemble model in 'sl3', available for download from GitHub using remotes::install_github("tlverse/sl3").
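As a point of reference, the additive modified treatment policy referenced above shifts the natural (observed) exposure value by an increment delta, roughly:

    d(a, w) = a + \delta

where a is the natural exposure and w the covariates; see Díaz and van der Laan (2013) for the precise statement, which restricts the shift to values where it remains feasible.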
This package provides new layer functions to tmap for creating various types of cartograms. A cartogram is a type of thematic map in which geographic areas are resized or distorted based on a quantitative variable, such as population. The goal is to make the area sizes proportional to the selected variable while preserving geographic positions as much as possible.
This package provides an R-interface to the TMDb API (see TMDb API on <https://developers.themoviedb.org/3/getting-started/introduction>). The Movie Database (TMDb) is a popular user editable database for movies and TV shows (see <https://www.themoviedb.org>).
This package provides a method for comparing the results of two binary diagnostic tests using paired data. Users can rapidly perform descriptive and inferential statistics in a single function call. Options permit users to select which parameters they are interested in comparing and methods for correction for multiple comparisons. Confidence intervals are calculated using the methods with the best coverage. Hypothesis tests use the methods with the best asymptotic performance. A summary of the methods is available in Roldán-Nofuentes (2020) <doi:10.1186/s12874-020-00988-y>. This package is targeted at clinical researchers who want to rapidly and effectively compare results from binary diagnostic tests.
Feature selection algorithm that extracts features in highly correlated spaces. The extracted features are meant to be fed into simple explainable models such as linear or logistic regressions. The package is useful in the field of explainable modelling as a way to understand variable behavior.
This package implements the TabNet model by Sercan O. Arik et al. (2019) <doi:10.48550/arXiv.1908.07442> with Coherent Hierarchical Multi-label Classification Networks by Giunchiglia et al. <doi:10.48550/arXiv.2010.10151> and provides a consistent interface for fitting and creating predictions. It's also fully compatible with the tidymodels ecosystem.
Computes treatment patterns within a given cohort using the Observational Medical Outcomes Partnership (OMOP) common data model (CDM), as described in Markus, Verhamme, Kors, and Rijnbeek (2022) <doi:10.1016/j.cmpb.2022.107081>.
Write output (plots and tables) ensuring traceability back to code. Includes a graphics saver with simple automation of stamping with source, destination and creation time. A list of plots can be saved at once. A user-friendly selection of output dimensions for presentations, on-screen inspections, and more is available.
We propose an optimality criterion, the r-score, to determine the required training set; it is derived directly from Pearson's correlation between the genomic estimated breeding values and the phenotypic values of the test set <doi:10.1007/s00122-019-03387-0>. This package provides two main functions to determine a good training set and its size.
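The Pearson correlation underlying the r-score is the usual one; writing \hat{g} for the genomic estimated breeding values and y for the phenotypic values of the test set:

    r = \frac{\mathrm{cov}(\hat{g}, y)}{\sigma_{\hat{g}} \, \sigma_{y}}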
Package test2norm contains functions to generate formulas for normative standards applied to cognitive tests. It takes raw test scores (e.g., number of correct responses) and converts them to scaled scores and demographically adjusted scores, using methods described in Heaton et al. (2003) <doi:10.1016/B978-012703570-3/50010-9> and Heaton et al. (2009, ISBN:9780199702800). The scaled scores are calculated as quantiles of the raw test scores, scaled to have a mean of 10 and a standard deviation of 3, such that higher values always correspond to better performance on the test. The demographically adjusted scores are calculated from the residuals of a model that regresses scaled scores on demographic predictors (e.g., age). The norming procedure makes use of the mfp2() function from the mfp2 package to explore nonlinear associations between cognition and demographic variables.
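One common construction consistent with this description (an illustrative sketch, not necessarily the package's exact formula) maps each raw score x through the empirical distribution function \hat{F} and the standard normal quantile function \Phi^{-1}:

    SS(x) = 10 + 3 \, \Phi^{-1}\!\big(\hat{F}(x)\big)

with the demographically adjusted scores then built from the residuals of a regression of SS on the demographic predictors.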
Multiscale multifractal analysis (MMA) (Gierałtowski et al., 2012) <doi:10.1103/PhysRevE.85.021915> is a time series analysis method designed to describe scaling properties of fluctuations within the analyzed signal. The main result of this procedure is the so-called Hurst surface h(q,s), which is the dependence of the local Hurst exponent h (fluctuation scaling exponent) on the multifractal parameter q and the scale of observation s (data window width).
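For orientation, MMA builds on the multifractal DFA fluctuation function; in standard MFDFA notation (summarized here for context, with details in the cited paper), h is obtained from the scaling law

    F_q(s) = \left\{ \frac{1}{N_s} \sum_{\nu=1}^{N_s} \left[ F^2(\nu, s) \right]^{q/2} \right\}^{1/q} \sim s^{h(q)}

where F^2(\nu, s) is the detrended variance in segment \nu of the N_s segments at scale s; MMA fits this law in a moving window of scales to obtain h as a function of both q and s.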
tsentiment uses Twitter APIs to obtain the data needed for sentiment analysis, acting as a middleware with the approved Twitter application. A special access key is given to users who subscribe to the application with their Twitter account. With this access key, a user-defined keyword can be searched in Twitter recent searches and the results retrieved (more information: <https://github.com/hakkisabah/tsentiment>). In addition, a service named tsentiment-services has been developed to provide all these operations (for more information, see <https://github.com/hakkisabah/tsentiment-services>). After results are successfully obtained, and in line with the permissions granted by the user, the word cloud and bar graph produced by the analysis are saved in the user's folder directory, where they can be viewed. Each analysis deletes the visual results of the previous one; keep this basic rule of use in mind. The tsentiment package provides a free service that acts as a middleware for easy data extraction from Twitter; in return, the user rate limit is reduced by 30 requests from the total limit and the remaining requests are used. These 30 requests are reserved for application analytics. For information about endpoints, refer to the limit information in the "GET search/tweets" row of the Endpoints column in the list at <https://developer.twitter.com/en/docs/twitter-api/v1/rate-limits>.
This package provides a collection of functions and routines for inputting thermal image video files, plotting, and converting binary raw data into estimates of temperature. First published 2015-03-26. Written primarily for research purposes in biological applications of thermal images. v1 included the base calculations for converting thermal image binary values to temperatures. v2 included additional equations for heat transfer calculations and an import function for thermal image files (v2.2.3 fixed an error importing thermal images on Windows OS). v3 added numerous functions for converting thermal images and videos, rewriting and exporting. v3.1 added new functions to convert files. v3.2 fixed the various functions related to finding frame times. v4.0 fixed an error in atmospheric attenuation constants affecting the raw2temp and temp2raw functions; updating is recommended for use with long-distance calculations. v4.1.3 changed frameLocates to reflect the change from as.character() to format().
This package provides extended data frames, with a special data frame column that contains two indexes, potentially with a nesting structure, along with support for tibbles and methods for 'dplyr'.
Calculates trait moments from trait and community data using the methods developed in Maitner et al. (2021) <doi:10.22541/au.162196147.76797968/v1>.
Temporal disaggregation methods are used to disaggregate and interpolate a low frequency time series to a higher frequency series, where either the sum, the mean, the first or the last value of the resulting high frequency series is consistent with the low frequency series. Temporal disaggregation can be performed with or without one or more high frequency indicator series. Contains the methods of Chow-Lin, Santos-Silva-Cardoso, Fernandez, Litterman, Denton and Denton-Cholette, summarized in Sax and Steiner (2013) <doi:10.32614/RJ-2013-028>. Supports most R time series classes.
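As a sketch of the consistency requirement described above: writing Y_l for the observed low-frequency values and y_t for the estimated high-frequency series, sum-consistent disaggregation (the mean, first, and last variants are analogous) imposes

    \sum_{t \in l} y_t = Y_l \quad \text{for every low-frequency period } l

and methods such as Chow-Lin obtain y_t from a regression on high-frequency indicator series subject to this constraint.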
The U.S. Department of the Treasury, Alcohol and Tobacco Tax and Trade Bureau (TTB) collects data and reports on monthly beer industry production and operations. This data package includes a collection of 10 years (2006-2015) of data on materials used at U.S. breweries, in pounds, reported on the Brewer's Report of Operations and the Quarterly Brewer's Report of Operations forms, ready for data analysis. This package also includes historical tax rates on distilled spirits, wine, beer, champagne, and tobacco products as individual data sets.