Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in response headers.
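For illustration, a minimal Python sketch of such a request is shown below; the host name is a placeholder and the JSON response body is an assumption, while the endpoint and query parameters follow the description above.

    # Minimal sketch of calling the package search API from Python.
    # "https://example.org" is a placeholder host; adjust it to the real site.
    import requests

    resp = requests.get(
        "https://example.org/api/packages",                # placeholder host
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    print(resp.headers)   # pagination information is returned in the headers
    print(resp.json())    # assumes the package list is returned as JSON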
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides the means to convert multiqc_data.json files, produced by the wonderful MultiQC tool, into tidy data frames for downstream analysis in R. This analysis might involve cohort analysis, quality control visualisation, change-point detection, statistical process control, clustering, or any other type of quality analysis.
This package contains some auxiliary functions.
Estimates the time-varying (tv) parameters of the GARCH(1,1) model, enabling the modeling of non-stationary volatilities by allowing the model parameters to change gradually over time. The estimation and prediction processes are facilitated through the application of the Kalman filter and state-space equations. This package supports the estimation of tv parameters for various deterministic functions, which can be identified through exploratory analysis of different time periods or segments of return data. The methodology is grounded in the framework presented by Ferreira et al. (2017) <doi:10.1080/00949655.2017.1334778>.
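For reference, the time-varying GARCH(1,1) recursion underlying this approach can be written in standard notation (the package's exact parameterization may differ) as:

    \sigma_t^2 = \omega_t + \alpha_t \epsilon_{t-1}^2 + \beta_t \sigma_{t-1}^2

where \epsilon_{t-1} is the lagged return innovation, \sigma_t^2 is the conditional variance, and \omega_t, \alpha_t, \beta_t are the time-varying parameters tracked via the Kalman filter and state-space equations.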
Fitting models for, and simulation of, trend locally stationary wavelet (TLSW) time series models, which take account of time-varying trend and dependence structure in a univariate time series. The TLSW model, and its estimation, is described in McGonigle, Killick and Nunes (2022a) <doi:10.1111/jtsa.12643>, (2022b) <doi:10.1214/22-EJS2044>. Further information regarding the use of the package, along with detailed examples, can be found in McGonigle, Killick and Nunes (2025) <doi:10.18637/jss.v115.i10>. New users will likely want to start with the TLSW function.
This package provides a suite of auxiliary functions that enhance time series estimation and forecasting, including a robust anomaly detection routine based on Chen and Liu (1993) <doi:10.2307/2290724> (imported and wrapped from the tsoutliers package), utilities for managing calendar and time conversions, performance metrics to assess both point forecasts and distributional predictions, advanced simulation by allowing the generation of time series components (such as trend, seasonal, ARMA, irregular, and anomalies) in a modular fashion based on the innovations form of the state space model, and a number of transformation methods including Box-Cox, Logit, Softplus-Logit and Sigmoid.
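As a reminder of the first of the listed transformation methods (shown only for illustration; this is not specific to the package), the Box-Cox transform of a positive value y with parameter \lambda is:

    y^{(\lambda)} = (y^{\lambda} - 1) / \lambda   for \lambda \neq 0
    y^{(\lambda)} = \log(y)                       for \lambda = 0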
This package provides an implementation of the "Topic SCORE" algorithm proposed by Tracy Ke and Minzhe Wang. The singular value decomposition step is optimized through the use of the svds() function in the RSpectra package, applied to a dgRMatrix sparse matrix. It also provides a column-wise error measure in the word-topic matrix A, and an algorithm for recovering the topic-document matrix W given A and D based on quadratic programming. The techniques are explained in detail in the paper "A new SVD approach to optimal topic estimation" by Tracy Ke and Minzhe Wang (2017) <arXiv:1704.07016>.
This package provides triangulations of regular height fields, based on the methods described in "Fast Polygonal Approximation of Terrains and Height Fields" by Michael Garland and Paul S. Heckbert (1995) <https://www.mgarland.org/files/papers/scape.pdf>, using code from the hmm library written by Michael Fogleman <https://github.com/fogleman/hmm>.
Fast, reproducible detection and quantitative analysis of tertiary lymphoid structures (TLS) in multiplexed tissue imaging. Implements the Independent Component Analysis Trace (ICAT) index, local Ripley's K scanning, automated K Nearest Neighbor (KNN)-based TLS detection, and T-cell cluster identification as described in Amiryousefi et al. (2025) <doi:10.1101/2025.09.21.677465>.
Read General Transit Feed Specification (GTFS) zip files into a list of R dataframes. Perform validation of the data structure against the specification. Analyze the headways and frequencies at routes and stops. Create maps and perform spatial analysis on the routes and stops. Please see the GTFS documentation for more detail: <https://gtfs.org/>.
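For orientation (in Python rather than R, and independent of this package's functions), a GTFS feed is simply a zip archive of plain-text CSV tables such as stops.txt and routes.txt:

    # Sketch of reading two GTFS tables from a feed archive with pandas.
    # "feed.zip" is a placeholder path to any GTFS zip file.
    import zipfile
    import pandas as pd

    with zipfile.ZipFile("feed.zip") as z:
        stops = pd.read_csv(z.open("stops.txt"))    # required GTFS table
        routes = pd.read_csv(z.open("routes.txt"))  # required GTFS table

    print(stops[["stop_id", "stop_name", "stop_lat", "stop_lon"]].head())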
Recursive partytioning of transformation models with corresponding random forest for conditional transformation models as described in Transformation Forests (Hothorn and Zeileis, 2021, <doi:10.1080/10618600.2021.1872581>) and Top-Down Transformation Choice (Hothorn, 2018, <doi:10.1177/1471082X17748081>).
This package provides rolling statistical functions based on date and time windows instead of n-lagged observations.
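To illustrate the distinction between a date-based window and an n-observation window (shown in Python/pandas, not this package's API):

    # Sketch: a rolling mean over a 7-day window versus a 7-observation window.
    # With irregularly spaced dates the two windows cover different sets of rows.
    import pandas as pd

    s = pd.Series(
        range(10),
        index=pd.to_datetime(
            ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-06",
             "2024-01-07", "2024-01-10", "2024-01-14", "2024-01-15",
             "2024-01-20", "2024-01-21"]
        ),
    )

    by_time = s.rolling("7D").mean()   # all observations in the last 7 days
    by_count = s.rolling(7).mean()     # the last 7 observations, regardless of date
    print(pd.DataFrame({"7-day window": by_time, "7-obs window": by_count}))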
This package provides classes and methods for trajectory data, with support for nesting individual Track objects in track sets (Tracks) and track sets for different entities in collections of Tracks. Methods include selection, generalization, aggregation, intersection, simulation, and plotting.
There is a wide range of R packages created for data visualization, but until now there has been no simple and easily accessible way to create clean and transparent charts. The tidycharts package enables the user to generate charts compliant with the International Business Communication Standards ('IBCS'): unified bar widths, colors, chart sizes, and so on. Creating homogeneous reports has never been easier. Additionally, users can apply semantic notation to indicate different data scenarios (plan, budget, forecast). It is also possible to customize the charts by creating a personal color palette, with the option of switching back to the defaults after experimenting. We wanted the package to be helpful in writing reports, so we also made it possible to join charts into one clear image. All charts are generated in SVG format and can be shown in the RStudio viewer pane or exported to the HTML output of 'knitr'/'markdown'.
Instance feature calculation and evolutionary instance generation for the traveling salesman problem. Also contains code to "morph" two TSP instances into each other, as well as the ability to conveniently run a number of solvers on TSP instances.
This package creates a local Lightning Memory-Mapped Database ('LMDB') of many commonly used taxonomic authorities and provides functions that can quickly query this data. Supported taxonomic authorities include the Integrated Taxonomic Information System ('ITIS'), National Center for Biotechnology Information ('NCBI'), Global Biodiversity Information Facility ('GBIF'), Catalogue of Life ('COL'), and Open Tree Taxonomy ('OTT'). Name and identifier resolution using LMDB can be hundreds of times faster than either relational databases or internet-based queries. Precise data provenance information for data derived from naming providers is also included.
Collect your data on digital marketing campaigns from Taboola using the Windsor.ai API <https://windsor.ai/api-fields/>.
Reconstructs animal tracks from magnetometer, accelerometer, depth and optional speed data. Designed primarily using data from Wildlife Computers Daily Diary tags deployed on northern fur seals.
This tool is extended from the methods in Python's Bio.SeqUtils.MeltingTemp module. The melting temperature of nucleic acid sequences can be calculated with three methods: the Wallace rule (Thein & Wallace (1986) <doi:10.1016/S0140-6736(86)90739-7>), empirical formulas based on G and C content (Marmur J. (1962) <doi:10.1016/S0022-2836(62)80066-7>, Schildkraut C. (2010) <doi:10.1002/bip.360030207>, Wetmur J G (1991) <doi:10.3109/10409239109114069>, Untergasser, A. (2012) <doi:10.1093/nar/gks596>, von Ahsen N (2001) <doi:10.1093/clinchem/47.11.1956>), and nearest-neighbor thermodynamics (Breslauer K J (1986) <doi:10.1073/pnas.83.11.3746>, Sugimoto N (1996) <doi:10.1093/nar/24.22.4501>, Allawi H (1998) <doi:10.1093/nar/26.11.2694>, SantaLucia J (2004) <doi:10.1146/annurev.biophys.32.110601.141800>, Freier S (1986) <doi:10.1073/pnas.83.24.9373>, Xia T (1998) <doi:10.1021/bi9809425>, Chen JL (2012) <doi:10.1021/bi3002709>, Bommarito S (2000) <doi:10.1093/nar/28.9.1929>, Turner D H (2010) <doi:10.1093/nar/gkp892>, Sugimoto N (1995) <doi:10.1016/S0048-9697(98)00088-6>, Allawi H T (1997) <doi:10.1021/bi962590c>, Santalucia N (2005) <doi:10.1093/nar/gki918>). The result can also be corrected for salt ions and chemical compounds (SantaLucia J (1996) <doi:10.1021/bi951907q>, SantaLucia J (1998) <doi:10.1073/pnas.95.4.1460>, Owczarzy R (2004) <doi:10.1021/bi034621r>, Owczarzy R (2008) <doi:10.1021/bi702363u>).
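As a small illustration of the simplest of these methods (independent of this package's interface), the Wallace rule estimates the melting temperature of a short oligonucleotide as 2 °C per A/T base plus 4 °C per G/C base; a Python sketch:

    # Sketch of the Wallace rule: Tm = 2*(A+T) + 4*(G+C) degrees Celsius,
    # a rough estimate intended for short oligonucleotides.
    def wallace_tm(seq: str) -> float:
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2.0 * at + 4.0 * gc

    print(wallace_tm("ATGCATGCATGCATGC"))  # 8 A/T and 8 G/C -> 48.0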
This package provides functions for defining and conducting a time series prediction process, including pre- and post-processing, decomposition, modelling, prediction and accuracy assessment. The generated models and their prediction errors can be used for benchmarking other time series prediction methods and for creating a demand for the refinement of such methods. For this purpose, benchmark data from prediction competitions may be used.
The tmap package provides two plotting modes for static and interactive thematic maps. This package extends tmap with two additional modes based on Mapbox GL JS and MapLibre GL JS. These modes feature interactive vector tiles, globe views, and other modern web-mapping capabilities, while maintaining a consistent tmap interface across all plotting modes.
This package provides diverse datasets in the tsibble data structure. These datasets are useful for learning and demonstrating how tidy temporal data can be tidied, visualised, and forecasted.
Articles in the R Journal were first authored in LaTeX, which performs admirably for PDF files but is less than ideal for modern online interfaces. The texor package does all the transitional chores and conversions necessary to move to the online versions.
This package implements a method for identifying subgroups with superior response relative to the overall sample.
Time series forecasting faces challenges due to the non-stationarity, nonlinearity, and chaotic nature of the data. Traditional deep learning models like the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) process data sequentially but are inefficient for long sequences. To overcome the limitations of these models, we proposed a transformer-based deep learning architecture utilizing an attention mechanism for parallel processing, enhancing prediction accuracy and efficiency. This package provides user-friendly code for implementing the proposed architecture. References: Nayak et al. (2024) <doi:10.1007/s40808-023-01944-7> and Nayak et al. (2024) <doi:10.1016/j.simpa.2024.100716>.
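For context, the scaled dot-product attention at the core of transformer architectures is commonly written as follows (the cited models may differ in detail):

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(Q K^{\top} / \sqrt{d_k}) V

where Q, K and V are the query, key and value matrices and d_k is the key dimension.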