Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
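For example, a minimal sketch of calling the endpoint from R with the httr package (the base URL below is a placeholder, not part of the API documentation):

library(httr)
# Placeholder base URL; substitute this site's actual address.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
headers(resp)   # pagination information
content(resp)   # the matching packages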
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An optimal alternating optimization algorithm for estimation of precision matrices of sparse tensor graphical models, and an efficient inference procedure for support recovery of the precision matrices.
Implementation of functions for fitting taper curves (a semiparametric linear mixed effects taper model) to diameter measurements along stems. Further functions are provided to estimate the uncertainty around the predicted curves, to calculate timber volume (also by sections) and marginal (e.g., upper) diameters. For cases where tree heights are not measured, methods for estimating additional variance in volume predictions resulting from uncertainties in tree height models (tariffs) are provided. The example data include the taper curve parameters for Norway spruce used in the 3rd German NFI fitted to 380 trees and a subset of section-wise diameter measurements of these trees. The functions implemented here are detailed in Kublin, E., Breidenbach, J., Kaendler, G. (2013) <doi:10.1007/s10342-013-0715-0>.
Attaches a set of packages commonly used for spatial plotting with 'tmap'. It includes 'tmap' and its extensions ('tmap.glyphs', 'tmap.networks', 'tmap.cartogram', 'tmap.mapgl'), as well as supporting spatial data packages ('sf', 'stars', 'terra') and 'cols4all' for exploring color palettes. The collection is designed for thematic mapping workflows and does not include the full set of packages from the R-spatial ecosystem.
This package provides functions to parse Training Center XML (TCX) files and extract key activity metrics such as total distance, total time, calories burned, maximum altitude, and power values (watts). It is useful for analyzing workout and training data from devices that export the TCX format.
This package provides a tm Source to create corpora from articles exported from the Europresse content provider as HTML files. It is able to read both text content and meta-data information (including source, date, title, author and pages).
Data collected on movement behavior is often in the form of time-stamped latitude/longitude coordinates sampled from the underlying movement behavior. These data can be compressed into a set of segments via the Top-Down Time Ratio Segmentation method described in Meratnia and de By (2004) <doi:10.1007/978-3-540-24741-8_44>, which, with some loss of information, can both reduce the size of the data and provide corrective smoothing that helps reduce the impact of measurement error. This is an improvement on the well-known Douglas-Peucker segmentation algorithm in that it does not operate on the basis of perpendicular distances. Top-Down Time Ratio segmentation allows for disparate sampling time intervals by calculating the distance between locations and segments with respect to time. Provided a trajectory with timestamps, tdtr() returns a set of straight-line segments that can represent the full trajectory. McCool, Lugtig, and Schouten (2022) <doi:10.1007/s11116-022-10328-2> describe this method as implemented here in more detail.
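As a rough illustration of the time-ratio criterion described above (not the package's own code; tdtr()'s actual interface may differ), the error for an observed point is its distance from where the segment would place it at that timestamp:

# Distance of an observed point (x, y, t) from the position that a straight
# segment running from (x0, y0) at time t0 to (x1, y1) at time t1 implies at time t.
time_ratio_dist <- function(x, y, t, x0, y0, t0, x1, y1, t1) {
  r <- (t - t0) / (t1 - t0)        # fraction of the segment's time span elapsed
  ex <- x0 + r * (x1 - x0)         # expected position along the segment at time t
  ey <- y0 + r * (y1 - y0)
  sqrt((x - ex)^2 + (y - ey)^2)    # deviation of the observed point
}
time_ratio_dist(1.2, 0.9, 5, 0, 0, 0, 2, 2, 10)

In top-down segmentation, a segment is split at the point with the largest such deviation whenever that deviation exceeds a chosen tolerance.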
This package contains R functions for simulating and estimating integer-valued trawl processes as described in the article Veraart (2019), "Modeling, simulation and inference for multivariate time series of counts using trawl processes", Journal of Multivariate Analysis, 169, pages 110-129, <doi:10.1016/j.jmva.2018.08.012>, and for simulating random vectors from the bivariate negative binomial and the bi- and trivariate logarithmic series distributions.
The trapezoid package provides the 'dtrapezoid', 'ptrapezoid', 'qtrapezoid', and 'rtrapezoid' functions for the trapezoidal distribution.
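A minimal sketch of the usual d/p/q/r calling pattern; the parameter names (min, mode1, mode2, max) describe the trapezoid's corners and should be checked against the package's documentation:

library(trapezoid)
# Trapezoid supported on [0, 4] with a flat top between 1 and 3.
dtrapezoid(2, min = 0, mode1 = 1, mode2 = 3, max = 4)    # density at 2
ptrapezoid(2, min = 0, mode1 = 1, mode2 = 3, max = 4)    # P(X <= 2)
qtrapezoid(0.5, min = 0, mode1 = 1, mode2 = 3, max = 4)  # median
rtrapezoid(5, min = 0, mode1 = 1, mode2 = 3, max = 4)    # five random draws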
Carries out analyses of two-way tables with one observation per cell, together with graphical displays for an additive fit and a diagnostic plot for removable non-additivity via a power transformation of the response. It implements methods from Tukey's Exploratory Data Analysis (1973) <ISBN: 978-0201076165>, including a 1-degree-of-freedom test for row*column 'non-additivity', linear in the row and column effects.
This package provides a fast, interactive, cross-platform, and easy-to-share 'WebGL'-based 3D brain viewer that visualizes FreeSurfer and/or AFNI/SUMA surfaces. The viewer widget can be either standalone or embedded into R-shiny applications. The standalone version only requires a web browser with WebGL2 support (for example, 'Chrome', 'Firefox', 'Safari'), and can be inserted into any website. The R-shiny support allows the 3D viewer to be dynamically generated from reactive user inputs. Please check the publication by Wang, Magnotti, Zhang, and Beauchamp (2023, <doi:10.1523/ENEURO.0328-23.2023>) for electrode localization. This viewer has been fully adopted by RAVE <https://openwetware.org/wiki/RAVE>, an interactive toolbox to analyze iEEG data by Magnotti, Wang, and Beauchamp (2020, <doi:10.1016/j.neuroimage.2020.117341>). Please check citation("threeBrain") for details.
This package creates some WebGL shaders. They can be used as the background of a Shiny app. They can also be visualized in the RStudio viewer pane or included in Rmd documents, though this is of little use beyond contemplating them.
Adds some functions to help with your coding etiquette. tinycodet primarily focuses on 4 aspects. 1) Safer decimal (in)equality testing, standard-evaluated alternatives to with() and aes(), and other functions for safer coding. 2) A new package import system that attempts to combine the benefits of using a package without attaching it with the benefits of attaching a package. 3) Extending the string manipulation capabilities of the 'stringi' R package. 4) Reducing repetitive code. Besides linking to 'Rcpp', tinycodet has only one other dependency, namely 'stringi'.
Finding the best ANN structure for time series data analysis is a demanding task. This package finds the best-fitted ANN model based on forecasting accuracy; the optimum size of the hidden layers is determined after choosing the number of lags to include. The package has been developed using the algorithm of Paul and Garai (2021) <doi:10.1007/s00500-021-06087-4>.
Useful functions to connect to a TM1 <https://www.ibm.com/uk-en/products/planning-and-analytics> instance from R via the REST API. With the functions in the package, data can be imported from 'TM1' via an MDX view or a native view, data can be sent to 'TM1', processes and chores can be executed, and cube and dimension metadata can be retrieved.
Generic methods for use in a time series probabilistic framework, allowing for a common calling convention across packages. Additional methods for time series prediction ensembles and probabilistic plotting of predictions are included. A more detailed description is available at <https://www.nopredict.com/packages/tsmethods>, which shows the currently implemented methods in the tsmodels framework.
This package provides bindings to 'Tree-sitter', an incremental parsing system for programming tools. Tree-sitter builds concrete syntax trees for source files of any language, and can efficiently update those syntax trees as the source file is edited. It also includes a robust error recovery system that provides useful parse results even in the presence of syntax errors.
Computes the t* statistic corresponding to the tau* population coefficient introduced by Bergsma and Dassios (2014) <DOI:10.3150/13-BEJ514> and does so in O(n^2) time following the algorithm of Heller and Heller (2016) <DOI:10.48550/arXiv.1605.08732> building off of the work of Weihs, Drton, and Leung (2016) <DOI:10.1007/s00180-015-0639-x>. Also allows for independence testing using the asymptotic distribution of t* as described by Nandy, Weihs, and Drton (2016) <DOI:10.1214/16-EJS1166>.
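A minimal sketch, assuming the implementing package is TauStar with exports tStar() and tauStarTest() (both names are assumptions here; check the package's documentation):

library(TauStar)                 # assumed package name
set.seed(1)
x <- rnorm(100)
y <- x^2 + rnorm(100)            # dependent but nearly uncorrelated pair
tStar(x, y)                      # the t* statistic, computed in O(n^2) time
tauStarTest(x, y)                # independence test via the asymptotic distribution of t*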
This package provides various commonly-used response time trimming methods, including the recursive/moving-criterion methods reported by Van Selst and Jolicoeur (1994). By passing raw data to the trimming functions, the package returns trimmed data ready for inferential testing.
This package provides tools to download data series from Banco de España ('BdE') in tibble format. Banco de España is the national central bank and, within the framework of the Single Supervisory Mechanism ('SSM'), the supervisor of the Spanish banking system along with the European Central Bank. This package is in no way sponsored, endorsed, or administered by Banco de España.
Some accelerated three-term conjugate gradient algorithms implemented purely in R with the same user interface as optim(). The search directions and acceleration scheme are described in Andrei, N. (2013) <doi:10.1016/j.amc.2012.11.097>, Andrei, N. (2013) <doi:10.1016/j.cam.2012.10.002>, and Andrei, N. (2015) <doi:10.1007/s11075-014-9845-9>. Line search is done by a hybrid algorithm incorporating the ideas in Oliveira and Takahashi (2020) <doi:10.1145/3423597> and Moré and Thuente (1994) <doi:10.1145/192115.192132>.
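Because the solvers are described as sharing optim()'s user interface, the calling convention looks like the base R example below; the package's own function name is not given in this description, so optim() with method "CG" stands in to show the shape of the call:

# Rosenbrock function and its gradient, a standard optim()-style test problem.
fn <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
gr <- function(p) c(-2 * (1 - p[1]) - 400 * p[1] * (p[2] - p[1]^2),
                    200 * (p[2] - p[1]^2))
# The package's three-term conjugate gradient routines are described as taking
# the same arguments (initial parameters, objective, gradient, control options).
optim(c(-1.2, 1), fn, gr, method = "CG")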
Gene and exon information from Ensembl genome builds GRCh38.p13 (104) and GRCh37 (v40) to use with the topr package.
Routines for nonlinear time series analysis based on Threshold Autoregressive Moving Average (TARMA) models. It provides functions and methods for: TARMA model fitting and forecasting, including robust estimators, see Goracci et al. JBES (2025) <doi:10.1080/07350015.2024.2412011>; tests for threshold effects, see Giannerini et al. JoE (2024) <doi:10.1016/j.jeconom.2023.01.004>, Goracci et al. Statistica Sinica (2023) <doi:10.5705/ss.202021.0120>, Angelini et al. (2024) <doi:10.48550/arXiv.2308.00444>; unit-root tests based on TARMA models, see Chan et al. Statistica Sinica (2024) <doi:10.5705/ss.202022.0125>.
Unicode code points are not friendly to work with, and not all of them are Emoji per se, which makes obtaining Emoji statistics a difficult task. This tool helps make working with Emoji as smooth as possible, following the tidyverse style.
This package provides a wrapper around a set of algorithms designed to recognise positional cues in hierarchical, human-oriented tables (cues that would normally be interpreted visually by the human brain), decompose the tables, and reconstruct the data as machine-readable long-form data frames.