Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
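As a minimal sketch of how the endpoint might be called (the base URL, the use of Python's requests library, the JSON response body, and the specific pagination header names are all assumptions; only the path and query parameters come from the description above):

import requests

BASE_URL = "https://example.org"  # placeholder; substitute the site's real host

# Search for "hello", first page, 20 results per page.
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

# The body is assumed to be JSON; inspect it to see its actual shape.
print(resp.json())

# Pagination information is returned in the response headers; the exact
# header names are not documented here, so print anything that looks
# pagination-related.
for name, value in resp.headers.items():
    if "page" in name.lower() or "total" in name.lower():
        print(name, value)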
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Fits mixtures of multivariate t-distributions (with eigen-decomposed covariance structure) via the expectation conditional-maximization algorithm under a clustering or classification paradigm.
The t-Digest construction algorithm, by Dunning et al. (2019) <doi:10.48550/arXiv.1902.04023>, uses a variant of 1-dimensional k-means clustering to produce a very compact data structure that allows accurate estimation of quantiles. This t-Digest data structure can be used to estimate quantiles, compute other rank statistics or even to estimate related measures like trimmed means. The advantage of the t-Digest over previous digests for this purpose is that the t-Digest handles data with full floating point resolution. Quantile estimates produced by t-Digests can be orders of magnitude more accurate than those produced by previous digest algorithms. Methods are provided to create and update t-Digests and retrieve quantiles from the accumulated distributions.
The two-parameter Xgamma and Poisson Xgamma distributions are analyzed, covering standard distribution and regression functions, maximum likelihood estimation, quantile functions, probability density and mass functions, cumulative distribution functions, and random number generation. References include: "Sen, S., Chandra, N. and Maiti, S. S. (2018). On properties and applications of a two-parameter XGamma distribution. Journal of Statistical Theory and Applications, 17(4): 674--685. <doi:10.2991/jsta.2018.17.4.9>." "Wani, M. A., Ahmad, P. B., Para, B. A. and Elah, N. (2023). A new regression model for count data with applications to health care data. International Journal of Data Science and Analytics. <doi:10.1007/s41060-023-00453-1>.".
Multiscale multifractal analysis (MMA) (Gierałtowski et al., 2012) <DOI:10.1103/PhysRevE.85.021915> is a time series analysis method designed to describe scaling properties of fluctuations within the analyzed signal. The main result of this procedure is the so-called Hurst surface h(q,s), which is a dependence of the local Hurst exponent h (fluctuation scaling exponent) on the multifractal parameter q and the scale of observation s (data window width).
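For context, a hedged sketch of the scaling relation that the Hurst surface generalizes (this restates the standard multifractal DFA fluctuation-function relation, not this package's exact windowing procedure):

F_q(s) = \left\{ \frac{1}{N_s} \sum_{\nu=1}^{N_s} \left[ F^2(\nu, s) \right]^{q/2} \right\}^{1/q} \propto s^{h(q)}

where F^2(\nu, s) is the detrended variance in segment \nu of length s. MMA fits this power law within a moving window of scales around each s, so that the estimated exponent becomes a local function h(q,s).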
An implementation of turtle graphics <http://en.wikipedia.org/wiki/Turtle_graphics>. Turtle graphics comes from Papert's language Logo and has been used to teach concepts of computer programming.
Unit testing is a solid component of automated CI/CD pipelines. tinytest is a lightweight, zero-dependency alternative to testthat. To integrate tinytest results into common CI/CD systems, the test results need to be captured and converted to the JUnit XML format. tinytest2JUnit enables this conversion while itself staying lightweight, with tinytest as its only dependency.
Adds some functions to help in your coding etiquette. tinycodet primarily focuses on 4 aspects. 1) Safer decimal (in)equality testing, standard-evaluated alternatives to with() and aes(), and other functions for safer coding. 2) A new package import system that attempts to combine the benefits of using a package without attaching it with the benefits of attaching a package. 3) Extending the string manipulation capabilities of the stringi R package. 4) Reducing repetitive code. Besides linking to 'Rcpp', tinycodet has only one other dependency, namely 'stringi'.
This package provides a timeR class that makes timing code easier. One can create timeR objects and use them to record all timings, and extract the recordings as a data frame for later use.
Additive hazards models with the two-stage residual inclusion method are fitted to either survival data or competing risks data. The estimator incorporates an instrumental variable and therefore can recover the causal estimand in the presence of unmeasured confounding under some assumptions. A. Ying, R. Xu and J. Murphy (2019) <doi:10.1002/sim.8071>.
Provides a range of functions with multiple criteria for cutting phylogenetic trees at any evolutionary depth. It enables users to cut trees in any orientation, such as rootwardly (from root to tips) and tipwardly (from tips to root), or allows users to define a specific time interval of interest. It can also be used to create multiple tree pieces of equal temporal width. Moreover, it allows the assessment of novel temporal rates for various phylogenetic indexes, which can be quickly displayed graphically.
Fit a threshold regression model for interval-censored data based on the first hitting time of a boundary by the sample path of a Wiener diffusion process. The threshold regression methodology is well suited to applications involving survival and time-to-event data.
This package provides rolling statistical functions based on date and time windows instead of n-lagged observations.
An object model for source text and translations. Find and extract translatable strings. Provide translations and seamlessly retrieve them at runtime.
This package provides wrapper functions to the multiple marginal model function mmm() of package multcomp to implement the trend test of Tukey, Ciminera and Heyse (1985) <DOI:10.2307/2530666> for general parametric models.
Manage time-series data frames across time zones, resolutions, and date ranges, filling gaps using weekday/hour patterns or simple fill helpers, and plotting them interactively. It is designed to work seamlessly with the tidyverse and dygraphs environments.
This package provides a simple type annotation for R that is usable in scripts, in the R console and in packages. It is intended as a convention to allow other packages to use the type information to provide error checking, automatic documentation or optimizations.
Plots and analyzes time-intensity curve data, such as data from (contrast-enhanced) ultrasound. Values such as peak intensity, time to peak, area under the curve, wash-in rate and wash-out rate are calculated.
Node centrality measures for temporal networks. Available measures are temporal degree centrality, temporal closeness centrality and temporal betweenness centrality defined by Kim and Anderson (2012) <doi:10.1103/PhysRevE.85.026107>. Applying the REN algorithm by Hanke and Foraita (2017) <doi:10.1186/s12859-017-1677-x> when calculating the centrality measures keeps the computational running time linear in the number of graph snapshots. Further, all methods can run in parallel up to the number of nodes in the network.
Approaches for incorporating time into network analysis. Methods include: construction of time-ordered networks (temporal graphs); shortest-time and shortest-path-length analyses; resource spread calculations; data resampling and rarefaction for null model construction; reduction to time-aggregated networks with variable window sizes; application of common descriptive statistics to these networks; vector clock latencies; and plotting functionalities. The package supports the approach described in <doi:10.1371/journal.pone.0020298>.
Gives the required 2^n treatment combinations in a 2^n symmetric factorial experiment in their standard order.
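As a small illustration of what standard (Yates) order means, independent of this package's own interface, here is a hedged Python sketch (the function name is hypothetical):

from itertools import product

def standard_order(n):
    # Treatment combinations of a 2^n factorial in standard (Yates) order:
    # each combination is a tuple of 0/1 levels for factors A, B, C, ...,
    # with the level of the first factor changing fastest.
    # product() varies the last position fastest, so reverse each tuple.
    return [combo[::-1] for combo in product((0, 1), repeat=n)]

# 2^3 factorial in standard order: (1), a, b, ab, c, ac, bc, abc
for levels in standard_order(3):
    label = "".join(f for f, lv in zip("abc", levels) if lv) or "(1)"
    print(levels, label)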
Compute a non-overlapping layout of text boxes to label multiple overlain curves. For each curve, iteratively search for an adjacent x,y position for the text box that does not overlap with the other curves. If this process fails, offsets are computed to add to the y values of each curve so that there is sufficient space to add all of the text labels.
This package provides functions to design phase 1 trials using an isotonic regression based design incorporating time-to-event information. Simulation and design functions are available, which incorporate information about follow-up and DLTs, and apply isotonic regression to derive estimates of DLT probability.
Collection of shiny widgets to support teal applications. Enables the manipulation of application layout and plot or table settings.
Implementation of topological data analysis methods based on graph-theoretic approaches for discovering topological structures in data. The core algorithm constructs topological spaces from graphs following Nada et al. (2018) <doi:10.1002/mma.5096>, "New types of topological structures via graphs".