Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
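For example, a minimal sketch of querying this endpoint from R with the 'httr' package; the base URL here is a placeholder, since the real host depends on where the service is deployed:

    library(httr)

    base_url <- "https://example.org"  # placeholder host, not the real service address

    resp <- GET(base_url, path = "api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)

    content(resp, as = "parsed")  # the matching packages, decoded from JSON
    headers(resp)                 # pagination metadata lives here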
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
In some phase I trials, the design goal is to find the dose associated with a certain target toxicity rate, or the dose with a certain weighted sum of rates of various toxicity grades. TITEgBOIN provides the setup and calculations needed to run a dose-finding trial using the Bayesian optimal interval (BOIN) (Yuan et al. (2016) <doi:10.1158/1078-0432.CCR-16-0592>), generalized Bayesian optimal interval (gBOIN) (Mu et al. (2019) <doi:10.1111/rssc.12263>), time-to-event Bayesian optimal interval (TITEBOIN) (Lin et al. (2020) <doi:10.1093/biostatistics/kxz007>) and time-to-event generalized Bayesian optimal interval (TITEgBOIN) (Takeda et al. (2022) <doi:10.1002/pst.2182>) designs. TITEgBOIN performs three tasks: running simulations to obtain operating characteristics; determining the dose for the next cohort; and selecting the maximum tolerated dose (MTD). These functions allow customization of design characteristics: sample size, cohort sizes, target dose-limiting toxicity (DLT) rates or target normalized equivalent toxicity score (ETS) rates (to account for discrete toxicity scores), and safety and/or stopping rules.
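A rough sketch of the simulation task, with hypothetical function and argument names (the package's actual interface may differ; consult its reference manual):

    library(TITEgBOIN)

    # Hypothetical call illustrating the "run simulations and get
    # operating characteristics" task; names and arguments are assumptions.
    oc <- get.oc(target = 0.3,
                 p.true = c(0.05, 0.15, 0.30, 0.45),  # true DLT rate per dose
                 ncohort = 10, cohortsize = 3)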
This package provides a complete data set of historic GB trig points in the British National Grid (OSGB36) coordinate reference system. Trig points (aka triangulation stations) are fixed survey points that were used to improve the accuracy of map making in Great Britain during the 20th century. Trig points are typically located on hilltops, so they still serve as a useful navigational aid for walkers and hikers today.
Implementation of a Bayesian two-way latent structure model for integrative genomic clustering. The model clusters samples in relation to distinct data sources, with each subject-dataset pair receiving a latent cluster label, though cluster labels have across-dataset meaning because of the model formulation. A common scaling across data sources is not needed, and inference is obtained by a Gibbs sampler. The model can fit multivariate Gaussian distributed clusters or a heavier-tailed modification of the Gaussian density. Uniquely among integrative clustering models, the formulation makes no nestedness assumptions of samples across data sources -- the user can still fit the model if a study subject has information from only one data source. The package provides a variety of post-processing functions for model examination, including ones for quantifying the observed alignment of clusterings across genomic data sources. Run time is optimized so that analyses of datasets with on the order of thousands of features, fewer than 5 data sources, and hundreds of subjects can converge in 1 or 2 days on a single CPU. See "Swanson DM, Lien T, Bergholtz H, Sorlie T, Frigessi A, Investigating Coordinated Architectures Across Clusters in Integrative Studies: a Bayesian Two-Way Latent Structure Model, 2018, <doi:10.1101/387076>, Cold Spring Harbor Laboratory" at <https://www.biorxiv.org/content/early/2018/08/07/387076.full.pdf> for model details.
You only need to type "why pie charts are bad" into Google to find thousands of articles full of (valid) reasons why other types of charts should be preferred over this one. Because pie charts see so little use for those reasons, making them (and related charts) in R is not straightforward, so extra functions are needed to simplify things. This R package contains useful functions to make tasty pie charts immediately by exploiting the many cool templates provided.
Generates stochastic time series and genealogies associated with a population dynamics model. Time series are simulated using the Gillespie exact and approximate algorithms, plus a new algorithm we introduce that combines both approaches to optimize the execution time of the simulations. Genealogies are simulated from a trajectory using a backwards-in-time approach. Methods are described in Danesh G et al. (2022) <doi:10.1111/2041-210X.14038>.
Tipping point analysis for clinical trials that employ Bayesian dynamic borrowing via robust meta-analytic predictive (MAP) priors. Further functions facilitate expert elicitation of a primary weight of the informative component of the robust MAP prior and computation of operating characteristics. The intended use is the planning, analysis and interpretation of extrapolation studies in pediatric drug development, but the applicability is generally wider.
Density, distribution, quantile, and random generation functions, plus maximum likelihood estimation (MLE), penalized maximum likelihood estimation (PMLE), quartiles method estimation (QM), and median rank estimation (MEDRANK), for the two-parameter exponential distribution. MLE and PMLE are based on Mengjie Zheng (2013) <https://scse.d.umn.edu/sites/scse.d.umn.edu/files/mengjie-thesis_masters-1.pdf>. QM is based on Entisar Elgmati and Nadia Gregni (2016) <doi:10.5539/ijsp.v5n5p12>. MEDRANK is based on Matthew Reid (2022) <doi:10.5281/ZENODO.3938000>.
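As an illustration of the d/p/q/r convention these functions presumably follow, here is a hedged sketch; the 'tpexp' suffix is an assumption, not necessarily the package's real naming:

    # Assumed names following R's d/p/q/r convention for a two-parameter
    # (shifted) exponential distribution; the real suffix may differ.
    x <- rtpexp(100, rate = 2, shift = 1)   # random generation
    dtpexp(1.5, rate = 2, shift = 1)        # density
    ptpexp(1.5, rate = 2, shift = 1)        # distribution function
    qtpexp(0.5, rate = 2, shift = 1)        # quantile function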
Plot official statistics time series conveniently: automatic legends, highlight windows, stacked bar charts with positive and negative contributions, a sum-as-line option, two y-axes with automatic horizontal grids that fit both axes, and other popular chart types. tstools comes with a plethora of defaults to let you plot without setting an abundance of parameters first, yet gives you the flexibility to tweak those defaults. In addition to charts, tstools provides fast, data.table-backed time series I/O that lets the user export/import long-format, wide-format and transposed wide-format data to various file types.
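A minimal sketch, assuming tsplot() is the package's main plotting entry point (a reasonable guess for tstools, but verify against its documentation):

    library(tstools)

    # Assumed call pattern: one or more 'ts' objects, automatic legend
    # and default theme; both example series are monthly.
    tsplot(list(air = AirPassengers, flu = fdeaths))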
Node centrality measures for temporal networks. Available measures are temporal degree centrality, temporal closeness centrality and temporal betweenness centrality defined by Kim and Anderson (2012) <doi:10.1103/PhysRevE.85.026107>. Applying the REN algorithm by Hanke and Foraita (2017) <doi:10.1186/s12859-017-1677-x> when calculating the centrality measures keeps the computational running time linear in the number of graph snapshots. Further, all methods can run in parallel up to the number of nodes in the network.
Checks LaTeX documents and .bib files for typing errors, such as spelling errors and incorrect quotation marks. Also provides useful functions for parsing and linting bibliography files.
Performs the smoothing approach provided by penalized least squares for univariate and bivariate time series, as proposed by Guerrero (2007) and Guerrero et al. (2017). This allows the time series trend to be estimated while controlling the amount of resulting (joint) smoothness. --- Guerrero, V.M. (2007) <DOI:10.1016/j.spl.2007.03.006>. Guerrero, V.M.; Islas-Camargo, A. and Ramirez-Ramirez, L.L. (2017) <DOI:10.1080/03610926.2015.1133826>.
The function TailClassifier() suggests one of the following types of tail for your discrete data: 1) power decaying tail; 2) sub-exponential decaying tail; or 3) near-exponential decaying tail. The function also provides an estimate of the parameter for the classified distribution as a reference.
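A hedged usage sketch: TailClassifier() is the function named above, but the input shown (a vector of observed frequency counts) is an assumption about its expected argument:

    library(TailClassifier)

    # Hypothetical input: frequency counts from a discrete sample.
    counts <- c(512, 208, 96, 41, 20, 11, 6, 3, 2, 1)
    TailClassifier(counts)  # suggests one of the three tail types above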
This package implements a method for identifying subgroups with superior response relative to the overall sample.
Data frames with time information are subset and flagged with period information. Data frames with times are handled as timeDF objects, and periods are represented as periodDF objects.
Multiple flavors of the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model with a large choice of conditional distributions. Methods for specification, estimation, prediction, filtering, simulation, statistical testing and more. Represents a partial re-write and re-think of 'rugarch', making use of automatic differentiation for estimation.
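A minimal sketch of the specify-then-estimate workflow this implies; the constructor and method names are assumptions patterned on the description, not a confirmed API:

    library(tsgarch)
    library(xts)

    # Assumed interface: build a specification, then estimate it.
    y <- xts(rnorm(500), order.by = as.Date("2020-01-01") + 0:499)
    spec <- garch_modelspec(y, model = "garch", distribution = "norm")
    fit <- estimate(spec)   # estimation via automatic differentiation
    summary(fit)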
This package provides tools for translating environmental change into organismal response. It includes microclimate models to vertically scale weather station data to organismal heights. The biophysical modeling tools include both general models for heat flows and specific models to predict body temperatures for a variety of ectothermic taxa. Additional functions model and temporally partition air and soil temperatures and solar radiation. Utility functions estimate the organismal and environmental parameters needed for biophysical ecology. TrenchR focuses on relatively simple and modular functions so users can create transparent and flexible biophysical models. Many functions are derived from Gates (1980) <doi:10.1007/978-1-4612-6024-0> and Campbell and Norman (1988) <isbn:9780387949376>.
Time series toolkit with identical behavior for all time series classes: 'ts', 'xts', 'data.frame', 'data.table', 'tibble', 'zoo', 'timeSeries', 'tsibble', 'tis' or 'irts'. Also converts reliably between these classes.
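For instance, conversion is one call per target class via the package's ts_* functions (a sketch assuming the tsbox-style naming):

    library(tsbox)

    x <- ts_c(mdeaths, fdeaths)  # combine two 'ts' series
    ts_xts(x)                    # the same data as an 'xts' object
    ts_df(x)                     # ...as a long-format 'data.frame'
    ts_tbl(x)                    # ...as a 'tibble'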
Component analysis for three-way data arrays by means of Candecomp/Parafac, Tucker3, Tucker2 and Tucker1 models.
Accompanies the book Rainer Schlittgen and Cristina Sattarhoff (2020), "Angewandte Zeitreihenanalyse mit R, 4. Auflage" ("Applied Time Series Analysis with R, 4th edition") <https://www.degruyter.com/view/title/575978>. The package contains the time series and functions used therein. It was developed over many years of teaching courses on time series analysis.
This package creates useful files and folders for data analysis projects and provides functions to manage data, scripts and output files. It also provides a project template for 'RStudio'.
This package provides a unified estimation procedure for the analysis of right-censored data using linear transformation models. An introduction can be found in Jie Zhou et al. (2022) <doi:10.18637/jss.v101.i09>.
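A hedged sketch of fitting such a model, assuming a TransModel()-style interface built on a Surv() formula (names and arguments are assumptions; see the JSS paper above for the real usage):

    library(TransModel)
    library(survival)

    # Assumed interface: a survival formula plus a transformation
    # parameter r (e.g. r = 0 proportional hazards, r = 1 proportional odds).
    fit <- TransModel(Surv(time, status) ~ age + sex, data = lung, r = 1)
    summary(fit)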
A two-stage procedure that compares hazard rate functions, which may or may not cross each other.
This package creates interpretable decision tree visualizations with the data represented as a heatmap at the tree's leaf nodes. treeheatr utilizes the customizable ggparty package for drawing decision trees.
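A minimal sketch, assuming the package's entry point is heat_tree() taking a data frame and the name of the outcome column (the argument name is an assumption):

    library(treeheatr)

    # Assumed call pattern: data frame plus the outcome column's name.
    heat_tree(iris, target_lab = "Species")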
The t-Digest construction algorithm, by Dunning et al. (2019) <doi:10.48550/arXiv.1902.04023>, uses a variant of 1-dimensional k-means clustering to produce a very compact data structure that allows accurate estimation of quantiles. This t-Digest data structure can be used to estimate quantiles, compute other rank statistics or even estimate related measures like trimmed means. The advantage of the t-Digest over previous digests for this purpose is that the t-Digest handles data with full floating point resolution. The accuracy of quantile estimates produced by t-Digests can be orders of magnitude better than that of previous digest algorithms. Methods are provided to create and update t-Digests and retrieve quantiles from the accumulated distributions.
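A short sketch of the create/query pattern in the R 'tdigest' package; tdigest() and tquantile() are the names I'd expect, but treat them as assumptions and check the package index:

    library(tdigest)

    x <- rnorm(1e6)
    td <- tdigest(x, compression = 100)  # compact sketch of the sample
    tquantile(td, c(0.01, 0.50, 0.99))   # estimate tail and median quantiles
    quantile(td, 0.50)                   # S3 method, same machinery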