Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
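For example, a minimal sketch of calling this endpoint from R with the httr package (the host name below is a placeholder, not this site's actual domain):

    library(httr)

    # the base URL is a placeholder; substitute the real host of this service
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    headers(resp)   # pagination details (e.g. number of pages) arrive here
    content(resp)   # the matching packages for this page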
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package contains functions to generate random numbers from the beta distribution and random vectors from the Dirichlet distribution.
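As a hedged illustration of the underlying technique (base R only, not this package's API): a Dirichlet(alpha) vector can be drawn by normalising independent gamma variates, while beta draws come from rbeta() directly.

    # not this package's API: a base-R sketch of Dirichlet sampling
    rdirichlet_one <- function(alpha) {
      g <- rgamma(length(alpha), shape = alpha)  # independent Gamma(alpha_i) draws
      g / sum(g)                                 # normalise onto the simplex
    }
    rdirichlet_one(c(2, 3, 5))        # one random point on the 2-simplex
    rbeta(1, shape1 = 2, shape2 = 5)  # one Beta(2, 5) draw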
An implementation for computing an optimal dose escalation rule using deep reinforcement learning in phase I oncology trials (Matsuura et al. (2023) <doi:10.1080/10543406.2023.2170402>). The dose escalation rule can directly optimize the percentage of correct selection (PCS) of the maximum tolerated dose (MTD).
The expander functions rely on the mathematics developed for the Hessian-definiteness invariance theorem for linear projection transformations of variables, described in the authors' paper, to generate the full, high-dimensional gradient and Hessian from the lower-dimensional derivative objects. This greatly relieves the computational burden of generating the regression-function derivatives, which in turn can be fed into any optimization routine that utilizes such derivatives. The theorem guarantees that Hessian definiteness is preserved, meaning that reasoning about this property can be performed in the low-dimensional space of the base distribution. This is often a much easier task than its equivalent in the full, high-dimensional space. Definiteness of the Hessian can be useful in selecting optimization/sampling algorithms such as Newton-Raphson optimization or its sampling equivalent, the Stochastic Newton Sampler. Finally, in addition to being a computational tool, the regression expansion framework is of conceptual value by offering new opportunities to generate novel regression problems.
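In symbols (our notation, assumed rather than taken from the paper): if the high-dimensional function is the composition f(\beta) = g(X\beta) of a low-dimensional base function g with a linear projection X, the chain rule gives

    \nabla f(\beta) = X^\top \nabla g(X\beta), \qquad
    \nabla^2 f(\beta) = X^\top \, \nabla^2 g(X\beta) \, X ,

and since v^\top X^\top H X v = (Xv)^\top H (Xv), any (semi)definiteness of \nabla^2 g carries over to \nabla^2 f (strict definiteness additionally requires X to have full column rank).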
Bootstrap forecast densities for GARCH (Generalized Autoregressive Conditional Heteroskedastic) returns and volatilities using the robust residual-based bootstrap procedure of Trucios, Hotta and Ruiz (2017) <DOI:10.1080/00949655.2017.1359601>.
Allows wrapping values in success() and failure() types to capture the result of operations, along with any status codes. Risky expressions can be wrapped in as_result() and functions wrapped in result() to catch errors and assign the relevant result types. Monadic functions can be bound together as pipelines or transaction scripts using then_try(), to gracefully handle errors at any step.
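A hedged usage sketch built only from the function names above; the exact signatures and pipe-friendliness are assumptions, not documented behaviour:

    # hypothetical usage; only the function names come from the description
    safe_sqrt <- result(sqrt)        # wrapped function returns success()/failure()
    r1 <- safe_sqrt(4)               # a success() carrying the value 2
    r2 <- as_result(log("a"))        # error in a risky expression caught as failure()

    # bind steps into a pipeline that stops gracefully at the first failure
    out <- success(16) |> then_try(sqrt) |> then_try(log)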
Inspired by Karl Broman's reader on using knitr with asciidoc (<https://kbroman.org/knitr_knutshell/pages/asciidoc.html>), this is merely a wrapper around knitr and asciidoc.
These datasets support the implementation in R of the software PACTA (Paris Agreement Capital Transition Assessment), which is a free tool that calculates the alignment between corporate lending portfolios and climate scenarios (<https://www.transitionmonitor.com/>). Financial institutions use PACTA to study how their capital allocation decisions align with climate change mitigation goals. Because both financial institutions and market data providers keep their data private, this package provides fake, public data to enable the development and use of PACTA in R.
The rmoo package is a framework for multi- and many-objective optimization, which allows researchers and users versatility in parameter configuration, as well as tools for analysis, replication, and visualization of results. The rmoo package was built as a fork of the GA package by Luca Scrucca (2017) <DOI:10.32614/RJ-2017-008> and implements the Non-Dominated Sorting Genetic Algorithms proposed by K. Deb.
This package creates reports from Trello, a collaborative project-organization and list-making application (<https://trello.com/>). Reports are created by comparing individual Trello board cards from two different points in time and documenting any changes made to the cards.
This package provides functions to retrieve data and metadata from providers that disseminate data by means of SDMX web services. SDMX (Statistical Data and Metadata eXchange) is a standard that has been developed with the aim of simplifying the exchange of statistical information. More about the SDMX standard and the SDMX Web Services can be found at: <https://sdmx.org>.
Maximum likelihood estimation for univariate reducible stochastic differential equation models. Discrete, possibly noisy observations, not necessarily evenly spaced in time. Can fit multiple individuals/units with global and local parameters, by fixed-effects or mixed-effects methods. Ref.: Garcia, O. (2019) "Estimating reducible stochastic differential equations by conversion to a least-squares problem", Computational Statistics 34(1): 23-46, <doi:10.1007/s00180-018-0837-4>.
Compress local and online images using the reSmush.it API service <https://resmush.it/>.
Download and handle spatial and temporal data from the CAMELS-CL dataset (Catchment Attributes and Meteorology for Large Sample Studies, Chile) <https://camels.cr2.cl/>, developed by Alvarez-Garreton et al. (2018) <doi:10.5194/hess-22-5817-2018>. The package does not generate new data; it only facilitates direct access to the original dataset for hydrological analyses.
Mixed Treatment Comparison is a methodology for comparing health strategies (drugs, treatments, devices) directly and/or indirectly. This package provides an Rcmdr plugin to perform Mixed Treatment Comparison for binary outcomes using BUGS code from Bristol University (Lu and Ades).
This package provides a collection of palettes designed to integrate with ggplot, reflecting the color schemes associated with ConesaLab.
This package provides tools for working with Type S (Sign) and Type M (Magnitude) errors, as proposed in Gelman and Tuerlinckx (2000) <doi:10.1007/s001800000040> and Gelman & Carlin (2014) <doi:10.1177/1745691614551642>. In addition to simply calculating the probability of Type S/M error, the package includes functions for calculating these errors across a variety of effect sizes for comparison, and for recommending a sample size given "tolerances" for Type S/M errors. To improve the speed of these calculations, closed-form solutions for the probability of a Type S/M error from Lu, Qiu, and Deng (2018) <doi:10.1111/bmsp.12132> are implemented. As of 1.0.0, this includes support only for simple research designs. See the package vignette for a fuller exposition of how Type S/M errors arise in research, and how to analyze them using the type of design analysis proposed in the above papers.
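A compact sketch of the design analysis itself, adapted from the approach published in Gelman and Carlin (2014) rather than from this package's API:

    # Type S probability and exaggeration ratio (Type M) for a hypothesised
    # true effect A and standard error s, normal approximation, alpha = 0.05
    retro <- function(A, s, alpha = 0.05, n_sims = 10000) {
      z <- qnorm(1 - alpha / 2)
      p_hi <- 1 - pnorm(z - A / s)   # P(significant with the correct sign)
      p_lo <- pnorm(-z - A / s)      # P(significant with the wrong sign)
      power <- p_hi + p_lo
      type_s <- p_lo / power         # P(wrong sign | significant)
      est <- rnorm(n_sims, A, s)
      sig <- abs(est) > s * z
      exaggeration <- mean(abs(est[sig])) / A   # Type M error
      list(power = power, type_s = type_s, exaggeration = exaggeration)
    }
    retro(A = 0.1, s = 0.5)   # small true effect, noisy estimate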
Symbolic Data Analysis (SDA) was proposed by Professor Edwin Diday in 1987; its main purpose is to substitute a concept (a second-order statistical unit) for the set of rows (cases) in the data table. This package implements, for the symbolic case, certain automatic classification techniques as well as some linear models.
This package implements the XML-RPC API to NEOS <https://neos-server.org/neos/>, enabling the user to pass optimization problems to NEOS and retrieve results within R.
An integrated set of tools to allow data users to conduct meteorological normalisation and counterfactual modelling for air quality data. The meteorological normalisation technique uses predictive random forest models to remove variation of pollutant concentrations so trends and interventions can be explored in a robust way. For examples, see Grange et al. (2018) <doi:10.5194/acp-18-6223-2018> and Grange and Carslaw (2019) <doi:10.1016/j.scitotenv.2018.10.344>. The random forest models can also be used for counterfactual or business as usual (BAU) modelling by using the models to predict, from the model's perspective, the future. For an example, see Grange et al. (2021) <doi:10.5194/acp-2020-1171>.
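A hedged sketch of the meteorological-normalisation idea, using the general-purpose ranger random forest package rather than this package's own API:

    library(ranger)

    set.seed(42)
    n <- 2000
    # synthetic observations: concentration driven by weather plus a trend
    df <- data.frame(
      date_unix = seq_len(n),
      ws        = rgamma(n, shape = 2),        # wind speed
      temp      = rnorm(n, mean = 10, sd = 5)
    )
    df$conc <- 30 - 2 * df$ws + 0.3 * df$temp -
      0.002 * df$date_unix + rnorm(n, sd = 3)

    fit <- ranger(conc ~ ws + temp + date_unix, data = df)

    # predict repeatedly with meteorology resampled from the whole record,
    # then average; what remains is the weather-normalised trend
    normalised <- rowMeans(replicate(50, {
      met <- df[sample(n, replace = TRUE), c("ws", "temp")]
      predict(fit, data = cbind(met, date_unix = df$date_unix))$predictions
    }))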
This package implements the network clustering algorithm described in Newman (2006) <doi:10.1103/PhysRevE.74.036104>. The complete iterative algorithm comprises two steps. In the first step, the network is expressed in terms of its leading eigenvalue and eigenvector and recursively partitioned into two communities. Partitioning occurs if the maximum positive eigenvalue is greater than the tolerance (10e-5) for the current partition, and if it results in a positive contribution to the Modularity. Given an initial separation from the leading-eigenvector step, rSpectral then continues to maximise the change in Modularity using a fine-tuning step, or a variant thereof. The first stage here is to find the node which, when moved from one community to another, gives the maximum change in Modularity. This node's community is then fixed and we repeat the process until all nodes have been moved. The whole process is repeated from this new state until the change in Modularity between the new and old states is less than the predefined tolerance. A slight variant of the fine-tuning step, which can improve the speed of the calculation, is also provided: instead of moving each node into each community in turn, we only consider moves of neighbouring nodes, found in different communities, into the community of the current node of interest. The two-step process is repeatedly applied to each new community found, subdividing each community into two new communities, until we are unable to find any division that results in a positive change in Modularity.
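A hedged sketch of the initial leading-eigenvector split in base R (our own minimal version, not the rSpectral API):

    # one leading-eigenvector partition (Newman 2006) of an adjacency matrix A
    spectral_split <- function(A, tol = 1e-5) {
      k <- rowSums(A)                    # node degrees
      m <- sum(A) / 2                    # number of edges
      B <- A - outer(k, k) / (2 * m)     # modularity matrix
      e <- eigen(B, symmetric = TRUE)
      if (e$values[1] <= tol) return(rep(1L, nrow(A)))   # indivisible network
      ifelse(e$vectors[, 1] >= 0, 1L, 2L)  # split by sign of leading eigenvector
    }

    # two triangles joined by a single edge split back into the two triangles
    A <- matrix(0, 6, 6)
    edges <- rbind(c(1,2), c(1,3), c(2,3), c(4,5), c(4,6), c(5,6), c(3,4))
    A[edges] <- 1; A[edges[, 2:1]] <- 1
    spectral_split(A)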
Optimally robust estimation for extreme value distributions using S4 classes and methods (based on the packages distr, distrEx, distrMod, RobAStBase, and ROptEst); the underlying theoretical results can be found in Ruckdeschel and Horbenko (2013 and 2012), <doi:10.1080/02331888.2011.628022> and <doi:10.1007/s00184-011-0366-4>.
This package performs robust estimation and inference when using covariate adjustment and/or covariate-adaptive randomization in randomized clinical trials. Ting Ye, Jun Shao, Yanyao Yi, Qinyuan Zhao (2023) <doi:10.1080/01621459.2022.2049278>. Ting Ye, Marlena Bannick, Yanyao Yi, Jun Shao (2023) <doi:10.1080/24754269.2023.2205802>. Ting Ye, Jun Shao, Yanyao Yi (2023) <doi:10.1093/biomet/asad045>. Marlena Bannick, Jun Shao, Jingyi Liu, Yu Du, Yanyao Yi, Ting Ye (2024) <doi:10.1093/biomet/asaf029>. Xiaoyu Qiu, Yuhan Qian, Jaehwan Yi, Jinqiu Wang, Yu Du, Yanyao Yi, Ting Ye (2025) <doi:10.48550/arXiv.2408.12541>.
This package provides functions to assist in performing probabilistic record linkage and deduplication: generating pairs, comparing records, the EM algorithm for estimating m- and u-probabilities (I. Fellegi & A. Sunter (1969) <doi:10.1080/01621459.1969.10501049>; T.N. Herzog, F.J. Scheuren, & W.E. Winkler (2007), "Data Quality and Record Linkage Techniques", ISBN:978-0-387-69502-0), and forcing one-to-one matching. It can also be used for pre- and post-processing for machine learning methods for record linkage. The focus is on memory usage, CPU performance, and flexibility.
Recursive partitioning methods to build classification trees for ordinal responses within the CART framework. Trees are grown using the Generalized Gini impurity function, where the misclassification costs are given by the absolute or squared differences in scores assigned to the categories of the response. Pruning is based on the total misclassification rate or on the total misclassification cost.
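In symbols (a standard statement of this impurity; the notation is ours, not necessarily the package's): with scores s_i assigned to the response categories and p(i|t) the class proportions at node t, the generalized Gini impurity is

    I(t) = \sum_{i} \sum_{j} C(s_i, s_j)\, p(i \mid t)\, p(j \mid t),
    \qquad C(s_i, s_j) = |s_i - s_j| \ \text{or} \ (s_i - s_j)^2 ,

so misclassifying into a category with a distant score costs more than confusing adjacent categories.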