Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
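For example, the endpoint can be queried from R with the httr package (a minimal sketch: the host below is a placeholder, and the exact pagination header names are not specified here):

    library(httr)
    resp <- GET("https://example.org/api/packages",           # placeholder host
                query = list(search = "hello", page = 1, limit = 20))
    content(resp, as = "parsed")   # parsed JSON body with the matching packages
    headers(resp)                  # pagination details arrive in these headers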
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This algorithm is described in detail in the paper "Hedging Forecast Combinations With an Application to the Random Forest" by Beck et al. (2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5032102>. The package provides a function hedgedrf() for training a Hedged Random Forest model on a dataset, and a function predict.hedgedrf() for making predictions with the trained model.
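A minimal usage sketch (only hedgedrf() and predict.hedgedrf() are confirmed above; the formula interface and argument names are assumptions):

    library(hedgedrf)
    fit <- hedgedrf(Species ~ ., data = iris)   # assumed formula interface
    predict(fit, iris)                          # dispatches to predict.hedgedrf()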
This package implements assessment of benefit-risk balance using a Bayesian Discrete Choice Experiment. For more details, see the article by Mukhopadhyay et al. (2019) <DOI:10.1080/19466315.2018.1527248>.
This package provides tools to estimate, compare, and visualize healthcare resource utilization using data derived from electronic health records or real-world evidence sources. The package supports pre-index and post-index analysis, patient cohort comparison, and customizable summaries and visualizations for clinical and health economics research. Methods implemented are based on Scott et al. (2022) <doi:10.1080/13696998.2022.2037917> and Xia et al. (2024) <doi:10.14309/ajg.0000000000002901>.
This package gives non-climate experts easy access to essential climate change datasets. Users can download the latest raw data from authoritative sources and view it via pre-defined ggplot2 charts. Datasets include atmospheric CO2, methane, emissions, instrumental and proxy temperature records, sea levels, Arctic/Antarctic sea ice, hurricanes, and paleoclimate data. Sources include: NOAA Mauna Loa Laboratory <https://gml.noaa.gov/ccgg/trends/data.html>, Global Carbon Project <https://www.globalcarbonproject.org/carbonbudget/>, NASA GISTEMP <https://data.giss.nasa.gov/gistemp/>, National Snow and Ice Data Center <https://nsidc.org/home>, CSIRO <https://research.csiro.au/slrwavescoast/sea-level/measurements-and-data/sea-level-data/>, NOAA Laboratory for Satellite Altimetry <https://www.star.nesdis.noaa.gov/socd/lsa/SeaLevelRise/>, HURDAT Atlantic Hurricane Database <https://www.aoml.noaa.gov/hrd/hurdat/Data_Storm.html>, and Vostok paleo carbon dioxide and temperature data <doi:10.3334/CDIAC/ATG.009>.
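As an illustration of the kind of chart involved, one could fetch the Mauna Loa CO2 record and plot it with ggplot2 directly; this is not the package's API, and the CSV path and column names below are assumptions based on the NOAA page linked above:

    library(ggplot2)
    # Assumed location and layout of NOAA GML's monthly mean CO2 file
    url <- "https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_mlo.csv"
    co2 <- read.csv(url, comment.char = "#")
    ggplot(co2, aes(decimal.date, average)) +
      geom_line() +
      labs(x = "Year", y = "CO2 (ppm)", title = "Mauna Loa monthly mean CO2")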
Method and tool for generating hybrid time series forecasts using an error remodeling approach. These forecasting approaches utilize a recursive technique for modeling the linearity of the series with a linear method (e.g., ARIMA, Theta) and then model (forecast) the residuals of the linear forecaster using non-linear neural networks (e.g., ANN, ARNN). The hybrid architectures comprise three steps: first, the linear patterns of the series are forecast; this is followed by an error re-modeling step; finally, the forecasts from both steps are combined to produce the final output. The method also provides confidence intervals as needed. Ten different models can be implemented with this package. It generates different types of hybrid error correction models for time series forecasting based on the algorithms of Zhang (2003), Chakraborty et al. (2019), Chakraborty et al. (2020), Bhattacharyya et al. (2021), Chakraborty et al. (2022), and Bhattacharyya et al. (2022) <doi:10.1016/S0925-2312(01)00702-0> <doi:10.1016/j.physa.2019.121266> <doi:10.1016/j.chaos.2020.109850> <doi:10.1109/IJCNN52387.2021.9533747> <doi:10.1007/978-3-030-72834-2_29> <doi:10.1007/s11071-021-07099-3>.
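The three-step scheme is easy to sketch generically in R with the forecast package (an illustration of the approach, not this package's own code):

    library(forecast)
    y <- AirPassengers
    lin_fit <- auto.arima(y)                # step 1: model the linear patterns
    lin_fc  <- forecast(lin_fit, h = 12)
    res_fit <- nnetar(residuals(lin_fit))   # step 2: remodel the residuals with an ARNN
    res_fc  <- forecast(res_fit, h = 12)
    lin_fc$mean + res_fc$mean               # step 3: combine the two forecasts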
This package provides a program that conducts group variable selection for quantile and robust mean regression (Sherwood and Li, 2022). The group lasso penalty (Yuan and Lin, 2006) is used for group-wise variable selection. Both the quantile and mean regression models are based on the Huber loss. Specifically, as the tuning parameter in the Huber loss approaches 0, the quantile check function can be approximated by the Huber loss for the median, and by a tilted version of the Huber loss at other quantiles. This approximation provides computational efficiency and stability, and has also been shown to be statistically consistent.
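For reference, the Huber loss with tuning parameter gamma can be written directly in R (a plain restatement of the standard definition, not the package's internals):

    huber <- function(r, gamma) {
      ifelse(abs(r) <= gamma, r^2 / 2, gamma * (abs(r) - gamma / 2))
    }
    # As gamma -> 0, huber(r, gamma) / gamma tends to |r|, which is proportional
    # to the median check function -- the approximation described above.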
Interact with the application programming interface of the web annotation service Hypothes.is (see <http://hypothes.is> for more information). Allows users to download data about public annotations, and to create, retrieve, update, and delete their own annotations.
Datasets related to Hong Kong, including information on the 2019 elected District Councillors (<https://www.districtcouncils.gov.hk> and <https://dce2019.hk01.com/>) and traffic collision data from the Hong Kong Department of Transport (<https://www.td.gov.hk/>). All of the data in this package is available in the public domain.
Audio interactivity within shiny applications using howler.js. Enables the status of the audio player to be sent from the UI to the server, and allows events such as playing and pausing the audio to be triggered from the server.
The hydReng package provides a set of functions for hydraulic engineering tasks and natural hazard assessments. It includes basic hydraulics (wetted area, wetted perimeter, flow, flow velocity, flow depth, and maximum flow) for open channels with arbitrary geometry under uniform flow conditions. For structures such as circular pipes, weirs, and gates, the package includes calculations for pressure flow, backwater depth, and overflow over a weir crest. Additionally, it provides formulas for calculating bedload transport. The formulas used can be found in standard literature on hydraulics, such as Bollrich (2019, ISBN:978-3-410-29169-5) or Hager (2011, ISBN:978-3-642-77430-0).
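One of the standard uniform-flow formulas from that literature is the Manning-Strickler equation, shown here as a plain R function (a textbook formula, not this package's API):

    # v = kst * R^(2/3) * sqrt(S): flow velocity from the Strickler coefficient
    # kst, the hydraulic radius R (m), and the bed slope S (-)
    strickler_velocity <- function(kst, R, S) kst * R^(2/3) * sqrt(S)
    strickler_velocity(kst = 35, R = 1.2, S = 0.002)   # ~1.8 m/s, natural channel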
The "Hit and Run" Markov Chain Monte Carlo method for sampling uniformly from convex shapes defined by linear constraints, and the "Shake and Bake" method for sampling from the boundary of such shapes. Includes specialized functions for sampling normalized weights with arbitrary linear constraints. Tervonen, T., van Valkenhoef, G., Basturk, N., and Postmus, D. (2012) <doi:10.1016/j.ejor.2012.08.026>. van Valkenhoef, G., Tervonen, T., and Postmus, D. (2014) <doi:10.1016/j.ejor.2014.06.036>.
An implementation of the sandwich smoother proposed in "Fast Bivariate Penalized Splines" by Xiao et al. (2012) <doi:10.1111/rssb.12007>. A hero is a specific type of sandwich; Dictionary.com (2018) <https://www.dictionary.com> describes a hero as "a large sandwich, usually consisting of a small loaf of bread or long roll cut in half lengthwise and containing a variety of ingredients, as meat, cheese, lettuce, and tomatoes." Also implements the spatio-temporal sandwich smoother of French and Kokoszka (2021) <doi:10.1016/j.spasta.2020.100413>.
Computes the ACMIF test and Bonferroni-adjusted p-value for interaction in two-factor studies. Produces the corresponding interaction plot, analysis of variance tables, and p-values from several other tests of non-additivity.
Given a high-dimensional dataset that typically represents a cytometry dataset, and a subset of the datapoints, this algorithm outputs a hyperrectangle such that datapoints within the hyperrectangle best correspond to the specified subset. In essence, this allows the conversion of clustering algorithm outputs into gating strategies.
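At its simplest, a hyperrectangle gate is just a set of per-channel intervals. The naive version below takes the bounding box of the subset (a deliberately crude illustration of the output format; the actual algorithm optimizes which channels and thresholds to use):

    # Tightest axis-aligned hyperrectangle containing the selected datapoints;
    # returns a 2 x d matrix (row 1 = lower bounds, row 2 = upper bounds)
    gate_bounds <- function(X, subset) apply(X[subset, , drop = FALSE], 2, range)
    # Which datapoints fall inside the gate?
    in_gate <- function(X, g) apply(X, 1, function(x) all(x >= g[1, ] & x <= g[2, ]))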
We provide an R tool for computation and nonparametric plug-in estimation of Highest Density Regions (HDRs) and general level sets in the directional setting. Concretely, circular and spherical HDRs can be reconstructed from a data sample following Saavedra-Nieves and Crujeiras (2021) <doi:10.1007/s11634-021-00457-4>. This library also contains two real datasets in the circular and spherical settings. The first one concerns a problem from animal orientation studies and the second one is related to earthquake occurrences.
This package provides functions for calculating the hazard discrimination summary and its standard errors, as described in Liang and Heagerty (2016) <doi:10.1111/biom.12628>.
Factor models have been widely applied in areas such as economics and finance, and the well-known heavy-tailedness of macroeconomic/financial data should be taken into account when conducting factor analysis. We propose two algorithms for robust factor analysis based on the Huber loss. One minimizes the Huber loss of the idiosyncratic error's L2 norm, which turns out to perform Principal Component Analysis (PCA) on a weighted sample covariance matrix and is therefore named Huber PCA. The other minimizes the element-wise Huber loss, which can be solved by an iterative Huber regression algorithm. The package also provides code for traditional PCA, the Robust Two Step (RTS) method by He et al. (2022), and the Quantile Factor Analysis (QFA) method by Chen et al. (2021) and He et al. (2023).
We provide functions for identifying the core community phylogeny in any microbiome, drawing phylogenetic Venn diagrams, calculating the core Faith's PD for a set of communities, and calculating the core UniFrac distance between two sets of communities. All functions rely on the construction of a core community phylogeny, which is a phylogeny where branches are defined based on their presence in multiple samples from a single type of habitat. Our package provides two options for constructing the core community phylogeny: a tip-based approach, where the core community phylogeny is identified based on the incidence of leaf nodes, and a branch-based approach, where it is identified based on the incidence of individual branches. We suggest use of the microViz package.
This package provides a user-friendly interface for the Hierarchical Data Format 5 ('HDF5') library designed to "just work." It bundles the necessary system libraries to ensure easy installation on all platforms. Features smart defaults that automatically map R objects (vectors, matrices, data frames) to efficient HDF5 types, removing the need to manage low-level details like dataspaces or property lists. Uses the HDF5 library developed by The HDF Group <https://www.hdfgroup.org/>.
Distribution-free heteroscedastic tests for functional data. The following tests are included in this package: tests of no main treatment or contrast effect and no simple treatment effect, given in Wang, Higgins, and Blasi (2010) <doi:10.1016/j.spl.2009.11.016>; tests of no main time effect and no interaction effect based on original observations, given in Wang and Akritas (2010a) <doi:10.1080/10485250903171621>; and tests based on ranks, given in Wang and Akritas (2010b) <doi:10.1016/j.jmva.2010.03.012>.
This package provides a generic function and a set of methods to calculate highest density intervals for a variety of classes of objects which can specify a probability density distribution, including MCMC output, fitted density objects, and functions.
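For a sample-based object, the usual computation finds the shortest interval containing the requested probability mass, which is simple to sketch (a generic illustration, not necessarily the method every class dispatches to):

    hdi_sample <- function(x, prob = 0.95) {
      x <- sort(x)
      m <- ceiling(prob * length(x))                 # points the interval must hold
      starts <- seq_len(length(x) - m + 1)
      i <- which.min(x[starts + m - 1] - x[starts])  # shortest such window
      c(lower = x[i], upper = x[i + m - 1])
    }
    hdi_sample(rgamma(1e4, shape = 2))   # e.g. a skewed posterior sample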
This package implements the simpler and faster heat index, which matches the values of the original 1979 heat index and its 2022 extension for air temperatures above 300 K (27 C, 80 F), with only minor differences at lower temperatures. Also implements an algorithm for calculating the thermodynamic (and psychrometric) wet-bulb (and ice-bulb) temperature.
Perform Hi-C data differential analysis based on pixel-level differential analysis and a post hoc inference strategy to quantify signal in clusters of pixels. Clusters of pixels are obtained through a connectivity-constrained two-dimensional hierarchical clustering.
S3 functions implementing both statistical and graphical goodness-of-fit measures between observed and simulated values, mainly oriented toward use during the calibration, validation, and application of hydrological models. Missing values in observed and/or simulated values can be removed before computation. Comments, questions, and collaboration of any kind are very welcome.
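As one concrete example of such a measure, the Nash-Sutcliffe efficiency (a standard criterion in hydrological model calibration) can be computed as below, with missing pairs removed first as described above (a generic sketch, not the package's own function):

    nse <- function(sim, obs) {
      ok <- !is.na(sim) & !is.na(obs)    # drop missing observed/simulated pairs
      1 - sum((obs[ok] - sim[ok])^2) / sum((obs[ok] - mean(obs[ok]))^2)
    }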