Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
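As a minimal sketch, the endpoint can be queried from any HTTP client. The example below uses Python's requests library; the base URL is a placeholder for whatever host serves this page, and it assumes (not stated above) that the body is JSON and that pagination details appear in the response headers.

import requests

# Placeholder base URL: substitute the host that serves this search page.
BASE_URL = "https://example.org"

# search = query string, page = page number, limit = items per page.
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
resp.raise_for_status()

packages = resp.json()      # assumes the matching packages are returned as JSON
print(dict(resp.headers))   # pagination information (e.g. number of pages) arrives in the response headers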
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht that adds your channel as an entry in channels.scm.
Facilitates the analysis and evaluation of hydrologic model output and time-series data with functions focused on comparison of modeled (simulated) and observed data, period-of-record statistics, and trends.
The presence of outliers in a dataset can substantially bias the results of statistical analyses. To correct for outliers, micro edits are manually performed on all records. A set of constraints and decision rules is typically used to aid the editing process. However, straightforward decision rules might overlook anomalies arising from disruption of linear relationships. Computationally efficient methods are provided to identify historical, tail, and relational anomalies at the data-entry level (Sartore et al., 2024; <doi:10.6339/24-JDS1136>). A score statistic is developed for each anomaly type, using a distribution-free approach motivated by the Bienaymé-Chebyshev inequality, and fuzzy logic is used to detect cellwise outliers resulting from different types of anomalies. Each data entry is individually scored, and the individual scores are combined into a final score to determine anomalous entries. As alternatives to fuzzy logic, a Bayesian bootstrap and a Bayesian test based on empirical likelihoods are also provided, as studied by Sartore et al. (2024; <doi:10.3390/stats7040073>). These algorithms allow for a more nuanced approach to outlier detection, as they can identify outliers at the data-entry level that are not obviously distinct from the rest of the data. --- This research was supported in part by the U.S. Department of Agriculture, National Agricultural Statistics Service. The findings and conclusions in this publication are those of the authors and should not be construed to represent any official USDA or US Government determination or policy.
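For reference, the distribution-free bound that motivates such score statistics is the Bienaymé-Chebyshev inequality (stated here generically, not as the package's exact scoring formula): for any random variable $X$ with mean $\mu$ and standard deviation $\sigma$,

$$ P(|X - \mu| \ge k\sigma) \le \frac{1}{k^{2}}, \qquad k > 0, $$

so a data entry whose standardized deviation $k$ is large can be flagged as unlikely without assuming any particular distribution.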
Enhances the H2O platform by providing tools for detailed evaluation of machine learning models. It includes functions for bootstrapped performance evaluation, extended F-score calculations, and various other metrics, aimed at improving model assessment.
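For context, the usual generalization of the F-score to an arbitrary weighting of precision $P$ and recall $R$ (one common reading of "extended F-score"; the package's exact definitions may differ) is

$$ F_\beta = (1 + \beta^{2})\,\frac{P \cdot R}{\beta^{2} P + R}, $$

with $\beta > 1$ weighting recall more heavily and $\beta < 1$ weighting precision more heavily.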
Implementation of an S4 class for sets and multisets of numbers. The implementation is based on the hash table from the package 'hash'. Quick operations are possible when the set is a dynamic object. The implementation is discussed in detail in Ceoldo and Wit (2023) <arXiv:2304.09809>.
Identification of recombination events, haplotype reconstruction, sire imputation and pedigree reconstruction using half-sib family SNP data.
This package conducts group variable selection for quantile and robust mean regression (Sherwood and Li, 2022). The group lasso penalty (Yuan and Lin, 2006) is used for group-wise variable selection. Both the quantile and mean regression models are based on the Huber loss. Specifically, as the tuning parameter in the Huber loss approaches 0, the quantile check function can be approximated by the Huber loss at the median and by the tilted version of the Huber loss at other quantiles. This approximation provides computational efficiency and stability, and has also been shown to be statistically consistent.
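As a sketch of the two losses involved, under standard definitions (the package's exact tilted form may differ): the quantile check function and the Huber loss are

$$ \rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\}), \qquad h_\gamma(u) = \begin{cases} u^{2}/(2\gamma), & |u| \le \gamma, \\ |u| - \gamma/2, & |u| > \gamma, \end{cases} $$

and as $\gamma \to 0$, $h_\gamma(u) \to |u| = 2\,\rho_{0.5}(u)$, which is why the Huber loss can stand in for the median check function, with tilted variants playing the same role at other quantiles.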
This package provides a correlation-based batch process for fast, accurate imputation for high dimensional missing data problems via chained random forests. See Waggoner (2023) <doi:10.1007/s00180-023-01325-9> for more on 'hdImpute', Stekhoven and Bühlmann (2012) <doi:10.1093/bioinformatics/btr597> for more on 'missForest', and Mayer (2022) <https://github.com/mayer79/missRanger> for more on 'missRanger'.
When performing multiple imputation, 5-10 imputations are sufficient for obtaining point estimates, but a larger number of imputations is needed for proper standard error estimates. This package allows you to calculate how many imputations are needed, following the work of von Hippel (2020) <doi:10.1177/0049124117747303>.
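As a rough guide (the exact statement should be taken from the cited paper), von Hippel's two-stage quadratic rule is commonly quoted as

$$ m \approx 1 + \tfrac{1}{2}\left(\frac{\gamma}{\mathrm{cv}}\right)^{2}, $$

where $\gamma$ is the estimated fraction of missing information and $\mathrm{cv}$ is the acceptable coefficient of variation of the standard error estimate.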
This package provides tools to estimate, compare, and visualize healthcare resource utilization using data derived from electronic health records or real-world evidence sources. The package supports pre-index and post-index analysis, patient cohort comparison, and customizable summaries and visualizations for clinical and health economics research. Methods implemented are based on Scott et al. (2022) <doi:10.1080/13696998.2022.2037917> and Xia et al. (2024) <doi:10.14309/ajg.0000000000002901>.
Automatic construction of regular and irregular histograms as described in Rozenholc/Mildenberger/Gather (2010).
This package provides access to Uber's H3 geospatial indexing system via 'h3lib' <https://CRAN.R-project.org/package=h3lib>. 'h3r' is designed to mimic the H3 Application Programming Interface (API) <https://h3geo.org/docs/api/indexing/>, so that any function in the API is also available in 'h3r'.
This model divides coefficients into three types: local fixed effects, global fixed effects, and random effects (Hu et al., 2022) <doi:10.1177/23998083211063885>. If the data have a spatial hierarchical structure (especially one where observations overlap at some locations), this model is worth trying to achieve a better fit.
Holistic generalized linear models (HGLMs) extend generalized linear models (GLMs) by allowing additional constraints to be added to the model. The 'holiglm' package simplifies estimating HGLMs using convex optimization. Additional information about the package can be found in the reference manual, the README and the accompanying paper <doi:10.18637/jss.v108.i07>.
This package provides tools for estimating sample sizes primarily based on heritability, while also considering additional parameters such as statistical power and fold change. The package normalizes heritability values according to trait-specific heritability and classification to enhance accuracy in sample size estimation.
Interface to the HERE REST APIs <https://developer.here.com/develop/rest-apis>: (1) geocode and autosuggest addresses or reverse geocode POIs using the Geocoder API; (2) route directions, travel distance or time matrices and isolines using the Routing, Matrix Routing and Isoline Routing APIs; (3) request real-time traffic flow and incident information from the Traffic API; (4) request public transport connections and nearby stations from the Public Transit API; (5) request intermodal routes using the Intermodal Routing API; (6) get weather forecasts, reports on current weather conditions, astronomical information and alerts at a specific location from the Destination Weather API. Locations, routes and isolines are returned as sf objects.
This package contains functions for hidden Markov models with observations having extra zeros as defined in the following two publications: Wang, T., Zhuang, J., Obara, K. and Tsuruoka, H. (2016) <doi:10.1111/rssc.12194>; Wang, T., Zhuang, J., Buckby, J., Obara, K. and Tsuruoka, H. (2018) <doi:10.1029/2017JB015360>. The observed response variable is either univariate or bivariate Gaussian conditional on the presence of events, and extra zeros mean that the response variable takes on the value zero if nothing is happening. Hence the response is modelled as a mixture distribution of a Bernoulli variable and a continuous variable. That is, if the Bernoulli variable takes on the value 1, then the response variable is Gaussian, and if the Bernoulli variable takes on the value 0, then the response is zero. This package includes functions for simulation, parameter estimation, goodness-of-fit, the Viterbi algorithm, and plotting the classified 2-D data. Some of the functions in the package are based on those of the R package HiddenMarkov by David Harte. This updated version includes an example dataset and R code examples to show how to transform the data into the objects needed in the main functions. We have also made changes to increase the speed of some of the functions.
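In symbols, a sketch of the emission distribution described above (the notation is assumed here, not taken from the package): in hidden state $i$, with $p_i$ the probability that an event is present,

$$ f_i(y) = (1 - p_i)\,\mathbf{1}\{y = 0\} + p_i\,\phi(y;\, \mu_i, \Sigma_i), $$

where $\phi(\cdot;\mu_i,\Sigma_i)$ is the univariate or bivariate Gaussian density for that state.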
Fits hierarchical Bayesian space-time models for Gaussian data. It also provides functions for assessing the fitting quality of those models.
Method and tool for generating hybrid time series forecasts using an error remodeling approach. These forecasting approaches utilize a recursive technique that models the linearity of the series using a linear method (e.g., ARIMA, Theta) and then models (forecasts) the residuals of the linear forecaster using non-linear neural networks (e.g., ANN, ARNN). The hybrid architectures comprise three steps: first, the linear patterns of the series are forecast; this is followed by an error re-modeling step; and finally, the forecasts from both steps are combined to produce the final output. This method additionally provides confidence intervals as needed. Ten different models can be implemented using this package. This package generates different types of hybrid error correction models for time series forecasting based on the algorithms of Zhang (2003), Chakraborty et al. (2019), Chakraborty et al. (2020), Bhattacharyya et al. (2021), Chakraborty et al. (2022), and Bhattacharyya et al. (2022) <doi:10.1016/S0925-2312(01)00702-0> <doi:10.1016/j.physa.2019.121266> <doi:10.1016/j.chaos.2020.109850> <doi:10.1109/IJCNN52387.2021.9533747> <doi:10.1007/978-3-030-72834-2_29> <doi:10.1007/s11071-021-07099-3>.
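Schematically, the three steps amount to the decomposition (notation assumed for illustration)

$$ \hat{L}_t = \text{linear forecast of } y_t, \qquad e_t = y_t - \hat{L}_t, \qquad \hat{y}_t = \hat{L}_t + \hat{N}_t, $$

where $\hat{N}_t$ is the neural-network forecast of the residual series $e_t$.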
This package provides functions for the management and treatment of hydrological and meteorological time series stored in an SQLite database.
This package provides a local haplotyping tool for use in trait association and trait prediction analysis pipelines. HaploVar enables users to take single nucleotide polymorphisms (SNPs) (in VCF format) and a linkage disequilibrium (LD) matrix, calculate local haplotypes, and format the output to be compatible with a wide range of trait association and trait prediction tools. The local haplotypes are calculated from the LD matrix using a clustering algorithm called density-based spatial clustering of applications with noise ('DBSCAN') (Ester et al., 1996) <ISBN: 1577350049>.
This package provides functions for calculating the hazard discrimination summary and its standard errors, as described in Liang and Heagerty (2016) <doi:10.1111/biom.12628>.
This package provides a method to fit the hidden compact representation model as well as to identify the causal direction on discrete data. We implement an effective solution to recover this hidden compact representation under the likelihood framework. Please see "Causal Discovery from Discrete Data using Hidden Compact Representation" by Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang and Zhifeng Hao (NIPS 2018) <https://nips.cc/Conferences/2018/Schedule?showEvent=11274> for a description of some of our methods.
Can be used for paternity and maternity assignment and outperforms conventional methods where closely related individuals occur in the pool of possible parents. The method compares the genotypes of offspring with any combination of potential parents and scores the number of mismatches of these individuals at bi-allelic genetic markers (e.g. Single Nucleotide Polymorphisms). It elaborates on a prior exclusion method based on the Homozygous Opposite Test (HOT; Huisman 2017 <doi:10.1111/1755-0998.12665>) by introducing the additional exclusion criterion HIPHOP (Homozygous Identical Parents, Heterozygous Offspring are Precluded; Cockburn et al., in revision). Potential parents are excluded if they have more mismatches than can be expected due to genotyping error and mutation, and thereby one can identify the true genetic parents and detect situations where one (or both) of the true parents is not sampled. The hiphop package can deal with (a) the case where there is contextual information about parentage of the mother (i.e. a female has been seen to be involved in reproductive tasks such as nest building), but paternity is unknown (e.g. due to promiscuity), and (b) the case where both parents need to be assigned, because there is no contextual information on which female laid eggs and which male fertilized them (e.g. a polygynandrous mating system where multiple females and males deposit young in a common nest, or organisms with external fertilisation that breed in aggregations). For details: Cockburn, A., Penalba, J.V., Jaccoud, D., Kilian, A., Brouwer, L., Double, M.C., Margraf, N., Osmond, H.L., van de Pol, M. and Kruuk, L.E.B. (in revision). HIPHOP: improved paternity assignment among close relatives using a simple exclusion method for bi-allelic markers. Molecular Ecology Resources, DOI to be added upon acceptance.
Hierarchical community detection on networks by a recursive spectral partitioning strategy, which is shown to be effective and efficient in Li, Lei, Bhattacharyya, Sarkar, Bickel, and Levina (2018) <arXiv:1810.01509>. The package also includes a data generating function for a binary tree stochastic block model, a special case of stochastic block model that admits hierarchy between communities.