Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
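For example, the endpoint can be queried from R with httr and jsonlite. This is a minimal sketch; the base URL below is only a placeholder for this site's address.
library(httr)
library(jsonlite)
resp <- GET("https://example.org/api/packages",        # placeholder base URL
            query = list(search = "gcc@10", page = 1, limit = 20))
results <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)   # pagination details (e.g. number of pages) are in the response headers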
If you'd like to join our channel search, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Spatial heterogeneity can be specified in various ways. hspm is an ambitious project that aims at implementing various methodologies to control for heterogeneity in spatial models. The current version of hspm deals with spatial and (non-spatial) regimes models. In particular, the package allows users to estimate a general spatial regimes model with additional endogenous variables, specified in terms of a spatial lag of the dependent variable, the spatially lagged regressors, and, potentially, a spatially autocorrelated error term. Spatial regime models are estimated by instrumental variables and the generalized method of moments (see Arraiz et al. (2010) <doi:10.1111/j.1467-9787.2009.00618.x>, Bivand and Piras (2015) <doi:10.18637/jss.v063.i18>, Drukker et al. (2013) <doi:10.1080/07474938.2013.741020>, Kelejian and Prucha (2010) <doi:10.1016/j.jeconom.2009.10.025>).
Machine learning hierarchical risk clustering portfolio allocation strategies. The implemented methods are: Hierarchical risk parity (De Prado, 2016) <DOI: 10.3905/jpm.2016.42.4.059>. Hierarchical clustering-based asset allocation (Raffinot, 2017) <DOI: 10.3905/jpm.2018.44.2.089>. Hierarchical equal risk contribution portfolio (Raffinot, 2018) <DOI: 10.2139/ssrn.3237540>. A Constrained Hierarchical Risk Parity Algorithm with Cluster-based Capital Allocation (Pfitzinger and Katzke, 2019) <https://www.ekon.sun.ac.za/wpapers/2019/wp142019/wp142019.pdf>.
This package provides a suite of diagnostic tools for hierarchical (multilevel) linear models. The tools include not only leverage and traditional deletion diagnostics (Cook's distance, covratio, covtrace, and MDFFITS) but also convenience functions and graphics for residual analysis. Models can be fit using either lmer in the lme4 package or lme in the nlme package.
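A minimal sketch of a typical workflow with this package; the helper names hlm_influence() and hlm_resid() are assumptions about the diagnostic interface in recent versions.
library(lme4)
library(HLMdiag)
fit  <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
infl <- hlm_influence(fit)   # leverage, Cook's distance, covratio/covtrace, MDFFITS
res  <- hlm_resid(fit)       # level-wise residuals for graphical checks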
Unsupervised multivariate filter feature selection using the UFS-rHCM or UFS-cHCM algorithms based on the heterogeneous correlation matrix (HCM). The HCM consists of Pearson's correlations between numerical features, polyserial correlations between numerical and ordinal features, and polychoric correlations between ordinal features. Tortora C., Madhvani S., Punzo A. (2025). "Designing unsupervised mixed-type feature selection techniques using the heterogeneous correlation matrix." International Statistical Review <doi:10.1111/insr.70016>. This work was supported by the National Science Foundation NSF Grant N. 2209974 (Tortora) and by the Italian Ministry of University and Research (MUR) under the PRIN 2022 grant number 2022XRHT8R (CUP: E53D23005950006), as part of "The SMILE Project: Statistical Modelling and Inference to Live the Environment", funded by the European Union - Next Generation EU (Punzo).
This package implements hierarchical conformal prediction for clustered data with missing responses. The method uses repeated cluster-level splitting and within-cluster subsampling to accommodate dependence, and inverse-probability weighting to correct distribution shift induced by missingness. Conditional densities are estimated by inverting fitted conditional quantiles (linear quantile regression or quantile regression forests), and p-values are aggregated across resampling and splitting steps using the Cauchy combination test.
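As an illustration of the aggregation step, here is a minimal R sketch of the Cauchy combination test (Liu and Xie, 2020) with equal weights; this shows the generic form of the test, not necessarily the exact weighting the package uses.
cauchy_combine <- function(p) {
  stat <- mean(tan((0.5 - p) * pi))   # Cauchy combination statistic
  0.5 - atan(stat) / pi               # combined p-value
}
cauchy_combine(c(0.01, 0.20, 0.35))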
Penalized regression for generalized linear models for measurement error problems (also known as errors-in-variables). The package contains a version of the lasso (L1-penalization) which corrects for measurement error (Sorensen et al. (2015) <doi:10.5705/ss.2013.180>). It also contains an implementation of the Generalized Matrix Uncertainty Selector, which is a version of the (Generalized) Dantzig Selector for the case of measurement error (Sorensen et al. (2018) <doi:10.1080/10618600.2018.1425626>).
Offers methods for visualising, modelling, and forecasting high-dimensional functional time series, also known as functional panel data. Documentation about hdftsa is initially provided via the paper by Cristian F. Jimenez-Varon, Ying Sun and Han Lin Shang (2024, Journal of Computational and Graphical Statistics).
Interact with the application programming interface for the web annotation service Hypothes.is (See <http://hypothes.is> for more information.) Allows users to download data about public annotations, and create, retrieve, update, and delete their own annotations.
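A minimal sketch of downloading public annotations; the package and function names used here (hypothesisr, hs_search()) are assumptions about the interface.
library(hypothesisr)
notes <- hs_search(uri = "https://www.r-project.org/")  # public annotations on a page
head(notes$text)                                        # field name is an assumption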
This package provides functions to calculate Hotelling's T-squared statistic and corresponding confidence ellipses. Provides the semi-axes of the Hotelling's T-squared ellipses at 95% and 99% confidence levels. Enables users to obtain the coordinates in two or three dimensions at user-defined confidence levels, allowing for the construction of 2D or 3D ellipses with customized confidence levels. Bro and Smilde (2014) <DOI:10.1039/c3ay41907j>. Brereton (2016) <DOI:10.1002/cem.2763>.
Collection of functions to help retrieve data from Hub'Eau, the free and public French national APIs on water <https://hubeau.eaufrance.fr/>.
Helps solve cricket-specific problems in a tidy and simple manner.
This package provides functions for combining model outputs (e.g. predictions or estimates) from multiple models into an aggregated ensemble model output.
Convenience date tools for identifying weekends, business days, and Canadian holidays, including R wrappers for the Canada Holidays API <https://canada-holidays.ca/>.
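A minimal sketch calling the underlying service directly with httr; the /api/v1/holidays path, the year parameter, and the field names are assumptions based on the API's public documentation.
library(httr)
library(jsonlite)
resp <- GET("https://canada-holidays.ca/api/v1/holidays", query = list(year = 2024))
holidays <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))$holidays
head(holidays$nameEn)   # English holiday names; field name is an assumption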
The prof/cls lessons must be completed with an "hour" field ("ora"), so that no two prof/cls/ora lessons overlap in the same hour. <https://vlad.bazon.net/>.
This package creates and plots 2D and 3D hive plots. Hive plots are a unique method of displaying networks of many types in which nodes are mapped to axes using meaningful properties rather than being arbitrarily positioned. The hive plot concept was invented by Martin Krzywinski at the Genome Science Center (<http://www.hiveplot.net/>). Keywords: networks, food webs, linnet, systems biology, bioinformatics.
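A minimal sketch, assuming the example-data generator and 2D plotting function are ranHiveData() and plotHive() as documented in HiveR; treat the exact arguments as assumptions.
library(HiveR)
hd <- ranHiveData(nx = 4)   # random hive data with four axes
plotHive(hd)                # draw the 2D hive plot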
This package provides case conversion between common cases like CamelCase and snake_case, using the Rust crate heck <https://github.com/withoutboats/heck> as the backend for highly performant case conversion in R.
This package provides tools for the estimation of Heckman selection models with robust variance-covariance matrices. It includes functions for computing the bread and meat matrices, as well as clustered standard errors for generalized Heckman models; see Fernando de Souza Bastos, Wagner Barreto-Souza, and Marc G. Genton (2022) <https://www.jstor.org/stable/27164235>. The package also offers cluster-robust inference with sandwich estimators, and tools for handling issues related to eigenvalues in covariance matrices.
This package implements an empirical approach referred to as PeakTrace which uses multiple hydrographs to detect and follow hydropower plant-specific hydropeaking waves at the sub-catchment scale and to describe how hydropeaking flow parameters change along the longitudinal flow path. The method is based on the identification of associated events and uses (linear) regression models to describe translation and retention processes between neighboring hydrographs. Several regression model results are combined to arrive at a power plant-specific model. The approach is proposed and validated in Greimel et al. (2022) <doi:10.1002/rra.3978>. The identification of associated events is based on the event detection implemented in the hydropeak package.
Functions, Shiny apps and data for the book "Introduction to Statistics" by Wolfgang Karl Härdle, Sigbert Klinke, and Bernd Rönz (2015) <doi:10.1007/978-3-319-17704-5>.
Inference approach for jointly modeling correlated count and binary outcomes. This formulation allows simultaneous modeling of zero inflation via the Bernoulli component while providing a more accurate assessment of the Hierarchical Zero-Inflated Poisson's parsimony (Lizandra C. Fabio, Jalmar M. F. Carrasco, Victor H. Lachos and Ming-Hui Chen, Likelihood-based inference for joint modeling of correlated count and binary outcomes with extra variability and zeros, 2025, under submission).
An implementation of high-probability lower bounds for the total variation distance, as introduced in Michel, Naef and Meinshausen (2020) <arXiv:2005.06006>. An estimated lower bound (holding with high probability) on the total variation distance between two probability distributions from which samples are observed can be obtained with the function HPLB.
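A minimal sketch of a two-sample lower bound; the argument names t (sample membership) and rho (a witness ordering the pooled observations) are assumptions about the HPLB() interface.
library(HPLB)
x <- c(rnorm(500, mean = 0), rnorm(500, mean = 1))  # two samples, pooled
t <- rep(c(0, 1), each = 500)                       # which sample each point came from
HPLB(t = t, rho = x)   # high-probability lower bound on the TV distance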
Calculate an optimal embedding of a set of data points into low-dimensional hyperbolic space. This uses the strain-minimizing hyperbolic embedding of Keller-Ressel and Nargang (2019), see <arXiv:1903.08977>.
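A minimal sketch, assuming the main entry point is a hydra() function applied to a distance matrix.
library(hydra)
D <- as.matrix(dist(matrix(rnorm(60), ncol = 3)))  # toy distance matrix
emb <- hydra(D, dim = 2)                           # strain-minimizing hyperbolic embedding
plot(emb)                                          # assumes a plot method for the result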
This package provides a tool for Hierarchical Climate Regionalization applicable to any correlation-based clustering. It adds several features and a new clustering method (called regional linkage) to hierarchical clustering in R (the hclust function in the stats library): data regridding, coarsening spatial resolution, geographic masking, contiguity-constrained clustering, data filtering by mean and/or variance thresholds, data preprocessing (detrending, standardization, and PCA), a faster correlation function with preliminary big-data support, different clustering methods, hybrid hierarchical clustering, multivariate clustering (MVC), cluster validation, visualization of regionalization results, and exporting the region map and mean time series into a NetCDF-4 file. The technical details are described in Badr et al. (2015) <doi:10.1007/s12145-015-0221-7>.
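A minimal sketch of a regionalization call on toy data; the argument names used here (x, lon, lat, method, k) are assumptions about the HiClimR() interface.
library(HiClimR)
set.seed(1)
lon <- runif(100, -10, 10)
lat <- runif(100, 35, 55)
x <- matrix(rnorm(100 * 40), nrow = 100)   # 100 grid points x 40 time steps
fit <- HiClimR(x = x, lon = lon, lat = lat, method = "regional", k = 5)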
When performing multiple imputation, 5-10 imputations are sufficient for obtaining point estimates, but a larger number of imputations is needed for proper standard error estimates. This package allows you to calculate how many imputations are needed, following the work of von Hippel (2020) <doi:10.1177/0049124117747303>.
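The underlying rule is easy to state: given a target coefficient of variation cv for the standard errors and a fraction of missing information FMI estimated from a small pilot run, von Hippel's two-stage formula gives approximately m = 1 + (1/2) * (FMI / cv)^2. A minimal sketch of that arithmetic (the cv of 0.05 is an assumed target, not necessarily the package default):
fmi <- 0.30                       # fraction of missing information from a pilot run
cv  <- 0.05                       # target coefficient of variation of the SEs
ceiling(1 + 0.5 * (fmi / cv)^2)   # about 19 imputations in this example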