Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
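For example, here is a minimal Python sketch of such a request (the base URL, the JSON response body, and the exact pagination header names are assumptions for illustration, not documented above):

    import requests

    BASE_URL = "https://example.org"  # placeholder; replace with this site's address

    # Search for packages matching "hello"; a query like "gcc@10" would look for
    # version 10 of gcc, as described above.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    print(resp.json())  # assuming the body is JSON; otherwise inspect resp.text

    # Pagination information is returned in the response headers, so print them
    # all rather than guessing specific header names.
    for name, value in resp.headers.items():
        print(name, value)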
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Interface to the HERE REST APIs <https://developer.here.com/develop/rest-apis>: (1) geocode and autosuggest addresses or reverse geocode POIs using the Geocoder API; (2) route directions, travel distance or time matrices and isolines using the Routing, Matrix Routing and Isoline Routing APIs; (3) request real-time traffic flow and incident information from the Traffic API; (4) find public transport connections and nearby stations from the Public Transit API; (5) request intermodal routes using the Intermodal Routing API; (6) get weather forecasts, reports on current weather conditions, astronomical information and alerts at a specific location from the Destination Weather API. Locations, routes and isolines are returned as sf objects.
Automatic open data acquisition from resources of IGN ('Institut National de l'Information Geographique et Forestiere') (<https://www.ign.fr/>). Available datasets include various types of raster and vector data, such as digital elevation models, state borders, spatial databases, cadastral parcels, and more. happign also provides access to API Carto (<https://apicarto.ign.fr/api/doc/>).
This package provides a set of tools supporting more flexible heatmaps. The graphics are grid-like but drawn with the old (base) graphics system. The main function is heatmap.n2(), which is a wrapper around the various functions constructing individual parts of the heatmap, such as sidebars, picket plots, and legends. The function supports zooming and splitting, i.e., having (unlimited) small heatmaps underneath each other in one plot, all derived from the same data set, e.g., clustered and ordered by a supervised clustering method.
Compute duration curves of daily flow series, both real and modeled, to be compared through indexes of flow duration curves. The package functions include comparative plots and goodness-of-fit tests. Flow duration curve indexes are based on Yilmaz et al. (2008) <DOI:10.1029/2007WR006716>.
This package provides tools for computing the HUM (Hypervolume Under the Manifold) value to estimate features' ability to discriminate between class labels, and for visualizing the ROC curve for two or three class labels (Natalia Novoselova, Cristina Della Beffa, Junxi Wang, Jialiang Li, Frank Pessler, Frank Klawonn (2014) <doi:10.1093/bioinformatics/btu086>).
User-friendly functions for leveraging (multiple) historical data set(s) for generalized linear models. The package contains functions for sampling from the posterior distribution of a generalized linear model using the prior induced by the Bayesian hierarchical model, power prior by Ibrahim and Chen (2000) <doi:10.1214/ss/1009212673>, normalized power prior by Duan et al. (2006) <doi:10.1002/env.752>, normalized asymptotic power prior by Ibrahim et al. (2015) <doi:10.1002/sim.6728>, commensurate prior by Hobbs et al. (2011) <doi:10.1111/j.1541-0420.2011.01564.x>, robust meta-analytic-predictive prior by Schmidli et al. (2014) <doi:10.1111/biom.12242>, the latent exchangeability prior by Alt et al. (2023) <doi:10.48550/arXiv.2303.05223>, and a normal (or half-normal) prior. Functions for computing the marginal log-likelihood under each of the implemented priors are also included. The package compiles all the CmdStan models once during installation using the instantiate package.
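For orientation, the power prior of Ibrahim and Chen (2000) mentioned above has the general form (a sketch of the idea in generic notation, not necessarily the package's exact parameterization):

    \pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad 0 \le a_0 \le 1,

where D_0 is the historical data set, L is the likelihood, \pi_0 is an initial prior, and the discounting parameter a_0 controls how much weight the historical data receive (a_0 = 0 ignores them, a_0 = 1 pools them fully).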
This package performs genetic association analyses of case-parent triad (trio) data with multiple markers. It can also incorporate complete or incomplete control triads, for instance independent control children. Estimation is based on haplotypes, for instance SNP haplotypes, even though phase is not known from the genetic data. Haplin estimates the relative risk (RR + conf.int.) and p-value associated with each haplotype. It uses maximum likelihood estimation to make optimal use of data from triads with missing genotypic data, for instance if some SNPs have not been typed for some individuals. Haplin also allows estimation of effects of maternal haplotypes and parent-of-origin effects, particularly appropriate in perinatal epidemiology. Haplin allows special models, like X-inactivation, to be fitted on the X-chromosome. A GxE analysis allows testing interactions between environment and all estimated genetic effects. The models were originally described in "Gjessing HK and Lie RT. Case-parent triads: Estimating single- and double-dose effects of fetal and maternal disease gene haplotypes. Annals of Human Genetics (2006) 70, pp. 382-396".
Built by Hodges lab members for current and future Hodges lab members; other individuals are welcome to use it as well. Provides useful functions that the lab uses every day to analyze various genomic datasets. Critically, only general-use functions are provided; functions specific to a given technique are reserved for a separate package. As the lab grows, we expect to continue adding functions to the package to build on previous lab members' code.
Enhances the H2O platform by providing tools for detailed evaluation of machine learning models. It includes functions for bootstrapped performance evaluation, extended F-score calculations, and various other metrics, aimed at improving model assessment.
Perform Hi-C data differential analysis based on pixel-level differential analysis and a post hoc inference strategy to quantify signal in clusters of pixels. Clusters of pixels are obtained through a connectivity-constrained two-dimensional hierarchical clustering.
This package provides a set of routines to quickly download and import the HUGO Gene Nomenclature Committee (HGNC) data set on mapping of gene symbols to gene entries in other genomic databases or resources.
This package provides a program that conducts group variable selection for quantile and robust mean regression (Sherwood and Li, 2022). The group lasso penalty (Yuan and Lin, 2006) is used for group-wise variable selection. Both the quantile and mean regression models are based on the Huber loss. Specifically, as the tuning parameter in the Huber loss approaches 0, the quantile check function can be approximated by the Huber loss at the median and by a tilted version of the Huber loss at other quantiles. This approximation provides computational efficiency and stability, and has also been shown to be statistically consistent.
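As a sketch of that approximation (generic notation, not necessarily the package's own): the check function \rho_\tau(u) = u(\tau - 1\{u < 0\}) can be written as \rho_\tau(u) = (|u| + (2\tau - 1)u)/2, and replacing |u| with the Huber function

    H_\gamma(u) = u^2/(2\gamma) \ \text{for}\ |u| \le \gamma, \qquad H_\gamma(u) = |u| - \gamma/2 \ \text{otherwise},

gives a tilted Huber loss (H_\gamma(u) + (2\tau - 1)u)/2 that reduces to a scaled Huber loss at the median (\tau = 1/2) and converges to \rho_\tau(u) as \gamma \to 0.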
This package performs multiple hot-deck imputation of categorical and continuous variables in a data frame.
This package provides a streamlined tool for eplet analysis of donor and recipient HLA (human leukocyte antigen) mismatch. Messy, low-resolution HLA typing data is cleaned and imputed to high resolution using the NMDP (National Marrow Donor Program) haplotype reference database <https://haplostats.org/haplostats>. High-resolution data is analyzed for overall or single-antigen eplet mismatch using a reference table (currently supporting HLAMatchMaker <http://www.epitopes.net> versions 2 and 3). Data can enter or exit the workflow at different points depending on the user's aims and initial data quality.
This package contains two entertaining games. One is 2048 (for Windows): use the up and down keys to control the direction until a 2048 tile appears. The other is "what to eat today", prepared for people who have difficulty choosing, and covering much of the delicious Cantonese cuisine.
Use the Official Hacker News API through R. Retrieve posts, articles and other items in the form of convenient R objects.
H(x) is the h-index for the past x years. Here, the H(x) of a scientist, department, etc. can be calculated using the Excel file exported from a Web of Science citation report of a search. Also calculated are the year of first publication, the total number of publications, and the sum of times cited for the specified period. For H-10, therefore, the year of first publication, total number of publications, and sum of times cited in the past 10 years are calculated. Note: the Excel file has to first be saved in .csv format.
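For reference, the underlying quantity is the usual h-index restricted to a time window (generic notation, not the package's own):

    h(x) = \max\{\, k : \text{at least } k \text{ publications from the past } x \text{ years have at least } k \text{ citations each} \,\}.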
This package provides a nonparametric smoothed kernel estimator for the future conditional hazard rate function when time-dependent covariates are present, a bandwidth selector for the estimator's implementation and pointwise and uniform confidence bands. Methods used in the package refer to Bagkavos, Isakson, Mammen, Nielsen and Proust-Lima (2025) <doi:10.1093/biomet/asaf008>.
Create dynamic, data-driven text. Given two values, a list of talking points is generated and can be combined using string interpolation. Based on the glue package.
Calculate Hopkins statistic to assess the clusterability of data. See Wright (2023) <doi:10.32614/RJ-2022-055>.
High-level functions for hyperplane fitting (hyper.fit()) and visualising (hyper.plot2d() / hyper.plot3d()). In simple terms, this allows the user to produce robust 1D linear fits for 2D x vs y type data, and robust 2D plane fits to 3D x vs y vs z type data. This hyperplane fitting works generically for any (N-1)-dimensional hyperplane model being fit to an N-dimensional dataset. All fits include intrinsic scatter in the generative model, orthogonal to the hyperplane.
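Schematically, that generative model can be written as follows (a sketch in generic notation; the package's own parameterization may differ): data points x_i in R^N scatter about a hyperplane \hat{n} \cdot x = c, with the orthogonal offsets

    d_i = \hat{n} \cdot x_i - c \sim N(0, \sigma^2),

where \sigma is the intrinsic scatter estimated along with the hyperplane parameters.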
Simple and integrated tool that automatically extracts and folds all hairpin sequences from raw genome-wide data. It predicts the secondary structure of several overlapping segments that are longer than the mean length of the sequences of interest for the species being processed, ensuring that no sequence is lost or inappropriately cut.
This package provides a collection of reweighted marginal hypothesis tests for clustered data, based on reweighting methods of Williamson, J., Datta, S., and Satten, G. (2003) <doi:10.1111/1541-0420.00005>. The tests in this collection are clustered analogs to well-known hypothesis tests in the classical setting, and are appropriate for data with cluster- and/or group-size informativeness. The syntax and output of functions are modeled after common, recognizable functions native to R. Methods used in the package refer to Gregg, M., Datta, S., and Lorenz, D. (2020) <doi:10.1177/0962280220928572>, Nevalainen, J., Oja, H., and Datta, S. (2017) <doi:10.1002/sim.7288>, Dutta, S. and Datta, S. (2015) <doi:10.1111/biom.12447>, Lorenz, D., Datta, S., and Harkema, S. (2011) <doi:10.1002/sim.4368>, Datta, S. and Satten, G. (2008) <doi:10.1111/j.1541-0420.2007.00923.x>, and Datta, S. and Satten, G. (2005) <doi:10.1198/016214504000001583>.
Hadamard matrix-based statistical designs are of immense importance, as the resultant designs carry various desirable characterizing properties. Constructing Partially Balanced Incomplete Block Designs (PBIBDs) using the Kronecker product of incidence matrices of Balanced Incomplete Block (BIB) and Partially Balanced Incomplete Block (PBIB) designs is well established in the literature. Here, we have constructed Incomplete Block Designs (IBDs) based on Hadamard matrices and Kronecker products of Hadamard matrices.
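As a small illustration of the Kronecker-product building block (not the package's full construction), the Kronecker product of two Hadamard matrices is again a Hadamard matrix, e.g.

    H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
    H_2 \otimes H_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix} = H_4.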