Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
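For example, here is a minimal sketch of calling this endpoint from Python. The base URL is a placeholder, and the JSON response body and the exact names of the pagination headers are assumptions to verify against the live service:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://example.org"  # placeholder: substitute the host serving this API

def search_packages(query, page=1, limit=20):
    """Query GET /api/packages and return the parsed body plus the response headers."""
    params = urlencode({"search": query, "page": page, "limit": limit})
    with urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
        headers = dict(resp.headers)    # pagination info is reported in the headers
        body = json.loads(resp.read())  # assuming the body is JSON
    return body, headers

results, headers = search_packages("hello")
print(headers)  # inspect the pagination headers; the form's @ version syntax (e.g. gcc@10) may also work here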
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Dataset from the young elite swimmers study.
Simplify working with the YouTube Analytics API <https://developers.google.com/youtube/analytics>. Collect data for your channel, including geography, traffic sources, time period, etc.
For any spending function specified by the user, this package provides corresponding boundaries for interim testing using the adaptively weighted log-rank test developed by Yang and Prentice (2010 <doi:10.1111/j.1541-0420.2009.01243.x>). The package uses a re-sampling method to obtain stopping boundaries at the interim looks. The output consists of stopping boundaries and observed values of the test statistics at the interim looks, along with nominal p-values defined as the probability of the test exceeding the specific observed test statistic value or critical value, regardless of the test behavior at other looks. The asymptotic validity of the stopping boundaries is established in Yang (2018 <doi:10.1002/sim.7958>).
This package provides a collection of string functions designed for writing compact and expressive R code. yasp (Yet Another String Package) is simple, fast, dependency-free, and written in pure R. The package provides: a coherent set of abbreviations for paste() from package base with a variety of defaults, such as p() for "paste" and pcc() for "paste and collapse with commas"; wrap(), bracket(), and others for wrapping a string in flanking characters; unwrap() for removing pairs of characters (at any position in a string); and sentence() for cleaning whitespace around punctuation and capitalization appropriate for prose sentences.
This collection of data exploration tools was developed at Yale University for the graphical exploration of complex multivariate data; barcode and gpairs now have their own packages. The big.read.table() function provided here may be useful for large files when only a subset is needed (but please see the note in the help page for this function).
This package provides a YAML-based mechanism for working with table metadata. Supports compact syntax for creating, modifying, viewing, exporting, importing, displaying, and plotting metadata coded as column attributes. The yamlet dialect is valid YAML with defaults and conventions chosen to improve readability. See ?yamlet, ?decorate, ?modify, ?io_csv, and ?ggplot.decorated.
This package provides a number of functions to facilitate extracting information in YAML fragments from one or multiple files, optionally structuring the information in a data.tree. YAML (recursive acronym for "YAML ain't Markup Language") is a convention for specifying structured data in a format that is both machine- and human-readable. YAML therefore lends itself well for embedding (meta)data in plain text files, such as Markdown files. This principle is implemented in yum with minimal dependencies (i.e. only the yaml package; the data.tree package can be used to enable additional functionality).
The purpose of this package is to provide methods to interpret multiple linear regression and canonical correlation results including beta weights, structure coefficients, validity coefficients, product measures, relative weights, all-possible-subsets regression, dominance analysis, commonality analysis, and adjusted effect sizes.
An alternative canonical correlation/redundancy analysis function, with associated print, plot, and summary methods. A method for generating helio plots is also included.
This package provides covariate-adjusted comparison of two groups of right-censored data, where the binary group variable has separate short-term and long-term effects on the hazard function, while effects of covariates such as age, blood pressure, etc. are proportional on the hazard. The model was studied in Yang and Prentice (2015) <doi:10.1002/sim.6453> and extends the two-sample version of the short-term and long-term hazard ratio model proposed in Yang and Prentice (2005) <doi:10.1093/biomet/92.1.1>. The model extends the usual Cox proportional hazards model to allow more flexible hazard ratio patterns, such as gradual onset of effect, diminishing effect, and crossing hazard or survival functions. This package provides: 1) point estimates and confidence intervals for model parameters; 2) a point estimate and confidence interval for the average hazard ratio; and 3) plots of the estimated hazard ratio function with point-wise and simultaneous confidence bands.
Setting layout through YAML headers in R Markdown documents, enabling their automatic generation. Functions and methods may summarize R objects in automatic reports, for instance check-lists and further reports applied to the packages taxlist and vegtable.
An implementation of equilibrium-based yield per recruit methods. Yield per recruit methods can be used to estimate the optimal yield for a fish population as described by Walters and Martell (2004) <isbn:0-691-11544-3>. The yield can be based on the number of fish caught (or harvested) or biomass caught, for all fish or just large (trophy) individuals.
Generates realizations of first-order integer-valued autoregressive processes with zero-inflated innovations (ZINAR(1)) and estimates their parameters, as described in Garay et al. (2021) <doi:10.1007/978-3-030-82110-4_2>.
This package provides an interface to the Zenodo (<https://zenodo.org>) REST API, including management of depositions, attribution of DOIs by Zenodo, and upload and download of files.
Make working with ZIP codes in R painless with an integrated dataset of U.S. ZIP codes and functions for working with them. Search ZIP codes by multiple geographies, including state, county, and city, and across time zones. Also included are functions for relating ZIP codes to Census data, geocoding, and distance calculations.
This package provides a set of functions for working with American postal codes, which are known as ZIP Codes. These include accessing ZIP Code to ZIP Code Tabulation Area (ZCTA) crosswalks, retrieving demographic data for ZCTAs, and tabulating demographic data for three-digit ZCTAs.
Implementation of zero-inflated Poisson models under a Bayesian framework using data augmentation, as discussed in Chapter 5 of Zhang (2020) <https://hdl.handle.net/10012/16378>. This package accommodates four different scenarios: the general scenario, the scenario with measurement error in responses, the external validation scenario, and the internal validation scenario.
Implementation of four extensions of the Zipf distribution: the Marshall-Olkin Extended Zipf (MOEZipf) of Pérez-Casany and Casellas (2013) <arXiv:1304.4540>, the Zipf-Poisson Extreme (Zipf-PE), the Zipf-Poisson Stopped Sum (Zipf-PSS), and the Zipf-Polylog distributions. In log-log scale, the first two extensions allow for top-concavity and top-convexity, while the third one only allows for top-concavity. All the extensions maintain the linearity associated with the Zipf model in the tail.
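As background (this is standard context, not taken from the package description): the baseline Zipf model assigns P(X = k) proportional to k^(-α), so log P(X = k) = -α log k + constant, a straight line in log-log scale; the extensions above keep that linear behavior in the tail while allowing top-concavity or top-convexity.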
Simulation, exploratory data analysis and Bayesian analysis of the p-order Integer-valued Autoregressive (INAR(p)) and Zero-inflated p-order Integer-valued Autoregressive (ZINAR(p)) processes, as described in Garay et al. (2020) <doi:10.1080/00949655.2020.1754819>.
Fits Dirichlet regression and zero-and-one inflated Dirichlet regression with Bayesian methods implemented in Stan. These models are sometimes referred to as trinomial mixture models; covariates and overdispersion can optionally be included.
Estimation methods for zero-inflated Poisson factor analysis (ZIPFA) on sparse data. It provides estimates of coefficients in a new type of zero-inflated regression, along with a cross-validation method to determine the potential rank of the data in the ZIPFA, and conducts zero-inflated Poisson factor analysis based on the determined rank.
This package provides fast and easy access to German census grid data from the 2011 and 2022 censuses <https://www.zensus2022.de/>, including a wide range of socio-economic indicators at multiple spatial resolutions (100m, 1km, 10km). Enables efficient download, processing, and analysis of large census datasets covering population, households, families, dwellings, and buildings. Harmonized data structures allow direct comparison with the 2011 census, supporting temporal and spatial analyses. Facilitates conversion of data into common formats for spatial analysis and mapping ('terra', 'sf', 'ggplot2').
This package provides an R wrapper for the Zendesk API.
Improve the usage of model-fitting functions within a piped workflow.