Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
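A minimal sketch of calling this endpoint from Python, assuming the API is served from this site's base URL (shown below as a placeholder) and that the body comes back as JSON:

import requests

BASE_URL = "https://example.org"  # placeholder: substitute this site's base URL

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=30,
)
resp.raise_for_status()

# Pagination information (e.g. the total number of pages) arrives in the response headers.
print(dict(resp.headers))

# The body holds the matching packages for the requested page (assumed here to be JSON).
print(resp.json())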
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht that adds your channel as an entry in channels.scm.
This package provides functions to manage taxonomy when lineages are described as strings, with ranks separated by special patterns such as "|*__" or ";*__".
This package provides a YAML-based mechanism for working with table metadata. Supports compact syntax for creating, modifying, viewing, exporting, importing, displaying, and plotting metadata coded as column attributes. The yamlet dialect is valid YAML with defaults and conventions chosen to improve readability. See ?yamlet, ?decorate, ?modify, ?io_csv, and ?ggplot.decorated.
This collection of data exploration tools was developed at Yale University for the graphical exploration of complex multivariate data; barcode and gpairs now have their own packages. The big.read.table() function provided here may be useful for large files when only a subset is needed (but please see the note in the help page for this function).
This package provides a fast, correct, safe, and ergonomic YAML 1.2 parser and generator written in Rust. Convert between YAML and simple R objects with full support for multi-document streams, tags, anchors, and aliases. Offers opt-in handlers for custom tag behavior and round-trips common R data structures. Implements the YAML 1.2.2 specification from the YAML Language Development Team (2021) <https://yaml.org/spec/1.2.2/>. Proudly supported by Posit.
Modelling the yield curve with several parametric models. The models implemented are: Nelson, C.R., and A.F. Siegel (1987) <doi:10.1086/296409>, Diebold, F.X. and Li, C. (2006) <doi:10.1016/j.jeconom.2005.03.005>, and Svensson, L.E. (1994) <doi:10.3386/w4871>. The package also includes term structure of interest rate data from the Federal Reserve Bank and the European Central Bank.
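For reference (this is the published model, not a statement about this package's interface), the Nelson-Siegel (1987) curve writes the yield at maturity tau as, in LaTeX notation:

y(\tau) = \beta_0 + \beta_1 \frac{1 - e^{-\lambda\tau}}{\lambda\tau} + \beta_2 \left( \frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau} \right)

Diebold and Li (2006) work with this form using a fixed decay parameter \lambda, and Svensson (1994) adds a second hump term \beta_3 \left( \frac{1 - e^{-\lambda_2\tau}}{\lambda_2\tau} - e^{-\lambda_2\tau} \right).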
Dataset from the young elite swimmers study.
This package provides helper functions to perform Bayesian model averaging using Markov chain Monte Carlo samples from separate models. Calculates weights and obtains draws from the model-averaged posterior for quantities of interest specified by the user. Weight calculations can be done using marginal likelihoods or log-predictive likelihoods as in Ando, T., & Tsay, R. (2010) <doi:10.1016/j.ijforecast.2009.08.001>.
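For reference (independent of this package's functions), the marginal-likelihood weight of model M_k in Bayesian model averaging is, in LaTeX notation:

w_k = p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_j p(y \mid M_j)\, p(M_j)}

and the model-averaged posterior of a quantity of interest \Delta mixes the per-model posteriors: p(\Delta \mid y) = \sum_k w_k \, p(\Delta \mid y, M_k).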
Setting layout through YAML headers in R Markdown documents, enabling their automatic generation. Functions and methods may summarize R objects in automatic reports, for instance check-lists and other reports applied to the packages taxlist and vegtable.
An extension for NetSurfP-2.0 (Klausen et al. (2019) <doi:10.1002/prot.25674>) that is specifically designed to analyze the results of bottom-up proteomics primarily analyzed with MaxQuant (Cox, J., Mann, M. (2008) <doi:10.1038/nbt.1511>). This tool is designed to process the large number of yeast peptides produced by whole yeast cell-proteome digestion and to provide a coherent picture of the secondary structure of proteins.
Simulation and Inference for SDEs and Other Stochastic Processes.
Nonparametric estimation of discount functions and yield curves from transaction data of coupon paying bonds. Koo, B., La Vecchia, D., & Linton, O. B. (2021) <doi:10.1016/j.jeconom.2020.04.014> describe an application of this package using the Center for Research in Security Prices (CRSP) Bond Data and document its implementation.
Simple and efficient access to Yahoo Finance's historical data API <https://finance.yahoo.com/> for querying and retrieval of financial data. The core functionality of the yfhist package abstracts the complexities of interacting with Yahoo Finance APIs, such as session management, crumb and cookie handling, query construction, date validation, and interval management. This abstraction allows users to focus on retrieving data rather than managing API details. Use cases include historical data across a range of security types including equities & ETFs, indices, and other tickers. The package supports flexible query capabilities, including customizable date ranges, multiple time intervals, and automatic data validation. It automatically manages interval-specific limitations, such as lookback periods for intraday data and maximum date ranges for minute-level intervals. The implementation leverages standard HTTP libraries to handle API interactions efficiently and provides support for both R and Python to ensure accessibility for a broad audience.
For any spending function specified by the user, this package provides corresponding boundaries for interim testing using the adaptively weighted log-rank test developed by Yang and Prentice (2010 <doi:10.1111/j.1541-0420.2009.01243.x>). The package uses a re-sampling method to obtain stopping boundaries at the interim looks. The output consists of stopping boundaries and observed values of the test statistics at the interim looks, along with nominal p-values defined as the probability of the test exceeding the specific observed test statistic value or critical value, regardless of the test behavior at other looks. The asymptotic validity of the stopping boundaries is established in Yang (2018 <doi:10.1002/sim.7958>).
Asks Yes-No questions with variable or custom responses.
This package performs nearest neighbor-based imputation using one or more alternative approaches to processing multivariate data. These include methods based on canonical correlation analysis, canonical correspondence analysis, and a multivariate adaptation of the random forest classification and regression techniques of Leo Breiman and Adele Cutler. Additional methods are also offered. The package includes functions for comparing the results from running alternative techniques, detecting imputation targets that are notably distant from reference observations, detecting and correcting for bias, bootstrapping and building ensemble imputations, and mapping results.
Analyzing performances of cricketers and cricket teams based on YAML match data from Cricsheet <https://cricsheet.org/>.
This package provides a fast JSON parser, generator and validator which converts JSON, NDJSON (Newline Delimited JSON) and GeoJSON (Geographic JSON) data to/from R objects. The standard R data types are supported (e.g. logical, numeric, integer) with configurable handling of NULL and NA values. Data frames, atomic vectors and lists are all supported as data containers translated to/from JSON. GeoJSON data is read in as simple features objects. This implementation wraps the yyjson C library which is available from <https://github.com/ibireme/yyjson>.
Obtain historical and near real-time data related to stocks, indices and currencies from the Yahoo Finance API. This package is community maintained and is not officially supported by Yahoo. The data is only as accurate as that provided on <https://finance.yahoo.com/>.
This package provides a suite of statistics for identifying areas of the genome under selective pressure. See Jacobs, Sluckin and Kivisild (2016) <doi:10.1534/genetics.115.185900>.
Facilitates making a connection to the Zoom API and executing various queries. You can use it to get data on Zoom webinars and Zoom meetings. The Zoom documentation is available at <https://developers.zoom.us/docs/api/>. This package is not supported by Zoom (owner of the software).
This package provides a collection of utility functions that facilitate looking up vector values from a lookup table, annotate values in a table for clearer viewing, and support a safer approach to vector sampling, sequence generation, and aggregation.
This package provides functions to compute compositional turnover using zeta-diversity, the number of species shared by multiple assemblages. The package includes functions to compute zeta-diversity for a specific number of assemblages and for a range of numbers of assemblages. It also includes functions to explain how zeta-diversity varies with distance and with differences in environmental variables between assemblages, using generalised linear models, linear models with negative constraints, generalised additive models, shape constrained additive models, and I-splines.
Improve the usage of model-fitting functions within a piped workflow.
Estimation methods for zero-inflated Poisson factor analysis (ZIPFA) on sparse data. The package estimates coefficients in a new type of zero-inflated regression, provides a cross-validation method to determine the potential rank of the data, and conducts zero-inflated Poisson factor analysis based on the determined rank.