Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
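For example, here is a minimal sketch of calling the endpoint with Python's standard library; the base URL is a placeholder for this site's address, and the response body is assumed to be JSON:

import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org"  # placeholder: replace with this site's address

# Search for gcc@10, first page, 20 results per page.
params = urllib.parse.urlencode({"search": "gcc@10", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
    pagination = dict(resp.headers)  # pagination details are returned in the headers
    packages = json.load(resp)       # assumed JSON body with the matching packages

print(pagination)
print(packages)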
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analysis and filtering of phylogenomics datasets. It takes as input either a collection of gene trees (which are then transformed to matrices) or directly a collection of gene matrices, and performs an iterative process to identify which species in which genes are outliers whose elimination significantly improves the concordance between the input matrices. The method builds upon the Distatis approach (Abdi et al. (2005) <doi:10.1101/2021.09.08.459421>), a generalization of classical multidimensional scaling to multiple distance matrices.
Converts TXT and XML data curated by the United States Patent and Trademark Office (USPTO). Allows conversion of bulk data after downloading directly from the USPTO bulk data website, eliminating the need for users to wrangle multiple data formats to get large patent databases into a tidy, rectangular format. Data details can be found on the USPTO website <https://bulkdata.uspto.gov/>. Currently, all three formats (1. TXT data, 1976-2001; 2. XML format 1 data, 2002-2004; 3. XML format 2 data, 2005-current) can be converted to rectangular CSV format. Relevant literature that uses data from the USPTO includes Wada (2020) <doi:10.1007/s11192-020-03674-4> and Plaza & Albert (2008) <doi:10.1007/s11192-007-1763-3>.
Calculate seat apportionment for legislative bodies with various methods. The algorithms include divisor or highest averages methods (e.g. Jefferson, Webster or Adams), largest remainder methods and biproportional apportionment. Gaffke, N. & Pukelsheim, F. (2008) <doi:10.1016/j.mathsocsci.2008.01.004>; Oelbermann, K. F. (2016) <doi:10.1016/j.mathsocsci.2016.02.003>.
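As an illustration of the highest averages family, here is a minimal Python sketch of the Jefferson (D'Hondt) divisor rule; it is a generic illustration of the algorithm, not this package's own implementation:

def jefferson_apportion(votes, seats):
    # Allocate seats one at a time to the party with the highest
    # quotient votes / (seats already won + 1).
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Example: 10 seats among three parties -> {'A': 6, 'B': 3, 'C': 1}
print(jefferson_apportion({"A": 53000, "B": 31000, "C": 16000}, 10))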
Perform 1-dim/2-dim projection pursuit, grand tour and guided tour for big data based on data nuggets. Reference papers: [1] Beavers et al. (2024) <doi:10.1080/10618600.2024.2341896>. [2] Duan, Y., Cabrera, J., & Emir, B. (2023). "A New Projection Pursuit Index for Big Data." <doi:10.48550/arXiv.2312.06465>.
This package provides a small, dependency-free way to generate random names. Methods provided include the adjective-surname approach of Docker containers ('<https://github.com/moby/moby/blob/master/pkg/namesgenerator/names-generator.go>'), and combinations of common English or Spanish words.
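As an illustration, here is a minimal Python sketch of the adjective-surname approach; the word lists are illustrative placeholders, not the package's own:

import random

ADJECTIVES = ["admiring", "brave", "clever", "dreamy"]
SURNAMES = ["curie", "darwin", "hopper", "turing"]

def random_name():
    # Join a randomly chosen adjective and surname, Docker-container style.
    return f"{random.choice(ADJECTIVES)}_{random.choice(SURNAMES)}"

print(random_name())  # e.g. "clever_hopper"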
Calculates the pooled mean group (PMG) estimator for dynamic panel data models, as described by Pesaran, Shin and Smith (1999) <doi:10.1080/01621459.1999.10474156>.
This package provides functions to extract and handle commonly occurring principal phrases obtained from collections of texts. Major speed improvements: core functions rewritten in C++ for faster phrase-document parsing, clustering, and text distance computations. Based on Small, E., & Cabrera, J. (2025). Principal phrase mining, an automated method for extracting meaningful phrases from text. International Journal of Computers and Applications, 47(1), 84-92.
Threshold model, panel version of Hylleberg et al. (1990) <DOI:10.1016/0304-4076(90)90080-D> seasonal unit root tests, and panel unit root test of Chang (2002) <DOI:10.1016/S0304-4076(02)00095-7>.
Procedures for testing for group-wide signal in clusters of variables. Tests can be performed for single groups in isolation (univariate) or multiple groups together (multivariate). Specific tests include the exact and approximate (un)selective likelihood ratio tests described in Reid et al. (2015), and the selective F test and marginal screening prototype test of Reid and Tibshirani (2015). The user may pre-specify columns to be included in prototype formation, or allow the function to select them itself; a mixture of these two is also possible. Any variable selection is accounted for using the selective inference framework. Options are provided for non-sampling and hit-and-run null reference distributions.
This package provides tools for reshaping, plotting, and manipulating matrices of orthogonal polynomials.
This package contains functions to simulate the most commonly used SAS® procedures. Specifically, the package aims to simulate the functionality of 'proc freq', 'proc means', 'proc ttest', 'proc reg', 'proc transpose', 'proc sort', and 'proc print'. The simulation will include recreating all statistics with the highest fidelity possible.
Automate pharmacokinetic/pharmacodynamic bioanalytical procedures based on best practices and regulatory recommendations. The package imposes regulatory constraints and sanity checks for common bioanalytical procedures. Additionally, PKbioanalysis provides a relational infrastructure for plate management and injection sequences.
This package provides a set of Analysis Data Model (ADaM) datasets constructed by modifying the ADaM datasets in the pharmaverseadam package to meet J&J Innovative Medicine's standard data structure for Clinical and Statistical Programming.
This package provides a native R client library for querying the Prometheus time-series database, using the PromQL query language.
This package provides analytic and simulation tools to estimate the minimum sample size required for achieving a target prediction mean-squared error (PMSE) or a specified proportional PMSE reduction (pPMSEr) in linear regression models. Functions implement the criteria of Ma (2023) <https://digital.wpi.edu/downloads/0g354j58c>, support covariance-matrix handling, and include helpers for root-finding and diagnostic plotting.
The plsdof package provides Degrees of Freedom estimates for Partial Least Squares (PLS) Regression. Model selection for PLS is based on various information criteria (aic, bic, gmdl) or on cross-validation. Estimates for the mean and covariance of the PLS regression coefficients are available. They allow the construction of approximate confidence intervals and the application of test procedures (Kramer and Sugiyama 2012 <doi:10.1198/jasa.2011.tm10107>). Further, cross-validation procedures for Ridge Regression and Principal Components Regression are available.
This package provides a suite of functions that fit models that use PPM type priors for partitions. Models include hierarchical Gaussian and probit ordinal models with a (covariate dependent) PPM. If a covariate dependent product partition model is selected, then all the options detailed in Page, G.L.; Quintana, F.A. (2018) <doi:10.1007/s11222-017-9777-z> are available. If covariate values are missing, then the approach detailed in Page, G.L.; Quintana, F.A.; Mueller, P (2020) <doi:10.1080/10618600.2021.1999824> is employed. Also included in the package is a function that fits a Gaussian likelihood spatial product partition model that is detailed in Page, G.L.; Quintana, F.A. (2016) <doi:10.1214/15-BA971>, and multivariate PPM change point models that are detailed in Quinlan, J.J.; Page, G.L.; Castro, L.M. (2023) <doi:10.1214/22-BA1344>. In addition, a function that fits a univariate or bivariate functional data model that employs a PPM or a PPMx to cluster curves based on B-spline coefficients is provided.
Be responsible when scraping data from websites by following polite principles: introduce yourself, ask for permission, take slowly and never ask twice.
Search CRAN metadata about packages by keyword, popularity, recent activity, package name and more. Uses the R-hub search server (see <https://r-pkg.org>) and the CRAN metadata database, which contains information about CRAN packages. Note that this is _not_ a CRAN project.
Infer and visualize gene regulatory networks based on single-cell RNA sequencing pseudo-time information.
This package provides a simple way to grab a Bible proverb corresponding to the day of the month.
Large-scale phenotypic data processing is essential in research. Researchers need to eliminate outliers from the data in order to obtain true and reliable results. Best linear unbiased prediction (BLUP) is a standard method for estimating random effects of a mixed model. This method can be used to process phenotypic data under different conditions and is widely used in animal and plant breeding. The Phenotype package can remove outliers from phenotypic data and perform best linear unbiased prediction (BLUP), helping researchers quickly complete phenotypic data analysis. H.P. Piepho (2008) <doi:10.1007/s10681-007-9449-8>.
Parallelized version of the "segment" function from Bioconductor package "DNAcopy", utilizing multi-core computation on host CPU.
Enforces good practice and provides convenience functions to make work with JavaScript not just easier but also scalable. It is a robust wrapper to 'NPM', 'yarn', and webpack that makes it possible to compartmentalize JavaScript code, leverage NPM and yarn packages, include 'TypeScript', 'React', or Vue in web applications, and much more.