Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
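For example, the endpoint can be queried from R with the httr package (the host below is a placeholder; replace it with this site's address):

library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
headers(resp)            # pagination details arrive in the response headers
content(resp, "parsed")  # the matching packages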
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Make R scripts reproducible by ensuring that, every time a given script is run, the same versions of the packages it uses are loaded (instead of whichever versions the user running the script happens to have installed). This is achieved by using the command groundhog.library() instead of the base command library() and including a date in the call. The date is used to load the same version of the package every time (the most recent version available at that date). Packages can be loaded from CRAN, GitHub, or GitLab.
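An illustrative call (the package name and date are arbitrary examples):

library(groundhog)
# Loads the version of dplyr that was current on CRAN on 2023-06-01,
# so every run of the script uses the same package version.
groundhog.library("dplyr", "2023-06-01")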
Generate multiple data sets for educational purposes to demonstrate the importance of multiple regression. The genset function generates a data set from an initial data set to have the same summary statistics (mean, median, and standard deviation) but opposing regression results.
This package implements the GAMbag, GAMrsm and GAMens ensemble classifiers for binary classification (De Bock et al., 2010) <doi:10.1016/j.csda.2009.12.013>. The ensembles implement Bagging (Breiman, 1996) <doi:10.1023/A:1010933404324>, the Random Subspace Method (Ho, 1998) <doi:10.1109/34.709601>, or both, and use Hastie and Tibshirani's (1990, ISBN:978-0412343902) generalized additive models (GAMs) as base classifiers. Once an ensemble classifier has been trained, it can be used for predictions on new data. A function for cross validation is also included.
Give advice about good practices when building R packages. Advice includes functions and syntax to avoid, package structure, code complexity, code formatting, etc.
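This description matches the goodpractice package; a minimal sketch assuming its gp() checker (the path is a placeholder):

library(goodpractice)
checks <- gp("path/to/your/package")   # run the checks on a package source directory
checks                                 # print the resulting advice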
Reads annual financial reports including assets, liabilities, dividend history, stockholder composition and much more from Bovespa's DFP, FRE and FCA systems <http://www.b3.com.br/pt_br/produtos-e-servicos/negociacao/renda-variavel/empresas-listadas.htm>. These are web-based interfaces for all financial reports of companies traded at Bovespa. The package is specially designed for large-scale data importation, keeping a tabular (long) structure for easier processing.
This package provides a simple-to-use, intuitive, and extensible interface to several stochastic simulation algorithms for generating simulated trajectories of finite-population continuous-time models. Currently it implements Gillespie's exact stochastic simulation algorithm (Direct method) and several approximate methods (Explicit tau-leap, Binomial tau-leap, and Optimized tau-leap). The package also contains a library of template models that can be run as demo models and can easily be customized and extended. Currently the following models are included: the Decaying-Dimerization reaction set, a linear chain system, a logistic growth model, the Lotka predator-prey model, the Rosenzweig-MacArthur predator-prey model, the Kermack-McKendrick SIR model, and a metapopulation SIRS model. See Pineda-Krch et al. (2008) <doi:10.18637/jss.v025.i12>.
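The template-model list matches the GillespieSSA package; a minimal sketch assuming its ssa() interface, simulating the logistic growth template model with the default Direct method (all values are illustrative, and the return structure is an assumption):

library(GillespieSSA)

parms <- c(b = 2, d = 1, K = 1000)              # birth rate, death rate, carrying capacity
x0    <- c(N = 500)                             # initial population size
a     <- c("b*N", "(d + (b - d) * N / K) * N")  # propensity functions for birth and death
nu    <- matrix(c(+1, -1), ncol = 2)            # state-change matrix: one row per state variable
out   <- ssa(x0, a, nu, parms, tf = 10)         # simulate up to time 10 with the Direct method
head(out$data)                                  # simulated trajectory (time and population size)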
This package provides tools to access, search, and download global 3D building footprint datasets (3D-GloBFP) generated by Che et al. (2024) <doi:10.5194/essd-16-5357-2024>. The package includes functions to retrieve metadata, filter by bounding box, and download building height tiles.
Create a user-friendly plotting GUI for R. The package also makes it easier for third-party software to call R for drawing; for example, the Phoenix WinNonlin software calls R to draw the drug concentration versus time curve.
Estimates grid-type bivariate copula functions, calculates some association measures, and provides several copula graphics.
Estimation and inference using the Generalized Maximum Entropy (GME) and Generalized Cross Entropy (GCE) framework, a flexible method for solving ill-posed inverse problems and parameter estimation under uncertainty (Golan, Judge, and Miller (1996, ISBN:978-0471145925) "Maximum Entropy Econometrics: Robust Estimation with Limited Data"). The package includes routines for generalized cross entropy estimation of linear models, including the implementation of a two-step GME-GCE approach. Diagnostic tools and options to incorporate prior information through support and prior distributions are available (Macedo, Cabral, Afreixo, Macedo and Angelelli (2025) <doi:10.1007/978-3-031-97589-9_21>). In particular, support spaces can be defined by the user or be internally computed based on the ridge trace or on the distribution of standardized regression coefficients. Different optimization methods for the objective function can be used. An adaptation of the normalized entropy aggregation (Macedo and Costa (2019) <doi:10.1007/978-3-030-26036-1_2> "Normalized entropy aggregation for inhomogeneous large-scale data") and a two-stage maximum entropy approach for time series regression (Macedo (2022) <doi:10.1080/03610918.2022.2057540>) are also available. Suitable for applications in econometrics, health, signal processing, and other fields requiring robust estimation under data constraints.
Geographically Dependent Individual Level Models (GDILMs) within the Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS) framework are applied to model infectious disease transmission, incorporating reinfection dynamics. This package employs a likelihood-based Monte Carlo Expectation Conditional Maximization (MCECM) algorithm for estimating model parameters. It also provides tools for GDILM fitting, parameter estimation, AIC calculation on real pandemic data, and simulation studies customized to user-defined model settings.
This package performs genomic mediation analysis with adaptive confounding adjustment (GMAC) proposed by Yang et al. (2017) <doi:10.1101/gr.216754.116>. It implements large scale mediation analysis and adaptively selects potential confounding variables to adjust for each mediation test from a pool of candidate confounders. The package is tailored for but not limited to genomic mediation analysis (e.g., cis-gene mediating trans-gene regulation pattern where an eQTL, its cis-linking gene transcript, and its trans-gene transcript play the roles of treatment, mediator, and outcome, respectively), restricting to scenarios with the presence of cis-association (i.e., treatment-mediator association) and random eQTL (i.e., treatment).
Density, distribution function, quantile function and random generation for the Generalized Binomial Distribution. Functions to compute the Clopper-Pearson Confidence Interval and the required sample size. Enhanced model for burn-in studies, where failures are tackled by countermeasures.
Turn irregular polygons (such as geographical regions) into regular or hexagonal grids. This package enables the generation of regular (square) and hexagonal grids through the package sp and then assigns the content of the existing polygons to the new grid using the Hungarian algorithm, Kuhn (1955) <doi:10.1007/978-3-540-68279-0_2>. This removes the need to manually generate hexagonal or regular grids that are meant to reflect existing geography.
An implementation of the International Bureau of Weights and Measures (BIPM) generalized consensus estimators used to assign the reference value in a key comparison exercise; it can also be applied to any interlaboratory study. Given a set of different sources (primary laboratories or measurement methods), this package evaluates the variance components according to the selected statistical method for consensus building. It also implements comparisons among different consensus builders and evaluates the participating methods or sources against the consensus reference value. The methods are based on a diverse set of references, e.g. DerSimonian and Laird (1986) <doi:10.1016/0197-2456(86)90046-2>; for a complete list, see the reference section in the package documentation.
Send error reports to the Google Error Reporting service <https://cloud.google.com/error-reporting/> and view errors and assign error status in the Google Error Reporting user interface.
A variety of multivariable data summary statistics and constructions have been proposed, either to generalize univariable analogs or to exploit multivariable properties. Notable among these are the bivariate peelings surveyed by Green (1981, ISBN:978-0-471-28039-2), the bag-and-bolster plots proposed by Rousseeuw et al. (1999) <doi:10.1080/00031305.1999.10474494>, and the minimum spanning trees used by Jolliffe (2002) <doi:10.1007/b98835> to represent high-dimensional relationships among data in a low-dimensional plot. Additionally, biplots of singular-value-decomposed tabular data, such as from principal components analysis, make use of vectors, calibrated axes, and other representations of variable elements to complement point markers for case elements; see Gabriel (1971) <doi:10.1093/biomet/58.3.453> and Gower & Harding (1988) <doi:10.1093/biomet/75.3.445> for original proposals. Because they treat the abscissa and ordinate as commensurate or the data elements themselves as point masses or unit vectors, these multivariable tools can be thought of as belonging to geometric data analysis; see Podani (2000, ISBN:90-5782-067-6) for techniques and applications and Le Roux & Rouanet (2005) <doi:10.1007/1-4020-2236-0> for foundations. gggda extends Wickham's (2010) <doi:10.1198/jcgs.2009.07098> layered grammar of graphics with statistical transformation ("stat") and geometric construction ("geom") layers for many of these tools, as well as convenience coordinate systems to emphasize the intrinsic geometry of the data.
This package provides tools for systematically exploring large quantities of temporal data across cyclic temporal granularities (deconstructions of time) by visualizing probability distributions. Cyclic time granularities can be circular, quasi-circular or aperiodic. gravitas computes cyclic single-order-up or multiple-order-up granularities, checks the feasibility of creating plots for any two cyclic granularities, and recommends probability distribution plots for exploring periodicity in the data.
This package provides methods and tools for the analysis of Genome Wide Identity-by-Descent ('gwid') mapping data, focusing on testing whether there is a higher occurrence of Identity-By-Descent (IBD) segments around potential causal variants in cases compared to controls, which is crucial for identifying rare variants. To enhance its analytical power, gwid incorporates a Sliding Window Approach, allowing for the detection and analysis of signals from multiple Single Nucleotide Polymorphisms (SNPs).
This package provides a toolkit with functions to fit, plot, summarize, and apply Generalized Dissimilarity Models. See Mokany K, Ware C, Woolley SNC, Ferrier S, Fitzpatrick MC (2022) <doi:10.1111/geb.13459> and Ferrier S, Manion G, Elith J, Richardson K (2007) <doi:10.1111/j.1472-4642.2007.00341.x>.
Saves a ggplot object into multiple files, each with a layer added incrementally. Generally to be used in presentation slides. Flexible enough to allow different file types for the final complete plot and the intermediate builds.
Fit generalized linear mixed models (GLMMs) with normal random effects using first-order Laplace, fully exponential Laplace (FEL) with mean-only corrections, and FEL with mean and covariance corrections in the E-step of an expectation-maximization (EM) algorithm. The current development version provides a matrix-based interface (y, X, Z) and supports binary logit and probit, and Poisson log-link models. An EM framework is used to update fixed effects, random effects, and a single variance component tau^2 for G = tau^2 I, with staged approximations (Laplace -> FEL mean-only -> FEL full) for efficiency and stability. A pseudo-likelihood engine glmmFEL_pl() implements the working-response / working-weights linearization approach of Wolfinger and O'Connell (1993) <doi:10.1080/00949659308811554>, and is adapted from the implementation used in the RealVAMS package (Broatch, Green, and Karl (2018)) <doi:10.32614/RJ-2018-033>. The FEL implementation follows Karl, Yang, and Lohr (2014) <doi:10.1016/j.csda.2013.11.019> and related work (e.g., Tierney, Kass, and Kadane (1989) <doi:10.1080/01621459.1989.10478824>; Rizopoulos, Verbeke, and Lesaffre (2009) <doi:10.1111/j.1467-9868.2008.00704.x>; Steele (1996) <doi:10.2307/2532845>). Package code was drafted with assistance from generative AI tools.
Routines that allow the user to run goodness-of-fit tests based on empirical distribution functions for formal model evaluation in a general likelihood model. In addition, functions are provided to test if a sample follows Normal or Gamma distributions, validate the normality assumptions in a linear model, and examine the appropriateness of a Gamma distribution in generalized linear models with various link functions. See Michael Arthur Stephens (1976) <http://www.jstor.org/stable/2958206>.
The genetic algorithm can be used directly to find the similarity of users and, more effectively, to increase the efficiency of the collaborative filtering method. By identifying the active user's nearest neighbors before running the genetic algorithm, and by identifying suitable starting points, an effective method for user-based collaborative filtering has been developed. This package uses an optimization algorithm (a continuous genetic algorithm) to directly find the optimal similarities between active users (users for whom recommendations are currently being made) and other users. First, the nearest neighbors and their number are determined, which fixes the number of genes in a chromosome; each gene represents a neighbor's similarity to the active user. By estimating good starting points for the genetic algorithm, it converges quickly to the optimal solutions. A further advantage is that the genetic algorithm does not depend on the amount of data, which is a real help when working with big data.
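The idea can be illustrated generically (this is not this package's interface): a real-valued genetic algorithm, here the ga() function from the GA package, searches for similarity weights between an active user and its nearest neighbors by minimizing the rating-prediction error on items the active user has already rated; all data below are simulated placeholders.

library(GA)

set.seed(1)
k <- 5                                                                       # number of nearest neighbors = genes per chromosome
neighbour_ratings <- matrix(sample(1:5, k * 10, replace = TRUE), nrow = k)   # ratings of k neighbors on 10 items
active_ratings    <- sample(1:5, 10, replace = TRUE)                         # the active user's own ratings

fitness <- function(w) {
  pred <- colSums(w * neighbour_ratings) / sum(w)   # weighted-average prediction for each item
  -sum((active_ratings - pred)^2)                   # ga() maximizes, so return the negated squared error
}

res <- ga(type = "real-valued", fitness = fitness,
          lower = rep(0.01, k), upper = rep(1, k), maxiter = 50)
res@solution                                        # the optimized similarity weights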