Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
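For example, the request above can be issued from a script. The sketch below uses Python with the requests library; the base URL is a placeholder (substitute this site's host), and the response body is assumed to be JSON.

    import requests

    # Placeholder base URL -- replace with the host serving this search API.
    BASE_URL = "https://example.org"

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination information (such as the number of pages) is in the response
    # headers; the body holds the matching packages for this page.
    print(resp.headers)
    print(resp.json())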
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools for modeling non-continuous linear responses of ecological communities to environmental data. The workflow is straightforward and proceeds in three steps: (1) data ordering (function OrdData()), (2) split-moving-window analysis (function SMW()), and (3) piecewise redundancy analysis (function pwRDA()). Relevant references include Cornelius and Reynolds (1991) <doi:10.2307/1941559> and Legendre and Legendre (2012, ISBN: 9780444538697).
An exact method for computing the Poisson-Binomial Distribution (PBD). The package provides a function for generating a random sample from the PBD, as well as two distinct approaches for computing the density, distribution, and quantile functions of the PBD. The first method uses a direct-convolution (dynamic-programming) approach, which is numerically stable but can be slow for large inputs due to its quadratic complexity. The second method is much faster on large inputs thanks to its use of Fast Fourier Transform (FFT) based convolutions. Notably, in this case the package uses an exponential shift to practically guarantee the relative accuracy of the computation of an arbitrarily small tail of the PBD -- something that FFT-based methods often struggle with. This ShiftConvolvePoiBin method is described in Peres, Lee and Keich (2020) <arXiv:2004.07429>, where it is also shown to be competitive with the fastest implementations for exactly computing the entire Poisson-Binomial distribution.
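As an illustration of the direct-convolution (dynamic-programming) recurrence described above -- not the package's own implementation -- here is a minimal Python sketch:

    import numpy as np

    def poisbinom_pmf(p):
        # Probability mass function of the Poisson-binomial distribution for
        # success probabilities p[0], ..., p[n-1], built by direct convolution.
        # Quadratic in n, but numerically stable.
        pmf = np.array([1.0])
        for pi in p:
            # Convolve the current pmf with the Bernoulli(pi) pmf (1 - pi, pi).
            pmf = np.convolve(pmf, [1.0 - pi, pi])
        return pmf  # pmf[k] = P(K = k) for k = 0, ..., n

    # Example: three independent trials with different success probabilities.
    print(poisbinom_pmf([0.2, 0.5, 0.9]))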
Translates character vectors of Spanish spelled-out monetary quantities into numerical amounts in euros, and performs the reverse translation from integers to Spanish text. The upper limit is in the millions range. Also provides geocoding via the Cadastral web site.
Interface to the Sensor Tower API <https://app.sensortower.com/api/docs/app_analysis> for mobile app analytics and market intelligence. Provides functions to retrieve app metadata, publisher information, download and revenue estimates, active user metrics, category rankings, and market trends. The package includes data processing utilities to clean and aggregate metrics across platforms, automatic app name resolution, and tools for generating professional analytics dashboards. Supports both iOS and Android app ecosystems with unified data structures for cross-platform analysis.
The current version of this package estimates spatial autoregressive models for binary dependent variables using GMM estimators <doi:10.18637/jss.v107.i08>. It supports the one-step (Pinkse and Slade, 1998) <doi:10.1016/S0304-4076(97)00097-3> and two-step GMM estimators, along with the linearized GMM estimator proposed by Klier and McMillen (2008) <doi:10.1198/073500107000000188>. It also allows for either a Probit or Logit model and computes the average marginal effects. All these models are presented in Sarrias and Piras (2023) <doi:10.1016/j.jocm.2023.100432>.
Shiny wrappers for the RGL package. This package exposes RGL's ability to export WebGL visualization in a shiny-friendly format.
Sentiment Analysis via deep learning and gradient boosting models with a lot of the underlying hassle taken care of to make the process as simple as possible. In addition to out-performing traditional, lexicon-based sentiment analysis (see <https://benwiseman.github.io/sentiment.ai/#Benchmarks>), it also allows the user to create embedding vectors for text which can be used in other analyses. GPU acceleration is supported on Windows and Linux.
This package provides a sensitivity analysis approach for unmeasured confounding in observational data with multiple treatments and a binary outcome. This approach derives the general bias formula and provides adjusted causal effect estimates in response to various assumptions about the degree of unmeasured confounding. Nested multiple imputation is embedded within the Bayesian framework to integrate uncertainty about the sensitivity parameters and sampling variability. Bayesian Additive Regression Trees (BART) are used for outcome modeling. The causal estimands are the conditional average treatment effects (CATE) based on the risk difference. For more details, see Hu et al. (2020), "A flexible sensitivity analysis approach for unmeasured confounding with multiple treatments and a binary outcome with application to SEER-Medicare lung cancer data" <arXiv:2012.06093>.
An efficient tool for fitting the nested common and shared atoms models using variational Bayes approximate inference for fast computation. Specifically, the package implements the common atoms model (Denti et al., 2023), its finite version (D'Angelo et al., 2023), and a hybrid finite-infinite model. All models use Gaussian mixtures with a normal-inverse-gamma prior distribution on the parameters. Additional functions are provided to help analyze the results of the fitting procedure. References: Denti, Camerlenghi, Guindani, Mira (2023) <doi:10.1080/01621459.2021.1933499>, D'Angelo, Canale, Yu, Guindani (2023) <doi:10.1111/biom.13626>.
The methods in this package are new non-parametric methods based on sequential normal scores (SNS; Conover et al. (2017) <doi:10.1080/07474946.2017.1360091>), designed for sequences of observations, usually time series data, which may occur singly or in batches and may be univariate or multivariate. These methods are designed to detect changes in the process, which may occur as changes in location (mean or median), changes in scale (standard deviation or variance), or other changes of interest in the distribution of the observations over the time observed. They usually apply to large data sets, so computations need to be simple enough to be done in a reasonable time on a computer and easily updated as each new observation (or batch of observations) becomes available. Some examples and more detail on SNS are presented in Conover et al. (2019) <arXiv:1901.04443>.
The goal of snpsettest is to provide simple tools that perform set-based association tests (e.g., gene-based association tests) using GWAS (genome-wide association study) summary statistics. A set-based association test in this package is based on the statistical model described in VEGAS (versatile gene-based association study), which combines the effects of a set of SNPs accounting for linkage disequilibrium between markers. This package uses a different approach from the original VEGAS implementation to compute set-level p values more efficiently, as described in <https://github.com/HimesGroup/snpsettest/wiki/Statistical-test-in-snpsettest>.
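The VEGAS-style idea referred to above can be illustrated schematically: sum the squared SNP Z-scores in a set and compare that sum against sums simulated from a multivariate normal whose correlation matrix encodes the linkage disequilibrium between markers. The Python sketch below is only an illustration of that idea; snpsettest itself computes the set-level p value more efficiently, as described at the linked page.

    import numpy as np

    def vegas_like_p(z, ld, n_sim=100_000, seed=1):
        # z: observed SNP Z-scores for the set; ld: LD correlation matrix.
        # Test statistic: sum of squared Z-scores.
        observed = np.sum(np.asarray(z) ** 2)
        rng = np.random.default_rng(seed)
        # Simulate correlated null Z-scores and compute the same statistic.
        sims = rng.multivariate_normal(np.zeros(len(z)), ld, size=n_sim)
        null_stats = np.sum(sims ** 2, axis=1)
        # Monte Carlo p value (with the usual +1 correction).
        return (np.sum(null_stats >= observed) + 1) / (n_sim + 1)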
Computes the reliability of (normal) stress-strength models and builds two-sided or one-sided confidence intervals according to different approximate procedures.
Allows the creation and manipulation of C++ std::vector objects in R.
This package provides option settings management that goes beyond R's default options function. With this package, users can define their own option settings manager holding option names, default values and (if so desired) ranges or sets of allowed option values that will be automatically checked. Settings can then be retrieved, altered and reset to defaults with ease. For R programmers and package developers it offers cloning and merging functionality which allows for conveniently defining global and local options, possibly in a multilevel options hierarchy. See the package vignette for some examples concerning functions, S4 classes, and reference classes. There are convenience functions to reset par() and options() to their factory defaults.
Interactive shiny application for working with the Structural Equation Modelling (SEM) technique. Runtime examples are provided in the package functions as well as at <https://kartikeyab.shinyapps.io/semwebappk/>.
Univariate time series forecasting with an STL-decomposition-based Extreme Learning Machine hybrid model. For method details, see Xiong, Li, and Bao (2018) <doi:10.1016/j.neucom.2017.11.053>.
Computes multivariate normal (MVN) densities, and samples from MVN distributions, when the covariance or precision matrix is sparse.
This package implements a semi-supervised learning framework for finite mixture models under a mixed-missingness mechanism. The approach models both missing completely at random (MCAR) and entropy-based missing at random (MAR) processes using a logistic-entropy formulation. Estimation is carried out via an Expectation-Conditional Maximisation (ECM) algorithm with robust initialisation routines for stable convergence. The methodology relates to the statistical perspective and informative missingness behaviour discussed in Ahfock and McLachlan (2020) <doi:10.1007/s11222-020-09971-5> and Ahfock and McLachlan (2023) <doi:10.1016/j.ecosta.2022.03.007>. The package provides functions for data simulation, model estimation, prediction, and theoretical Bayes error evaluation for analysing partially labelled data under a mixed-missingness mechanism.
High-dimensional time-to-event data analysis with variable selection techniques. Currently supports LASSO, clustering, and Bonferroni correction.
This package implements the SparseStep model for solving regression problems with a sparsity constraint on the parameters. The SparseStep regression model was proposed in Van den Burg, Groenen, and Alfons (2017) <arXiv:1701.06967>. In the model, a regularization term is added to the regression problem which approximates the counting norm of the parameters. By iteratively improving the approximation, a sparse solution to the regression problem can be obtained. The package implements both the standard SparseStep algorithm and a path algorithm that uses golden section search to determine solutions with different values of the regularization parameter.
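To convey the general idea of replacing the counting norm with a smooth approximation that is tightened iteratively, here is a generic Python sketch. It is illustrative only: the surrogate penalty, update scheme, and path algorithm actually used by SparseStep are those given in the reference above.

    import numpy as np

    def smooth_l0_regression(X, y, lam=1.0, gammas=(1.0, 0.1, 0.01), n_iter=500):
        # Minimise ||y - X b||^2 + lam * sum_j b_j^2 / (b_j^2 + gamma^2)
        # by gradient descent, shrinking gamma so the penalty approaches the
        # counting ("L0") norm. A generic smooth-L0 illustration, not the
        # SparseStep updates themselves.
        n, p = X.shape
        b = np.zeros(p)
        step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 + lam)
        for gamma in gammas:
            for _ in range(n_iter):
                grad_fit = -2 * X.T @ (y - X @ b)
                grad_pen = lam * 2 * b * gamma**2 / (b**2 + gamma**2) ** 2
                b = b - step * (grad_fit + grad_pen)
        return b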
Implements a scan-statistic algorithm for estimating the transmission route on a railway network using passenger volume. It is a generalization of the scan statistic approach to railway networks, used to identify the hot railway routes for transmitting infectious diseases.
Tests and estimates of location, tests of independence, tests of sphericity, and several estimates of shape, all based on spatial signs, symmetrized signs, ranks, and signed ranks. For details, see Oja and Randles (2004) <doi:10.1214/088342304000000558> and Oja (2010) <doi:10.1007/978-1-4419-0468-3>.
This package provides a flexible framework for the definition and application of time/depth-based rules for sets of parameters for single grains that can be used to create artificial sediment profiles. Such profiles can be used for virtual sample preparation and for synthetic measurements, for instance luminescence measurements.
This package provides helper functions to compute linear predictors, time-dependent ROC curves, and Harrell's concordance index for Cox proportional hazards models as described in Therneau (2024) <https://CRAN.R-project.org/package=survival>, Therneau and Grambsch (2000, ISBN:0-387-98784-3), Hung and Chiang (2010) <doi:10.1002/cjs.10046>, Uno et al. (2007) <doi:10.1198/016214507000000149>, Blanche, Dartigues, and Jacqmin-Gadda (2013) <doi:10.1002/sim.5958>, Blanche, Latouche, and Viallon (2013) <doi:10.1007/978-1-4614-8981-8_11>, Harrell et al. (1982) <doi:10.1001/jama.1982.03320430047030>, Peto and Peto (1972) <doi:10.2307/2344317>, Schemper (1992) <doi:10.2307/2349009>, and Uno et al. (2011) <doi:10.1002/sim.4154>.