Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
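For example, the endpoint can be queried from any HTTP client. The following is a minimal sketch using Python's standard library; the base URL below is a placeholder for this site's actual host, and the response body is assumed to be JSON:

    import json
    import urllib.request

    BASE = "https://example.org"  # placeholder host -- substitute the real base URL
    url = f"{BASE}/api/packages?search=hello&page=1&limit=20"

    with urllib.request.urlopen(url) as resp:
        # Pagination details (e.g. the number of pages) arrive in the response headers.
        print(dict(resp.getheaders()))
        # The body is assumed to be JSON describing the matching packages.
        print(json.loads(resp.read()))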
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements tidy syllabification of transcription. It is based on @kylebgorman's Python implementation <https://github.com/kylebgorman/syllabify>.
Pleiotropy-informed significance analysis of genome-wide association studies (GWAS) with surrogate functional false discovery rates (sfFDR). The sfFDR framework adapts the fFDR to leverage informative data from multiple sets of GWAS summary statistics to increase power in a study while accounting for linkage disequilibrium. sfFDR provides estimates of key FDR quantities in a significance analysis, such as the functional local FDR and q-value, and uses these estimates to derive a functional p-value for type I error rate control and a functional local Bayes factor for post-GWAS analyses (e.g., fine mapping and colocalization). The sfFDR framework is described in Bass and Wallace (2024) <doi:10.1101/2024.09.24.24314276>.
The SALSO algorithm is an efficient randomized greedy search method to find a point estimate for a random partition based on a loss function and posterior Monte Carlo samples. The algorithm is implemented for many loss functions, including the Binder loss and a generalization of the variation of information loss, both of which allow for unequal weights on the two types of clustering mistakes. Efficient implementations are also provided for Monte Carlo estimation of the posterior expected loss of a given clustering estimate. See Dahl, Johnson, Müller (2022) <doi:10.1080/10618600.2022.2069779>.
This package provides functionality to fit a zero-inflated estimator for small area estimation. This estimator combines a linear mixed effects regression model and a logistic mixed effects regression model via a two-stage modeling approach. The estimator's mean squared error is estimated via a parametric bootstrap method. Chandra and others (2012, <doi:10.1080/03610918.2011.598991>) introduce and describe this estimator and its mean squared error estimator. White and others (2024+, <doi:10.48550/arXiv.2402.03263>) describe the applicability of this estimator to the estimation of forest attributes and further assess the estimator's properties.
Implements a GAM-based (Generalized Additive Models) spatial surplus production model (spatial SPM), aimed at modeling the northern shrimp population in Atlantic Canada but potentially applicable to any stock in any location. The package is opinionated in its implementation of SPMs, as it internally makes the choice to use penalized spatial GAMs with time lags. However, it also aims to provide options for the user to customize their model. The methods are described in Pedersen et al. (2022, <https://www.dfo-mpo.gc.ca/csas-sccs/Publications/ResDocs-DocRech/2022/2022_062-eng.html>).
An implementation of W3C WebDriver 2.0 (<https://w3c.github.io/webdriver/>), allowing interaction with a Selenium Server (<https://www.selenium.dev/documentation/grid/>) instance from R. Allows a web browser to be automated from R.
This package provides functions to estimate a strategic selection estimator. A strategic selection estimator is an agent error model in which the two random components are not assumed to be orthogonal. In addition, this package provides generic functions to print and plot objects of its class, as well as the necessary functions to create tables for LaTeX. There is also a function to create dyadic data sets.
This package provides a function that behaves much like base::source() but adds a project-based, on-disk caching mechanism. It allows you to quasi-source() R scripts that gather data but may fail or take too much time to run, even when nothing new is expected. It comes with tools to check the cache and to execute the script on demand or when the cache is invalid.
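The underlying idea can be illustrated language-agnostically. The sketch below (in Python, purely illustrative and not this package's API) caches the result of a slow data-gathering step on disk and re-runs it only when the cache is missing or explicitly forced:

    import os
    import pickle

    def cached_run(cache_path, gather, force=False):
        # Return the cached result if present; otherwise run `gather` (the slow
        # or failure-prone step) and store its result on disk for next time.
        if not force and os.path.exists(cache_path):
            with open(cache_path, "rb") as fh:
                return pickle.load(fh)
        result = gather()
        with open(cache_path, "wb") as fh:
            pickle.dump(result, fh)
        return result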
The SRT file is a common subtitle format for videos; it contains the subtitle text and the times at which each subtitle is shown. This package aligns the timing of SRT files and can also change the color, style, and position of subtitles in videos. The SRT file is read into R as a vector and can be written back to an SRT file after modification.
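As an illustration of the underlying operation (not this package's API), shifting every timestamp in an SRT file by a fixed offset can be sketched in Python as follows:

    import datetime as dt
    import re

    TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")  # hh:mm:ss,mmm

    def _shift_match(match, offset):
        h, m, s, ms = (int(g) for g in match.groups())
        t = dt.timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + offset
        total_ms = int(t.total_seconds() * 1000)
        h, rem = divmod(total_ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    def shift_srt(lines, seconds):
        # Shift all "hh:mm:ss,mmm" timestamps in the given SRT lines by `seconds`.
        offset = dt.timedelta(seconds=seconds)
        return [TIME.sub(lambda m: _shift_match(m, offset), line) for line in lines]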
The synchrosqueezed wavelet transform is implemented. The package is a translation of the MATLAB Synchrosqueezing Toolbox, version 1.1, originally developed by Eugene Brevdo (2012). The C code for curve_ext was authored by Jianfeng Lu and translated to Fortran by Dongik Jang. Synchrosqueezing is based on the papers: [1] Daubechies, I., Lu, J. and Wu, H.-T. (2011) Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Applied and Computational Harmonic Analysis, 30, 243-261. [2] Thakur, G., Brevdo, E., Fučkar, N. S. and Wu, H.-T. (2013) The Synchrosqueezing algorithm for time-varying spectral analysis: Robustness properties and new paleoclimate applications. Signal Processing, 93, 1079-1094.
Calculate Kernel Density Estimation (KDE) for spatial data. The algorithm is inspired by the Heatmap tool from QGIS. The method is described by: Hart, T., Zandbergen, P. (2014) <doi:10.1108/PIJPSM-04-2013-0039>, Nelson, T. A., Boots, B. (2008) <doi:10.1111/j.0906-7590.2008.05548.x>, Chainey, S., Tompson, L., Uhlig, S. (2008) <doi:10.1057/palgrave.sj.8350066>.
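For reference, the standard bivariate kernel density estimate underlying such heatmaps has the form (for n observed points x_i, bandwidth/search radius h, and kernel K):

    \hat{f}(x) = \frac{1}{n h^{2}} \sum_{i=1}^{n} K\!\left(\frac{\lVert x - x_{i} \rVert}{h}\right)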
Characterize daily stream discharge and water quality data and subsample water quality data. Provide dates, discharge, and water quality measurements, and streamsampler can find gaps, compute summary statistics, and subsample according to common stream sampling protocols. Stream sampling protocols are described in Lee et al. (2016) <doi:10.1016/j.jhydrol.2016.08.059> and Lee et al. (2019) <doi:10.3133/sir20195084>.
Generate synthetic time series from commonly used statistical models, including linear, nonlinear and chaotic systems. Applications to testing methods can be found in Jiang, Z., Sharma, A., & Johnson, F. (2019) <doi:10.1016/j.advwatres.2019.103430> and Jiang, Z., Sharma, A., & Johnson, F. (2020) <doi:10.1029/2019WR026962> associated with an open-source tool by Jiang, Z., Rashid, M. M., Johnson, F., & Sharma, A. (2020) <doi:10.1016/j.envsoft.2020.104907>.
Estimation of two-state (survival) models and irreversible illness-death models with possibly interval-censored, left-truncated and right-censored data. Proportional intensities regression models can be specified to allow for covariate effects separately for each transition. We use either a parametric approach with Weibull baseline intensities or a semi-parametric approach with M-splines approximation of the baseline intensities in order to obtain smooth estimates of the hazard functions. Parameter estimates are obtained by maximum likelihood in the parametric approach and by penalized maximum likelihood in the semi-parametric approach.
This package provides functions for creating and annotating a composite plot in ggplot2. Offers background themes and shortcut plotting functions that produce figures that are appropriate for the format of scientific journals. Some methods are described in Min and Zhou (2021) <doi:10.3389/fgene.2021.802894>.
This package is intended to help with learning in an interactive way, providing examples together with the possibility of solving new ones at the same time. Interactive class notes.
Implementation of various methods for estimating species richness or diversity described in Wang (2011) <doi:10.18637/jss.v040.i09>.
Generalized additive models under shape constraints on the component functions of the linear predictor. Models can include multiple shape-constrained (univariate and bivariate) and unconstrained terms. Routines of the package mgcv are used to set up the model matrix and to print and plot the results. Multiple smoothing parameters are estimated by Generalized Cross Validation or similar criteria. See Pya and Wood (2015) <doi:10.1007/s11222-013-9448-7> for an overview. The package offers a broad selection of shape-constrained smoothers, linear functionals of smooths with shape constraints, and Gaussian models with AR1 residuals.
Machine learning provides algorithms that can learn from data and make inferences or predictions. Stochastic automata are a class of input/output devices which can model components. This work provides an implementation of an inference algorithm for stochastic automata which is similar to the Viterbi algorithm. Moreover, we specify a learning algorithm using the expectation-maximization technique and provide a more efficient implementation of the Baum-Welch algorithm for stochastic automata. This work is based on 'Inference and learning in stochastic automata' by Karl-Heinz Zimmermann (2017) <doi:10.12732/ijpam.v115i3.15>.
Functionality for spatio-temporal modeling of large data sets is provided. A Gaussian process in space and time is defined through a stochastic partial differential equation (SPDE). The SPDE is solved in the spectral space, and after discretizing in time and space, a linear Gaussian state space model is obtained. When doing inference, the main computational difficulty consists in evaluating the likelihood and in sampling from the full conditional of the spectral coefficients, or equivalently, the latent space-time process. In comparison to the traditional approach of using a spatio-temporal covariance function, the spectral SPDE approach is computationally advantageous. See Sigrist, Kuensch, and Stahel (2015) <doi:10.1111/rssb.12061> for more information on the methodology. This package aims at providing tools for two different modeling approaches. First, the SPDE based spatio-temporal model can be used as a component in a customized hierarchical Bayesian model (HBM). The functions of the package then provide parameterizations of the process part of the model as well as computationally efficient algorithms needed for doing inference with the HBM. Alternatively, the adaptive MCMC algorithm implemented in the package can be used as an algorithm for doing inference without any additional modeling. The MCMC algorithm supports data that follow a Gaussian or a censored distribution with point mass at zero. Covariates can be included in the model through a regression term.
Capable of deriving seasonal statistics, such as "normals", and analyzing seasonal data, such as departures. This package also has graphics capabilities for representing seasonal data, including boxplots for seasonal parameters and bars for summed normals. There are many specific functions related to climatology, including precipitation normals, temperature normals, cumulative precipitation departures and precipitation interarrivals. However, this package is designed to represent any time-varying parameter with a discernible seasonal signal, such as found in hydrology and ecology.
This package provides peak functions, which enable us to detect peaks in time series. The methods implemented in this package are based on Girish Keshav Palshikar (2009) <https://www.researchgate.net/publication/228853276_Simple_Algorithms_for_Peak_Detection_in_Time-Series>.
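As a concrete illustration of that approach, the sketch below implements Palshikar's S1 "spike" score in Python with NumPy (an illustration of the general idea, not necessarily the exact functions this package implements): each point is scored by the average of its maximum rise over k neighbours on each side, and points scoring well above the mean of positive scores are kept as peaks.

    import numpy as np

    def s1_scores(x, k=3):
        # Palshikar's S1 peak function: average of the maximum rise of x[i]
        # over its k left neighbours and its k right neighbours.
        x = np.asarray(x, dtype=float)
        scores = np.zeros(len(x))
        for i in range(k, len(x) - k):
            left = x[i] - x[i - k:i]
            right = x[i] - x[i + 1:i + k + 1]
            scores[i] = (left.max() + right.max()) / 2.0
        return scores

    def find_peaks(x, k=3, h=1.5):
        # Keep points whose score exceeds the mean of the positive scores by h
        # standard deviations (the original method also merges nearby peaks).
        s = s1_scores(x, k)
        pos = s[s > 0]
        return np.where(s > pos.mean() + h * pos.std())[0]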
Efficiently computes the entire solution paths for Support Vector Regression (SVR) with respect to the regularization parameter lambda and the epsilon in the epsilon-insensitive loss function. We call these path algorithms svrpath and epspath. See Wang, G. et al. (2008) <doi:10.1109/TNN.2008.2002077> for details regarding the method.
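For reference, the epsilon-insensitive loss to which the epsilon path refers is the standard SVR loss

    L_{\varepsilon}\bigl(y, f(x)\bigr) = \max\bigl\{0,\ \lvert y - f(x) \rvert - \varepsilon\bigr\}

so residuals smaller than epsilon incur no loss.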
Generates region-specific Suess and Laws corrections for stable carbon isotope data from marine organisms collected between 1850 and 2023. Version 0.1.6 of SuessR contains four built-in regions: the Bering Sea ('Bering Sea'), the Aleutian archipelago ('Aleutian Islands'), the Gulf of Alaska ('Gulf of Alaska'), and the subpolar North Atlantic ('Subpolar North Atlantic'). Users can supply their own environmental data for regions currently not built into the package to generate corrections for those regions.