Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
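A minimal sketch of querying this endpoint from R with httr and jsonlite (the host name is a placeholder; substitute the address of this instance):

    library(httr)
    library(jsonlite)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    pkgs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    headers(resp)  # pagination information is returned in the response headers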
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools to expand vectors of short URLs into long URLs. No API services are used, which may mean that this operates more slowly than API services do (since they usually cache the results of expansions that every user of the service requests). You can set up your own caching layer with the memoise package if you wish to speed up repeated look-ups within a session, or add larger dependencies, such as Redis, to gain a longer-term performance boost at the expense of added complexity.
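A minimal sketch of such a session-level cache using memoise; the package and function names (longurl::expand_urls()) are shown only as an assumption about the expansion interface:

    library(memoise)
    # wrap the (assumed) expander so repeated calls within a session hit the cache
    expand_cached <- memoise(function(urls) longurl::expand_urls(urls))
    expand_cached("https://bit.ly/example")  # first call performs the expansion
    expand_cached("https://bit.ly/example")  # repeated call is served from the in-memory cache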
The solution of the equality-constrained least squares problem (LSE) is given through four analytic methods (Generalized QR Factorization, Lagrange Multipliers, Direct Elimination and the Null Space method). The package exposes the orthogonal decomposition called Generalized QR Factorization (GQR) as well as the RQ factorization. It also includes code for the solution of the LSE problem applied to quaternions.
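For reference, the problem these methods solve can be written, in generic notation not taken from the package itself, as

    \min_x \|Ax - b\|_2 \quad \text{subject to} \quad Bx = d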
This package contains a suite of shiny applications meant to explore features of linear model inference through simulation and games.
This package provides a high-level interface for torch, with utilities to reduce the amount of code needed for common tasks, abstract away torch details, and make the same code work on both the CPU and GPU. It is flexible enough to support expressing a large range of models. It is heavily inspired by fastai by Howard et al. (2020) <doi:10.48550/arXiv.2002.04688>, Keras by Chollet et al. (2015), and PyTorch Lightning by Falcon et al. (2019) <doi:10.5281/zenodo.3828935>.
Constructs genotype x environment interaction (GxE) models, where G is a weighted sum of genetic variants (genetic score) and E is a weighted sum of environments (environmental score), using the alternating optimization algorithm by Jolicoeur-Martineau et al. (2017) <arXiv:1703.08111>. This approach has greatly enhanced predictive power over traditional GxE models, which include only a single genetic variant and a single environmental exposure. Although this approach was originally designed for GxE modelling, it is flexible and does not require the use of genetic and environmental variables. It can also handle more than two latent variables (rather than just G and E) and three-way or higher-order interactions. The LEGIT model produces highly interpretable results and is very parameter-efficient; it can therefore be used even with small sample sizes (n < 250). Tools to determine the type of interaction (vantage sensitivity, diathesis-stress or differential susceptibility), with any number of genetic variants or environments, are available <arXiv:1712.04058>. The software can now produce mixed-effects LEGIT models through the lme4 package.
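An illustrative form of the two-latent-variable model described above (the exact parameterization used by the package may differ):

    G = \sum_j w_j g_j, \qquad E = \sum_k v_k e_k, \qquad \mathbb{E}[y] = \beta_0 + \beta_G G + \beta_E E + \beta_{GE}\, G \times E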
This package is an extension of the lmom R package: the pel...(), cdf...() and qua...() function families are lumped and called from one function per family, in order to create robust automatic tools to fit data with different probability distributions and then to estimate probability values and return periods. The implemented functions are able to manage time series with constant and/or missing values without stopping execution with error messages. The package also contains tools to calculate several indices based on variability (e.g. SPI, the Standardized Precipitation Index, see <https://climatedataguide.ucar.edu/climate-data/standardized-precipitation-index-spi> and <http://spei.csic.es/>) for multiple time series or spatially gridded values.
Probabilistic record linkage without direct identifiers using only diagnosis codes. The method is detailed in: Hejblum, Weber, Liao, Palmer, Churchill, Szolovits, Murphy, Kohane & Cai (2019) <doi:10.1038/sdata.2018.298>; Zhang, Hejblum, Weber, Palmer, Churchill, Szolovits, Murphy, Liao, Kohane & Cai (2021) <doi:10.1093/jamia/ocab187>.
This package provides density, distribution and random generation functions for the Linear Ballistic Accumulation (LBA) model, a widely used choice response time model in cognitive psychology. The package supports model specifications, parameter estimation, and likelihood computation, facilitating simulation and statistical inference for LBA-based experiments. For details on the LBA model, see Brown and Heathcote (2008) <doi:10.1016/j.cogpsych.2007.12.002>.
This package provides functions for the l1-ball prior on high-dimensional regression. The main function, l1ball(), yields posterior samples for linear regression, as introduced by Xu and Duan (2020) <arXiv:2006.01340>.
Local explanations of machine learning models describe how features contributed to a single prediction. This package implements an explanation method based on LIME (Local Interpretable Model-agnostic Explanations; see Tulio Ribeiro, Singh, Guestrin (2016) <doi:10.1145/2939672.2939778>) in which interpretable inputs are created based on the local rather than global behaviour of each original feature.
Simulation and estimation of univariate and multivariate log-GARCH models. The main functions of the package are: lgarchSim(), mlgarchSim(), lgarch() and mlgarch(). The first two functions simulate from a univariate and a multivariate log-GARCH model, respectively, whereas the latter two estimate a univariate and a multivariate log-GARCH model, respectively.
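A minimal usage sketch, assuming lgarchSim() takes the number of observations as its first argument and lgarch() accepts a numeric series:

    library(lgarch)
    set.seed(1)
    y <- lgarchSim(500)  # simulate 500 observations from a univariate log-GARCH model
    fit <- lgarch(y)     # estimate a univariate log-GARCH model on the simulated series
    coef(fit)            # estimated parameters (assuming a coef() method is provided)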
This package provides a Low Rank Correction Variational Bayesian algorithm for high-dimensional multi-source heterogeneous quantile linear models. More details have been written up in a paper submitted to the journal Statistics in Medicine, and the details of variational Bayesian methods can be found in Ray and Szabo (2021) <doi:10.1080/01621459.2020.1847121>. It simultaneously performs parameter estimation and variable selection. The algorithm supports two model settings: (1) local models, where variable selection is only applied to homogeneous coefficients, and (2) global models, where variable selection is also performed on heterogeneous coefficients. Two forms of parameter estimation are output: one is the standard variational Bayesian estimation, and the other is the variational Bayesian estimation corrected with low-rank adjustment.
Fits look-up tables by filling entries with the mean or median values of observations that fall in partitions of the feature space. Partitions can be determined by the user of the package through the input argument feature.boundaries, and the dimensions of the feature space can be any combination of continuous and categorical features provided by the data set. A Predict function directly fetches the corresponding entry value, and a default value is defined as the mean or median of all available observations. The table and other components are represented using the S4 class lookupTable.
This package fits Bayesian joint latent class and regression models using Gibbs sampling. See the documentation for details of the model. The technical details of the model implemented here are described in Elliott, Michael R., Zhao, Zhangchen, Mukherjee, Bhramar, Kanaya, Alka, Needham, Belinda L., "Methods to account for uncertainty in latent class assignments when using latent classes as predictors in regression models, with application to acculturation strategy measures" (2020), in press at Epidemiology <doi:10.1097/EDE.0000000000001139>.
Embarrassingly parallel linear mixed model calculations spread across local cores, repeating until convergence.
Time series analysis based on a lambda transformer and variational seq2seq, built on Torch.
This package performs Bayesian linear regression and forecasting in astronomy. The method accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables) and scatter correlation, time evolution of slopes, normalization, scatters, Malmquist and Eddington bias, upper limits and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS library.
Reproduces the harmonized database of the ESTAT survey of the same name. The survey data is served as separate spreadsheets with noticeable differences in the collected attributes. The tool presented here carries out a series of instructions that harmonize the attributes in terms of name, meaning, and occurrence, while also introducing a series of new variables, instrumental to adding value to the product. Outputs include one harmonized table covering all the years, and three separate geometries, corresponding to the theoretical point, the GPS location where the measurement was made, and the 250 m east-facing transect.
This package provides a collection of hypothesis tests and confidence intervals based on the likelihood ratio <https://en.wikipedia.org/wiki/Likelihood-ratio_test>.
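For reference, the likelihood-ratio statistic underlying such tests is

    \lambda = -2 \log \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)},

which is asymptotically chi-squared distributed under the null hypothesis (Wilks' theorem).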
This package provides a toolbox for R arrays. Flexibly split, bind, reshape, modify, subset and name arrays.
Connect to the Less Annoying CRM API with ease to get your CRM data in a clean and tidy format. Less Annoying CRM is a simple CRM built for small businesses; more information is available on their website <https://www.lessannoyingcrm.com/>.
Robust test(s) for model diagnostics in regression. The current version contains a robust test for functional specification (linearity). The test is based on the robust bounded-influence test by Heritier and Ronchetti (1994) <doi:10.1080/01621459.1994.10476822>.
Generate a local library copy with relevant packages. All packages currently found within the search path - except base packages - will be copied to the directory provided and can be used later on with the .libPaths() function.
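A minimal sketch of using such a copy later (the directory path is a placeholder):

    # prepend the local library copy to R's library search path
    .libPaths(c("~/my-local-lib", .libPaths()))
    .libPaths()  # the local copy should now appear first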
This package provides a bioinformatics pipeline for performing taxonomic assignment of DNA metabarcoding sequence data while considering geographic location. A detailed tutorial is available at <https://urodelan.github.io/Local_Taxa_Tool_Tutorial/>. A manuscript describing these methods is in preparation.