Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
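The same request can be issued from R; the sketch below uses the httr package, and the base URL https://example.org is only a placeholder for wherever this site is hosted:

library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

content(resp, as = "parsed")  # parsed JSON list of matching packages
headers(resp)                 # pagination details are in the response headers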
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Facilitates building likelihood models in the Fisherian tradition following Richard Royall (1997, ISBN:978-0412044113) "Statistical Evidence: A Likelihood Paradigm". Defines generic methods for working with likelihoods (loglik(), score(), hess_loglik(), fim()) and provides functions for pure likelihood-based inference (support(), relative_likelihood(), likelihood_interval(), profile_loglik()). Includes a likelihood contributions model for heterogeneous observation types (exact, censored, etc.) assuming i.i.d. data.
This package produces high-resolution, publication-ready linkage maps and quantitative trait loci maps. Input can be output from R/qtl, simple text, or comma-delimited files. Output is currently a portable document file.
The package provides functions for calculating prices of American put options with the Least Squares Monte Carlo method. The option types are plain vanilla American put, Asian American put, and Quanto American put. The pricing algorithms include variance reduction techniques such as antithetic variates and control variates. Additional functions are given to derive "price surfaces" at different volatilities and strikes, create 3-D plots, quickly generate geometric Brownian motion, and calculate prices of European options with the Black-Scholes analytical solution.
Identification and analysis of long non-coding RNAs. Default models are trained on human, mouse, and wheat datasets using support vector machines (SVM). Features are based on the intrinsic composition of the sequence, the EIIP value (electron-ion interaction pseudopotential), and secondary structure. This package can also extract other classic features and build new classifiers. Reference: Han S., et al. (2019) <doi:10.1093/bib/bby065>.
This is a neural network regression model implementation using Keras, consisting of 10 long short-term memory (LSTM) layers that are fully connected along with the rest of the inputs.
Plots empty Lexis grids, adds lifelines, and highlights certain areas of the grid, such as cohorts and age groups.
This package provides functions for validating and normalizing bibliographic codes such as ISBN, ISSN, and LCCN. It also includes functions to communicate with the WorldCat API, translate call numbers (Library of Congress and Dewey Decimal) to their subject classifications or subclassifications, and provides various loadable data files such as call number / subject crosswalks and code tables.
Labels are a common construct in statistical software, providing a human-readable description of a variable. While variable names are succinct, quick to type, and follow a language's naming conventions, labels may be more illustrative and may use plain text and spaces. R does not provide native support for labels, but some packages have made this feature available. Most notably, the Hmisc package provides labelling methods for a number of different objects. Due to design decisions, these methods are not all exported, and so are unavailable for use in package development. The labelVector package supports labels for atomic vectors in a lightweight design that is suitable for use in other packages.
This package provides functions for vectorised conditional recoding of variables. case_when() enables you to vectorise multiple if and else statements (like CASE WHEN in SQL). if_else() is a stricter and more predictable version of base R's ifelse() that preserves attributes. These functions are forked from dplyr with all package dependencies removed and behave identically to the originals.
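A short sketch of the recoding described above (assuming the package is attached); since the functions are stated to behave identically to their dplyr originals, the familiar dplyr syntax applies:

x <- c(1, 5, 12, NA)

case_when(
  x < 5     ~ "low",
  x < 10    ~ "medium",
  !is.na(x) ~ "high",
  TRUE      ~ NA_character_
)
# "low" "medium" "high" NA

# if_else() requires both branches to be of the same type and handles
# missing values explicitly, unlike base ifelse()
if_else(x > 3, "big", "small", missing = "unknown")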
Detect feedback loops (cycles, circuits) between species (nodes) in ordinary differential equation (ODE) models. Feedback loops are paths from a node to itself without visiting any other node twice, and they have important regulatory functions. Loops are reported with their order of participating nodes and their length, and whether the loop is a positive or a negative feedback loop. An upper limit of the number of feedback loops limits runtime (which scales with feedback loop count). Model parametrizations and values of the modelled variables are accounted for. Computation uses the characteristics of the Jacobian matrix as described e.g. in Thomas and Kaufman (2002) <doi:10.1016/s1631-0691(02)01452-x>. Input can be the Jacobian matrix of the ODE model or the ODE function definition; in the latter case, the Jacobian matrix is determined using numDeriv. Graph-based algorithms from igraph are employed for path detection.
The main function of the package is to perform backward selection of fixed effects, forward fitting of the random effects, and post-hoc analysis using parallel capabilities. Other functionality includes the computation of ANOVAs with upper- or lower-bound p-values and R-squared values for each model term, model criticism plots, data trimming on model residuals, and data visualization. The data to run examples is contained in package LCF_data.
An effortless ndjson (newline-delimited JSON) logger, with two primary log-writing interfaces. It provides a set of wrappings for base R's message(), warning(), and stop() functions that maintain identical functionality, but also log the handler message to an ndjson log file. loggit also exports its internal loggit() function for powerful and configurable custom logging. No change in existing code is necessary to use this package, and it should only require additions to fully leverage the power of the logging system. loggit also provides a log reader for reading an ndjson log file into a data frame, log rotation, and live echo of the ndjson log messages to terminal stdout for log capture by external systems (like containers). loggit is ideal for Shiny apps, data pipelines, modeling workflows, and more. Please see the vignettes for detailed example use cases.
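A minimal sketch of the two log-writing interfaces described above; the set_logfile() and read_logs() helpers are assumed names for configuring the log file and for the log reader, not taken verbatim from the text:

library(loggit)

set_logfile("app.loggit.ndjson")    # assumed helper: where ndjson entries are written

message("Fitting model ...")        # raised as usual, and also logged
loggit("INFO", "custom log entry")  # direct use of the exported logger

logs <- read_logs()                 # assumed helper: read the ndjson log as a data frame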
This package provides functions to estimate survival and a treatment effect using a landmark estimation approach.
Calculates Land Surface Temperature from Landsat bands 10 and 11. Revision of the Single-Channel Algorithm for Land Surface Temperature Retrieval From Landsat Thermal-Infrared Data. Jimenez-Munoz JC, Cristobal J, Sobrino JA, et al (2009). <doi:10.1109/TGRS.2008.2007125>. Land surface temperature retrieval from LANDSAT TM 5. Sobrino JA, Jiménez-Muñoz JC, Paolini L (2004). <doi:10.1016/j.rse.2004.02.003>. Surface temperature estimation in Singhbhum Shear Zone of India using Landsat-7 ETM+ thermal infrared data. Srivastava PK, Majumdar TJ, Bhattacharya AK (2009). <doi:10.1016/j.asr.2009.01.023>. Mapping land surface emissivity from NDVI: Application to European, African, and South American areas. Valor E (1996). <doi:10.1016/0034-4257(96)00039-9>. On the relationship between thermal emissivity and the normalized difference vegetation index for natural surfaces. Van de Griend AA, Owe M (1993). <doi:10.1080/01431169308904400>. Land Surface Temperature Retrieval from Landsat 8 TIRS - Comparison between Radiative Transfer Equation-Based Method, Split Window Algorithm and Single Channel Method. Yu X, Guo X, Wu Z (2014). <doi:10.3390/rs6109829>. Calibration and Validation of land surface temperature for Landsat8-TIRS sensor. Land product validation and evolution. Skoković D, Sobrino JA, Jimenez-Munoz JC, Soria G, Julien Y, Mattar C, Cristóbal J. (2014).
Computes the probability density function, the cumulative distribution function, the hazard rate function, the quantile function and random generation for Lindley Power Series distributions, see Nadarajah and Si (2018) <doi:10.1007/s13171-018-0150-x>.
Determines a prototype from a number of runs of Latent Dirichlet Allocation (LDA) by measuring their similarities with S-CLOP: a procedure to select the LDA run with the highest mean pairwise similarity, measured by S-CLOP (Similarity of multiple sets by Clustering with Local Pruning), to all other runs. LDA runs are specified by their assignments, which lead to estimators for the distribution parameters. Repeated runs lead to different results, which we counter by choosing the most representative LDA run as the prototype.
Fast implementations to compute the genetic covariance matrix, the Jaccard similarity matrix, the s-matrix (the weighted Jaccard similarity matrix), and the (classic or robust) genomic relationship matrix of a (dense or sparse) input matrix (see Hahn, Lutz, Hecker, Prokopenko, Cho, Silverman, Weiss, and Lange (2020) <doi:10.1002/gepi.22356>). Full support for sparse matrices from the R package Matrix. Additionally, an implementation of the power method (von Mises iteration) to compute the largest eigenvector of a matrix is included, as well as a function to perform an automated full run of global and local correlations in population stratification data, a function to compute sliding windows, and a function to invert minor alleles and to select those variants/loci exceeding a minimal cutoff value. New functionality in locStra allows one to extract the k leading eigenvectors of the genetic covariance matrix, Jaccard similarity matrix, s-matrix, and genomic relationship matrix via fast PCA without actually computing the similarity matrices. The fast PCA to compute the k leading eigenvectors can now also be run directly from bed/bim/fam files.
Temporary and permanent message queues for R, built on top of SQLite databases. SQLite provides locking and makes it possible to detect crashed consumers. Crashed jobs can be automatically marked as "failed" or put back in the queue, potentially a limited number of times.
Data used as examples in the loon package.
This package performs likelihood-based inference for stationary time series extremes. The general approach follows Fawcett and Walshaw (2012) <doi:10.1002/env.2133>. Marginal extreme value inferences are adjusted for cluster dependence in the data using the methodology in Chandler and Bate (2007) <doi:10.1093/biomet/asm015>, producing an adjusted log-likelihood for the model parameters. A log-likelihood for the extremal index is produced using the K-gaps model of Suveges and Davison (2010) <doi:10.1214/09-AOAS292>. These log-likelihoods are combined to make inferences about extreme values. Both maximum likelihood and Bayesian approaches are available.
This package implements the LS-PLS (least squares - partial least squares) method described in for instance Jørgensen, K., Segtnan, V. H., Thyholt, K., Næs, T. (2004) "A Comparison of Methods for Analysing Regression Models with Both Spectral and Designed Variables" Journal of Chemometrics, 18(10), 451--464, <doi:10.1002/cem.890>.
Estimating causal parameters in the presence of treatment spillover is of great interest in statistics. This package provides tools for instrumental variables estimation of average causal effects under network interference of unknown form. The target parameters are the local average direct effect, the local average indirect effect, the local average overall effect, and the local average spillover effect. The methods are developed by Hoshino and Yanagi (2023) <doi:10.48550/arXiv.2108.07455>.
Rapid satellite data streams in operational applications have clear benefits for monitoring land cover, especially when information can be delivered as fast as changing surface conditions. Over the past decade, remote sensing has become a key tool for monitoring and predicting environmental variables by using satellite data. This package presents the main applications in remote sensing for land surface monitoring and land cover mapping (soil, vegetation, water...). Tomlinson, C.J., Chapman, L., Thornes, E., Baker, C (2011) <doi:10.1002/met.287>.
This package implements the letter value boxplot, which extends the standard boxplot to deal with both larger and smaller numbers of data points by dynamically selecting the appropriate number of letter values to display.