Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
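For example, a minimal Python sketch of calling this endpoint (the base URL is a placeholder, and the JSON shape of the response is an assumption rather than something documented above):

import requests

BASE = "https://example.org"  # placeholder: replace with the host serving this page

resp = requests.get(
    f"{BASE}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
packages = resp.json()   # matching packages
print(resp.headers)      # pagination information is carried in the response headers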
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a toolbox for constructing potential landscapes for dynamical systems using Monte Carlo simulation. The method is based on the potential landscape definition by Wang et al. (2008) <doi:10.1073/pnas.0800579105> (also see Zhou & Li, 2016 <doi:10.1063/1.4943096> for further mathematical discussions) and can be used for a large variety of models.
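In that framework the landscape is typically taken to be the negative logarithm of the steady-state probability distribution, i.e. $U(x) = -\ln P_{\mathrm{ss}}(x)$, with $P_{\mathrm{ss}}$ estimated here from the Monte Carlo simulation output; this is the standard definition from the cited work, not necessarily the exact form used internally by the package.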
This package implements survival-model-based imputation for censored laboratory measurements, including Tobit-type models with several distribution options. Suitable for data with values below detection or quantification limits, the package identifies the best-fitting distribution and produces realistic imputations that respect the censoring thresholds.
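As a point of reference, the log-likelihood of a standard left-censored (Tobit-type) normal model with detection limit $c$ is, in generic textbook form (not necessarily the exact parameterisation used by this package),
$$\ell(\beta,\sigma) = \sum_{i:\,y_i > c} \left[\log\phi\!\left(\frac{y_i - x_i^\top\beta}{\sigma}\right) - \log\sigma\right] + \sum_{i:\,y_i \le c} \log\Phi\!\left(\frac{c - x_i^\top\beta}{\sigma}\right),$$
so that censored observations contribute only the probability of falling below the limit.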
This package provides a set of spatial accessibility measures from a set of locations (demand) to another set of locations (supply). It aims, among other uses, to support research on spatial accessibility to health care facilities. It includes the locations and some characteristics of major public hospitals in Greece.
Efficient procedures for fitting and cross-validating structurally regularized time-dependent Cox models.
Bayesian variable selection, model choice, and regularized estimation for (spatial) generalized additive mixed regression models via stochastic search variable selection with spike-and-slab priors.
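One common form of such a prior, shown here only as a generic illustration (the package's exact parameterisation may differ), places a two-component mixture on each coefficient:
$$\beta_j \mid \gamma_j \sim (1-\gamma_j)\,\mathcal{N}(0, v_0\tau^2) + \gamma_j\,\mathcal{N}(0, \tau^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(w),$$
with $v_0 \ll 1$, so that $\gamma_j$ indicates whether the $j$-th term is effectively included in the model.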
Fits bivariate ellipses to stable isotope data using Bayesian inference, with the aim of describing and comparing isotopic niches.
This package provides a torch implementation of the Graph Net architecture, allowing different options for message passing and feature embedding.
Given bincount data from single-cell copy number profiling (segmented or unsegmented), estimates ploidy, and uses the ploidy estimate to scale the data to absolute copy numbers. Uses the modular quantogram proposed by Kendall (1986) <doi:10.1002/0471667196.ess2129.pub2>, modified by weighting segments according to confidence, and quantifying confidence in the estimate using a theoretical quantogram. Includes optional fused-lasso segmentation with the algorithm in Johnson (2013) <doi:10.1080/10618600.2012.681238>, using the implementation from glmgen by Arnold, Sadhanala, and Tibshirani.
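For reference, Kendall's cosine quantogram in its standard unweighted form is
$$\phi(q) = \sqrt{\tfrac{2}{N}} \sum_{i=1}^{N} \cos\!\left(\frac{2\pi X_i}{q}\right),$$
which peaks when the observations $X_i$ cluster near integer multiples of the quantum $q$; the confidence weighting described above modifies this basic form.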
This package provides a set of functions for querying and parsing data from Solr (<https://solr.apache.org/>) endpoints (local and remote), including search, faceting, highlighting, stats, and 'more like this'. In addition, some functionality is included for creating, deleting, and updating documents in a Solr database.
We provide full functionality to smooth L1 penalized regression operators and to compute regression estimates thereof. For this, the objective function of a user-specified regression operator is first smoothed using Nesterov smoothing (see Y. Nesterov (2005) <doi:10.1007/s10107-004-0552-5>), resulting in a modified objective function with explicit gradients everywhere. The smoothed objective function and its gradient are minimized via BFGS, and the obtained minimizer is returned. Using Nesterov smoothing, the smoothed objective function can be made arbitrarily close to the original (unsmoothed) one. In particular, the Nesterov approach has the advantage that it comes with explicit accuracy bounds, both on the L1/L2 difference of the unsmoothed to the smoothed objective functions as well as on their respective minimizers (see G. Hahn, S.M. Lutz, N. Laha, C. Lange (2020) <doi:10.1101/2020.09.17.301788>). A progressive smoothing approach is provided which iteratively smoothes the objective function, resulting in more stable regression estimates. A function to perform cross validation for selection of the regularization parameter is provided.
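As a concrete illustration of the idea (one standard instance of Nesterov smoothing, not necessarily the exact construction used in this package): smoothing the absolute value $|x| = \max_{|u|\le 1} ux$ with the quadratic prox term $\tfrac{\mu}{2}u^2$ gives
$$f_\mu(x) = \max_{|u|\le 1}\left(ux - \tfrac{\mu}{2}u^2\right) = \begin{cases} \dfrac{x^2}{2\mu}, & |x| \le \mu,\\[4pt] |x| - \dfrac{\mu}{2}, & |x| > \mu,\end{cases}$$
a Huber-type function that is differentiable everywhere and satisfies $0 \le f(x) - f_\mu(x) \le \mu/2$.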
This package provides a tidy approach to spatial network analysis, in the form of classes and functions that enable a seamless interaction between the network analysis package tidygraph and the spatial analysis package sf.
This package provides a tool for producing synthetic versions of microdata containing confidential information so that they are safe to be released to users for exploratory analysis. The key objective of generating synthetic data is to replace sensitive original values with synthetic ones causing minimal distortion of the statistical information contained in the data set. Variables, which can be categorical or continuous, are synthesised one-by-one using sequential modelling. Replacements are generated by drawing from conditional distributions fitted to the original data using parametric or classification and regression trees models. Data are synthesised via the function syn() which can be largely automated, if default settings are used, or with methods defined by the user. Optional parameters can be used to influence the disclosure risk and the analytical quality of the synthesised data. For a description of the implemented method see Nowok, Raab and Dibben (2016) <doi:10.18637/jss.v074.i11>. Functions to assess identity and attribute disclosure for the original and for the synthetic data are included in the package, and their use is illustrated in a vignette on disclosure (Practical Privacy Metrics for Synthetic Data).
Streamlines geographic data transformation, storage and publication, simplifying data preparation and enhancing interoperability across formats and platforms.
Introduces a fast and efficient Surrogate Variable Analysis algorithm that captures variation from unknown sources (batch effects) in high-dimensional data sets. The algorithm is built on the irwsva.build function of the sva package and revises it to achieve an order-of-magnitude faster running time with no loss of accuracy.
The skew logistic distribution is a quantile-defined generalisation of the logistic distribution (van Staden and King 2015). This package provides random numbers, quantiles, probabilities, densities and density quantiles for the distribution, as well as quantile-quantile plots and method-of-L-moments estimation (including asymptotic standard errors).
This package provides tools to efficiently analyze and visualize laboratory data from aqueous static adsorption experiments. The package provides functions to plot Langmuir, Freundlich, and Temkin isotherms and functions to determine the statistical conformity of data points to the Langmuir, Freundlich, and Temkin adsorption models through statistical characterization of the isothermic least-squares regression lines. Scientific reference: Dada, A.O., Olalekan, A., Olatunya, A. (2012) <doi:10.9790/5736-0313845>.
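In their usual forms (with $q_e$ the amount adsorbed at equilibrium and $C_e$ the equilibrium concentration), the three models are
$$q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \;\text{(Langmuir)}, \qquad q_e = K_F C_e^{1/n} \;\text{(Freundlich)}, \qquad q_e = \frac{RT}{b_T}\ln(A_T C_e) \;\text{(Temkin)},$$
and linearised versions of these equations are typically what such least-squares fits are based on.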
Estimates small area parameters for binary data without auxiliary variables using an Empirical Bayes technique, mainly following Rao and Molina (2015, ISBN:9781118735787), "Small Area Estimation, Second Edition". This package provides another option for direct estimation using weights. It also features estimation of the alpha and beta parameters used in the small area calculation, via the Newton-Raphson and moment methods based on Wilcox (1979) <doi:10.1177/001316447903900302> and Kleinman (1973) <doi:10.1080/01621459.1973.10481332>.
This package provides a mixture model for clustering individuals (or sampling groups) into stocks based on their genetic profile. Here, sampling groups are groups of individuals known to come from the same stock (e.g. breeding adults or larvae). The mixture (log-)likelihood is maximised using the EM-algorithm after finding good starting values via a K-means clustering of the genetic data. Details can be found in: Foster, S. D.; Feutry, P.; Grewe, P. M.; Berry, O.; Hui, F. K. C. & Davies (2020) <doi:10.1111/1755-0998.12920>.
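In generic terms (details of the genetic model aside), the log-likelihood maximised by the EM-algorithm has the usual finite-mixture form
$$\ell(\pi,\theta) = \sum_{g} \log \sum_{k=1}^{K} \pi_k\, f(x_g \mid \theta_k),$$
where $g$ indexes sampling groups, $K$ is the number of stocks, $\pi_k$ are the stock proportions, and $f(\cdot \mid \theta_k)$ is the within-stock model for the genetic data.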
Allows TailwindCSS to be used in Shiny apps with just-in-time compiling, custom CSS with the @apply directive, and custom Tailwind configurations.
Interactive shiny application for working with the Structural Equation Modelling technique. Runtime examples are provided in the package functions as well as at <https://kartikeyab.shinyapps.io/semwebappk/>.
An implementation of the Similarity-First Search algorithm (SFS), a combinatorial algorithm that can be used to solve the seriation problem and to recognize some structured weighted graphs. The SFS algorithm is a generalization to weighted graphs of the graph search algorithm Lexicographic Breadth-First Search (Lex-BFS), a variant of Breadth-First Search. The SFS algorithm reduces to Lex-BFS when applied to binary matrices (or, equivalently, unweighted graphs). Hence this library can also be used for Lex-BFS applications such as recognition of graph classes like chordal or unit interval graphs. In fact, the SFS seriation algorithm implemented in this package is a multisweep algorithm, which consists of repeating a finite number of SFS iterations (at most n sweeps for a matrix of size n). If the data matrix has a Robinsonian structure, then the ranking returned by the multisweep SFS algorithm is a Robinson ordering of the input matrix. Otherwise the algorithm can be used as a heuristic to return a ranking partially satisfying the Robinson property.
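The core selection rule of a single SFS sweep can be sketched naively as follows (Python, quadratic work per step; tie-breaking and the multisweep refinements described above are deliberately omitted, and this is not the package's efficient partition-refinement implementation):

import numpy as np

def sfs_sweep(S, start=0):
    # S: symmetric n x n similarity matrix; returns one SFS visit order.
    # At each step, visit the unvisited vertex whose vector of similarities
    # to the already-visited vertices (taken in visit order) is
    # lexicographically largest; ties are broken arbitrarily here.
    n = S.shape[0]
    order = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        best = max(unvisited, key=lambda v: tuple(S[v, u] for u in order))
        order.append(best)
        unvisited.remove(best)
    return order

With 0/1 similarities this reduces to the Lex-BFS selection rule mentioned above.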
Draw syntenic relationships between genome assemblies. There are three functions that take a tab-delimited file containing alignment data for syntenic blocks between genomes and produce either a linear alignment plot, an evolution highway style plot, or a painted ideogram representing syntenic relationships. There is also a function to convert alignment data in the DESCHRAMBLER/inferCAR format to the required data structure.
Seed germination begins with imbibition: the uptake of water causes seeds to expand, breaking the seed coat (testa). Imbibition aids the transport of water into the developing ovules and is also required during the first stages of root water absorption.
The scrapeR package provides functions that fetch and extract text content from specified web pages. It handles HTTP errors and parses HTML efficiently. The package can handle hundreds of websites at a time using the scrapeR_in_batches() function.