Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
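For example, the endpoint can be queried from Python (a minimal sketch; the base URL below is a placeholder and the pagination header names are not documented above, so the example simply prints every response header):

import requests

# Query the package search API for "hello", first page, 20 items per page.
# The host below is a placeholder; substitute the address of this site.
resp = requests.get(
    "https://example.org/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

# Pagination information is returned in the response headers; the exact
# header names are not documented above, so print them all.
for name, value in resp.headers.items():
    print(f"{name}: {value}")

# The body contains the matching packages.
print(resp.json())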
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements the kK-NN algorithm, an adaptive k-nearest neighbor classifier that adjusts the neighborhood size based on local data curvature. The method estimates local Gaussian curvature by approximating the shape operator of the data manifold. This approach aims to improve classification performance, particularly in datasets with limited samples.
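To illustrate only the adaptive-k idea (the package's rule derives k from an estimate of local Gaussian curvature via the shape operator, which this toy Python sketch replaces with a crude spread-of-distances proxy; all names here are illustrative):

import numpy as np
from collections import Counter

def adaptive_knn_predict(X_train, y_train, X_query, k_min=3, k_max=15):
    """Toy adaptive k-NN: k shrinks where the local neighborhood is tight
    and grows where it is spread out (a crude proxy, not the curvature-based
    rule used by the kK-NN method)."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)
        order = np.argsort(d)
        local = d[order[:k_max]]
        # Normalized spread of the k_max nearest distances, in [0, 1].
        spread = (local[-1] - local[0]) / (local[-1] + 1e-12)
        k = int(round(k_min + spread * (k_max - k_min)))
        votes = Counter(y_train[order[:k]])
        preds.append(votes.most_common(1)[0][0])
    return np.array(preds)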
Genome-wide association study (GWAS) analyses of a biomarker that account for the limit of detection.
An R implementation of the LexRank algorithm described by G. Erkan and D. R. Radev (2004) <DOI:10.1613/jair.1523>.
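As a rough illustration of the algorithm itself rather than this package's interface, LexRank scores sentences by eigenvector centrality on a thresholded cosine-similarity graph; a minimal Python sketch given a precomputed similarity matrix:

import numpy as np

def lexrank_scores(similarity, threshold=0.1, damping=0.85, iters=100):
    """Rank sentences by power iteration over a thresholded,
    row-normalized sentence-similarity graph (the continuous variant
    uses the raw similarities; this sketch uses the thresholded form)."""
    adj = (similarity >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    transition = adj / row_sums
    n = similarity.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * transition.T @ scores
    return scores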
Fast estimation of multinomial (MNL) and mixed logit (MXL) models in R. Models can be estimated using "Preference" space or "Willingness-to-pay" (WTP) space utility parameterizations. Weighted models can also be estimated. An option is available to run a parallelized multistart optimization loop with random starting points in each iteration, which is useful for non-convex problems like MXL models or models with WTP space utility parameterizations. The main optimization loop uses the nloptr package to minimize the negative log-likelihood function. Additional functions are available for computing and comparing WTP from both preference space and WTP space models and for predicting expected choices and choice probabilities for sets of alternatives based on an estimated model. Mixed logit models can include uncorrelated or correlated heterogeneity covariances and are estimated using maximum simulated likelihood based on the algorithms in Train (2009) <doi:10.1017/CBO9780511805271>. More details can be found in Helveston (2023) <doi:10.18637/jss.v105.i10>.
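As a much-simplified illustration of the core estimation step only (minimizing the negative log-likelihood of a plain multinomial logit model on simulated data, using scipy rather than nloptr and omitting WTP space, weights, multistart, and mixed logit):

import numpy as np
from scipy.optimize import minimize

def mnl_neg_log_lik(beta, X, chosen):
    """X: (n_obs, n_alts, n_vars) alternative attributes;
    chosen: index of the chosen alternative in each observation."""
    v = X @ beta                                            # utilities, (n_obs, n_alts)
    log_probs = v - np.logaddexp.reduce(v, axis=1, keepdims=True)
    return -log_probs[np.arange(len(chosen)), chosen].sum()

# Simulated data: 500 choice sets, 3 alternatives, 2 attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3, 2))
true_beta = np.array([1.0, -0.5])
utilities = X @ true_beta + rng.gumbel(size=(500, 3))
chosen = utilities.argmax(axis=1)

fit = minimize(mnl_neg_log_lik, x0=np.zeros(2), args=(X, chosen), method="BFGS")
print(fit.x)   # should be close to true_beta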
Constructs trees for continuous longitudinal data and survival data using baseline covariates as partitioning variables, according to the LongCART and SurvCART algorithms, respectively. Also includes functions to calculate conditional power and predictive power of success based on interim results, and the probability of success for a prospective trial.
Model-based linear model trees adjusting for spatial correlation using a simultaneous autoregressive spatial lag, Wagner and Zeileis (2019) <doi:10.1111/geer.12146>.
"Learning with Subset Stacking" is a supervised learning algorithm that is based on training many local estimators on subsets of a given dataset, and then passing their predictions to a global estimator. You can find the details about LESS in our manuscript at <arXiv:2112.06251>.
Aids in learning statistical functions by showing, for each function, the result of the calculation and how it is obtained, that is, which equation and variables are used. Detailed explanations and interactive exercises are also included for these equations and their related variables. Together, these features help the package user learn the basics of statistics.
Log-analytic methods intended for testing multiplicative effects.
Lag-sequential analysis is a method of assessing patterns (what tends to follow what?) in sequences of codes. The codes are typically for discrete behaviors or states. The functions in this package read a stream of codes, or a frequency transition matrix, and produce a variety of lag sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, z values, adjusted residuals, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests. The methods are described in Bakeman & Quera (2011) <doi:10.1017/CBO9781139017343>, O'Connor (1999) <doi:10.3758/BF03200753>, Wampold & Margolin (1982) <doi:10.1037/0033-2909.92.3.755>, and Wampold (1995, ISBN:0-89391-919-5).
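For instance, the basic lag-1 transitional frequencies, transitional probabilities, and expected frequencies under independence can be computed roughly as follows (an illustrative Python sketch, not this package's functions):

import numpy as np

def lag1_transitions(codes, states):
    """Count lag-1 transitions in a stream of codes and convert them to
    row-wise transitional probabilities and expected frequencies under
    independence of antecedent and consequent codes."""
    k = len(states)
    index = {s: i for i, s in enumerate(states)}
    freq = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        freq[index[a], index[b]] += 1
    total = freq.sum()
    row_totals = freq.sum(axis=1, keepdims=True)
    row_totals[row_totals == 0] = 1.0
    probs = freq / row_totals
    expected = np.outer(freq.sum(axis=1), freq.sum(axis=0)) / total
    return freq, probs, expected

codes = list("ABABBCABCA")
freq, probs, expected = lag1_transitions(codes, states=["A", "B", "C"])
print(freq)
print(probs)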
This package provides a set of all-cause and cause-specific life expectancy sensitivity and decomposition methods, including Arriaga (1984) <doi:10.2307/2061029>, others documented by Ponnapalli (2005) <doi:10.4054/DemRes.2005.12.7>, lifetable, numerical, and other algorithmic approaches such as Horiuchi et al (2008) <doi:10.1353/dem.0.0033>, or Andreev et al (2002) <doi:10.4054/DemRes.2002.7.14>.
Measure similarity between texts. Offers a variety of processing tools and similarity metrics to facilitate flexible representation of texts and matching. Implements forms of Language Style Matching (Ireland & Pennebaker, 2010) <doi:10.1037/a0020386> and Latent Semantic Analysis (Landauer & Dumais, 1997) <doi:10.1037/0033-295X.104.2.211>.
Effectively simulates the discretization process inherent to Likert scales while minimizing distortion. It converts continuous latent variables into ordinal categories to generate Likert scale item responses. Particularly useful for accurately modeling and analyzing survey data that use Likert scales, especially when applying statistical techniques that require metric data.
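The basic discretization step can be pictured as cutting a standardized latent variable at fixed thresholds; the Python sketch below uses equally spaced cut points purely for illustration, whereas the package's procedure is designed to minimize distortion:

import numpy as np

def to_likert(latent, n_categories=5):
    """Map a continuous latent variable to categories 1..n_categories
    using equally spaced cut points on the standardized scale."""
    z = (latent - latent.mean()) / latent.std()
    cuts = np.linspace(-2, 2, n_categories - 1)   # illustrative thresholds
    return np.digitize(z, cuts) + 1               # categories 1..n_categories

rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
responses = to_likert(latent, n_categories=5)
print(np.bincount(responses)[1:])                 # counts per category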
This package uses phenological and productivity-related variables derived from time series of vegetation indices, such as the Normalized Difference Vegetation Index, to assess ecosystem dynamics and change, which might eventually lead to land degradation. The final result of the Land Productivity Dynamics indicator is a categorical map with 5 classes of land productivity dynamics, ranging from declining to increasing productivity. See www.sciencedirect.com/science/article/pii/S1470160X21010517/ for a description of the methods used in the package to calculate the indicator.
Density, distribution, quantile and random generation functions for the logitnormal distribution. Estimation of the mode and the first two moments. Estimation of distribution parameters.
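For reference, a logit-normal variate is the logistic transform of a normal variate, so random generation and the density follow directly by a change of variables (an illustrative Python sketch, not this package's interface):

import numpy as np

def rlogitnorm(n, mu=0.0, sigma=1.0, rng=None):
    """Random draws: apply the logistic function to N(mu, sigma^2) draws."""
    rng = rng or np.random.default_rng()
    z = rng.normal(mu, sigma, size=n)
    return 1.0 / (1.0 + np.exp(-z))

def dlogitnorm(x, mu=0.0, sigma=1.0):
    """Density by change of variables: if X = logistic(Z), Z ~ N(mu, sigma^2),
    then f(x) = phi(logit(x); mu, sigma) / (x * (1 - x)) for 0 < x < 1."""
    logit = np.log(x / (1.0 - x))
    norm_pdf = np.exp(-0.5 * ((logit - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return norm_pdf / (x * (1.0 - x))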
This package provides tools for maximum likelihood estimation of parameters of scientific models. Based on Goffe et al (1994) <doi:10.1016/0304-4076(94)90038-8>.
Constructs genotype x environment interaction (GxE) models where G is a weighted sum of genetic variants (genetic score) and E is a weighted sum of environments (environmental score) using the alternating optimization algorithm by Jolicoeur-Martineau et al. (2017) <arXiv:1703.08111>. This approach has greatly enhanced predictive power over traditional GxE models which include only a single genetic variant and a single environmental exposure. Although this approach was originally developed for GxE modelling, it is flexible and does not require the use of genetic and environmental variables. It can also handle more than 2 latent variables (rather than just G and E) and 3-way interactions or more. The LEGIT model produces highly interpretable results and is very parameter-efficient; thus it can even be used with small sample sizes (n < 250). Tools to determine the type of interaction (vantage sensitivity, diathesis-stress or differential susceptibility), with any number of genetic variants or environments, are available <arXiv:1712.04058>. The software can now produce mixed-effects LEGIT models through the lme4 package.
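A stripped-down sketch of the alternating-optimization idea, fitting the genetic-score weights with the environmental-score weights held fixed and then the reverse; the objective, normalization, and function names here are illustrative simplifications in Python, not the LEGIT implementation:

import numpy as np
from scipy.optimize import minimize

def fit_alternating_gxe(y, G, E, n_outer=10):
    """Toy alternating optimization for y ~ b0 + b1*g + b2*e + b3*g*e,
    where g = G @ wg and e = E @ we are weighted scores. Illustrative
    only; the LEGIT approach adds weight constraints, inference,
    and mixed-effects extensions."""
    p, q = G.shape[1], E.shape[1]
    wg, we = np.full(p, 1.0 / p), np.full(q, 1.0 / q)
    betas = np.zeros(4)

    def sse(betas, wg, we):
        g, e = G @ wg, E @ we
        resid = y - (betas[0] + betas[1] * g + betas[2] * e + betas[3] * g * e)
        return resid @ resid

    for _ in range(n_outer):
        # Update the regression coefficients and the genetic-score weights
        # with the environmental-score weights held fixed.
        res = minimize(lambda par: sse(par[:4], par[4:], we),
                       np.concatenate([betas, wg]))
        betas, wg = res.x[:4], res.x[4:]
        wg = wg / (np.abs(wg).sum() + 1e-12)     # keep the score identifiable
        # Then the reverse: update the coefficients and we with wg held fixed.
        res = minimize(lambda par: sse(par[:4], wg, par[4:]),
                       np.concatenate([betas, we]))
        betas, we = res.x[:4], res.x[4:]
        we = we / (np.abs(we).sum() + 1e-12)
    return betas, wg, we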
An implementation of logistic normal multinomial (LNM) clustering. It is an extension of the LNM mixture model proposed by Fang and Subedi (2020) <arXiv:2011.06682>, and is designed for clustering compositional data. The package includes 3 extended models: LNM Factor Analyzer (LNM-FA), LNM Bicluster Mixture Model (LNM-BMM) and Penalized LNM Factor Analyzer (LNM-FA). There are several advantages of LNM models: 1. LNM provides a more flexible covariance structure; 2. the factor analyzer reduces the number of parameters to estimate; 3. biclustering simultaneously clusters subjects and taxa, providing significant biological insights; 4. the penalty term allows sparse estimation in the covariance matrix. Details of the model assumptions and interpretation can be found in the papers Tu and Subedi (2021) <arXiv:2101.01871> and Tu and Subedi (2022) <doi:10.1002/sam.11555>.
Locally sparse estimator of generalized varying coefficient model for asynchronous longitudinal data by kernel-weighted estimating equation.
Processing of Landsat or other multispectral satellite imagery. Includes relative normalization, image-based radiometric correction, and topographic correction options. The original package description was published as Goslee (2011) <doi:10.18637/jss.v043.i04>, and details of the topographic corrections in Goslee (2012) <doi:10.14358/PERS.78.9.973>.
This package provides fast and scalable Gibbs sampling algorithms for the Bayesian Lasso regression model in high-dimensional settings. The package implements efficient partially collapsed and nested Gibbs samplers for the Bayesian Lasso, with a focus on computational efficiency when the number of predictors is large relative to the sample size. Methods are described in Davoudabadi and Ormerod (2026) <https://github.com/MJDavoudabadi/LassoHiDFastGibbs>.
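For orientation, the model being sampled is the Bayesian Lasso of Park and Casella (2008); the Python sketch below is the standard uncollapsed Gibbs sampler for that model (assuming centered y and standardized X), shown only to make the full conditionals concrete, not the partially collapsed or nested samplers this package provides:

import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Standard Gibbs sampler for the Bayesian Lasso (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2, inv_tau2 = np.zeros(p), 1.0, np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    draws = []
    for _ in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}), with A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ Inverse-Gamma
        resid = y - X @ beta
        shape = (n - 1 + p) / 2.0
        scale = (resid @ resid + beta @ (inv_tau2 * beta)) / 2.0
        sigma2 = scale / rng.gamma(shape)
        # 1/tau_j^2 | rest ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu_ig = np.sqrt(lam**2 * sigma2 / (beta**2 + 1e-12))
        inv_tau2 = rng.wald(mu_ig, lam**2)
        draws.append(beta.copy())
    return np.array(draws)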
Read and write access to PNG image files using the LodePNG library. The package has no external dependencies.
This package provides methods for the interpolation of large spatial datasets. This package uses a basis function approach that provides a surface fitting method that can approximate standard spatial data models. Using a large number of basis functions allows for estimates that can come close to interpolating the observations (a spatial model with a small nugget variance). Moreover, the covariance model for this method can approximate the Matern covariance family but also allows for a multi-resolution model and supports efficient computation of the profile likelihood for estimating covariance parameters. This is accomplished through compactly supported basis functions and a Markov random field model for the basis coefficients. These features lead to sparse matrices for the computations, and this package makes use of the R spam package for sparse linear algebra. An extension of this version over previous ones (< 5.4) is the support for different geometries besides a rectangular domain. The Markov random field approach combined with a basis function representation makes the implementation of different geometries simple, where only a few specific R functions need to be added, with most of the computation and evaluation done by generic routines that have been tuned to be efficient. One benefit of this package's model/approach is the facility to do unconditional and conditional simulation of the field for large numbers of arbitrary points. There is also the flexibility for estimating non-stationary covariances and also the case when the observations are a linear combination (e.g. an integral) of the spatial process. Included are generic methods for prediction, standard errors for prediction, plotting of the estimated surface, and conditional and unconditional simulation. See the LatticeKrigRPackage GitHub repository for a vignette of this package. Development of this package was supported in part by the National Science Foundation Grant 1417857 and the National Center for Atmospheric Research.
Reproduces the harmonized DB of the ESTAT survey of the same name. The survey data are served as separate spreadsheets with noticeable differences in the collected attributes. The tool presented here carries out a series of instructions that harmonize the attributes in terms of name, meaning, and occurrence, while also introducing a series of new variables, instrumental in adding value to the product. Outputs include one harmonized table with all the years, and three separate geometries, corresponding to the theoretical point, the GPS location where the measurement was made, and the 250 m east-facing transect.