Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
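For example, from R (the host below is a placeholder for this instance's URL, and the JSON response shape is an assumption):

    library(curl)   # exposes the response headers, where pagination info lives
    url <- "https://example.org/api/packages?search=hello&page=1&limit=20"
    res <- curl_fetch_memory(url)                # placeholder host
    jsonlite::fromJSON(rawToChar(res$content))   # the matching packages
    parse_headers(res$headers)                   # e.g. total number of pages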
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Constructs genotype x environment interaction (GxE) models where G is a weighted sum of genetic variants (genetic score) and E is a weighted sum of environments (environmental score), using the alternating optimization algorithm of Jolicoeur-Martineau et al. (2017) <arXiv:1703.08111>. This approach greatly enhances predictive power over traditional GxE models, which include only a single genetic variant and a single environmental exposure. Although originally developed for GxE modelling, the approach is flexible: it does not require genetic and environmental variables, and it can handle more than two latent variables (rather than just G and E) and three-way interactions or more. The LEGIT model produces highly interpretable results and is very parameter-efficient, so it can be used even with small sample sizes (n < 250). Tools to determine the type of interaction (vantage sensitivity, diathesis-stress or differential susceptibility), with any number of genetic variants or environments, are available <arXiv:1712.04058>. The software can now produce mixed-effects LEGIT models through the lme4 package.
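A minimal LEGIT fitting sketch, assuming the package's main function is legit() with data/genes/env/formula arguments (a sketch, not a definitive interface):

    library(LEGIT)
    # dat: assumed data frame with outcome y; g1..g3: genetic variants;
    # e1, e2: environments. G and E in the formula stand for the weighted
    # sums (genetic and environmental scores) found by alternating optimization.
    fit <- legit(data = dat, genes = dat[, c("g1", "g2", "g3")],
                 env = dat[, c("e1", "e2")], formula = y ~ G * E)
    summary(fit)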
Split your R Markdown or Quarto files by section into a tibble: titles, text, chunks. Rebuild the file from the tibble.
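A usage sketch, assuming this is the lightparser package and that the split/rebuild pair is split_to_tbl() and combine_tbl_to_file():

    library(lightparser)
    tbl <- split_to_tbl("report.Rmd")   # one row per title / text / chunk
    # ... inspect or edit the tibble here ...
    combine_tbl_to_file(tbl, "report-rebuilt.Rmd")   # write the file back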
"Learning with Subset Stacking" is a supervised learning algorithm that is based on training many local estimators on subsets of a given dataset, and then passing their predictions to a global estimator. You can find the details about LESS in our manuscript at <arXiv:2112.06251>.
Lake morphometry metrics are used by limnologists to understand, among other things, the ecological processes in a lake. Traditionally, these metrics are calculated by hand, with planimeters, and increasingly with commercial GIS products. All of these methods work; however, they are either outdated, difficult to reproduce, or require expensive licenses to use. The lakemorpho package provides the tools to calculate a typical suite of these metrics from an input elevation model and lake polygon. The metrics currently supported are: fetch, major axis, minor axis, major/minor axis ratio, maximum length, maximum width, mean width, maximum depth, mean depth, shoreline development, shoreline length, surface area, and volume.
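A typical lakemorpho workflow sketch, built around the lakeSurroundTopo()/calcLakeMetrics() pair (treat the argument details as assumptions):

    library(lakemorpho)
    # inLake: lake polygon; inElev: elevation raster around the lake
    lmorpho <- lakeSurroundTopo(inLake, inElev)
    calcLakeMetrics(lmorpho, bearing = 45, pointDens = 250)   # full suite
    lakeFetch(lmorpho, bearing = 45)                          # single metric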
This package provides functions for computing the r and r* statistics for inference on an arbitrary scalar function of model parameters, plus some code for the (modified) profile likelihood.
An R implementation of the LexRank algorithm described by G. Erkan and D. R. Radev (2004) <DOI:10.1613/jair.1523>.
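A minimal sketch, assuming this is the lexRankr package with top-level summarizer lexRank():

    library(lexRankr)
    text <- c("Sentence one about the topic.",
              "Another sentence about the topic.",
              "An unrelated aside.")
    lexRank(text, n = 2)   # top two sentences by LexRank centrality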
Without imposing stringent distributional assumptions or shape restrictions, nonparametric estimation has been popular in economics and other social sciences for counterfactual analysis, program evaluation, and policy recommendations. This package implements a novel density (and derivatives) estimator based on local polynomial regressions, documented in Cattaneo, Jansson and Ma (2022) <doi:10.18637/jss.v101.i02>: lpdensity() constructs the local polynomial density (and derivatives) estimator, and lpbwdensity() performs data-driven bandwidth selection.
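A usage sketch for the two functions named above (argument names beyond the function names are assumptions):

    library(lpdensity)
    x <- rnorm(2000)                   # toy sample
    g <- seq(-2, 2, by = 0.5)          # evaluation grid
    lpbwdensity(x, grid = g)           # data-driven bandwidth selection
    est <- lpdensity(x, grid = g)      # density estimate; bandwidths are
    summary(est)                       # chosen internally if not supplied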
An implementation of list comprehensions as purely syntactic sugar with a minor runtime overhead. It constructs nested for-loops and executes the byte-compiled loops to collect the results.
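The expansion is equivalent to collecting results from nested for-loops, roughly as in this base R sketch (the package's actual comprehension syntax is not shown):

    # What a comprehension like "x * y for x in 1:3, y in x:3" desugars to:
    out <- list()
    for (x in 1:3) {
      for (y in x:3) {
        out[[length(out) + 1L]] <- x * y
      }
    }
    unlist(out)   # 1 2 3 4 6 9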
Calculate point estimates of and valid confidence intervals for longitudinal summaries of nonparametric, algorithm-agnostic variable importance measures. For more details, see Williamson et al. (2024) <doi:10.48550/arXiv.2311.01638>.
This package implements a logistic Box-Cox model. The model is fully described in Xing, L. et al. (2021) <doi:10.1002/cjs.11587>.
Dataset and functions to explore quality of literary novels. The package is a part of the Riddle of Literary Quality project, and it contains the data of a reader survey about fiction in Dutch, a description of the novels the readers rated, and the results of stylistic measurements of the novels. The package also contains functions to combine, analyze, and visualize these data. For more details, see: Eder M, van Zundert J, Lensink S, van Dalen-Oskam K (2022). Replicating The Riddle of Literary Quality: The litRiddle package for R. In _Digital Humanities 2022: Conference Abstracts_, 636-637.
Computational routines for estimating local Gaussian parameters. Local Gaussian parameters are useful for characterizing and testing for non-linear dependence within bivariate data. See, e.g., Tjøstheim and Hufthammer, Local Gaussian correlation: A new measure of dependence, Journal of Econometrics, 2013, Volume 172 (1), pages 33-48 <DOI:10.1016/j.jeconom.2012.08.001>.
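A minimal sketch, assuming this is the localgauss package whose main routine localgauss() fits the parameters over a grid (the result field name is an assumption):

    library(localgauss)
    x <- rnorm(500); y <- x^2 + rnorm(500)   # non-linearly dependent pair
    lg <- localgauss(x, y)                   # local Gaussian fits on a grid
    # Unlike a single global correlation, the local rho varies over the
    # plane, which is what reveals the non-linear dependence.
    head(lg$par.est)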
Analysis, imputation, and multiple imputation of count data using covariates. LORI uses a log-linear Poisson model where main row and column effects, as well as effects of known covariates and interaction terms can be fitted. The estimation procedure is based on the convex optimization of the Poisson loss penalized by a Lasso type penalty and a nuclear norm. LORI returns estimates of main effects, covariate effects and interactions, as well as an imputed count table. The package also contains a multiple imputation procedure. The methods are described in Robin, Josse, Moulines and Sardy (2019) <doi:10.1016/j.jmva.2019.04.004>.
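A usage sketch, assuming the main entry points are lori() and mi.lori() and that penalties are chosen automatically when omitted (the result field name is an assumption):

    library(lori)
    # Y: incomplete count table; covmat: known row/column covariates
    res <- lori(Y, cov = covmat)
    res$imputed                      # completed count table
    mi  <- mi.lori(Y, cov = covmat)  # multiple imputation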
Changes in landscape diversity and structure can be detected early by relying on landscape class combinations and analysing patterns at multiple scales. LandComp provides such an opportunity, based on Juhász-Nagy's functions (Juhász-Nagy P, Podani J 1983 <doi:10.1007/BF00129432>). The functions can handle multilayered data. Input requirements: binary data in a regular square or hexagonal grid, and the grid should have projected coordinates.
"Lessons in Statistical Thinking" D.T. Kaplan (2014) <https://dtkaplan.github.io/Lessons-in-statistical-thinking/> is a textbook for a first or second course in statistics that embraces data wrangling, causal reasoning, modeling, statistical adjustment, and simulation. LSTbook supports the student-centered, tidy, pipeline-oriented computing style featured in the book.
Supplies a LazyData facility for packages which have data sets but do not provide LazyData: true. A single function is included, requireData, which is a drop-in replacement for base::require but carrying the additional functionality. By default, it also suppresses package startup messages; see the reallyQuietly argument.
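A drop-in usage sketch (the data-package name is hypothetical):

    library(LazyData)
    # Behaves like base::require, but additionally lazy-loads the package's
    # data sets and, by default, suppresses startup messages (reallyQuietly).
    requireData(somePkgWithData)   # hypothetical package name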
Estimate model parameters to determine whether two compounds exhibit synergy, antagonism, or Loewe additivity.
Random forests are a statistical learning method widely used in many areas of scientific research, essentially for their ability to learn complex relationships between input and output variables and their capacity to handle high-dimensional data. However, current random forest approaches are not flexible enough to handle longitudinal data. In this package, we propose a general approach of random forests for high-dimensional longitudinal data. It includes a flexible stochastic model which allows the covariance structure to vary over time. Furthermore, we introduce a new method which takes intra-individual covariance into consideration to build random forests. The method is fully detailed in Capitaine et al. (2020) <doi:10.1177/0962280220946080>, "Random forests for high-dimensional longitudinal data".
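A fitting sketch, assuming this is the LongituRF package and that the mixed-effects random forest is exposed as MERF() (argument names are assumptions):

    library(LongituRF)
    # X: predictors; Y: outcome; id: subject identifier; Z: random-effects
    # design; time: measurement times; sto: stochastic process modelling the
    # time-varying covariance (e.g. Brownian motion).
    fit <- MERF(X = X, Y = Y, id = id, Z = Z, time = time,
                ntree = 500, sto = "BM")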
High-dimensional longitudinal data analysis with Markov chain Monte Carlo (MCMC). Currently supports mixed-effects regression, with or without missing observations, while accounting for covariance structures. It provides estimates under both missing at random (MAR) and missing not at random (MNAR) assumptions. In this R package, we present Bayesian approaches that statisticians and clinical researchers can easily use. The methodology is based on the book "Bayesian Approaches in Oncology Using R and OpenBUGS" by Bhattacharjee A (2020) <doi:10.1201/9780429329449-14>.
Managing and exploring parameter estimation results derived from Maximum Likelihood Estimation (MLE) using the likelihood package. It provides functions for organizing, visualizing, and summarizing MLE outcomes, streamlining statistical analysis workflows. By improving interpretation and facilitating model evaluation, it helps users gain deeper insights into parameter estimation and model fitting, making MLE result exploration more efficient and accessible. See Goffe et al. (1994) <doi:10.1016/0304-4076(94)90038-8> for details on MLE, and Canham and Uriarte (2006) <doi:10.1890/04-0657> for an application of MLE using the likelihood package.
This package contains LUE_BIOMASS(), LUE_BIOMASS_VPD(), LUE_YIELD() and LUE_YIELD_VPD() to estimate aboveground biomass and crop yield, first by calculating the Absorbed Photosynthetically Active Radiation (APAR) and then the actual light use efficiency, with and without vapour pressure deficit (Shi et al., 2007) <doi:10.2134/agronj2006.0260>.
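A hypothetical call illustrating the two-step logic (the function names come from the text above; the argument names are assumptions):

    library(LUE)
    # Step 1: APAR from the fraction of absorbed PAR and incident PAR;
    # step 2: biomass = APAR * actual light use efficiency (optionally
    # adjusted by vapour pressure deficit).
    LUE_BIOMASS(fpar_raster, par, lue_optimal)            # without VPD
    LUE_BIOMASS_VPD(fpar_raster, par, lue_optimal, vpd)   # VPD-adjusted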
Fast estimation of multinomial (MNL) and mixed logit (MXL) models in R. Models can be estimated using "Preference" space or "Willingness-to-pay" (WTP) space utility parameterizations. Weighted models can also be estimated. An option is available to run a parallelized multistart optimization loop with random starting points in each iteration, which is useful for non-convex problems like MXL models or models with WTP space utility parameterizations. The main optimization loop uses the nloptr package to minimize the negative log-likelihood function. Additional functions are available for computing and comparing WTP from both preference space and WTP space models and for predicting expected choices and choice probabilities for sets of alternatives based on an estimated model. Mixed logit models can include uncorrelated or correlated heterogeneity covariances and are estimated using maximum simulated likelihood based on the algorithms in Train (2009) <doi:10.1017/CBO9780511805271>. More details can be found in Helveston (2023) <doi:10.18637/jss.v105.i10>.
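A sketch, assuming this is the logitr package of Helveston (2023) with its documented data/outcome/obsID/pars interface (the yogurt example data set is an assumption):

    library(logitr)
    # MNL in preference space: choice is the 0/1 outcome, obsID identifies
    # each choice set.
    mnl <- logitr(data = yogurt, outcome = "choice", obsID = "obsID",
                  pars = c("price", "feat", "brand"))
    # MXL with normal heterogeneity and a parallelized multistart:
    mxl <- logitr(data = yogurt, outcome = "choice", obsID = "obsID",
                  pars = c("price", "feat", "brand"),
                  randPars = c(feat = "n", brand = "n"), numMultiStarts = 10)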
This is a neural network regression model implementation using Keras, consisting of 10 Long Short-Term Memory (LSTM) layers that are fully connected along with the rest of the inputs.
This package provides classes and methods for objects whose indexing naturally starts from zero. Subsetting, indexing, and mathematical operations are defined naturally between lagged objects, and between lagged and base R objects. Recycling is not used, except for singletons. The single-bracket operator doesn't drop dimensions by default.