Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
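A minimal sketch of calling this endpoint from R, using the httr package; the base URL below is a placeholder (an assumption), so substitute the actual host of this service:

    library(httr)

    # Placeholder base URL -- replace with the actual host serving this API.
    base_url <- "https://example.org"

    # Query the package search API described above.
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)

    # Pagination information (number of pages, etc.) is returned in the headers.
    print(headers(resp))

    # The matching packages themselves are in the response body.
    results <- content(resp, as = "parsed")
    str(results)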
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for performing and visualizing Local Fisher Discriminant Analysis (LFDA), Kernel Local Fisher Discriminant Analysis (KLFDA), and Semi-supervised Local Fisher Discriminant Analysis (SELF).
This package provides tools to teach students elementary statistics. The main topics covered are descriptive statistics, probability models (discrete and continuous variables) and statistical inference (confidence intervals and hypothesis tests). One of the main advantages of this package is that it allows the user to read quite a variety of data file types with a single command. Moreover, it includes shortcuts to simple descriptive features not previously available in R, such as a complete frequency table or a histogram with the optimal number of intervals. For model distributions (both discrete and continuous), the package allows the student to easily plot the mass/density function, distribution function and quantile function simply by giving the known population parameters as input arguments. The inference-related tools are basically confidence intervals and hypothesis testing. Having separate commands for these two tools makes it easier for the student to understand what the software is doing, and it also helps the student to know which specific tool to use in each situation. Moreover, the hypothesis-testing commands provide not only the numeric result on the screen but also a very intuitive graph (which includes the statistic's distribution, the observed value of the statistic, the rejection region and the p-value) that is very useful for the student to visualise the process. The regression section currently includes a simple linear model; with a single command the student can obtain the numeric summary as well as the corresponding diagram with the fitted regression model and a legend with basic information (the formula of the fitted model and R-squared).
This package implements the Expectation/Conditional Maximization Either (ECME) algorithm and other rapidly converging algorithms, as well as Bayesian inference, for linear mixed models, as described in Schafer, J.L. (1998) "Some improved procedures for linear mixed models", Dept. of Statistics, The Pennsylvania State University.
When the values of the outcome variable Y are either 0 or 1, the function lsm() calculates the estimate of the log-likelihood in the saturated model. This model is characterized by Llinas (2006, ISSN:2389-8976) in section 2.3 through assumptions 1 and 2. The function LogLik() works (almost perfectly) when the number of independent variables K is large, but for small K it returns incorrect values in some cases. For this reason, when Y is dichotomous and the data are grouped in J populations, it is recommended to use the function lsm(), because it works very well for all K.
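A minimal usage sketch, assuming lsm() accepts a standard R formula/data interface with a 0/1 outcome; the call form below is an assumption, so consult the package documentation:

    # Load the package providing lsm() (package name assumed here).
    library(lsm)

    # Simulated dichotomous outcome with two covariates (illustrative data only).
    set.seed(1)
    d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    d$y <- rbinom(100, 1, plogis(0.5 * d$x1 - 0.3 * d$x2))

    # Assumed formula/data call; lsm() is described above as estimating the
    # log-likelihood of the saturated model for a 0/1 outcome.
    fit <- lsm(y ~ x1 + x2, data = d)
    fit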
R6 classes to model traditional life insurance contracts like annuities, whole life insurance or endowments. Such life insurance contracts provide a guaranteed interest rate and are not directly linked to the performance of a particular investment vehicle, but they typically provide (discretionary) profit participation. This package provides a framework to model such contracts in a very generic (cash-flow-based) way and includes modelling of profit participation schemes, dynamic increases or more general contract layers, as well as contract changes (like sum increases or premium waivers). All relevant quantities like premium decomposition, reserves and benefits over the whole contract period are calculated and can be exported to Excel. Mortality rates are given using the MortalityTables package.
This package performs Levins loop analysis of qualitatively specified complex causal systems. Loop analysis makes qualitative predictions of variable change in a system of causally interdependent variables, where "qualitative" means that direct causal relationships and indirect causal effects are coded as sign only (i.e. increases, decreases, no change, or ambiguous). This implementation includes output support for graphs in .dot file format for use with visualization software such as graphviz (<https://graphviz.org>). LoopAnalyst provides tools for the construction and output of community matrices; the computation and output of community effect matrices, tables of correlations, adjoint, absolute feedback, weighted feedback and weighted prediction matrices, and change in life expectancy matrices; and feedback, path and loop enumeration tools.
Fast binning of multiple variables using parallel processing. A summary of all the binned variables is generated, providing the information value, entropy, an indicator of whether the variable follows a monotonic trend, etc. It supports rebinning of variables to force a monotonic trend as well as manual binning based on pre-specified cuts. The cut points of the bins are based on conditional inference trees as implemented in the partykit package. The conditional inference framework is described in Hothorn T, Hornik K, Zeileis A (2006) <doi:10.1198/106186006X133933>.
Extensive functions for L-moments (LMs) and probability-weighted moments (PWMs), distribution parameter estimation, LMs for distributions, LM ratio diagrams, multivariate L-comoments, and asymmetric (asy) trimmed LMs (TLMs). Maximum likelihood and maximum product spacings estimation are available. Right-tail and left-tail LM censoring by threshold or indicator variable are available. LMs of residual (resid) and reversed (rev) residual life are implemented along with 13 quantile operators for reliability analyses. Exact analytical bootstrap estimates of order statistics, LMs, and LM variance-covariances are available. The Harri-Coble Tau34-squared Normality Test is available. Distributions with L, TL, and added (+) support for right-tail censoring (RC) encompass: Asy Exponential (Exp) Power [L], Asy Triangular [L], Cauchy [TL], Eta-Mu [L], Exp. [L], Gamma [L], Generalized (Gen) Exp Poisson [L], Gen Extreme Value [L], Gen Lambda [L, TL], Gen Logistic [L], Gen Normal [L], Gen Pareto [L+RC, TL], Govindarajulu [L], Gumbel [L], Kappa [L], Kappa-Mu [L], Kumaraswamy [L], Laplace [L], Linear Mean Residual Quantile Function [L], Normal [L], 3p log-Normal [L], Pearson Type III [L], Polynomial Density-Quantile 3 and 4 [L], Rayleigh [L], Rev-Gumbel [L+RC], Rice [L], Singh Maddala [L], Slash [TL], 3p Student t [L], Truncated Exponential [L], Wakeby [L], and Weibull [L].
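A brief sketch of the basic L-moment workflow with this package: compute sample L-moments, fit a distribution by the method of L-moments, and evaluate its quantile function. The GEV choice and the data are illustrative only:

    library(lmomco)

    # Illustrative sample data.
    set.seed(42)
    x <- rnorm(200, mean = 10, sd = 2)

    # Sample L-moments of the data.
    lmr <- lmoms(x)
    print(lmr$ratios)

    # Fit a Generalized Extreme Value distribution by the method of L-moments.
    para <- lmom2par(lmr, type = "gev")

    # Quantiles of the fitted distribution at selected nonexceedance probabilities.
    quagev(c(0.5, 0.9, 0.99), para)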
Computes the Lomb-Scargle periodogram and actogram for evenly or unevenly sampled time series. Includes a randomization procedure to obtain exact p-values. Partially based on the C original by Press et al. (Numerical Recipes) and the Python module Astropy. For more information see Ruf, T. (1999). The Lomb-Scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series. Biological Rhythm Research, 30(2), 178-201.
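A short sketch of computing a periodogram for an unevenly sampled series with the package's lsp() function; the argument names (times, type, ofac, plot) are from memory and should be checked against the documentation:

    library(lomb)

    # Unevenly sampled sinusoidal signal (illustrative data only).
    set.seed(7)
    t <- sort(runif(150, 0, 48))                      # observation times in hours
    y <- sin(2 * pi * t / 24) + rnorm(150, sd = 0.3)  # ~24 h rhythm plus noise

    # Lomb-Scargle periodogram on a period axis; 'ofac' is the assumed name
    # of the oversampling factor argument.
    per <- lsp(y, times = t, type = "period", ofac = 4, plot = TRUE)
    str(per)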
This package creates lowpass filters which are commonly used in ion channel recordings. It supports generation of random numbers that are filtered, i.e. follow a model for ion channel recordings, see <doi:10.1109/TNB.2018.2845126>. Furthermore, time continuous convolutions of piecewise constant signals with the kernel of lowpass filters can be computed.
This package provides tools for estimation and inference of conditional densities, derivatives and functions. This is the companion software for Cattaneo, Chandak, Jansson and Ma (2024) <doi:10.3150/23-BEJ1711>.
Consider the linear regression model Y = Xb + error, where the distribution function of the errors is unknown, but the errors are independent and symmetrically distributed. The package contains a function named LRMDE which takes Y and X as input and returns the minimum distance estimator of the parameter b in the model.
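A minimal sketch following the call form described above (Y and X passed directly to LRMDE); the data are simulated and the package name in library() is an assumption:

    # Load the package providing LRMDE() (package name assumed here).
    library(LRMDE)

    # Simulated design matrix and response with symmetric, non-normal errors.
    set.seed(3)
    n <- 200
    X <- cbind(1, rnorm(n), runif(n))   # intercept plus two covariates
    b <- c(1, 2, -0.5)
    Y <- X %*% b + rt(n, df = 5)        # symmetric errors with heavy tails

    # Minimum distance estimate of b, as described above: LRMDE takes Y and X.
    bhat <- LRMDE(Y, X)
    bhat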
Local Polynomial Regression with Ridging.
Kernel regression smoothing with adaptive local or global plug-in bandwidth selection.
Select statistically similar research groups by backward selection using various robust algorithms, including a heuristic based on linear discriminant analysis, multiple heuristics based on the test statistic, and parallelized exhaustive search.
Estimation of latent class models with individual covariates for capture-recapture data. See Bartolucci, F. and Forcina, A. (2022), Estimating the size of a closed population by modeling latent and observed heterogeneity, Biometrics, 80(2), ujae017.
Fit and simulate latent position and cluster models for statistical networks. See Krivitsky and Handcock (2008) <doi:10.18637/jss.v024.i05> and Krivitsky, Handcock, Raftery, and Hoff (2009) <doi:10.1016/j.socnet.2009.04.001>.
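A small sketch of fitting a two-dimensional Euclidean latent position model with ergmm() from this package (latentnet), using the Sampson monks network commonly used in its examples; availability of that dataset via data(sampson) is an assumption:

    library(latentnet)

    # Sampson's monks network (assumed to be available via data(sampson)).
    data(sampson)

    # Latent position model with a 2-dimensional Euclidean latent space.
    fit <- ergmm(samplike ~ euclidean(d = 2))
    summary(fit)

    # Plot the estimated latent positions.
    plot(fit)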
This package provides a bridge between the loon and ggplot2 packages. It extends the grammar of ggplot2 with clauses that create interactive loon plots. Existing ggplots can be turned into interactive loon plots, and loon plots into static ggplots; the function loon.ggplot() is the bridge from one plot structure to the other.
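A minimal sketch of converting a static ggplot into an interactive loon plot via loon.ggplot(), as described above; the reverse direction (loon to ggplot) is also supported:

    library(ggplot2)
    library(loon.ggplot)

    # An ordinary static ggplot.
    p <- ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
      geom_point()

    # Convert it to an interactive loon plot (opens an interactive window).
    lp <- loon.ggplot(p)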
The landmark approach allows survival predictions to be updated dynamically as new measurements from an individual are recorded. The idea is to set predefined time points, known as "landmark times", and form a model at each landmark time using only the individuals in the risk set. This package allows the longitudinal data to be modelled either using the last observation carried forward or linear mixed effects modelling. There is also the option to model competing risks, either through cause-specific Cox regression or Fine-Gray regression. To find out more about the methods in this package, please see <https://isobelbarrott.github.io/Landmarking/articles/Landmarking>.
This package provides a ggplot2 extension that focusses on expanding the plotter's arsenal of guides. Guides in ggplot2 include axes and legends. legendry offers new axes and annotation options, as well as new legends and colour displays.
Calculates landscape metrics for categorical landscape patterns in a tidy workflow. landscapemetrics reimplements the most common metrics from FRAGSTATS (<https://www.fragstats.org/>) and new ones from the current literature on landscape metrics. This package supports terra SpatRaster objects as input arguments. It further provides utility functions to visualize patches and select metrics, as well as building blocks to develop new metrics.
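A brief sketch of the tidy workflow, assuming the package's bundled example raster (named landscape) and its lsm_* / calculate_lsm() helpers; the function and dataset names are from memory, so check the documentation:

    library(landscapemetrics)

    # Example categorical raster bundled with the package (assumed name
    # 'landscape'; recent versions provide it for use with terra).
    check_landscape(landscape)

    # One landscape-level metric: Shannon's diversity index.
    lsm_l_shdi(landscape)

    # All patch-level metrics at once, returned as a tidy tibble.
    calculate_lsm(landscape, level = "patch")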
This package provides functions and tools for using open GIS and remote sensing command-line interfaces in a reproducible environment.
This package provides a nonparametric method to approximate Laplacian graph spectra of a network with ordered vertices. This provides a computationally efficient algorithm for obtaining an accurate and smooth estimate of the graph Laplacian basis. The approximation results can then be used for tasks like change point detection, k-sample testing, and so on. The primary reference is Mukhopadhyay, S. and Wang, K. (2018, Technical Report).
R functions and data sets for the work Paz, R.F., Balakrishnan, N. and Bazán, J.L. (2018). L-logistic regression models: Prior sensitivity analysis, robustness to outliers and applications. Brazilian Journal of Probability and Statistics, <https://www.imstat.org/wp-content/uploads/2018/05/BJPS397.pdf>.