Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
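For example, here is a minimal sketch of calling the API from R with the 'httr' package; the base URL below is only a placeholder for this site's address, and the pagination header names are whatever the server actually returns:

# Query the package search API (placeholder base URL).
library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

content(resp)   # parsed list of matching packages
headers(resp)   # pagination information lives here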
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
It contains a function designed to perform joint segmentation in the mean of several correlated series. The method is described in the paper by X. Collilieux, E. Lebarbier and S. Robin, "A factor model approach for the joint segmentation with between-series correlation" (2015) <arXiv:1505.05660>.
Extends fitdist() (from 'fitdistrplus'), adding the Anderson-Darling test ad.test() (from 'ADGofTest') and the Kolmogorov-Smirnov test ks.test() inside, trying the distributions from the stats package by default and offering a second function that fits mixed distributions; these distributions are split with unsupervised learning, using the Mclust() function (from 'mclust').
This package provides the function fancycut(), which is like cut() except that you can mix left-open and right-open intervals with point values, intervals that are closed on both ends, and intervals that are open on both ends.
Estimation of mixed models including a subject-specific variance which can be time and covariate dependent. In the joint model framework, the package handles left truncation and allows a flexible dependence structure between the competing events and the longitudinal marker. The estimation is performed under the frequentist framework, using the Marquardt-Levenberg algorithm. (Courcoul, Tzourio, Woodward, Barbieri, Jacqmin-Gadda (2023) <arXiv:2306.16785>).
Estimate a FAVAR model by a Bayesian method, based on Bernanke et al. (2005) <DOI:10.1162/0033553053327452>.
Provide a range of plugins for fiery web servers that handle different aspects of server-side web security. Be aware that security cannot be handled blindly, and even though these plugins will raise the security of your server you should not build critical infrastructure without the aid of a security expert.
This package provides tools to perform fuzzy formal concept analysis, presented in Wille (1982) <doi:10.1007/978-3-642-01815-2_23> and in Ganter and Obiedkov (2016) <doi:10.1007/978-3-662-49291-8>. It provides functions to load and save a formal context, extract its concept lattice and implications. In addition, one can use the implications to compute semantic closures of fuzzy sets and, thus, build recommendation systems.
This package provides algorithms to fit linear regression models under several popular penalization techniques and functional linear regression models based on Majorizing-Minimizing (MM) and Alternating Direction Method of Multipliers (ADMM) techniques. See Boyd et al. (2010) <doi:10.1561/2200000016> for a complete introduction to the method.
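As a flavour of the ADMM approach to the lasso penalty (illustrating the technique, not this package's code), the coefficient update reduces to an element-wise soft-thresholding step:

# Soft-thresholding operator used in the ADMM update for the lasso penalty:
# each value is shrunk towards zero by the threshold t.
soft_threshold <- function(z, t) {
  sign(z) * pmax(abs(z) - t, 0)
}

soft_threshold(c(-3, -0.5, 0.2, 2), t = 1)
# -2  0  0  1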
Edit vectors to fill missing values, based on the vector itself.
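As an illustration of the general idea (a sketch, not this package's API), here is a base R function that fills each missing value with the most recent non-missing value of the same vector:

# Last observation carried forward: each NA is replaced by the previous
# non-missing value; leading NAs are left untouched.
fill_forward <- function(x) {
  idx <- cumsum(!is.na(x))            # how many non-NA values seen so far
  c(NA, x[!is.na(x)])[idx + 1]
}

fill_forward(c(NA, 1, NA, NA, 4, NA))
# NA  1  1  1  4  4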
This package contains the core functions associated with Fast Regularized Canonical Correlation Analysis. Please see the following for details: Raul Cruz-Cano, Mei-Ling Ting Lee, Fast regularized canonical correlation analysis, Computational Statistics & Data Analysis, Volume 70, 2014, Pages 88-100, ISSN 0167-9473 <doi:10.1016/j.csda.2013.09.020>.
Compute labels for a test set according to k-Nearest Neighbors classification. This is a fast way to do k-Nearest Neighbors classification because the distance matrix (between the features of the observations) is an input to the function rather than being calculated inside the function every time.
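A minimal base R sketch of the idea (classification from a precomputed distance matrix, not this package's exact interface):

# Classify each test observation by majority vote among its k nearest
# training observations, given a test-by-train distance matrix.
knn_from_distances <- function(dist_mat, train_labels, k = 3) {
  apply(dist_mat, 1, function(d) {
    nearest <- order(d)[seq_len(k)]                 # k smallest distances
    names(which.max(table(train_labels[nearest])))  # majority label
  })
}

train_x <- matrix(rnorm(20), ncol = 2)
test_x  <- matrix(rnorm(10), ncol = 2)
train_y <- rep(c("a", "b"), each = 5)

# The distance matrix is computed once, outside the classifier.
d <- as.matrix(dist(rbind(test_x, train_x)))[1:5, 6:15]
knn_from_distances(d, train_y, k = 3)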
Project Customer Retention based on Beta Geometric, Beta Discrete Weibull and Latent Class Discrete Weibull Models. This package is based on Fader and Hardie (2007) <doi:10.1002/dir.20074> and Fader, Hardie et al. (2018) <doi:10.1016/j.intmar.2018.01.002>.
This package provides a small utility which wraps Rscript and provides access to all R functions from the shell.
An implementation of various learning algorithms based on fuzzy rule-based systems (FRBSs) for dealing with classification and regression tasks. Moreover, it allows constructing an FRBS model defined by human experts. FRBSs are based on the concept of fuzzy sets, proposed by Zadeh in 1965, which aims at representing the reasoning of human experts in a set of IF-THEN rules, to handle real-life problems in, e.g., control, prediction and inference, data mining, bioinformatics data processing, and robotics. FRBSs are also known as fuzzy inference systems and fuzzy models. During the modeling of an FRBS, there are two important steps that need to be conducted: structure identification and parameter estimation. Nowadays, there exists a wide variety of algorithms to generate fuzzy IF-THEN rules automatically from numerical data, covering both steps. Approaches that have been used in the past are, e.g., heuristic procedures, neuro-fuzzy techniques, clustering methods, genetic algorithms, least-squares methods, etc. Furthermore, in this version we provide a universal framework named 'frbsPMML', which is adopted from the Predictive Model Markup Language (PMML), for representing FRBS models. PMML is an XML-based language that provides a standard for describing models produced by data mining and machine learning algorithms. Therefore, an FRBS model can be exported to and imported from 'frbsPMML'. Finally, this package aims to implement the most widely used standard procedures, thus offering a standard package for FRBS modeling to the R community.
This package provides a comprehensive framework in R for modeling and forecasting economic scenarios based on a multi-level dynamic factor model. The package enables users to: (i) extract global and group-specific factors using a flexible multi-level factor structure; (ii) compute asymptotically valid confidence regions for the estimated factors, accounting for uncertainty in the factor loadings; (iii) obtain estimates of the parameters of the factor-augmented quantile regressions together with their standard deviations; (iv) recover full predictive conditional densities from the estimated quantiles; (v) obtain risk measures based on extreme quantiles of the conditional densities; and (vi) estimate the conditional density and the corresponding extreme quantiles when the factors are stressed.
The aim is to take data.frame inputs and utilise methods, such as recursive feature engineering, to enable features to be removed. What this package does differently from the others is that it gives you the choice to remove the variables manually, or to automate the process. Feature selection is a concept in machine learning and statistical pipelines whereby unimportant or less predictive variables are eliminated from the analysis; see Boughaci (2018) <doi:10.1007/s40595-018-0107-y>.
This package provides a collection of functions which fit functional neural network models. In other words, this package will allow users to build deep learning models that have either functional or scalar responses paired with functional and scalar covariates. We implement the theoretical discussion found in Thind, Multani and Cao (2020) <arXiv:2006.09590> through the help of a main fitting and prediction function as well as a number of helper functions to assist with cross-validation, tuning, and the display of estimated functional weights.
Fit Elastic Net, Lasso, and Ridge regression and do cross-validation in a fast way. We build the algorithm on Least Angle Regression by Bradley Efron, Trevor Hastie, Iain Johnstone et al. (2004) <doi:10.1214/009053604000000067> and algorithms such as Givens rotation and forward/back substitution. In this way, many of the matrices to be computed are retained as triangular matrices, which eventually speeds up the computation. The fitting algorithm for Elastic Net is written in C++ using the Armadillo linear algebra library.
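As a flavour of why triangular matrices help (a sketch of the technique, not this package's C++ code): once a system has been reduced to triangular form it can be solved by back substitution alone, which base R exposes as backsolve():

# Solve R %*% x = b by back substitution, where R is upper triangular;
# no full matrix inversion is needed, which is what speeds up the fit.
R <- matrix(c(2, 0, 0,
              1, 3, 0,
              4, 5, 6), nrow = 3)   # filled column-wise: upper triangular
b <- c(1, 2, 3)

x <- backsolve(R, b)
R %*% x                             # recovers b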
The penalized and non-penalized Minorize-Maximization (MM) method for frailty models to fit clustered data, multi-event data and recurrent event data. Least absolute shrinkage and selection operator (LASSO), minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD) penalty functions are implemented. All the methods are computationally efficient. These general methods are based on the following papers: Huang, Xu and Zhou (2022) <doi:10.3390/math10040538>; Huang, Xu and Zhou (2023) <doi:10.1177/09622802221133554>.
Fit occupancy models in Stan via 'brms'. The full variety of 'brms' formula-based effects structures are available to use in multiple classes of occupancy model, including single-season models, models with data augmentation for never-observed species, dynamic (multiseason) models with explicit colonization and extinction processes, and dynamic models with autologistic occupancy dynamics. Formulas can be specified for all relevant distributional terms, including detection and one or more of occupancy, colonization, extinction, and autologistic depending on the model type. Several important forms of model post-processing are provided. References: Bürkner (2017) <doi:10.18637/jss.v080.i01>; Carpenter et al. (2017) <doi:10.18637/jss.v076.i01>; Socolar & Mills (2023) <doi:10.1101/2023.10.26.564080>.
This is an extremely fast implementation of a Naive Bayes classifier. This package is currently the only package that supports a Bernoulli distribution, a Multinomial distribution, and a Gaussian distribution, making it suitable for binary features, frequency counts, and numerical features alike. Another feature is the support of a mix of different event models. Only numerical variables are allowed; however, categorical variables can be transformed into dummies and used with the Bernoulli distribution. The implementation is largely based on the paper "A comparison of event models for Naive Bayes anti-spam e-mail filtering" written by K.M. Schneider (2003) <doi:10.3115/1067807.1067848>. Any issues can be submitted to: <https://github.com/mskogholt/fastNaiveBayes/issues>.
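A minimal sketch of the Bernoulli event model in base R (to illustrate the idea, not this package's interface): each feature is 0/1, and a class is scored by its prior times the product of per-feature Bernoulli likelihoods, computed on the log scale.

# Bernoulli naive Bayes from scratch: x is a 0/1 feature matrix, y a factor.
# Laplace smoothing keeps estimated probabilities away from 0 and 1.
train_bernoulli_nb <- function(x, y, laplace = 1) {
  classes <- levels(y)
  priors  <- table(y) / length(y)
  probs <- t(sapply(classes, function(cl) {        # P(feature = 1 | class)
    (colSums(x[y == cl, , drop = FALSE]) + laplace) / (sum(y == cl) + 2 * laplace)
  }))
  list(classes = classes, priors = priors, probs = probs)
}

predict_bernoulli_nb <- function(model, newx) {
  scores <- sapply(model$classes, function(cl) {
    p <- model$probs[cl, ]
    log(model$priors[[cl]]) +
      newx %*% log(p) + (1 - newx) %*% log(1 - p)  # log-likelihood per row
  })
  model$classes[max.col(scores)]
}

x <- matrix(rbinom(40, 1, 0.5), ncol = 4)
y <- factor(rep(c("spam", "ham"), each = 5))
model <- train_bernoulli_nb(x, y)
predict_bernoulli_nb(model, x)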
In Australia, a financial year (or fiscal year) is the period from 1 July to 30 June of the following calendar year. As such, many databases need to represent and validate financial years efficiently. While the use of integer years with a convention that they represent the year ending is common, it may lead to ambiguity with calendar years. On the other hand, string representations may be too inefficient and do not easily admit arithmetic operations. This package tries to make validation of financial years quicker while retaining clarity.
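A minimal base R sketch of that convention (not this package's API): any date from 1 July onwards belongs to the financial year ending on 30 June of the following calendar year.

# Map a date to the Australian financial year it falls in, using the
# "year ending" convention (FY 2021 runs from 1 July 2020 to 30 June 2021).
financial_year <- function(date) {
  year  <- as.integer(format(date, "%Y"))
  month <- as.integer(format(date, "%m"))
  ifelse(month >= 7, year + 1, year)
}

financial_year(as.Date(c("2020-06-30", "2020-07-01")))
# 2020 2021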
This package provides a collection of toys to do things like generate Collatz and other interesting sequences, calculate a fraction which is a close approximation to some value (e.g., 22/7 or 355/113 for pi), and so on.
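For instance, here is a minimal Collatz generator in base R (a sketch of one of the ideas, not one of this package's functions):

# Collatz sequence starting from n: halve even numbers, map odd numbers to
# 3n + 1, and stop once the sequence reaches 1.
collatz <- function(n) {
  out <- n
  while (n != 1) {
    n <- if (n %% 2 == 0) n / 2 else 3 * n + 1
    out <- c(out, n)
  }
  out
}

collatz(6)
# 6 3 10 5 16 8 4 2 1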
Computes functional rarity indices as proposed by Violle et al. (2017) <doi:10.1016/j.tree.2017.02.002>. Various indices can be computed using both regional and local information. Functional Rarity combines both the functional aspect of rarity as well as the extent aspect of rarity. funrar is presented in Grenié et al. (2017) <doi:10.1111/ddi.12629>.