Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
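For example, a minimal sketch of querying the endpoint from R with the httr and jsonlite packages. The host name is a placeholder, and the assumption that the body is JSON is mine; the pagination headers are simply printed for inspection:

library(httr)
library(jsonlite)

base_url <- "https://example.org"   # placeholder; use the address where this service is hosted

resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))

print(headers(resp))                # pagination information is returned in the response headers

results <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))  # assuming a JSON body
str(results)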
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to build, evaluate, and visualize insurance rating models. It simplifies the process of modeling premiums and makes it possible to analyze insurance risk factors effectively. The package employs a data-driven strategy for constructing insurance tariff classes, drawing on the work of Antonio and Valdez (2012) <doi:10.1007/s10182-011-0152-7>.
Applying the family of the Bayesian Expectation-Maximization-Maximization (BEMM) algorithm to estimate: (1) Three parameter logistic (3PL) model proposed by Birnbaum (1968, ISBN:9780201043105); (2) four parameter logistic (4PL) model proposed by Barton & Lord (1981) <doi:10.1002/j.2333-8504.1981.tb01255.x>; (3) one parameter logistic guessing (1PLG) and (4) one parameter logistic ability-based guessing (1PLAG) models proposed by San Martín et al. (2006) <doi:10.1177/0146621605282773>. The BEMM family includes (1) the BEMM algorithm for the 3PL model proposed by Guo & Zheng (2019) <doi:10.3389/fpsyg.2019.01175>; (2) the BEMM algorithm for the 1PLG model and (3) the BEMM algorithm for the 1PLAG model proposed by Guo, Wu, Zheng, & Chen (2021) <doi:10.1177/0146621621990761>; (4) the BEMM algorithm for the 4PL model proposed by Zheng, Guo, & Kern (2021) <doi:10.1177/21582440211052556>; and (5) their maximum likelihood estimation versions proposed by Zheng, Meng, Guo, & Liu (2018) <doi:10.3389/fpsyg.2017.02302>. Thus, both Bayesian modal estimates and maximum likelihood estimates are available.
This package implements approximate Bayesian inference for Structural Equation Models (SEM) using a custom adaptation of the Integrated Nested Laplace Approximation as described in Rue et al. (2009) <doi:10.1111/j.1467-9868.2008.00700.x>. Provides a computationally efficient alternative to Markov Chain Monte Carlo (MCMC) for Bayesian estimation, allowing users to fit latent variable models using the lavaan syntax.
Models, analyzes, and forecasts financial intraday signals. This package currently supports a univariate state-space model for intraday trading volume proposed by Chen (2016) <doi:10.2139/ssrn.3101695>.
This package provides facilities for general-to-specific model selection for exogenous regressors in 2SLS models. Furthermore, indicator saturation methods can be used to detect outliers and structural breaks in the sample.
This package provides a user-friendly toolbox for the statistical analysis of interval-valued responses in questionnaires measuring intrinsically imprecise human attributes or features (attitudes, perceptions, opinions, feelings, etc.). In particular, this package provides S4 classes, methods, and functions in order to compute basic arithmetic and statistical operations with interval-valued data; prepare customized plots; associate each interval-valued response with its equivalent Likert-type and visual analogue scale answers through the minimum theta-distance and the mid-point criteria; analyze the reliability of respondents' answers from the internal consistency point of view by means of Cronbach's alpha coefficient; and simulate interval-valued responses in this type of questionnaire. The package also incorporates some real-life data that can be used to illustrate its use with several non-trivial reproducible examples. The methodology used in this package is based on many theoretical and applied publications from the SMIRE+CoDiRE (Statistical Methods with Imprecise Random Elements and Comparison of Distributions of Random Elements) Research Group (<https://bellman.ciencias.uniovi.es/smire+codire/>) of the University of Oviedo (Spain).
We provide the collection of datasets used in the book 'An Introduction to Statistical Learning with Applications in R, Second Edition'. These include many datasets used in the first edition (some with minor changes), and some new datasets.
Implementing interventional effects for mediation analysis with up to 3 mediators. The methods used are based on VanderWeele, Vansteelandt and Robins (2014) <doi:10.1097/ede.0000000000000034>, Vansteelandt and Daniel (2017) <doi:10.1097/ede.0000000000000596> and Chan and Leung (2020; unpublished manuscript, available on request from the author of this package). Linear regression, logistic regression and Poisson regression are used for continuous, binary and count mediator/outcome variables respectively.
Simulate an inhomogeneous self-exciting process (IHSEP), or Hawkes process, with a given (possibly time-varying) baseline intensity and an excitation function. Calculate the likelihood of an IHSEP with given baseline intensity and excitation functions for an (increasing) sequence of event times. Calculate the point process residuals (integral transforms of the original event times). Calculate the mean intensity process.
The core of the package is cvr2.ipflasso(), an extension of glmnet to be used when the (large) set of available predictors is partitioned into several modalities which potentially differ with respect to their information content in terms of prediction. For example, in biomedical applications patient outcome such as survival time or response to therapy may have to be predicted based on, say, mRNA data, miRNA data, methylation data, CNV data, clinical data, etc. The clinical predictors are on average often much more important for outcome prediction than the mRNA data. The ipflasso method takes this problem into account by using different penalty parameters for predictors from different modalities. The ratio between the different penalty parameters can be chosen from a set of optional candidates by cross-validation or alternatively generated from the input data.
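As a rough illustration, a hedged sketch of a cvr2.ipflasso() call on simulated data with two modalities (a small clinical block and a larger omics block); the argument names blocks, pflist, nfolds, and ncv reflect my reading of the interface and should be checked against the package documentation:

library(ipflasso)

# NOTE: argument names below are assumptions; verify against the ipflasso docs.
set.seed(1)
n <- 100
X_clin <- matrix(rnorm(n * 5),  ncol = 5)    # e.g. clinical predictors
X_omic <- matrix(rnorm(n * 95), ncol = 95)   # e.g. mRNA predictors
X <- cbind(X_clin, X_omic)
Y <- rbinom(n, 1, plogis(X_clin %*% rep(0.8, 5)))   # outcome driven mainly by the clinical block

fit <- cvr2.ipflasso(X = X, Y = Y, family = "binomial", type.measure = "class",
                     blocks = list(clinical = 1:5, mrna = 6:100),
                     pflist = list(c(1, 1), c(1, 2), c(1, 4)),   # candidate penalty ratios
                     nfolds = 5, ncv = 3)

Each vector in pflist is one candidate assignment of relative penalties to the two blocks, and the ratio with the best cross-validated performance is retained.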
Versatile tools and data for graph matching analysis with various forms of prior information, supporting work with igraph objects, matrix objects, or lists of either.
An implementation of the initial guided analytics for parameter testing and controlband extraction framework. Functions are available for continuous and categorical target variables as well as for generating standardized reports of the conducted analysis. See <https://github.com/stefan-stein/igate> for more information on the technology.
The correction is achieved under the assumption that non-migrating cells of the assay approximately form a quadratic flow profile due to frictional effects; compare the law of Hagen-Poiseuille for flow in a tube. The script fits a conical plane to give xyz-coordinates of the cells. It outputs the number of migrated cells and the new corrected coordinates.
Regression models for interval censored data. Currently supports Cox-PH, proportional odds, and accelerated failure time models. Allows for semi- and fully parametric models (parametric only for accelerated failure time models) and Bayesian parametric models. Includes functions for easy visual diagnostics of model fits and imputation of censored data.
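For illustration, a minimal sketch of fitting a semi-parametric Cox-PH model to simulated interval-censored data, assuming this entry describes the icenReg package and its ic_sp() interface (the package name, function name, and formula syntax are my assumptions):

library(icenReg)

# NOTE: function and formula syntax assumed; check the package documentation.
set.seed(1)
n <- 60
x <- rbinom(n, 1, 0.5)
t <- rexp(n, rate = exp(0.5 * x))                               # latent event times
dat <- data.frame(left = floor(t), right = ceiling(t), x = x)   # observed only as intervals

fit <- ic_sp(cbind(left, right) ~ x, data = dat, model = "ph")  # semi-parametric Cox-PH
summary(fit)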
This package implements a Shiny Item Analysis module and functions for computing the false positive rate and other binary classification metrics from inter-rater reliability, based on Bartoš & Martinková (2024) <doi:10.1111/bmsp.12343>.
Calculates event rates and compares means and variances of groups of interval data corrected for missed arrival observations.
This package provides a collection of Item Response Theory (IRT) and Computerized Adaptive Testing (CAT) functions that are used in psychometrics.
Fast and multi-threaded implementation of isolation forest (Liu, Ting, Zhou (2008) <doi:10.1109/ICDM.2008.17>), extended isolation forest (Hariri, Kind, Brunner (2018) <doi:10.48550/arXiv.1811.02141>), SCiForest (Liu, Ting, Zhou (2010) <doi:10.1007/978-3-642-15883-4_18>), fair-cut forest (Cortes (2021) <doi:10.48550/arXiv.2110.13402>), robust random-cut forest (Guha, Mishra, Roy, Schrijvers (2016) <http://proceedings.mlr.press/v48/guha16.html>), and customizable variations of them, for isolation-based outlier detection, clustered outlier detection, distance or similarity approximation (Cortes (2019) <doi:10.48550/arXiv.1910.12362>), isolation kernel calculation (Ting, Zhu, Zhou (2018) <doi:10.1145/3219819.3219990>), and imputation of missing values (Cortes (2019) <doi:10.48550/arXiv.1911.06646>), based on random or guided decision tree splitting, and providing different metrics for scoring anomalies based on isolation depth or density (Cortes (2021) <doi:10.48550/arXiv.2111.11639>). Provides simple heuristics for fitting the model to categorical columns and handling missing data, and offers options for varying between random and guided splits, and for using different splitting criteria.
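As an illustration, a minimal sketch of isolation-based outlier scoring, assuming this entry describes the isotree package (the package name and argument names are my assumptions; see the package documentation for the full interface):

library(isotree)

# NOTE: package and argument names are assumptions.
set.seed(1)
X <- rbind(matrix(rnorm(200 * 2), ncol = 2),           # bulk of the data
           matrix(rnorm(5 * 2, mean = 6), ncol = 2))   # a few planted outliers

model  <- isolation.forest(X, ntrees = 100)
scores <- predict(model, X)                 # higher scores indicate more anomalous rows
head(order(scores, decreasing = TRUE))      # indices of the most anomalous observations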
Pull data from the Impect Customer API <https://glossary.impect.com/api-design>. The package can retrieve data such as events or match sums.
An implementation of Otsu's Image Segmentation Method described in the paper "A C++ Implementation of Otsu's Image Segmentation Method". The algorithm is explained at <doi:10.5201/ipol.2016.158>.
This package provides API access to <http://imdbapi.net>, which maintains metadata about movies, games, and television shows through a public API.
This package implements an S7 class for estimates based on influence functions, with forward mode automatic differentiation defined for standard arithmetic operations.
This package provides a set of functions to estimate interactions flexibly in the face of possibly many controls. Implements the procedures described in Blackwell and Olson (2022) <doi:10.1017/pan.2021.19>.
This package provides user-friendly functions for programmatic access to macroeconomic data from the International Monetary Fund's SDMX 3.0 IMF Data API <https://data.imf.org/en/Resource-Pages/IMF-API>.