Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
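For example, the same query can be issued from R with the httr package. A minimal sketch: the base URL below is a placeholder, and the exact pagination header names are not specified here, so inspect headers() yourself.

    library(httr)

    resp <- GET("https://example.org/api/packages",   # placeholder base URL
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)
    results <- content(resp, as = "parsed")   # parsed JSON body
    headers(resp)                             # pagination info lives here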
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a set of state-of-the-art probabilistic modeling approaches to derive estimates of individual customer lifetime values (CLV). Commonly, probabilistic approaches focus on modelling three processes: individuals' attrition, transaction, and spending processes. Latent customer attrition models, also known as "buy-'til-you-die" models, model the attrition as well as the transaction process. They are used to make inferences and predictions about transactional patterns of individual customers, such as their future purchase behavior. Moreover, these models have also been used to predict individuals' long-term engagement in activities such as playing an online game or posting to a social media platform. The spending process is usually modelled by a separate probabilistic model. Combining these results yields lifetime value estimates for individual customers. This package includes fast and accurate implementations of various probabilistic models for non-contractual settings (e.g., grocery purchases or hotel visits). All implementations support time-invariant covariates, which can be used to control for, e.g., socio-demographics. Where such an extension has been proposed in the literature, we further provide the possibility to include time-varying covariates to control for, e.g., seasonal patterns. Currently, the package includes the following latent attrition models for the attrition and transaction processes: [1] the Pareto/NBD model (Pareto/Negative-Binomial-Distribution), [2] the Extended Pareto/NBD model (Pareto/Negative-Binomial-Distribution with time-varying covariates), [3] the BG/NBD model (Beta-Gamma/Negative-Binomial-Distribution), and [4] the GGom/NBD model (Gamma-Gompertz/Negative-Binomial-Distribution). Further, we provide an implementation of the Gamma/Gamma model to model the spending process of individuals.
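A rough sketch of the fitting workflow, assuming this description belongs to the CLVTools package and its documented clvdata()/pnbd() interface; the transaction data and column names below are invented for illustration.

    library(CLVTools)
    # 'transactions' is a hypothetical data.frame of customer purchase records
    clv.data <- clvdata(transactions, date.format = "ymd", time.unit = "weeks",
                        name.id = "customer", name.date = "date", name.price = "price")
    est.pnbd <- pnbd(clv.data)   # [1] the Pareto/NBD latent attrition model
    predict(est.pnbd)            # per-customer predictions of future behavior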
This package provides functions to check whether a vector of p-values respects the assumptions of FDR (false discovery rate) control procedures and to compute adjusted p-values.
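For orientation only (this is not the package's own API): base R's stats::p.adjust already computes FDR-adjusted p-values via the Benjamini-Hochberg procedure.

    p <- c(0.001, 0.012, 0.030, 0.210, 0.640)
    p.adjust(p, method = "BH")   # Benjamini-Hochberg adjusted p-values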
Stacking rings are tools used to stack pottery in a kiln. A relatively large group of stacking rings was found in the area of the sanctuary of Dionysos in Miletus in the 1970s. Measurements and additional information are gathered in this package and made available for use by other researchers. The data, along with their archaeological context and analysis, have been published in "Archäologischer Anzeiger" (2020/1, <doi:10.34780/aa.v0i1.1014>).
Exploring fitted models by interactively taking 2-D and 3-D sections in data space.
An open-source insurance claim simulation engine sponsored by the Casualty Actuarial Society. It generates individual insurance claims, including open claims, reopened claims, incurred-but-not-reported claims, and future claims. It also includes claim data fitting functions to help set simulation assumptions, and is useful for claim-level reserving analysis. Parodi (2013) <https://www.actuaries.org.uk/documents/triangle-free-reserving-non-traditional-framework-estimating-reserves-and-reserve-uncertainty>.
Model-based clustering of mixed data (i.e. data which consist of continuous, binary, ordinal or nominal variables) using a parsimonious mixture of latent Gaussian variable models.
Modular and unified R6-based interface for counterfactual explanation methods. The following methods are currently implemented: Brughmans et al. (2022) <doi:10.48550/arXiv.2104.07411>, Dandl et al. (2020) <doi:10.1007/978-3-030-58112-1_31>, and Wexler et al. (2019) <doi:10.1109/TVCG.2019.2934619>. Optional extensions allow these methods to be applied to a variety of models and use cases. Once generated, the counterfactuals can be analyzed and visualized with the provided functionality.
Computing elliptical joint confidence regions at a specified confidence level. It provides the flexibility to estimate either classical or robust confidence regions, which can be visualized in 2D or 3D plots. The classical approach assumes normality and uses the mean and covariance matrix to define the confidence regions. Alternatively, the robustified version employs estimators like minimum covariance determinant (MCD) and M-estimator, making them less sensitive to outliers and departures from normality. Furthermore, the functions allow users to group the dataset based on categorical variables and estimate separate confidence regions for each group. This capability is particularly useful for exploring potential differences or similarities across subgroups within a dataset. Varmuza and Filzmoser (2009, ISBN:978-1-4200-5947-2). Johnson and Wichern (2007, ISBN:0-13-187715-1). Raymaekers and Rousseeuw (2019) <doi:10.1080/00401706.2019.1677270>.
This package implements a basis function or functional data analysis framework for several techniques of multivariate analysis in a continuous-time setting. Specifically, we introduce continuous-time analogues of several classical techniques of multivariate analysis, such as principal component analysis, canonical correlation analysis, Fisher linear discriminant analysis, K-means clustering, and so on. Details are in Biplab Paul, Philip T. Reiss, Erjia Cui and Noemi Foa (2025), "Continuous-time multivariate analysis" <doi:10.1080/10618600.2024.2374570>.
Utility functions that allow checking the basic validity of a function argument or any other value, including generating an error and assigning a default in a single line of code. The main purpose of the package is to provide simple and easily readable argument checking to improve code robustness.
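The check-or-default pattern the package streamlines looks roughly like this in plain base R (the general idiom, not this package's API):

    f <- function(x = NULL) {
      # error or default in a single line, which the package condenses further
      x <- if (is.null(x)) 10 else if (!is.numeric(x)) stop("x must be numeric") else x
      x * 2
    }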
Helps automate Quarto website creation for small academic groups. Builds a database-like structure of people, projects, and publications, linking them together with a string-based ID system. It then provides functions to automate production of clean markdown for these structures, with built-in formatting using CSS flexbox.
Implements various chromosomal instability metrics. CINmetrics (Chromosomal INstability metrics) provides functions to calculate these metrics on masked Copy Number Variation (CNV) data at the individual sample level. The chromosomal instability metrics have been implemented as described in the following studies: Baumbusch LO et al. 2013 <doi:10.1371/journal.pone.0054356>, Davidson JM et al. 2014 <doi:10.1371/journal.pone.0079079>, and Chin SF et al. 2007 <doi:10.1186/gb-2007-8-10-r215>.
Allows users to seamlessly query several CDC PLACES APIs (<https://data.cdc.gov/browse?q=PLACES%20&sortBy=relevance>) by geography, state, measure, and release year. This package also contains a function to explore the available measures for each release year.
Deriving skill structures from skill assignment data for courses (sets of learning objects).
Procedures for making continuous cartograms. Procedures available are: flow-based cartogram (Gastner & Newman (2004) <doi:10.1073/pnas.0400280101>), fast flow-based cartogram (Gastner, Seguy & More (2018) <doi:10.1073/pnas.1712674115>), and rubber-band-based cartogram (Dougenik et al. (1985) <doi:10.1111/j.0033-0124.1985.00075.x>).
Software that provides numerous functions for detecting and removing group-level effects from high-dimensional scientific data which, when combined with additional assumptions, allow for causal conclusions, as described in our manuscripts Bridgeford et al. (2024) <doi:10.1101/2021.09.03.458920> and Bridgeford et al. (2023) <doi:10.48550/arXiv.2307.13868>. Also provides a number of useful utilities for generating simulations and balancing covariates across multiple groups/batches of data via matching and propensity trimming for more than two groups.
Allows one to assess the stability of individual objects, clusters and whole clustering solutions based on repeated runs of the K-means and K-medoids partitioning algorithms.
This package performs Bayesian posterior inference for deep Gaussian processes following Sauer, Gramacy, and Higdon (2023, <doi:10.48550/arXiv.2012.08015>). See Sauer (2023, <http://hdl.handle.net/10919/114845>) for comprehensive methodological details and <https://bitbucket.org/gramacylab/deepgp-ex/> for a variety of coding examples. Models are trained through MCMC including elliptical slice sampling of latent Gaussian layers and Metropolis-Hastings sampling of kernel hyperparameters. Vecchia approximation for faster computation is implemented following Sauer, Cooper, and Gramacy (2023, <doi:10.48550/arXiv.2204.02904>). Optional monotonic warpings are implemented following Barnett et al. (2024, <doi:10.48550/arXiv.2408.01540>). Downstream tasks include sequential design through active learning Cohn/integrated mean squared error (ALC/IMSE; Sauer, Gramacy, and Higdon, 2023), optimization through expected improvement (EI; Gramacy, Sauer, and Wycoff, 2022 <doi:10.48550/arXiv.2112.07457>), and contour location through entropy (Booth, Renganathan, and Gramacy, 2024 <doi:10.48550/arXiv.2308.04420>). Models extend up to three layers deep; a one-layer model is equivalent to typical Gaussian process regression. Incorporates OpenMP and SNOW parallelization and utilizes C/C++ under the hood.
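A minimal fitting sketch on toy 1-D data, assuming this description belongs to the deepgp package and its documented fit/trim/predict MCMC workflow:

    library(deepgp)
    x  <- matrix(seq(0, 1, length.out = 30), ncol = 1)
    y  <- sin(8 * x[, 1]) + rnorm(30, sd = 0.05)
    xx <- matrix(seq(0, 1, length.out = 100), ncol = 1)
    fit <- fit_two_layer(x, y, nmcmc = 2000)   # two-layer deep GP via MCMC
    fit <- trim(fit, 1000, 2)                  # drop burn-in, thin the chains
    fit <- predict(fit, xx)                    # posterior moments at new inputs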
Draw samples from the direct sampling spatial prior model as described in G. White, D. Sun, P. Speckman (2019) <arXiv:1906.05575>. The basic model assumes a Gaussian likelihood and derives a spatial prior based on thin-plate splines.
This package contains the support functions for the Time Series Analysis book. We present a function to calculate MSE and MAE for inputs of actual and forecast values. We also have the code for disaggregation as found in Wei and Stram (1990, <doi:10.1111/j.2517-6161.1990.tb01799.x>), and Hodgess and Wei (1996, "Temporal Disaggregation of Time Series").
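For reference, the two error measures themselves are one-liners in base R; the package's function bundles them, and these are just their textbook definitions.

    mse <- function(actual, forecast) mean((actual - forecast)^2)   # mean squared error
    mae <- function(actual, forecast) mean(abs(actual - forecast))  # mean absolute error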
This package creates a data dictionary from any dataframe or tibble in your R environment. You can opt to add variable labels. You can write the object directly to Excel.
This package contains functions to perform copula estimation by the non-parametric Bayesian method, Dirichlet-based Polya Tree. See Ning (2018) <doi:10.1080/00949655.2017.1421194>.
Includes various functions for playing drum sounds. beat() plays a drum sound from one of the six included drum kits. tempo() sets spacing between calls to beat() in bpm. Together the two functions can be used to create many different drum patterns.
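A hedged usage sketch: beat() and tempo() are named above, but the package name and beat()'s arguments are assumptions for illustration.

    library(drumr)   # assumed package name
    tempo(120)       # space subsequent beat() calls at 120 bpm
    for (i in 1:4) {
      beat("kick")   # assumed: choose a drum sound by name
      beat("snare")
    }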
Apply the Deductive Rational Method to a monthly series of flow or precipitation data to fill in missing data. The method is described in Campos, D.F. (1984, ISBN:9686194444).