Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
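For example, a minimal sketch of calling this endpoint with Python's standard library (the base URL is a placeholder for this site's address, and the response body is assumed to be JSON):

import json
import urllib.parse
import urllib.request

base = "https://example.org"  # placeholder: replace with this site's address
params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})

with urllib.request.urlopen(f"{base}/api/packages?{params}") as resp:
    for name, value in resp.getheaders():  # pagination details arrive in the headers
        print(f"{name}: {value}")
    results = json.load(resp)  # assumed JSON body

print(results)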
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides access to the PlanScore Application Programming Interface (<https://github.com/PlanScore/PlanScore/blob/main/API.md>) for scoring redistricting plans. Allows for upload of plans from block assignment files and shape files. For shapes in memory, such as from sf or redist, it processes them to save and upload. Includes tools for tidying responses and saving output from the website.
Chromatin immunoprecipitation DNA sequencing results in genomic tracks that show enriched regions or peaks where proteins are bound. This package implements fast C code that computes the true and false positives with respect to a database of annotated region labels.
Provides easy methods to translate pieces of text. Functions send requests to online translation services.
Calculates an acceptance sampling plan (sample size and acceptance number) based on MIL STD 105E, Dodge Romig, and MIL STD 414 tables and procedures. The arguments for each function are related to lot size, inspection level, and quality level. The operating characteristic (OC) curve of the specific plan is calculated from the binomial distribution.
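As an illustration of that binomial calculation (a minimal Python sketch, not the package's own functions; the plan n = 80, c = 2 and the 1% defect rate are made up for the example):

from scipy.stats import binom

def oc_point(n, c, p):
    # P(accept) = P(defects in a sample of n <= c) at lot fraction defective p
    return binom.cdf(c, n, p)

print(oc_point(80, 2, 0.01))  # about 0.95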
This package provides a collection of scripts and data files for the statistics text: "Process Improvement using Data" <https://learnche.org/pid/> and the online course "Experimentation for Improvement" found on Coursera. The package contains code for designed experiments, data sets and other convenience functions used in the book.
Computes probability-scale residuals and residual correlations for continuous, ordinal, binary, count, and time-to-event data; see Qi Liu, Bryan Shepherd, Chun Li (2020) <doi:10.18637/jss.v094.i12>.
This package provides functions and datasets to support Valliant, Dever, and Kreuter (2018), <doi:10.1007/978-3-319-93632-1>, "Practical Tools for Designing and Weighting Survey Samples". Contains functions for sample size calculation for survey samples using stratified or clustered one-, two-, and three-stage sample designs, and single-stage audit sample designs. Functions are included that will group geographic units accounting for distances apart and measures of size. Other functions compute variance components for multistage designs, sample sizes in two-phase designs, and a stopping rule for ending data collection. A number of example data sets are included.
Automated identification of printed array positions from high-content microscopy images, and export of those positions as individual images written to output as multi-layered TIFF files.
Simulates the dynamics of wolf populations using a specific Individual-Based Model (IBM) compiled in C; see Chapron et al. (2016) <doi:10.1016/j.ecolmodel.2016.08.012>.
This package provides a Shiny application for calculating phytosanitary inspection plans based on risks. It generates a diagram of pallets in a lot, highlights the units to be sampled, and documents them based on the selected sampling method (simple random or systematic sampling).
Designed for prediction error estimation through resampling techniques, possibly accelerated by parallel execution on a compute cluster. Newly developed model fitting routines can be easily incorporated. Methods used in the package are detailed in Porzelius Ch., Binder H. and Schumacher M. (2009) <doi:10.1093/bioinformatics/btp062> and were used, for instance, in Porzelius Ch., Schumacher M. and Binder H. (2011) <doi:10.1007/s00180-011-0236-6>.
Allows running pylint on Python files with an R command or an RStudio addin. The report appears in the RStudio viewer pane as a formatted HTML file.
The goal of this package is to cover the most common steps in probability of default (PD) rating model development and validation. The main procedures available are those that refer to univariate, bivariate, and multivariate analysis, calibration, and validation. Along with the accompanying monobin and monobinShiny packages, PDtoolkit provides functions suitable for different data transformation and modeling tasks such as: imputations, monotonic binning of numeric risk factors, binning of categorical risk factors, weights of evidence (WoE) and information value (IV) calculations, WoE coding (replacement of risk factor modalities with WoE values), risk factor clustering, area under the curve (AUC) calculation, and others. Additionally, the package provides a set of validation functions for testing the homogeneity, heterogeneity, and discriminatory and predictive power of the model.
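For reference, a small Python sketch (not the package's API) of one common convention for the WoE and IV calculations mentioned above, for a single binned risk factor with made-up good/bad counts per bin:

import numpy as np

good = np.array([400.0, 300.0, 200.0, 100.0])  # non-defaults per bin
bad = np.array([10.0, 20.0, 40.0, 80.0])       # defaults per bin

dist_good = good / good.sum()
dist_bad = bad / bad.sum()

woe = np.log(dist_good / dist_bad)          # weights of evidence per bin
iv = np.sum((dist_good - dist_bad) * woe)   # information value of the factor

print(np.round(woe, 3), round(float(iv), 3))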
This package provides a variety of tools relevant to the analysis of marine soundscape data. There are tools for downloading AIS (automatic identification system) data from Marine Cadastre <https://hub.marinecadastre.gov>, connecting AIS data to GPS coordinates, plotting summaries of various soundscape measurements, and downloading relevant environmental variables (wind, swell height) from the National Center for Atmospheric Research data server <https://gdex.ucar.edu/datasets/d084001/>. Most tools were developed to work well with output from Triton software, but can be adapted to work with any similar measurements.
Generates multivariate data with count and continuous variables with a pre-specified correlation matrix. The count and continuous variables are assumed to have Poisson and normal marginals, respectively. The data generation mechanism is a combination of the normal to anything principle and a connection between Poisson and normal correlations in the mixture. The details of the method are explained in Yahav et al. (2012) <DOI:10.1002/asmb.901>.
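A minimal NumPy/SciPy sketch of that normal-to-anything mechanism (illustrative only, not the package's code; it uses an arbitrary intermediate correlation of 0.6 and omits the Poisson/normal correlation adjustment from Yahav et al. (2012)):

import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(42)

# One Poisson column (lambda = 3) and one standard normal column.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)

x_count = poisson.ppf(norm.cdf(z[:, 0]), mu=3)  # transform to a Poisson marginal
x_cont = z[:, 1]                                # keep the normal marginal

print(np.corrcoef(x_count, x_cont)[0, 1])  # somewhat below 0.6 without the adjustment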
Visualizes a matrix object plainly as a heatmap. It provides S3 functions to plot simple matrices and loading matrices.
Analysis and measurement of promotion effectiveness on a given target variable (e.g. daily sales). After converting the promotion schedule into dummy or smoothed predictor variables, the package estimates the effects of these variables, controlled for trend/periodicity/structural change, using prophet by Taylor and Letham (2017) <doi:10.7287/peerj.preprints.3190v2> and some prespecified variables (e.g. start of a month).
This package contains functions for the construction and visualization of various families of proximity catch digraphs (PCDs) (see Ceyhan (2005) ISBN:978-3-639-19063-2) and for computing their graph invariants for testing patterns of segregation and association against complete spatial randomness (CSR) or uniformity in one-, two-, and three-dimensional cases. The package also has tools for generating points from these spatial patterns. The graph invariants used in testing spatial point data are the domination number (Ceyhan (2011) <doi:10.1080/03610921003597211>) and arc density (Ceyhan et al. (2006) <doi:10.1016/j.csda.2005.03.002>; Ceyhan et al. (2007) <doi:10.1002/cjs.5550350106>). The PCD families considered are Arc-Slice PCDs, Proportional-Edge PCDs, and Central Similarity PCDs.
This package provides tools for calculating statistical power for experiments analyzed using linear mixed models. It supports standard designs, including randomized block, split-plot, and Latin Square designs, while offering flexibility to accommodate a variety of other complex study designs.
Utilities for the Pareto, piecewise Pareto and generalized Pareto distribution that are useful for reinsurance pricing. In particular, the package provides a non-trivial algorithm that can be used to match the expected losses of a tower of reinsurance layers with a layer-independent collective risk model. The theoretical background of the matching algorithm and most other methods are described in Ulrich Riegel (2018) <doi:10.1007/s13385-018-0177-3>.
This package implements two differentially private algorithms for estimating L2-regularized logistic regression coefficients. A randomized algorithm F is epsilon-differentially private (C. Dwork, Differential Privacy, ICALP 2006 <DOI:10.1007/11681878_14>) if |log(P(F(D) in S)) - log(P(F(D') in S))| <= epsilon for any pair D, D' of datasets that differ in exactly one record and any measurable set S, where the randomness is taken over the choices F makes.
This package provides access to a high-performance random distribution sampler for the Polya Gamma distribution, usable either via C++ headers for Rcpp or RcppArmadillo, or directly from R.
This package provides a general framework for constructing partial dependence (i.e., marginal effect) plots from various types of machine learning models in R.
Translating messages in R packages is managed using the po top-level directory and the gettext program. This package provides some helper functions for building this support in R packages, e.g. common validation & I/O tasks.