Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
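For example, here is a minimal Python sketch of calling this endpoint. The base URL is a placeholder, and the exact pagination header names and response body layout are assumptions, not part of the documentation above:

import json
import urllib.parse
import urllib.request

base_url = "https://example.org"  # placeholder: replace with this site's host
params = urllib.parse.urlencode({"search": "gcc@10", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{base_url}/api/packages?{params}") as resp:
    print(dict(resp.headers))   # pagination information is returned in the response headers
    results = json.load(resp)   # assumed: the body is JSON describing the matching packages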
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a collection of useful functions and datasets for the Data Science Course at IBAW.
Computes and decomposes Gini, Bonferroni and Zenga 2007 point and synthetic concentration indexes. Decompositions are provided by sources, by subpopulations, and by sources and subpopulations jointly. References: Zenga M. M. (2007) <doi:10.1400/209575>; Zenga M. (2015) <doi:10.1400/246627>; Zenga M., Valli I. (2017) <doi:10.26350/999999_000005>; Zenga M., Valli I. (2018) <doi:10.26350/999999_000011>.
This is a substitute for the %V and %u formats, which are not implemented on Windows. In addition, the package offers functions to convert between the standard calendar format yyyy-mm-dd and the ISO 8601 week format yyyy-Www-d.
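For orientation, the week-date mapping looks like this (a Python illustration of the ISO 8601 conversion, not this package's own interface):

import datetime

d = datetime.date(2024, 1, 1)
iso = d.isocalendar()                               # ISO year, week number, weekday
print(f"{iso.year}-W{iso.week:02d}-{iso.weekday}")  # prints 2024-W01-1
print(datetime.date.fromisocalendar(2024, 1, 1))    # back to 2024-01-01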
Facilitates fitting measurement error and missing data imputation models using integrated nested Laplace approximations, according to the method described in Skarstein, Martino and Muff (2023) <doi:10.1002/bimj.202300078>. See Skarstein and Muff (2024) <doi:10.48550/arXiv.2406.08172> for details on using the package.
An implementation of the induced smoothing (IS) idea for lasso regularization models, allowing estimation and inference on the model coefficients (currently hypothesis testing only). Linear, logistic, Poisson and gamma regressions with several link functions are implemented. The algorithm is described in the original paper <doi:10.1177/0962280219842890> and discussed in a tutorial <doi:10.13140/RG.2.2.16360.11521>.
The kappa statistic is one of the most widely used methods to evaluate the effectiveness of inspections based on attribute assessments in industry. However, its estimation by available methods does not provide its "real" or "intrinsic" value. This package provides functions for the computation of the intrinsic kappa value as described in: Rafael Sanchez-Marquez, Frank Gerhorst and David Schindler (2023) "Effectiveness of quality inspections of attributive characteristics – A novel and practical method for estimating the 'intrinsic' value of kappa based on alpha and beta statistics." <doi:10.1016/j.cie.2023.109006>.
Fits covariate dependent partial correlation matrices for integrative models to identify differential networks between two groups. The methods are described in Class et al. (2018) <doi:10.1093/bioinformatics/btx750> and Ha et al. (2015) <doi:10.1093/bioinformatics/btv406>.
This package provides tools for estimating incidence from biomarker data in cross-sectional surveys, and for calibrating tests for recent infection. Implements and extends the method of Kassanjee et al. (2012) <doi:10.1097/EDE.0b013e3182576c07>.
API wrapper that contains functions to retrieve data from the IsoMemo partnership databases. Web services for API: <https://isomemodb.com/api/v1/iso-data>.
Converts files in IDX format to and from vectors, matrices and arrays. IDX is a very simple file format designed for storing vectors and multidimensional matrices in binary form. The format is described on Yann LeCun's website <http://yann.lecun.com/exdb/mnist/>.
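To illustrate how simple the layout is, here is a minimal Python reader sketch based on that format description (an illustration only, not this R package's interface):

import struct

# element-type byte -> (struct format character, size in bytes)
IDX_TYPES = {0x08: ("B", 1), 0x09: ("b", 1), 0x0B: ("h", 2),
             0x0C: ("i", 4), 0x0D: ("f", 4), 0x0E: ("d", 8)}

def read_idx(path):
    with open(path, "rb") as f:
        _, _, type_code, ndim = struct.unpack(">BBBB", f.read(4))   # 4-byte magic number
        dims = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))    # one 32-bit size per dimension
        fmt, size = IDX_TYPES[type_code]
        count = 1
        for d in dims:
            count *= d
        values = struct.unpack(">" + fmt * count, f.read(size * count))  # big-endian data
        return dims, values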
Implementation of icosahedral grids in three dimensions. The spherical-triangular tessellation can be set to create grids with custom resolutions. Both the primary triangular grids and their inverted penta-hexagonal grids can be calculated. Additional functions are provided that allow plotting of the grids and associated data, interaction of the grids with other raster and vector objects, and treating the grids as graphs.
Up-to-date data from the Unicode CLDR Project (where CLDR stands for Common Locale Data Repository) are available here as a series of easy-to-parse datasets. Several functions are provided for extracting key elements from the tabular datasets.
Creation of tables of summary statistics or counts for clinical data (for TLFs). These tables can be exported as an in-text table (with the flextable package) for a Clinical Study Report (Word format) or a topline presentation (PowerPoint format), or as an interactive table (with the DT package) to an html document for clinical data review.
This package implements the Interval Consensus Model (ICM) for analyzing continuous bounded interval-valued responses in psychometrics using Stan for Bayesian estimation. Provides functions for transforming interval data to simplex representations, fitting item response theory (IRT) models with isometric log-ratio (ILR) and sum log-ratio (SLR) link functions, and visualizing results. The package enables aggregation and analysis of interval-valued response data commonly found in psychological measurement and related disciplines. Based on Kloft et al. (2024) <doi:10.31234/osf.io/dzvw2>.
We use the ISR to handle PCA-based missing data with high correlation, and the DISR to handle distributed PCA-based missing data. The philosophy of the package is described in Guo G. (2024) <doi:10.1080/03610918.2022.2091779>.
Helper functions and example data sets to facilitate the estimation of IRTree models from data of different shapes and using different software.
This package performs exploratory data analysis and variable screening for binary classification models using weight-of-evidence (WOE) and information value (IV). In order to make the package as efficient as possible, aggregations are done in data.table and creation of WOE vectors can be distributed across multiple cores. The package also supports exploration for uplift models (NWOE and NIV).
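As a reference point, the usual textbook WOE and IV definitions can be computed by hand as below (conventions differ on which class goes in the numerator, and this sketch is not the package's data.table implementation):

import math

# counts of events (y = 1) and non-events (y = 0) per bin of one predictor
bins = [{"events": 20, "non_events": 180},
        {"events": 50, "non_events": 150},
        {"events": 30, "non_events": 70}]

total_events = sum(b["events"] for b in bins)
total_non_events = sum(b["non_events"] for b in bins)

iv = 0.0
for b in bins:
    p_event = b["events"] / total_events              # share of events in this bin
    p_non_event = b["non_events"] / total_non_events  # share of non-events in this bin
    woe = math.log(p_event / p_non_event)             # weight of evidence for the bin
    iv += (p_event - p_non_event) * woe               # bin contribution to information value
print(iv)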
This package implements inequality constrained inference. This includes parameter estimation in normal (linear) models under linear equality and inequality constraints, as well as normal likelihood ratio tests involving inequality-constrained hypotheses. For inequality-constrained linear models, averaging over R-squared for different orderings of regressors is also included.
Allows the construction of selection indices based on estimated breeding values in animal and plant breeding, and the calculation of several analytic measures to assess their impact on genetic and phenotypic progress. The methodology allows analyzing the genetic gain of traits in the breeding goal that are not part of the actual index, and automatically computes several analytic measures. It further allows realized economic weights to be derived retrospectively from observed genetic trends. The framework is described in Simianer, H., Heise, J., Rensing, S., Pook, T., Geibel, J. and Reimer, C. (2023) <doi:10.1186/s12711-023-00807-0>.
This package provides a general-purpose workflow for image segmentation using TensorFlow models based on the U-Net architecture by Ronneberger et al. (2015) <arXiv:1505.04597> and the U-Net++ architecture by Zhou et al. (2018) <arXiv:1807.10165>. We provide pre-trained models for assessing canopy density and understory vegetation density from vegetation photos. In addition, the package provides a workflow for easily creating model input and model architectures for general-purpose image segmentation based on grayscale or color images, both for binary and multi-class image segmentation.
Introductory statistics methods to accompany "Investigating Statistical Concepts, Applications, and Methods" (ISCAM) by Beth Chance & Allan Rossman (2024) <https://rossmanchance.com/iscam4/>. Tools to introduce statistical concepts with a focus on simulation approaches. Functions are verbose, designed to provide ample output for students to understand what each function does. Additionally, most functions are accompanied with plots. The package is designed to be used in an educational setting alongside the ISCAM textbook.
This package provides a user-friendly interface, using Shiny, to analyse glucose-stimulated insulin secretion (GSIS) assays in pancreatic beta cells or islets. The package allows the user to import several sets of experiments from different spreadsheets and to perform the subsequent steps: summarise the data in a tidy format, visualise data quality, and compare experimental conditions while accounting for technical confounders such as the date of the experiment or the technician. Together, insane is a comprehensive method that optimises pre-processing and analysis of GSIS experiments in a user-friendly interface. The Shiny App was initially designed for the EndoC-betaH1 cell line following the method described in Ndiaye et al., 2017 (<doi:10.1016/j.molmet.2017.03.011>).
Carries out instrumental variable estimation of causal effects, including power analysis, sensitivity analysis, and diagnostics. See Kang, Jiang, Zhao, and Small (2020) <http://pages.cs.wisc.edu/~hyunseung/> for details.
Addresses the log-of-zero problem by developing a new family of estimators called iterated Ordinary Least Squares. This family nests standard approaches such as log-linear and Poisson regressions, offers several computational advantages, and corresponds to the correct way to perform the popular log(Y + 1) transformation. For more details about how to use it, see the notebook at: <https://www.davidbenatia.com/>.