Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
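For example, the endpoint can be queried from Python with the standard library. The host below is a placeholder (substitute this site's address), and a JSON response body is assumed:

    import json
    import urllib.request

    # Hypothetical base URL; replace with this site's address.
    url = "https://example.org/api/packages?search=hello&page=1&limit=20"
    with urllib.request.urlopen(url) as response:
        # Pagination details (number of pages, etc.) arrive in the headers.
        print(dict(response.headers))
        packages = json.load(response)  # assuming the endpoint returns JSON
    print(packages)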
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
SentencePiece is an unsupervised text tokenizer and detokenizer mainly for Neural Network-based text generation systems where the vocabulary size is predetermined prior to the neural model training. SentencePiece implements subword units---e.g., byte-pair-encoding (BPE) and unigram language model---with the extension of direct training from raw sentences. SentencePiece allows us to make a purely end-to-end system that does not depend on language-specific pre- or post-processing.
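A minimal sketch of training and applying a model with the sentencepiece Python bindings (the corpus path, vocabulary size and model type below are made-up example values):

    import sentencepiece as spm

    # Train a small BPE model directly from raw text (hypothetical corpus file).
    spm.SentencePieceTrainer.train(
        input="corpus.txt", model_prefix="toy", vocab_size=1000, model_type="bpe"
    )

    sp = spm.SentencePieceProcessor(model_file="toy.model")
    print(sp.encode("Hello world", out_type=str))  # subword pieces
    print(sp.decode(sp.encode("Hello world")))     # back to raw text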
This is a Python library that aims at making tensor learning simple and accessible. It allows performing tensor decomposition, tensor learning and tensor algebra easily. Its backend system allows you to seamlessly perform computation with NumPy, PyTorch, JAX, MXNet, TensorFlow or CuPy and run methods at scale on CPU or GPU.
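A minimal sketch of a CP decomposition with TensorLy, assuming the NumPy backend (return types and helper names vary slightly between TensorLy versions):

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    tl.set_backend("numpy")  # could equally be "pytorch", "jax", ...
    X = tl.tensor(np.random.rand(10, 10, 10))
    weights, factors = parafac(X, rank=3)          # CP/PARAFAC decomposition
    X_hat = tl.cp_to_tensor((weights, factors))    # reconstruct the tensor
    print(tl.norm(X - X_hat) / tl.norm(X))         # relative reconstruction error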
XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT, GBM) that solves many data science problems in a fast and accurate way.
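A minimal sketch of training a booster with the xgboost Python package on synthetic data (the parameter values are illustrative only):

    import numpy as np
    import xgboost as xgb

    X = np.random.rand(200, 4)
    y = (X[:, 0] > 0.5).astype(int)                 # synthetic binary labels
    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.1}
    booster = xgb.train(params, dtrain, num_boost_round=50)
    print(booster.predict(dtrain)[:5])              # predicted probabilities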
This package provides an implementation of today’s most used tokenizers, with a focus on performance and versatility.
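A minimal sketch of training a BPE tokenizer with the Hugging Face tokenizers library (the training file is a hypothetical placeholder):

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.pre_tokenizers import Whitespace
    from tokenizers.trainers import BpeTrainer

    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=1000, special_tokens=["[UNK]"])
    tokenizer.train(["corpus.txt"], trainer)        # hypothetical corpus file
    print(tokenizer.encode("Hello, world!").tokens)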
QNNPACK is a library for low-precision neural network inference. It contains the implementation of common neural network operators on quantized 8-bit tensors.
TorchMetrics is a collection of 100+ PyTorch metrics implementations and an easy-to-use API to create custom metrics (see the sketch after this list). It offers:
A standardized interface to increase reproducibility
Reduced boilerplate
Automatic accumulation over batches
Metrics optimized for distributed training
Automatic synchronization between multiple devices
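A minimal sketch of accumulating a metric over batches with torchmetrics (the task-based constructor arguments shown follow newer releases and may differ in older ones):

    import torch
    import torchmetrics

    metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)
    for _ in range(4):                               # pretend these are batches
        preds = torch.randn(8, 3).softmax(dim=-1)
        target = torch.randint(0, 3, (8,))
        metric.update(preds, target)                 # accumulates internally
    print(metric.compute())                          # aggregated over all batches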
The GeomLoss library provides efficient GPU implementations for the following (a usage sketch follows the list):
Kernel norms (also known as Maximum Mean Discrepancies).
Hausdorff divergences, which are positive definite generalizations of the Chamfer-ICP loss and are analogous to log-likelihoods of Gaussian Mixture Models.
Debiased Sinkhorn divergences, which are affordable yet positive and definite approximations of Optimal Transport (Wasserstein) distances.
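A minimal sketch of a debiased Sinkhorn divergence between two point clouds with geomloss (the p and blur values are illustrative):

    import torch
    from geomloss import SamplesLoss

    x = torch.randn(500, 2, requires_grad=True)
    y = torch.randn(600, 2)
    loss_fn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
    loss = loss_fn(x, y)                             # scalar divergence
    loss.backward()                                  # gradients w.r.t. x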
This package implements a variety of persistent homology algorithms. It provides an interface (sketched after this list) for
computing persistence cohomology of sparse and dense data sets
visualizing persistence diagrams
computing lower-star filtrations on images
computing representative cochains
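Assuming this refers to the Python ripser package, a minimal sketch of computing persistence diagrams for a point cloud:

    import numpy as np
    from ripser import ripser

    X = np.random.rand(100, 2)                       # random point cloud
    result = ripser(X, maxdim=1)                     # H0 and H1
    dgms = result["dgms"]                            # one diagram per dimension
    print([d.shape for d in dgms])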
This package enables you to deserialize Lua torch-serialized objects from Python.
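Assuming this refers to the torchfile package, a minimal sketch (the .t7 path is a placeholder):

    import torchfile

    obj = torchfile.load("model.t7")                 # hypothetical serialized file
    print(type(obj))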
DeepXDE is a library for scientific machine learning and physics-informed learning. It includes implementations for the PINN (physics-informed neural networks), DeepONet (deep operator network) and MFNN (multifidelity neural network) algorithms.
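A minimal PINN sketch with deepxde for a 1D Poisson problem u''(x) = 2 with zero boundary values (module paths and keyword names have shifted across DeepXDE releases, so treat this as an approximate outline):

    import deepxde as dde

    geom = dde.geometry.Interval(0, 1)

    def pde(x, y):
        return dde.grad.hessian(y, x) - 2            # residual of u'' = 2

    def boundary(x, on_boundary):
        return on_boundary

    bc = dde.icbc.DirichletBC(geom, lambda x: 0, boundary)
    data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2)
    net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot uniform")
    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3)
    model.train(iterations=5000)                     # "epochs" in older releases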
This library is used internally as a header-only library by PyTorch.
Dlib is a modern C++ toolkit containing machine learning algorithms and tools. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments.
This package provides a C++ and Python library for performing arbitrary optimizations on ONNX models, as well as a growing list of prepackaged optimization passes.
Not all possible optimizations can be directly implemented on ONNX graphs---some will need additional backend-specific information---but many can, and the aim is to provide all such passes along with ONNX so that they can be re-used with a single function call.
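A minimal sketch using the onnxoptimizer Python module (the model path is a placeholder and the pass names are examples of built-in passes):

    import onnx
    import onnxoptimizer

    model = onnx.load("model.onnx")                  # hypothetical model file
    passes = ["eliminate_identity", "fuse_consecutive_transposes"]
    optimized = onnxoptimizer.optimize(model, passes)
    onnx.save(optimized, "model_optimized.onnx")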
Visualization and NeuroML import/export tools for the Brian 2 simulator.
This package provides multiplicatively convolutional fast integral transforms.
The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
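A minimal sketch combining a torchvision dataset with a transform pipeline (FakeData is used here so nothing needs to be downloaded):

    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.Resize(64),
        transforms.ToTensor(),
    ])
    dataset = datasets.FakeData(size=8, transform=transform)  # synthetic images
    image, label = dataset[0]
    print(image.shape, label)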
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
QNNPACK is a library for low-precision neural network inference. It contains the implementation of common neural network operators on quantized 8-bit tensors.
This package provides OCaml bindings for the MCL graph clustering algorithm.
This package provides high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model, implemented in plain C/C++ without dependencies, with
AVX intrinsics support for x86 architectures
VSX intrinsics support for POWER architectures
Mixed F16 / F32 precision
4-bit and 5-bit integer quantization support
Zero memory allocations at runtime
Support for CPU-only inference
Efficient GPU support for NVIDIA
OpenVINO support
C-style API
Dlib is a modern C++ toolkit containing machine learning algorithms and tools. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments.
Lap is a linear assignment problem solver using the Jonker-Volgenant algorithm for dense (LAPJV) or sparse (LAPMOD) matrices.
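A minimal sketch of solving a dense assignment problem with lap.lapjv:

    import numpy as np
    import lap

    cost = np.random.rand(5, 5)                      # cost matrix
    total_cost, x, y = lap.lapjv(cost)
    # x[i] is the column assigned to row i; y[j] is the row assigned to column j.
    print(total_cost, x, y)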
This package provides a Python library for probabilistic modeling and inference.
Inquirer aims to ease the process of asking end-user questions, parsing and validating answers, managing hierarchical prompts, and providing error feedback.
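Assuming this refers to the python-inquirer package, a minimal sketch of prompting for answers:

    import inquirer

    questions = [
        inquirer.Text("name", message="What is your name?"),
        inquirer.List("size", message="Pick a size", choices=["small", "medium", "large"]),
    ]
    answers = inquirer.prompt(questions)             # interactive prompt in the terminal
    print(answers)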