Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
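For example, from Python with the requests library (a minimal sketch; the base URL is a placeholder for the host serving this form, and the response body is assumed to be JSON):

    import requests

    # Placeholder: substitute the host that serves this search form.
    BASE_URL = "https://example.org"

    response = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    packages = response.json()  # assumed to be a JSON list of matches

    # Pagination information is carried in the response headers;
    # the exact header names are not documented here, so inspect them:
    print(response.headers)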
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
FANN is a neural network library that implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks.
This package provides high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model, implemented in plain C/C++ without dependencies, with:
- AVX intrinsics support for x86 architectures
- VSX intrinsics support for POWER architectures
- Mixed F16/F32 precision
- 4-bit and 5-bit integer quantization support
- Zero memory allocations at runtime
- Support for CPU-only inference
- Efficient GPU support for NVIDIA
- OpenVINO support
- A C-style API
This package provides a Python wrapper for the SentencePiece unsupervised text tokenizer.
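A minimal usage sketch (the model path is a placeholder for a trained SentencePiece model):

    import sentencepiece as spm

    # Load a trained model; "m.model" is a placeholder path.
    sp = spm.SentencePieceProcessor(model_file="m.model")

    # Encode text into subword pieces, then reconstruct it.
    pieces = sp.encode("Hello world", out_type=str)
    text = sp.decode(pieces)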
This package provides an implementation of today’s most used tokenizers, with a focus on performance and versatility.
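A minimal usage sketch (loading a pretrained tokenizer by name requires network access on first use):

    from tokenizers import Tokenizer

    # Fetch a pretrained tokenizer definition by model name.
    tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

    encoding = tokenizer.encode("Hello, world!")
    print(encoding.tokens)  # subword strings
    print(encoding.ids)     # vocabulary indices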
This package provides multiplicatively convolutional fast integral transforms.
The GeomLoss library provides efficient GPU implementations for the following (a usage sketch follows the list):
- Kernel norms (also known as Maximum Mean Discrepancies).
- Hausdorff divergences, which are positive definite generalizations of the Chamfer-ICP loss and are analogous to log-likelihoods of Gaussian Mixture Models.
- Debiased Sinkhorn divergences, which are affordable yet positive and definite approximations of Optimal Transport (Wasserstein) distances.
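For example, the debiased Sinkhorn divergence between two random point clouds, as a minimal sketch:

    import torch
    from geomloss import SamplesLoss

    # Two point clouds of 1000 samples each in R^3.
    x = torch.randn(1000, 3, requires_grad=True)
    y = torch.randn(1000, 3)

    # Debiased Sinkhorn divergence; blur sets the entropic regularization scale.
    loss = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
    print(loss(x, y))  # differentiable scalar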
Inquirer should ease the process of asking end-user questions, parsing and validating answers, managing hierarchical prompts, and providing error feedback.
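A minimal sketch of a two-question prompt:

    import inquirer

    questions = [
        inquirer.Text("name", message="What is your name?"),
        inquirer.List("editor", message="Pick an editor",
                      choices=["emacs", "vim", "nano"]),
    ]

    # Runs the interactive prompt and returns a dict keyed by question name.
    answers = inquirer.prompt(questions)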
This package enables you to deserialize Lua torch-serialized objects from Python.
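A minimal sketch (the .t7 path is a placeholder for a Lua torch-serialized file):

    import torchfile

    # Deserialize a Lua torch object; tensors are exposed as NumPy arrays.
    obj = torchfile.load("checkpoint.t7")
    print(type(obj))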
QNNPACK is a library for low-precision neural network inference. It contains the implementation of common neural network operators on quantized 8-bit tensors.
Kaldi is an extensible toolkit for speech recognition written in C++.
Lantern provides a C API to the libtorch machine learning library.
TensorFlow is a flexible platform for building and training machine learning models. This package provides the "lite" variant for mobile devices.
Hmmlearn is a set of algorithms for unsupervised learning and inference of Hidden Markov Models.
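A minimal sketch, fitting a Gaussian HMM to toy observations and decoding the state sequence:

    import numpy as np
    from hmmlearn import hmm

    # 100 two-dimensional synthetic observations.
    X = np.random.randn(100, 2)

    model = hmm.GaussianHMM(n_components=3, n_iter=50)
    model.fit(X)               # unsupervised EM training
    states = model.predict(X)  # most likely hidden-state sequence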
Low-precision, high-performance matrix-matrix multiplication and convolution library for server-side inference.
This package provides a machine learning library of popular datasets, model architectures, and common transformations for applying python-pytorch in the audio domain.
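A minimal sketch (the audio path is a placeholder):

    import torchaudio

    # Load a waveform and its sample rate from disk.
    waveform, sample_rate = torchaudio.load("speech.wav")

    # Apply a common audio transformation: a mel spectrogram.
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate)
    spectrogram = mel(waveform)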
Interpretable ML (iML) is a set of data type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or really the output of any function). It currently contains the interface and IO code from the Shap project, and it will potentially also do the same for the Lime project.
This package provides a collection of ordinal regression models for machine learning in Python. They are intended to be used with scikit-learn and are compatible with its API.
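This description matches the mord library; assuming that is the package, a minimal sketch in the scikit-learn style:

    import numpy as np
    from mord import LogisticAT  # assumption: this package is mord

    # Toy features and ordered integer labels (e.g. ratings 0..3).
    X = np.random.rand(100, 4)
    y = np.random.randint(0, 4, 100)

    clf = LogisticAT(alpha=1.0)  # all-thresholds ordinal logistic model
    clf.fit(X, y)
    preds = clf.predict(X)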
This package provides easy download of thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied to:
- Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages.
- Images, for tasks like image classification, object detection, and segmentation.
- Audio, for tasks like speech recognition and audio classification.
Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
This package provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community. At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them.
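A minimal sketch using the pipeline API (the first call downloads a default pretrained model, so network access is assumed):

    from transformers import pipeline

    # Build a text-classification pipeline with a default model.
    classifier = pipeline("sentiment-analysis")
    print(classifier("Pretrained models are easy to use."))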
LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, with the following advantages (a training sketch follows the list):
- Faster training speed and higher efficiency
- Lower memory usage
- Better accuracy
- Parallel and GPU learning supported (not enabled in this package)
- Capable of handling large-scale data
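A minimal training sketch on synthetic data:

    import numpy as np
    import lightgbm as lgb

    # Toy binary-classification data.
    X = np.random.rand(500, 10)
    y = np.random.randint(0, 2, 500)

    train_set = lgb.Dataset(X, label=y)
    params = {"objective": "binary", "learning_rate": 0.1}

    booster = lgb.train(params, train_set, num_boost_round=50)
    probabilities = booster.predict(X)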
ONNX is a format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
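A minimal sketch of loading and validating a model with the onnx Python package (the path is a placeholder):

    import onnx

    # Load a serialized model from disk.
    model = onnx.load("model.onnx")

    # Validate the graph against the built-in operator and type definitions.
    onnx.checker.check_model(model)
    print(model.graph.input)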
This package provides simple speech-to-text access for use on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and there are no background processes. Dictation is started and stopped manually with the nerd-dictation begin and nerd-dictation end commands.
This package provides a fast (zero-copy) and safe (dedicated) format for storing tensors.
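A minimal sketch of the PyTorch interface:

    import torch
    from safetensors.torch import save_file, load_file

    tensors = {"weight": torch.zeros(2, 2), "bias": torch.ones(2)}

    # Serialize to the dedicated format, then load it back.
    save_file(tensors, "model.safetensors")
    loaded = load_file("model.safetensors")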
oneAPI Deep Neural Network Library (oneDNN) is a cross-platform performance library of basic building blocks for deep learning applications.
This library is used internally by PyTorch as a header-only library.