Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
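For example, a query for packages matching "hello" could be issued like this. This is a minimal sketch using Python's requests library; the host name is a placeholder and a JSON response body is assumed:

    import requests

    resp = requests.get(
        "https://example.org/api/packages",          # placeholder host
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    print(resp.headers)   # pagination information (e.g. number of pages)
    print(resp.json())    # matching packages, assuming a JSON body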
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
PyTorch is a Python package that provides two high-level features:
tensor computation (like NumPy) with strong GPU acceleration;
deep neural networks (DNNs) built on a tape-based autograd system.
You can reuse Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Note: currently this package does not provide GPU support.
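As a rough illustration of those two features, here is a minimal CPU-only sketch (matching the note above about missing GPU support):

    import torch

    x = torch.randn(3, 3, requires_grad=True)   # tensor computation, NumPy-style
    y = (x * x).sum()
    y.backward()                                # tape-based autograd
    print(x.grad)                               # equals 2 * x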
This is a real-time full-duplex speech recognition server, based on the Kaldi toolkit and the GStreamer framework and implemented in Python.
BoTorch is a library for Bayesian Optimization built on PyTorch.
This package provides common Python utilities and GitHub Actions for the Lightning suite of libraries.
Uniform Manifold Approximation and Projection is a dimension reduction technique that can be used for visualization similarly to t-SNE, but also for general non-linear dimension reduction.
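A minimal sketch, assuming the umap-learn Python API (umap.UMAP with a scikit-learn-style fit_transform):

    import numpy as np
    import umap

    X = np.random.rand(100, 50)             # 100 samples, 50 features
    reducer = umap.UMAP(n_components=2)     # non-linear reduction to 2 dimensions
    embedding = reducer.fit_transform(X)    # shape (100, 2), e.g. for plotting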
ONNX is a format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
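As an illustration of the computation graph model, here is a small sketch that builds a one-node graph with the onnx.helper API (the tensor and graph names are illustrative):

    import onnx
    from onnx import helper, TensorProto

    x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])
    y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])
    relu = helper.make_node("Relu", inputs=["x"], outputs=["y"])   # built-in operator
    graph = helper.make_graph([relu], "relu_graph", [x], [y])
    model = helper.make_model(graph)
    onnx.checker.check_model(model)    # validate against the ONNX spec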
This package provides fast machine learning algorithms including matrix factorization and divisive clustering for large sparse and dense matrices.
DLPack is an in-memory tensor structure for sharing tensors among frameworks.
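A minimal sketch of such sharing, assuming PyTorch and NumPy (1.22 or later) as the two frameworks:

    import numpy as np
    import torch

    t = torch.arange(4, dtype=torch.float32)
    a = np.from_dlpack(t)     # NumPy view of the same memory, no copy
    a[0] = 42.0
    print(t)                  # tensor([42., 1., 2., 3.])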
GPyTorch is a Gaussian process library implemented using PyTorch.
PyG is a library built upon PyTorch to easily write and train Graph Neural Networks for a wide range of applications related to structured data.
This package provides a tensor-like library for functions and distributions.
Interpretable ML (iML) is a set of data type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or really the output of any function). It currently contains the interface and IO code from the Shap project, and it will potentially also do the same for the Lime project.
Low-precision, high-performance matrix-matrix multiplications and convolution library for server-side inference.
fastText is a library for efficient learning of word representations and sentence classification.
This package is a stand-alone implementation of several NumPy dtype extensions used in machine learning libraries, including:
bfloat16: an alternative to the standard float16 format;
float8_*: several experimental 8-bit floating point representations, including float8_e4m3b11fnuz, float8_e4m3fn, float8_e4m3fnuz, float8_e5m2, and float8_e5m2fnuz;
int4 and uint4: low-precision integer types.
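A minimal sketch, assuming the ml_dtypes Python module, where the extensions behave like ordinary NumPy dtypes:

    import numpy as np
    import ml_dtypes

    x = np.array([0.1, 0.2, 0.3], dtype=ml_dtypes.bfloat16)
    y = np.zeros(4, dtype=ml_dtypes.float8_e4m3fn)
    print(x.dtype, y.dtype)   # bfloat16 float8_e4m3fn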
DMLC-Core is the backbone library supporting all DMLC projects; it offers the bricks to build efficient and scalable distributed machine learning libraries.
This package provides a Python wrapper for the SentencePiece unsupervised text tokenizer.
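A minimal sketch of the wrapper, assuming a trained model file (the path here is a placeholder):

    import sentencepiece as spm

    sp = spm.SentencePieceProcessor(model_file="model.model")
    pieces = sp.encode("Hello world", out_type=str)   # subword pieces
    ids = sp.encode("Hello world", out_type=int)      # token ids
    print(sp.decode(ids))                             # back to text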
cleanlab automatically finds and fixes errors in any ML dataset. This data-centric AI package facilitates machine learning with messy, real-world data by providing clean labels during training.
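A minimal sketch of flagging likely label errors with cleanlab, assuming out-of-sample predicted probabilities are already available from some model:

    import numpy as np
    from cleanlab.filter import find_label_issues

    labels = np.array([0, 1, 1, 0])          # given, possibly noisy, labels
    pred_probs = np.array([[0.9, 0.1],
                           [0.2, 0.8],
                           [0.8, 0.2],       # disagrees with its label
                           [0.7, 0.3]])
    issues = find_label_issues(labels=labels, pred_probs=pred_probs)
    print(issues)                            # flags the suspect examples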
fastText is a library for efficient learning of word representations and sentence classification.
This package provides logging utilities for the SpaCy natural language processing framework.
This package provides a speech recognition toolkit based on Kaldi. It supports more than 20 languages and dialects: English, Indian English, German, French, Spanish, Portuguese, Chinese, Russian, Turkish, Vietnamese, Italian, Dutch, Catalan, Arabic, Greek, Farsi, Filipino, Ukrainian, Kazakh, Swedish, Japanese, Esperanto, Hindi, Czech, and Polish. The program works offline, even on lightweight devices. Portable per-language models are about 50 MB each, and much bigger, more precise models are also available.
Vosk provides a streaming API for on-the-fly recognition, along with bindings for several programming languages. It allows quick reconfiguration of the vocabulary for better accuracy, and supports speaker identification besides simple speech recognition.
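A minimal sketch of streaming recognition with the Python bindings; the model directory and WAV file names are placeholders, and 16 kHz mono PCM audio is assumed:

    import wave
    from vosk import Model, KaldiRecognizer

    wf = wave.open("speech.wav", "rb")
    rec = KaldiRecognizer(Model("model"), wf.getframerate())
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):     # end of an utterance
            print(rec.Result())
    print(rec.FinalResult())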
SpaCy is a library for advanced Natural Language Processing in Python and Cython. It comes with pretrained pipelines and currently supports tokenization and training for 70+ languages. It features state-of-the-art speed and neural network models for tagging, parsing, named entity recognition, text classification and more, as well as multi-task learning with pretrained transformers like BERT, a production-ready training system, and easy model packaging, deployment, and workflow management.
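A minimal sketch with a pretrained pipeline; the en_core_web_sm model is assumed to be installed separately:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is looking at buying a U.K. startup.")
    for token in doc:
        print(token.text, token.pos_, token.dep_)   # tagging and parsing
    for ent in doc.ents:
        print(ent.text, ent.label_)                 # named entity recognition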
This package provides simple speech-to-text access on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and there are no background processes. Dictation is started and stopped manually with the nerd-dictation begin and nerd-dictation end commands.
QNNPACK is a library for low-precision neural network inference. It contains the implementation of common neural network operators on quantized 8-bit tensors.