Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
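For example, a minimal query from Python using the requests library (the base URL below is a placeholder for whichever host serves this API, and the JSON response body is an assumption):

    import requests

    BASE_URL = "https://example.org"   # placeholder -- substitute the actual host

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "gcc@10", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    packages = resp.json()    # response body (assumed to be JSON)
    print(resp.headers)       # pagination details are returned in the response headers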
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Lap is a linear assignment problem solver using the Jonker-Volgenant algorithm for dense (LAPJV) or sparse (LAPMOD) matrices.
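A minimal sketch of solving a dense assignment problem with the LAPJV solver (assuming the Python lap module and NumPy; the random cost matrix is illustrative only):

    import numpy as np
    import lap

    cost = np.random.rand(5, 5)             # dense cost matrix
    total_cost, x, y = lap.lapjv(cost)      # Jonker-Volgenant on a dense matrix
    # x[i] is the column assigned to row i; y[j] is the row assigned to column j
    print(total_cost, x, y)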
Lantern provides a C API to the libtorch machine learning library.
ML Collections is a library of Python collections designed for machine learning use cases.
This package provides a speech recognition toolkit based on Kaldi. It supports more than 20 languages and dialects: English, Indian English, German, French, Spanish, Portuguese, Chinese, Russian, Turkish, Vietnamese, Italian, Dutch, Catalan, Arabic, Greek, Farsi, Filipino, Ukrainian, Kazakh, Swedish, Japanese, Esperanto, Hindi, Czech, and Polish. The program works offline, even on lightweight devices. Portable per-language models are about 50 MB each, and much bigger and more precise models are available.
Vosk provides a streaming API for on-the-fly recognition and bindings for different programming languages. It allows quick reconfiguration of the vocabulary for better accuracy, and supports speaker identification besides simple speech recognition.
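A minimal offline-recognition sketch with the Python bindings (the WAV file and model directory names are placeholders; a per-language model must be downloaded separately):

    import json
    import wave
    from vosk import Model, KaldiRecognizer

    wf = wave.open("speech.wav", "rb")                  # 16-bit mono PCM input
    rec = KaldiRecognizer(Model("model"), wf.getframerate())

    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        if rec.AcceptWaveform(data):
            print(json.loads(rec.Result())["text"])     # finalized segment
    print(json.loads(rec.FinalResult())["text"])        # whatever audio remains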
GPyTorch is a Gaussian process library implemented using PyTorch.
Visualization and NeuroML import/export tools for the Brian 2 simulator.
cleanlab automatically finds and fixes errors in any ML dataset. This data-centric AI package facilitates machine learning with messy, real-world data by providing clean labels during training.
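A minimal sketch of the scikit-learn-style workflow (assuming cleanlab 2.x and scikit-learn; the toy data, injected noise, and classifier are illustrative only):

    from cleanlab.classification import CleanLearning
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=5, random_state=0)
    y[:10] = 1 - y[:10]                       # inject some label noise

    cl = CleanLearning(LogisticRegression())  # any scikit-learn compatible classifier
    cl.fit(X, y)                              # flags likely label issues, then refits
    print(cl.get_label_issues().head())       # per-example label-quality report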
This package provides a Python wrapper for the SentencePiece unsupervised text tokenizer.
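A minimal encode/decode sketch (the model file name is a placeholder for a model trained beforehand with SentencePieceTrainer; the model_file constructor argument assumes a recent version of the wrapper):

    import sentencepiece as spm

    sp = spm.SentencePieceProcessor(model_file="m.model")   # load a trained model
    pieces = sp.encode("Hello world", out_type=str)         # subword pieces
    ids = sp.encode("Hello world", out_type=int)            # integer token ids
    print(pieces, ids, sp.decode(ids))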
Dlib is a modern C++ toolkit containing machine learning algorithms and tools. It is used in both industry and academia in a wide range of domains, including robotics, embedded devices, mobile phones, and large high-performance computing environments.
This is a small self-contained low-precision general matrix multiplication (GEMM) library. It is not a full linear algebra library. Low-precision means that the input and output matrix entries are integers of at most 8 bits. To avoid overflow, results are internally accumulated in more than 8 bits, and at the end only some significant 8 bits are kept.
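The arithmetic can be illustrated with a NumPy sketch (this shows the low-precision scheme only, not the gemmlowp C++ API; the shift value is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(0, 256, size=(4, 8), dtype=np.uint8)   # 8-bit inputs
    B = rng.integers(0, 256, size=(8, 3), dtype=np.uint8)

    acc = A.astype(np.int32) @ B.astype(np.int32)   # accumulate in 32 bits to avoid overflow

    # Keep only significant bits: rescale the accumulator back into the 8-bit range.
    shift = 16
    C = np.clip(acc >> shift, 0, 255).astype(np.uint8)      # 8-bit output
    print(C)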
spaCy is a library for advanced Natural Language Processing in Python and Cython. It comes with pretrained pipelines and currently supports tokenization and training for 70+ languages. It features state-of-the-art speed and neural network models for tagging, parsing, named entity recognition, text classification and more, multi-task learning with pretrained transformers like BERT, as well as a production-ready training system and easy model packaging, deployment and workflow management.
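A minimal sketch of running a pretrained pipeline (assuming the small English model en_core_web_sm has been downloaded separately):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
    for token in doc:
        print(token.text, token.pos_, token.dep_)    # tagging and parsing
    for ent in doc.ents:
        print(ent.text, ent.label_)                  # named entity recognition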
TensorFlow is a flexible platform for building and training machine learning models. This package provides the "lite" variant for mobile devices.
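A minimal inference sketch with the Python interpreter (the model file is a placeholder; depending on how this package is built, the interpreter may be exposed through tflite_runtime rather than the full tensorflow module):

    import numpy as np
    from tflite_runtime.interpreter import Interpreter   # or: tf.lite.Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))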
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
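A minimal sketch compiling a vector-add kernel for the CPU backend with the tensor-expression API (this follows the classic TVM tutorial; exact names such as te.create_schedule vary between releases, with recent versions favoring TensorIR/Relax instead):

    import numpy as np
    import tvm
    from tvm import te

    n = 1024
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

    s = te.create_schedule(C.op)
    fadd = tvm.build(s, [A, B, C], target="llvm")    # compile for the CPU backend

    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
    c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
    fadd(a, b, c)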
This package provides a tensor-like library for functions and distributions.
PyTorch is a Python package that provides two high-level features:
tensor computation (like NumPy) with strong GPU acceleration;
deep neural networks (DNNs) built on a tape-based autograd system.
You can reuse Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Note: currently this package does not provide GPU support.
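A minimal sketch of both features: a NumPy-like tensor computation whose gradient is recorded by the tape-based autograd system.

    import torch

    x = torch.randn(3, requires_grad=True)   # tensor computation
    y = (x ** 2).sum()
    y.backward()                              # tape-based autograd
    print(x.grad)                             # equals 2 * x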
PyNNDescent provides a Python implementation of Nearest Neighbor Descent for k-neighbor-graph construction and approximate nearest neighbor search.
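A minimal sketch of building a k-neighbor graph and querying it (the random data is purely illustrative):

    import numpy as np
    from pynndescent import NNDescent

    data = np.random.rand(1000, 32).astype(np.float32)
    index = NNDescent(data, n_neighbors=15, metric="euclidean")

    neighbors, distances = index.neighbor_graph           # k-neighbor graph of the data
    idx, dist = index.query(np.random.rand(5, 32), k=10)  # approximate search for new points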
LIBSVM is a machine learning library for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). It supports multi-class classification.
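A minimal C-SVC sketch using LIBSVM's bundled Python bindings (svmutil); whether those bindings are installed by this particular package is an assumption, and the tiny dataset is illustrative only:

    from svmutil import svm_train, svm_predict

    y = [1, -1, 1, -1]                          # labels
    x = [{1: 0.9, 2: 0.1}, {1: 0.1, 2: 0.9},    # sparse feature vectors
         {1: 0.8, 2: 0.2}, {1: 0.2, 2: 0.8}]

    model = svm_train(y, x, "-s 0 -t 2 -c 1")   # C-SVC with an RBF kernel
    labels, acc, vals = svm_predict(y, x, model)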
This package provides a fast (zero-copy) and safe format for storing tensors.
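A minimal save/load sketch with the NumPy interface (the file name is arbitrary; a torch interface exists as well):

    import numpy as np
    from safetensors.numpy import save_file, load_file

    save_file({"weight": np.zeros((2, 3), dtype=np.float32)}, "model.safetensors")
    tensors = load_file("model.safetensors")   # dict mapping names to arrays
    print(tensors["weight"].shape)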
This package provides simple access to speech-to-text on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and there are no background processes. Dictation is accessed manually with the nerd-dictation begin and nerd-dictation end commands.
This package provides common Python utilities and GitHub Actions for the Lightning suite of libraries.
fastText is a library for efficient learning of word representations and sentence classification.
ONNX is a format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
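A minimal sketch of building and checking a one-operator graph with the onnx helper API:

    from onnx import TensorProto, checker, helper

    x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])
    y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])
    node = helper.make_node("Relu", inputs=["x"], outputs=["y"])   # a built-in operator

    graph = helper.make_graph([node], "relu_graph", [x], [y])
    model = helper.make_model(graph)
    checker.check_model(model)                                     # validates the model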
LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient with the following advantages:
Faster training speed and higher efficiency
Lower memory usage
Better accuracy
Parallel and GPU learning supported (not enabled in this package)
Capable of handling large-scale data
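A minimal training sketch with the Python interface (whether this particular package ships the Python module is an assumption; the random data is illustrative only):

    import numpy as np
    import lightgbm as lgb

    X = np.random.rand(500, 10)
    y = (X[:, 0] > 0.5).astype(int)

    train_set = lgb.Dataset(X, label=y)
    params = {"objective": "binary", "num_leaves": 31, "learning_rate": 0.1}
    booster = lgb.train(params, train_set, num_boost_round=50)
    print(booster.predict(X[:5]))              # predicted probabilities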