Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
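For example, a minimal Python sketch of calling this endpoint (the base URL is a placeholder, and the exact pagination header names are not specified here, so inspect the response):

    import requests

    BASE_URL = "https://example.org"  # placeholder: use this site's address

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    packages = resp.json()
    print(resp.headers)  # pagination information arrives here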
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
TensorFlow is a flexible platform for building and training machine learning models. This package provides the "lite" variant for mobile devices.
This package provides fast machine learning algorithms including matrix factorization and divisive clustering for large sparse and dense matrices.
Visualization and NeuroML import/export tools for the Brian 2 simulator.
Thread-pool Controls provides Python helpers to limit the number of threads used in the thread pools of common native libraries used for scientific computing and data science (e.g. BLAS and OpenMP).
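A minimal sketch of capping BLAS threads around a NumPy operation:

    import numpy as np
    from threadpoolctl import threadpool_limits

    a = np.random.rand(1000, 1000)
    # Limit BLAS thread pools to one thread within this block.
    with threadpool_limits(limits=1, user_api="blas"):
        b = a @ a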
PyTorch is a Python package that provides two high-level features:
tensor computation (like NumPy) with strong GPU acceleration;
deep neural networks (DNNs) built on a tape-based autograd system.
You can reuse Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Note: currently this package does not provide GPU support.
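A minimal sketch of both features on the CPU:

    import torch

    # Tensor computation with autograd: operations are recorded on a tape.
    x = torch.randn(3, requires_grad=True)
    y = (x ** 2).sum()
    y.backward()   # reverse-mode differentiation
    print(x.grad)  # dy/dx = 2x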
This package provides simple speech-to-text access for use on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and there are no background processes. Dictation is started and stopped manually with the nerd-dictation begin and nerd-dictation end commands.
Gloo is a collective communications library. It comes with a number of collective algorithms useful for machine learning applications. These include a barrier, broadcast, and allreduce.
This package provides a Python library for probabilistic modeling and inference.
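A minimal sketch of a Pyro model (toy data assumed):

    import torch
    import pyro
    import pyro.distributions as dist

    def model(data):
        # Latent mean with a standard Normal prior.
        mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
        with pyro.plate("data", len(data)):
            # Observations drawn around the latent mean.
            pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

    model(torch.tensor([0.5, -0.2, 1.1]))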
Captum is a model interpretability and understanding library for PyTorch. It contains general-purpose implementations of integrated gradients, saliency maps, SmoothGrad, VarGrad, and others for PyTorch models, and integrates quickly with models built on domain-specific libraries such as torchvision and torchtext.
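A minimal sketch of integrated gradients on a stand-in model:

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Linear(4, 2)  # stand-in for a real network
    model.eval()
    ig = IntegratedGradients(model)
    inputs = torch.randn(1, 4, requires_grad=True)
    # Attribute the class-0 score back to the input features.
    attributions = ig.attribute(inputs, target=0)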
This package provides high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model, implemented in plain C/C++ without dependencies, with:
AVX intrinsics support for x86 architectures
VSX intrinsics support for POWER architectures
Mixed F16 / F32 precision
4-bit and 5-bit integer quantization support
Zero memory allocations at runtime
Support for CPU-only inference
Efficient GPU support for NVIDIA
OpenVINO Support
C-style API
This package provides a fast (zero-copy) and safe (dedicated) file format for storing tensors.
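A minimal sketch of the PyTorch bindings:

    import torch
    from safetensors.torch import save_file, load_file

    save_file({"weight": torch.zeros(2, 2)}, "model.safetensors")
    loaded = load_file("model.safetensors")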
This package provides a Python wrapper for the SentencePiece unsupervised text tokenizer.
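A minimal sketch, assuming a trained model file m.model is available:

    import sentencepiece as spm

    sp = spm.SentencePieceProcessor(model_file="m.model")
    pieces = sp.encode("Hello world", out_type=str)  # text -> subword pieces
    text = sp.decode(pieces)                         # pieces -> text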
This package provides easy download of thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied on:
Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages.
Images, for tasks like image classification, object detection, and segmentation.
Audio, for tasks like speech recognition and audio classification.
Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
This package provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community. At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them.
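A minimal sketch using the pipeline API (the default pretrained model is downloaded on first use):

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("Packaging this was easier than expected."))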
Hyperopt is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions.
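A minimal sketch minimizing a toy objective over a real-valued dimension:

    from hyperopt import fmin, tpe, hp

    best = fmin(
        fn=lambda x: (x - 3) ** 2,
        space=hp.uniform("x", -10, 10),
        algo=tpe.suggest,
        max_evals=100,
    )
    print(best)  # e.g. {'x': 2.99...}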
PyG is a library built upon PyTorch to easily write and train Graph Neural Networks for a wide range of applications related to structured data.
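A minimal sketch of one graph convolution over a toy graph:

    import torch
    from torch_geometric.nn import GCNConv

    # Toy graph: 3 nodes, edges as a [2, num_edges] index tensor.
    edge_index = torch.tensor([[0, 1, 1, 2],
                               [1, 0, 2, 1]])
    x = torch.randn(3, 16)     # 16 input features per node
    conv = GCNConv(16, 4)
    out = conv(x, edge_index)  # -> [3, 4] node embeddings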
Uniform Manifold Approximation and Projection is a dimension reduction technique that can be used for visualization similarly to t-SNE, but also for general non-linear dimension reduction.
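A minimal sketch on random data:

    import numpy as np
    import umap

    data = np.random.rand(100, 50)
    # Project 50-dimensional points down to 2 dimensions.
    embedding = umap.UMAP(n_components=2).fit_transform(data)
    print(embedding.shape)  # (100, 2)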
This package provides a functional take on deep learning, compatible with your favorite libraries.
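A minimal sketch of the combinator style:

    from thinc.api import chain, Relu, Softmax

    # Layers are composed as functions rather than subclassed modules.
    model = chain(Relu(nO=64), Softmax())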
DMLC-Core is the backbone library supporting all DMLC projects, offering the bricks to build efficient and scalable distributed machine learning libraries.
MCL is short for the Markov Cluster algorithm, a fast and scalable unsupervised clustering algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.
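A minimal NumPy sketch of the core expansion/inflation iteration (an illustration of the algorithm, not the mcl program's interface):

    import numpy as np

    def mcl_step(M, e=2, r=2):
        """One MCL iteration on a column-stochastic matrix M."""
        M = np.linalg.matrix_power(M, e)  # expansion: simulate flow
        M = M ** r                        # inflation: favor strong flows
        return M / M.sum(axis=0)          # renormalize columns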
OpenFst is a library for constructing, combining, optimizing, and searching weighted finite-state transducers (FSTs).
PyTorch Lightning is just organized PyTorch; Lightning disentangles PyTorch code to decouple the science from the engineering.
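A minimal sketch of that separation (the Trainer runs the engineering loop around the module):

    import torch
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(4, 1)

        def training_step(self, batch, batch_idx):
            # The "science": define the loss, nothing else.
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())

    # The "engineering": trainer = pl.Trainer(max_epochs=1)
    #                    trainer.fit(LitModel(), train_dataloader)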
NNPACK is an acceleration package for neural network computations. NNPACK aims to provide high-performance implementations of convnet layers for multi-core CPUs.
NNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives leveraged in leading deep learning frameworks, such as PyTorch, Caffe2, MXNet, tiny-dnn, Caffe, Torch, and Darknet.
This package provides a C++ and Python library for performing arbitrary optimizations on ONNX models, as well as a growing list of prepackaged optimization passes.
Not all possible optimizations can be directly implemented on ONNX graphs (some require additional backend-specific information), but many can, and the aim is to provide all such passes along with ONNX so that they can be reused with a single function call.
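A minimal sketch, assuming an existing model.onnx file (the pass names shown are examples of prepackaged passes):

    import onnx
    import onnxoptimizer

    model = onnx.load("model.onnx")
    passes = ["eliminate_identity", "fuse_bn_into_conv"]
    optimized = onnxoptimizer.optimize(model, passes)
    onnx.save(optimized, "model-optimized.onnx")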