Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
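For example, a minimal Python sketch of calling this endpoint (the base URL below is a placeholder for this site's own host, the JSON body is an assumption, and the exact pagination header names are not documented here, so the sketch simply prints all headers):

    import requests

    # Placeholder base URL; substitute the host serving this search page.
    resp = requests.get(
        "https://example.org/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination information is returned in the response headers;
    # print them all rather than guessing specific header names.
    for name, value in resp.headers.items():
        print(f"{name}: {value}")

    # Assuming the body is JSON.
    print(resp.json())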
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Brian is a simulator for spiking neural networks written in Python. It is designed to be easy to learn and use, highly flexible, and easily extensible.
This package provides simple speech-to-text access on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and there are no background processes. Dictation is started and stopped manually with the nerd-dictation begin and nerd-dictation end commands.
This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa and achieve state-of-the-art performance on various tasks. Text is embedded in a vector space such that similar texts are close together and can be found efficiently using cosine similarity.
This package provides easy access to pretrained models for more than 100 languages, fine-tuned for various use cases.
Further, this framework allows easy fine-tuning of custom embedding models to achieve maximal performance on your specific task.
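A brief sketch of that workflow, using one of the published pretrained checkpoints (all-MiniLM-L6-v2):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "A man is eating food.",
        "Someone is having a meal.",
        "The sky is blue.",
    ]
    embeddings = model.encode(sentences)

    # Similar texts land close together in the vector space, so cosine
    # similarity ranks the paraphrase above the unrelated sentence.
    print(util.cos_sim(embeddings[0], embeddings[1:]))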
cleanlab automatically finds and fixes errors in any ML dataset. This data-centric AI package facilitates machine learning with messy, real-world data by providing clean labels during training.
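A rough sketch of the label-checking entry point (the tiny arrays below stand in for real labels and out-of-sample predicted probabilities, which you would normally obtain via cross-validation with any classifier):

    import numpy as np
    from cleanlab.filter import find_label_issues

    labels = np.array([0, 0, 1, 1, 1])
    pred_probs = np.array([
        [0.9, 0.1],
        [0.8, 0.2],
        [0.1, 0.9],
        [0.2, 0.8],
        [0.95, 0.05],  # the model strongly disagrees with the given label
    ])

    # Boolean mask flagging examples whose labels look erroneous.
    issues = find_label_issues(labels=labels, pred_probs=pred_probs)
    print(issues)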
DLPack is an in-memory tensor structure for sharing tensors among frameworks.
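A small illustration of the idea, assuming recent NumPy (>= 1.22) and PyTorch versions that both speak the DLPack protocol:

    import numpy as np
    import torch

    # A tensor created in one framework...
    t = torch.arange(6, dtype=torch.float32).reshape(2, 3)

    # ...can be viewed from another via DLPack, without copying.
    a = np.from_dlpack(t)
    print(a)

    # The reverse direction works too.
    t2 = torch.from_dlpack(a)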
Captum is a model interpretability and understanding library for PyTorch. Captum contains general purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models. It has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others.
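A minimal sketch of attributing a prediction with integrated gradients (the two-layer model here is just a stand-in; any PyTorch module with a forward pass works):

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    inputs = torch.rand(1, 4, requires_grad=True)

    # Attribute the score of class 1 back to the input features.
    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(
        inputs, target=1, return_convergence_delta=True
    )
    print(attributions)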
Lap is a linear assignment problem solver using the Jonker-Volgenant algorithm for dense (LAPJV) or sparse (LAPMOD) matrices.
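A quick sketch of solving a small dense problem with the LAPJV routine:

    import numpy as np
    import lap

    # cost[i, j] is the cost of assigning row i to column j.
    cost = np.array([
        [4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0],
    ])

    # Returns the total cost plus the row->column and column->row assignments.
    total, x, y = lap.lapjv(cost)
    print(total, x, y)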
QNNPACK is a library for low-precision neural network inference. It contains the implementation of common neural network operators on quantized 8-bit tensors.
PyTorch is a Python package that provides two high-level features:
tensor computation (like NumPy) with strong GPU acceleration;
deep neural networks (DNNs) built on a tape-based autograd system.
You can reuse Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Note: currently this package does not provide GPU support.
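A minimal illustration of the tape-based autograd feature (CPU only, per the note above):

    import torch

    # Gradients are recorded on the fly as operations run.
    x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
    y = (x ** 2).sum()
    y.backward()

    print(x.grad)  # dy/dx = 2x -> tensor([2., 4., 6.])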
fastText is a library for efficient learning of word representations and sentence classification.
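A short sketch of learning word representations with the Python bindings (corpus.txt is a placeholder path to a plain-text training corpus):

    import fasttext

    # Train skip-gram word vectors on a plain-text corpus.
    model = fasttext.train_unsupervised("corpus.txt", model="skipgram")

    # Look up a learned vector and its nearest neighbours.
    print(model.get_word_vector("hello"))
    print(model.get_nearest_neighbors("hello"))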
This package provides an implementation of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for Python.
OneAPI Deep Neural Network Library (oneDNN) is a cross-platform performance library of basic building blocks for deep learning applications.
This package provides a Python implementation of the CMA-ES algorithm and a few related numerical optimization tools.
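A small sketch of minimising the Rosenbrock test function with the ask/tell interface (assuming this is the pycma package, imported as cma):

    import cma

    # Start from the origin in 8 dimensions with initial step size 0.5.
    es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5)
    while not es.stop():
        solutions = es.ask()
        es.tell(solutions, [cma.ff.rosen(x) for x in solutions])
    print(es.result.xbest)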
Dlib is a modern C++ toolkit containing machine learning algorithms and tools. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments.
ML Collections is a library of Python collections designed for machine learning use cases.
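A brief sketch of one of its main collection types, ConfigDict:

    import ml_collections

    # ConfigDict is a dict with attribute access and type-checked reassignment,
    # which is convenient for experiment configuration.
    config = ml_collections.ConfigDict()
    config.learning_rate = 0.01
    config.model = ml_collections.ConfigDict()
    config.model.hidden_units = 128

    print(config.learning_rate, config.model.hidden_units)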
This package is a stand-alone implementation of several NumPy dtype extensions used in machine learning libraries, including:
bfloat16: an alternative to the standard float16 format;
float8_*: several experimental 8-bit floating point representations, including float8_e4m3b11fnuz, float8_e4m3fn, float8_e4m3fnuz, float8_e5m2, and float8_e5m2fnuz;
int4 and uint4: low precision integer types.
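The extension types plug into NumPy as ordinary dtypes; a short sketch:

    import numpy as np
    import ml_dtypes

    x = np.array([0.1, 0.5, 1.0], dtype=ml_dtypes.bfloat16)
    print(x.dtype, x)

    # The 8-bit formats trade precision for range in different ways.
    y = x.astype(ml_dtypes.float8_e4m3fn)
    print(y.dtype, y)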
FANN is a neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks.
This library is used internally as a header-only library by PyTorch.
This package enables you to deserialize Lua torch-serialized objects from Python.
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
DeepXDE is a library for scientific machine learning and physics-informed learning. It includes implementations for the PINN (physics-informed neural networks), DeepONet (deep operator network) and MFNN (multifidelity neural network) algorithms.
This package provides a functional take on deep learning, compatible with your favorite libraries.