Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
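For example, a minimal Python sketch of calling this endpoint with the requests library (the base URL is a placeholder; the exact pagination header names are not specified above, so the sketch simply prints all response headers):

    import requests

    # Query the package search API described above.
    resp = requests.get(
        "https://example.org/api/packages",   # placeholder: use this site's base URL
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    print(resp.json())      # matching packages for this page
    print(resp.headers)     # pagination info (e.g. number of pages) lives here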
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
ONNX is a format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
This package provides common Python utilities and GitHub Actions for the Lightning suite of libraries.
Visualization and NeuroML import/export tools for the Brian 2 simulator.
FANN is a neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks.
This package provides fast machine learning algorithms including matrix factorization and divisive clustering for large sparse and dense matrices.
This package provides a Python library for probabilistic modeling and inference.
This package provides a reimplementation of OpenAI's Whisper model using CTranslate2, which is an inference engine for Transformer models.
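A short usage sketch, assuming the model size "small" and an input file audio.wav (both illustrative), following faster-whisper's documented transcribe API:

    from faster_whisper import WhisperModel

    model = WhisperModel("small")                   # loads CTranslate2 weights
    segments, info = model.transcribe("audio.wav")  # segments is a lazy generator
    for seg in segments:
        print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")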
This package provides a Python library to easily read single characters and keystrokes.
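A minimal sketch of the library's two documented entry points:

    import readchar

    ch = readchar.readchar()   # blocks until one character is typed
    key = readchar.readkey()   # also decodes multi-byte keys such as arrows
    print(repr(ch), repr(key))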
LIBSVM is a machine learning library for support vector classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM). It supports multi-class classification.
TorchMetrics is a collection of 100+ PyTorch metric implementations and an easy-to-use API for creating custom metrics (a usage sketch follows this list). It offers:
A standardized interface to increase reproducibility
Reduced boilerplate
Automatic accumulation over batches
Metrics optimized for distributed training
Automatic synchronization between multiple devices
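A minimal sketch of that standardized update/compute interface (the multiclass task and class count are illustrative; the task argument assumes a recent TorchMetrics release):

    import torch
    import torchmetrics

    metric = torchmetrics.Accuracy(task="multiclass", num_classes=5)
    for _ in range(10):                        # simulate ten batches
        preds = torch.randn(8, 5).softmax(dim=-1)
        target = torch.randint(5, (8,))
        metric.update(preds, target)           # accumulates over batches
    print(metric.compute())                    # aggregated result
    metric.reset()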
Brian is a simulator for spiking neural networks written in Python. It is designed to be easy to learn and use, highly flexible, and easily extensible.
LightGBM is a gradient boosting framework that uses tree-based learning algorithms (a usage sketch follows this list). It is designed to be distributed and efficient, with the following advantages:
Faster training speed and higher efficiency
Lower memory usage
Better accuracy
Parallel and GPU learning supported (not enabled in this package)
Capable of handling large-scale data
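A minimal sketch of LightGBM's scikit-learn style classifier API on synthetic data:

    import numpy as np
    import lightgbm as lgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    clf = lgb.LGBMClassifier(n_estimators=50)  # tree-based gradient boosting
    clf.fit(X, y)
    print(clf.predict(X[:5]))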
This package provides easy download of thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied to:
Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages.
Images, for tasks like image classification, object detection, and segmentation.
Audio, for tasks like speech recognition and audio classification.
Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
This package provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community. At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them.
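A minimal sketch of the high-level pipeline API (the task name and input sentence are illustrative; the first call downloads a default pretrained model):

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("Reproducible builds make machine learning less painful."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99}]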
This tool provides ordinary differential equation solvers implemented in PyTorch. Backpropagation through ODE solutions is supported using the adjoint method for constant memory cost.
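A minimal sketch: solve dy/dt = -y with odeint and backpropagate through the solution (for the constant-memory adjoint method, the library also exposes odeint_adjoint):

    import torch
    from torchdiffeq import odeint

    def f(t, y):
        return -y                      # exact solution: y0 * exp(-t)

    y0 = torch.tensor([1.0], requires_grad=True)
    t = torch.linspace(0.0, 1.0, 5)
    ys = odeint(f, y0, t)              # solution at each time in t
    ys[-1].backward()                  # gradient flows through the solver
    print(ys[-1].item(), y0.grad)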
DLPack is an in-memory tensor structure for sharing tensors among frameworks.
Interpretable ML (iML) is a set of data type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or really the output of any function). It currently contains the interface and IO code from the Shap project, and it will potentially also do the same for the Lime project.
Captum is a model interpretability and understanding library for PyTorch. Captum contains general purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models. It has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others.
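A minimal sketch of integrated gradients on a toy PyTorch model (the model shape and target class are illustrative):

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
    ig = IntegratedGradients(model)
    inputs = torch.randn(1, 4, requires_grad=True)
    print(ig.attribute(inputs, target=0))   # per-feature attribution for class 0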
This is a modular Python implementation of t-Distributed Stochastic Neighbor Embedding (t-SNE), a popular dimensionality-reduction algorithm for visualizing high-dimensional data sets.
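Assuming this description refers to the openTSNE package, a minimal sketch on synthetic data:

    import numpy as np
    from openTSNE import TSNE

    X = np.random.default_rng(0).normal(size=(200, 50))  # 200 points, 50 dims
    embedding = TSNE(n_components=2).fit(X)              # array-like, shape (200, 2)
    print(np.asarray(embedding).shape)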
TensorFlow is a flexible platform for building and training machine learning models. This package provides the "lite" variant for mobile devices.
ML Collections is a library of Python collections designed for machine-learning use cases.
Inquirer should ease the process of asking end-user questions, parsing and validating answers, managing hierarchical prompts, and providing error feedback.
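A minimal sketch of the question/answer flow (the question definitions are illustrative):

    import inquirer

    questions = [
        inquirer.Text("name", message="What's your name?"),
        inquirer.List("editor", message="Pick an editor",
                      choices=["emacs", "vim", "nano"]),
    ]
    answers = inquirer.prompt(questions)   # dict keyed by question name
    print(answers)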
This package provides simple speech-to-text access for use on Linux without being tied to a desktop environment, using the vosk-api. The user configuration lets you manipulate text using Python string operations. It has zero overhead, as it relies on manual activation and runs no background processes. Dictation is started and stopped manually with the nerd-dictation begin and nerd-dictation end commands.
Autograd can automatically differentiate native Python and NumPy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization.
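A minimal sketch following the pattern from Autograd's documentation, composing grad to take derivatives of derivatives:

    import autograd.numpy as np   # NumPy wrapper that records operations
    from autograd import grad

    def tanh(x):
        return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

    d_tanh = grad(tanh)           # reverse-mode derivative of a scalar function
    dd_tanh = grad(d_tanh)        # derivatives compose arbitrarily
    print(tanh(1.0), d_tanh(1.0), dd_tanh(1.0))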