Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
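For example, the endpoint can be queried from any HTTP client. The sketch below uses libcurl (an assumption; any client works) and a hypothetical host name, and prints the response headers, where the pagination information lives, followed by the JSON body:

#include <curl/curl.h>
#include <stdio.h>

/* Print whatever libcurl hands us (used for both headers and body). */
static size_t print_chunk(char *data, size_t size, size_t nmemb, void *userdata)
{
    (void) userdata;
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Hypothetical host; substitute the instance actually serving this API. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://toys.example.org/api/packages?search=hello&page=1&limit=20");
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, print_chunk); /* pagination headers */
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, print_chunk);  /* result body */

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}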
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides the NVIDIA tool for debugging CUDA applications. CUDA-GDB is an extension to GDB, the GNU Project debugger. The tool provides developers with a mechanism for debugging CUDA applications running on actual hardware. This enables developers to debug applications without the potential variations introduced by simulation and emulation environments.
This package provides an interactive profiler for CUDA and NVIDIA OptiX that delivers detailed performance metrics and API debugging via a user interface and a command-line tool. Users can run guided analysis and compare results with a customizable and data-driven user interface, as well as post-process and analyze results in their own workflows.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the CUDA Deep Neural Network library.
CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN.
CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. Primitives for different levels of a conceptual parallelization hierarchy can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policies. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.
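As an illustrative sketch only, assuming the CUTLASS 2.x device-level GEMM API with single-precision, column-major operands (the pointers and the run_sgemm wrapper are hypothetical):

#include <cutlass/gemm/device/gemm.h>

// Single-precision GEMM with all operands column-major, executed on the device.
using ColumnMajor = cutlass::layout::ColumnMajor;
using Gemm = cutlass::gemm::device::Gemm<float, ColumnMajor,   // A
                                          float, ColumnMajor,   // B
                                          float, ColumnMajor>;  // C

// Computes C = alpha * A * B + beta * C for matrices already resident on the GPU.
cutlass::Status run_sgemm(int M, int N, int K,
                          float alpha, float const *A, int lda,
                          float const *B, int ldb,
                          float beta, float *C, int ldc)
{
    Gemm gemm_op;
    Gemm::Arguments args({M, N, K},      // problem size
                         {A, lda},       // tensor-ref for A
                         {B, ldb},       // tensor-ref for B
                         {C, ldc},       // tensor-ref for C (source)
                         {C, ldc},       // tensor-ref for D (destination)
                         {alpha, beta}); // epilogue scalars
    return gemm_op(args);                // launches the GEMM kernel
}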
This package provides a high-level library based on the cuBLAS and cuSPARSE libraries. It consists of two modules corresponding to two sets of APIs: the cuSolver API for a single GPU, and the cuSolverMG API for multiple GPUs on a single node. Each of these can be used independently or in concert with other toolkit libraries. The intent of cuSolver is to provide useful LAPACK-like features, such as common matrix factorization and triangular solve routines for dense matrices, a sparse least-squares solver, and an eigenvalue solver. In addition, cuSolver provides a new refactorization library useful for solving sequences of matrices with a shared sparsity pattern.
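A hedged sketch of the dense (cuSolverDn) path, assuming a double-precision, column-major matrix already resident on the GPU (the lu_factor wrapper and its pointers are hypothetical; error handling omitted):

#include <cusolverDn.h>
#include <cuda_runtime.h>

// LU-factorize an n x n device matrix dA (column-major, leading dimension lda) in place.
void lu_factor(double *dA, int n, int lda, int *d_ipiv, int *d_info)
{
    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    int lwork = 0;
    cusolverDnDgetrf_bufferSize(handle, n, n, dA, lda, &lwork);

    double *d_work = nullptr;
    cudaMalloc(&d_work, sizeof(double) * lwork);

    // In-place LU factorization with partial pivoting; d_info reports success or failure.
    cusolverDnDgetrf(handle, n, n, dA, lda, d_work, d_ipiv, d_info);

    cudaFree(d_work);
    cusolverDnDestroy(handle);
}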
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides high-performance, GPU-accelerated JPEG decoding functionality for image formats commonly used in deep learning and hyperscale multimedia applications. The library offers single and batched JPEG decoding capabilities that efficiently utilize the available GPU resources for optimum performance, as well as the flexibility for users to manage the memory allocation needed for decoding.
The nvJPEG library enables the following: using the JPEG image data stream as input; retrieving the width and height of the image from the data stream; and using this information to manage the GPU memory allocation and the decoding. A dedicated API is provided for retrieving the image information from the raw JPEG image data stream.
The encoding functions of the nvJPEG library perform GPU-accelerated compression of the user's image data to the JPEG bitstream. Users can provide input data in a number of formats and colorspaces, and control the encoding process with parameters. The encoding functionality allocates temporary buffers using a user-provided memory allocator.
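A minimal sketch of the image-information step described above, assuming the raw JPEG bytes are already in host memory (jpeg_data, jpeg_size, and the query_jpeg wrapper are hypothetical):

#include <nvjpeg.h>

// Query component count and per-channel dimensions straight from the JPEG bitstream.
void query_jpeg(const unsigned char *jpeg_data, size_t jpeg_size)
{
    nvjpegHandle_t handle;
    nvjpegCreateSimple(&handle);

    int nComponents = 0;
    nvjpegChromaSubsampling_t subsampling;
    int widths[NVJPEG_MAX_COMPONENT];
    int heights[NVJPEG_MAX_COMPONENT];

    // Reads only the JPEG header; no GPU decode happens yet. The retrieved sizes can
    // then be used to allocate the output buffers before calling the decode functions.
    nvjpegGetImageInfo(handle, jpeg_data, jpeg_size,
                       &nComponents, &subsampling, widths, heights);

    nvjpegDestroy(handle);
}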
This package provides an interface for generating PTX code from both binary and text NVVM IR inputs.
NCCL (pronounced "Nickel") is a stand-alone library of standard communication routines for NVIDIA GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. NCCL supports an arbitrary number of GPUs installed in a single node or across multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
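A hedged single-process sketch, assuming two GPUs and per-GPU device buffers that already hold count floats (all names below are hypothetical):

#include <nccl.h>
#include <cuda_runtime.h>

// Sum-reduce a float buffer across two GPUs within one process.
void all_reduce_two_gpus(float **sendbuff, float **recvbuff, size_t count)
{
    const int nDev = 2;
    int devs[nDev] = {0, 1};
    ncclComm_t comms[nDev];
    cudaStream_t streams[nDev];

    ncclCommInitAll(comms, nDev, devs);          // one communicator per GPU

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaStreamCreate(&streams[i]);
    }

    // Group the per-GPU calls so NCCL launches them as a single collective.
    ncclGroupStart();
    for (int i = 0; i < nDev; ++i)
        ncclAllReduce(sendbuff[i], recvbuff[i], count,
                      ncclFloat, ncclSum, comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaStreamSynchronize(streams[i]);
        ncclCommDestroy(comms[i]);
    }
}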
This package provides low-level Python bindings for the NVIDIA CUDA toolkit.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the headers for the NVIDIA Management Library (NVML), a C-based API for monitoring and managing various states of NVIDIA GPU devices. NVML provides direct access to the queries and commands exposed via nvidia-smi.
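For instance, a minimal sketch that queries the temperature of the first GPU (error handling omitted; link against the NVML library):

#include <nvml.h>
#include <stdio.h>

int main(void)
{
    nvmlInit();

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);          // first GPU in the system

    unsigned int temp = 0;
    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
    printf("GPU 0 temperature: %u C\n", temp);

    nvmlShutdown();
    return 0;
}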
This package provides CUDA C++ developers with building blocks that make it easier to write safe and efficient code. It unifies three essential, formerly separate CUDA C++ libraries into a single repository (a brief usage sketch follows the list):
Thrust
CUB
libcudacxx
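A minimal sketch using the Thrust component (CUB and libcudacxx are header-only libraries used in a similar include-and-call style):

#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/reduce.h>
#include <iostream>

int main()
{
    thrust::device_vector<int> v(1000);
    thrust::sequence(v.begin(), v.end());          // fill with 0, 1, 2, ...

    // Parallel reduction on the GPU.
    int sum = thrust::reduce(v.begin(), v.end(), 0);
    std::cout << "sum = " << sum << '\n';          // 499500
    return 0;
}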
This package provides a set of APIs which can be used at runtime to link together GPU device code. It supports Link Time Optimization.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the NVIDIA cuBLAS library. It includes several API extensions for providing drop-in industry standard BLAS APIs and GEMM APIs with support for fusions that are highly optimized for NVIDIA GPUs. The cuBLAS library also contains extensions for batched operations, execution across multiple GPUs, and mixed- and low-precision execution with additional tuning for the best performance.
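As an illustrative sketch of the standard BLAS-style entry points, assuming single-precision, column-major device buffers dA, dB, and dC (the sgemm wrapper and its pointers are hypothetical):

#include <cublas_v2.h>

// C = alpha * A * B + beta * C with A (m x k), B (k x n), C (m x n), column-major on the GPU.
void sgemm(int m, int n, int k,
           const float *dA, const float *dB, float *dC)
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k,
                &alpha, dA, m,    // lda = m
                        dB, k,    // ldb = k
                &beta,  dC, m);   // ldc = m

    cublasDestroy(handle);
}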
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the CUDA compiler and the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
This package provides the CUDA run-time support libraries for NVIDIA GPUs, all of which are proprietary.
CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN.
CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. Primitives for different levels of a conceptual parallelization hierarchy can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policies. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.
This package provides a system-wide performance analysis tool designed to visualize an application’s algorithms, identify the largest opportunities to optimize, and tune to scale efficiently across any quantity or size of CPUs and GPUs, from large servers to small systems-on-a-chip.