Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
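For illustration, here is a minimal Python sketch of calling this endpoint. The base URL is a placeholder for wherever the service is hosted, the body is assumed to be JSON, and the exact pagination header names depend on the deployment:

import json
import urllib.parse
import urllib.request

# Placeholder base URL; substitute the address where this service is hosted.
BASE_URL = "https://example.org"

# Build the query string: search is the query, page and limit control pagination.
params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})

with urllib.request.urlopen(f"{BASE_URL}/api/packages?{params}") as resp:
    print(dict(resp.headers))   # pagination metadata is carried in the response headers
    packages = json.load(resp)  # the package list itself is in the body (assumed JSON)

print(packages)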
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
(guix-science-nonfree packages machine-learning): Gloo is a collective communications library. It comes with a number of collective algorithms useful for machine learning applications. These include a barrier, broadcast, and allreduce.
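Gloo itself is a C++ library; one common, if indirect, way to exercise its collectives is through PyTorch's distributed package with the gloo backend. A minimal allreduce sketch, assuming python-pytorch is available and using an arbitrary local rendezvous address:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Rendezvous over TCP on the local machine (address and port are arbitrary choices).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)

    t = torch.ones(3) * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # collective: every rank ends up with the sum
    dist.barrier()                            # collective: synchronize all ranks
    if rank == 0:
        print(t)                              # tensor([3., 3., 3.]) with two ranks

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)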
(guix-science-nonfree packages machine-learning): PyTorch is a Python package that provides two high-level features:
tensor computation (like NumPy) with strong GPU acceleration;
deep neural networks (DNNs) built on a tape-based autograd system.
You can reuse Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Note: currently this package does not provide GPU support.
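A minimal sketch of the tape-based autograd workflow (CPU only, matching the note above):

import torch

x = torch.linspace(0.0, 1.0, steps=5, requires_grad=True)
y = (x ** 2).sum()   # the forward pass is recorded on the autograd tape
y.backward()         # reverse-mode differentiation replays the tape
print(x.grad)        # dy/dx = 2x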
(guix-science-nonfree packages machine-learning): TensorFlow is a flexible platform for building and training machine learning models. It provides a library for high-performance numerical computation and includes high-level Python APIs: a sequential API for beginners, which builds models quickly by plugging together building blocks, and a subclassing API with an imperative style for advanced research.
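For example, the sequential (Keras) API stacks layers as building blocks; a minimal sketch with made-up shapes and random data:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly on random data just to show the workflow.
x = tf.random.normal((16, 4))
y = tf.random.normal((16, 1))
model.fit(x, y, epochs=1, verbose=0)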
(guix-science-nonfree packages machine-learning): JAX is Autograd and XLA, brought together for high-performance numerical computing, including large-scale machine learning research. With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.
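A small sketch of those differentiation transforms on an arbitrary example function:

import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x ** 2

df  = jax.grad(f)            # reverse-mode (backpropagation)
d2f = jax.grad(jax.grad(f))  # derivatives of derivatives, composed freely
dff = jax.jacfwd(f)          # forward-mode differentiation

print(df(1.5), d2f(1.5), dff(1.5))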
(guix-science-nonfree packages machine-learning): TensorBoard is a visualization toolkit for TensorFlow, designed to provide metrics tracking, model visualization, and performance analysis. It allows users to generate interactive dashboards for monitoring training progress, visualizing computational graphs, and analyzing data distributions.
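For instance, scalar metrics can be logged with the tf.summary API and then inspected in the TensorBoard dashboard (the log directory name below is an arbitrary choice):

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for step in range(100):
        # Record a toy "loss" curve at each step.
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
writer.flush()

# Then point the dashboard at the log directory:
#   tensorboard --logdir logs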
(guix-science-nonfree packages machine-learning): TensorPipe provides a tensor-aware channel to transfer rich objects from one process to another while using the fastest transport for the tensors contained therein.
MUMPS (MUltifrontal Massively Parallel sparse direct Solver) solves a sparse system of linear equations A x = b using Gaussian elimination.
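Schematically, and ignoring the row/column permutations and scalings a real MUMPS run applies, the direct solve is an LU factorization followed by two triangular solves:

\[
  A = L U, \qquad L y = b, \qquad U x = y .
\]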
PETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
SuiteSparse is a suite of sparse matrix algorithms, including: UMFPACK, multifrontal LU factorization; CHOLMOD, supernodal Cholesky; SPQR, multifrontal QR; KLU and BTF, sparse LU factorization, well-suited for circuit simulation; ordering methods (AMD, CAMD, COLAMD, and CCOLAMD); CSparse and CXSparse, a concise sparse Cholesky factorization package; and many other packages.
This package contains all of the above-mentioned parts.
Intel® Math Kernel Library (MKL) is a proprietary library of highly optimized, extensively threaded routines for applications that require maximum performance. The library provides Fortran and C programming language interfaces. Intel MKL C language interfaces can be called from applications written in either C or C++, as well as in any other language that can reference a C interface.