Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
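For example, the endpoint can be queried with the Python standard library as sketched below. This is a minimal sketch: the host name is a placeholder and the shape of the JSON body is an assumption; only the /api/packages path and the search, page and limit parameters come from the documentation above.

# Minimal sketch of a query against the package search API using only the
# Python standard library.  The host is a placeholder; inspect the response
# headers for the pagination information mentioned above.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org"  # placeholder host, replace with the real instance

params = urllib.parse.urlencode({"search": "hello", "page": 1, "limit": 20})
with urllib.request.urlopen(f"{BASE_URL}/api/packages?{params}") as response:
    print(dict(response.headers))   # pagination metadata is returned in the headers
    results = json.load(response)   # the body is assumed to be JSON

print(results)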
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Chameleon is a dense linear algebra solver relying on sequential task-based algorithms, where sub-tasks of the overall algorithms are submitted to a run-time system. Such a system is a layer between the application and the hardware which handles the scheduling and the effective execution of tasks on the processing units. A run-time system such as StarPU can automatically manage data transfers between memory areas that are not shared (CPUs-GPUs, distributed nodes).
The Linear Algebra PACKage (LAPACK) is a standard software library for numerical linear algebra. The objective of LAPACK++ is to provide a convenient, performance-oriented API for development in the C++ language that, for the most part, preserves established conventions while taking advantage of modern C++ features such as namespaces, templates, and exceptions.
The Basic Linear Algebra Subprograms (BLAS) have been around for many decades and serve as the de facto standard for performance-portable and numerically robust implementations of essential linear algebra functionality. The objective of BLAS++ is to provide a convenient, performance-oriented API for development in the C++ language that, for the most part, preserves established conventions while taking advantage of modern C++ features such as namespaces, templates, and exceptions.
This project provides routines for performing low-rank matrix approximations based on randomized techniques.
PaStiX (Parallel Sparse matriX package) is a scientific library that provides a high-performance parallel solver for very large sparse linear systems based on direct methods. Numerical algorithms are implemented in single or double precision (real or complex) using LLt, LDLt and LU factorizations with static pivoting (for non-symmetric matrices having a symmetric pattern). The solver also provides low-rank compression methods to reduce the memory footprint and/or the time-to-solution.
Composyx is a linear algebra C++ library focused on composability. Its purpose is to let the user express a wide range of algorithms through a high-level interface, scaling from laptop prototypes to parallel computations on many-node supercomputers.
Libraries for Multivariate Data Analysis and Dimensionality Reduction for very large datasets.
Example showing how to use StarPU to implement a distributed GEMM in C++.
DPLASMA is the leading implementation of a dense linear algebra package for distributed, accelerated, heterogeneous systems. It is designed to deliver sustained performance for distributed systems where each node features multiple sockets of multicore processors and, if available, accelerators such as GPUs or Intel Xeon Phi. DPLASMA achieves this through the state-of-the-art PaRSEC runtime, porting the Parallel Linear Algebra Software for Multicore Architectures (PLASMA) algorithms to the distributed-memory realm.
Mini-chameleon is a dense linear algebra solver for educational purposes. As provided, it essentially supplies drivers while the actual computational routines remain to be completed. The goal is to implement a dense matrix-matrix product and an LU factorization, first as a sequential implementation, followed by a SIMD version, a shared-memory OpenMP one, a distributed-memory MPI one, an MPI+OpenMP one, and a runtime-based StarPU one; the sequential starting point is sketched below.
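For reference, the first milestone described above, a naive sequential matrix-matrix product, can be written in a few lines of Python. This is a generic illustration only, not code taken from mini-chameleon or its drivers.

# Naive sequential dense matrix-matrix product C = A * B.
# Generic illustration of the first milestone; not code from the package.
def gemm(A, B):
    m, k = len(A), len(A[0])
    k2, n = len(B), len(B[0])
    assert k == k2, "inner dimensions must match"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            s = 0.0
            for p in range(k):
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

# Example: a 2x2 product.
print(gemm([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]))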
Library implementing Block GMRES with inexact breakdown and deflated restarting, breakdown-free Block Conjugate Gradient, Block General Conjugate Residual, and Block General Conjugate Residual with inner orthogonalization, inexact breakdown and deflated restarting.
Ginkgo is a high-performance numerical linear algebra library for many-core systems, with a focus on the solution of sparse linear systems.