Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
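For example, the endpoint can be queried from C with libcurl. This is only an illustrative sketch: the host name below is a placeholder, since only the path and parameters are given above.

    /* Illustrative sketch: query the package search API with libcurl.
     * The host "https://example.org" is a placeholder; only the path and
     * query parameters come from the description above. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Search for "hello", first page, 20 items per page. */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.org/api/packages?search=hello&page=1&limit=20");
        /* Pagination details arrive in the response headers, so include
         * them in the output along with the body. */
        curl_easy_setopt(curl, CURLOPT_HEADER, 1L);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }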
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
DCMTK is a collection of libraries and applications implementing large parts of the DICOM standard. It includes software for examining, constructing and converting DICOM image files, handling offline media, sending and receiving images over a network connection, as well as demonstrative image storage and worklist servers.
The Insight Toolkit (ITK) is a toolkit for N-dimensional scientific image processing, segmentation, and registration. Segmentation is the process of identifying and classifying data found in a digitally sampled representation. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be aligned with an MRI scan in order to combine the information contained in both.
The Visualization Toolkit (VTK) is a C++ library for 3D computer graphics, image processing and visualization. It supports a wide variety of visualization algorithms, including scalar, vector, tensor, texture, and volumetric methods, as well as advanced modeling techniques such as implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. VTK has an extensive information visualization framework, has a suite of 3D interaction widgets, supports parallel processing, and integrates with various databases and GUI toolkits such as Qt and Tk.
TTK
The Open MPI Project is an MPI-3 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.
This is a minimalist MPI hello world.
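A comparable minimal program in C looks like the following sketch (not necessarily the exact source shipped by the package):

    /* Minimal MPI hello world: each rank prints its rank and the world size. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Build it with mpicc and run it with, for example, mpirun -np 4 ./hello.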
PaMPA is a C library for dynamic parallel remeshing and redistribution of unstructured meshes.
SimGrid is a scientific instrument to study the behavior of large-scale distributed systems such as grids, "clouds", HPC, and P2P systems. It can be used to evaluate heuristics, prototype applications or even assess legacy MPI applications.
SLURM is a fault-tolerant and highly scalable cluster management and job scheduling system for large and small clusters. It allocates access to resources (compute nodes) to users for some duration of time, provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes, and arbitrates contention for resources by managing a queue of pending work.
METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
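As an illustration of the library interface, a small graph in CSR form can be partitioned roughly as follows (a sketch assuming the METIS 5 C API):

    /* Sketch: partition a 4-vertex cycle graph into 2 parts using the
     * METIS 5 C API; the graph is given as CSR arrays xadj/adjncy. */
    #include <stdio.h>
    #include <metis.h>

    int main(void)
    {
        idx_t nvtxs = 4, ncon = 1, nparts = 2, objval;
        /* Cycle 0-1-2-3-0: each vertex is adjacent to its two neighbours. */
        idx_t xadj[]   = {0, 2, 4, 6, 8};
        idx_t adjncy[] = {1, 3, 0, 2, 1, 3, 0, 2};
        idx_t part[4];

        if (METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                NULL, NULL, NULL, &nparts,
                                NULL, NULL, NULL, &objval, part) != METIS_OK)
            return 1;

        for (idx_t i = 0; i < nvtxs; i++)
            printf("vertex %d -> part %d\n", (int)i, (int)part[i]);
        return 0;
    }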
StarPU is a run-time system that offers support for heterogeneous multicore machines. While many efforts are devoted to designing efficient computation kernels for those architectures (e.g. implementing BLAS kernels on GPUs), StarPU not only takes care of offloading such kernels (and implementing data coherency across the machine), but it also makes sure the kernels are executed as efficiently as possible.
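As a rough sketch of the programming model (assuming the starpu_task_insert helper of the StarPU 1.x C API), work is described by codelets and submitted as tasks that the runtime schedules on the available workers:

    /* Rough sketch, assuming the StarPU 1.x C API: declare a codelet with a
     * CPU implementation and submit a single task to the runtime. */
    #include <stdio.h>
    #include <starpu.h>

    static void hello_cpu(void *buffers[], void *cl_arg)
    {
        (void) buffers;
        (void) cl_arg;
        printf("hello from a StarPU task\n");
    }

    static struct starpu_codelet cl =
    {
        .cpu_funcs = { hello_cpu },
        .nbuffers  = 0,
    };

    int main(void)
    {
        if (starpu_init(NULL) != 0)
            return 1;

        starpu_task_insert(&cl, 0);     /* the runtime picks a worker */
        starpu_task_wait_for_all();

        starpu_shutdown();
        return 0;
    }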
PARCOACH is open-source software dedicated to the detection of collective errors in parallel applications.
SCOTCH is a set of programs and libraries which implement the static mapping and sparse matrix reordering algorithms developed within the SCOTCH project. Its purpose is to apply graph theory, with a divide and conquer approach, to scientific computing problems such as graph and mesh partitioning, static mapping, and sparse matrix ordering, in application domains ranging from structural mechanics to operating systems or bio-chemistry.
TopoMatch leverages the Scotch library to handle any type of topology, not only trees. Its main features are:
- Handles any type of topology (tgt Scotch format or hwloc format).
- Handles large communication patterns (up to hundreds of thousands of processes and processing units).
- Manages binding constraints: you can specify a subset of the nodes onto which you want to do the mapping.
- Manages oversubscribing: you can specify that more than one process can be mapped onto each processing unit.
- Deals with logical numbering; physical core numbering can be used with XML/HWLOC topologies.
- Provides exhaustive search for small cases.
- Adaptive algorithms that provide a good trade-off between quality and speed.
- Crucial sections of the code are multithreaded.
- Optimized I/O to read large input files.
- Portable on Unix-like systems (Linux, OS X, etc.).
- Many useful options (level of verbosity, topology optimization, partitioning, etc.).
Fypp is a Python-powered preprocessor. It can be used with any programming language, but its primary aim is to offer a Fortran preprocessor which helps to extend Fortran with conditional compiling and template metaprogramming capabilities. Instead of introducing its own expression syntax, it uses Python expressions in its preprocessor directives, offering the consistency and versatility of Python when formulating metaprogramming tasks. It puts strong emphasis on robustness and on neat integration into developing toolchains.
The open source library MpiFx provides modern Fortran (Fortran 2003) wrappers around routines of the MPI library to make their use as simple as possible. Currently several data distribution routines are covered.
HDF5 is a suite that makes possible the management of extremely large and complex data collections.
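As a feel for the C API, the following sketch creates a file containing a single small integer dataset (the file and dataset names are arbitrary examples, and error checking is omitted):

    /* Sketch: create an HDF5 file holding one 1-D integer dataset.
     * The file and dataset names are arbitrary examples. */
    #include <hdf5.h>

    int main(void)
    {
        int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        hsize_t dims[1] = {8};

        hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "/data", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }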
Adiak is a library for recording metadata about HPC simulations. An HPC application code may, for example, record which user invoked it, the version of the code being run, a computed time history showing density changes, or how long the application spent performing file I/O. Adiak represents this metadata as name/value pairs: names are arbitrary strings, with some standardization, and the values are represented by a flexible dynamic type system.
HDF5 tools for use by GEOS