Parallel Approach for Bandwidth Minimization Problem
Following up on my previous post about the Bandwidth Minimization Problem and the Windows Supercomputing Contest 2006 at Kasetsart University: the committee has now received all the codes, including the one I mentioned earlier. Okay, let me describe how it went. First, I tested them all, one by one, on an 8-processor node (dual-core, dual-processor, with Hyper-Threading) using an 8-node graph. As a result, the codes fell into 4 classes:
- Fail - incorrect output or execution failure
- 7 ms
- 35 ms
- 70 ms
You might be able to guess which class the code I mentioned fell into. Yes, 7 ms is the correct answer. Then I dug into the code and found that it was just a sequential program. Only 5 MPI functions were called in it: MPI_Init(), MPI_Comm_size(), MPI_Comm_rank(), MPI_Finalize(), and MPI_Wtime(). The technique used in this code is IDA*. Wow! Increasing the problem size didn't help the other parallel codes overcome this sequential code. Many thanks to Amdahl and Gustafson. Eventually, the sequential code was beaten by the other parallel codes on a bigger problem, say a 12-node graph, with more processors, say 16, 24, and 32. For the same 12-node input, the sequential code took around 200 seconds while a parallel code took around 120 seconds. Note that these times include the printf() calls that print all the answers.
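For readers who haven't met IDA* before, here is a rough sketch of how it can be applied to bandwidth minimization (this is my own illustration in Python, not the contestant's actual code; the function name and graph representation are mine). The idea: iteratively deepen a bandwidth threshold, and depth-first search partial vertex labelings, pruning any branch that already stretches an edge beyond the threshold.

```python
# Sketch of iterative-deepening search for graph bandwidth minimization.
# A labeling assigns positions 0..n-1 to vertices; the bandwidth is the
# maximum |pos(u) - pos(v)| over all edges. We deepen a threshold and
# prune partial labelings that already violate it.

def bandwidth_ida_star(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def search(pos, placed, threshold):
        # pos: vertices listed in label order so far
        if len(pos) == n:
            return True
        for v in range(n):
            if v in placed:
                continue
            # prune: new vertex must sit within threshold of every
            # already-placed neighbour
            ok = all(len(pos) - i <= threshold
                     for i, u in enumerate(pos) if u in adj[v])
            if ok:
                pos.append(v)
                placed.add(v)
                if search(pos, placed, threshold):
                    return True
                pos.pop()
                placed.remove(v)
        return False

    for threshold in range(n):  # iterative deepening on the bound
        if search([], set(), threshold):
            return threshold
    return 0  # only reached for n == 0
```

A depth-first search like this uses memory linear in the depth, which is exactly why a tight sequential implementation can be so hard to beat on small inputs.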
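Why the nod to Amdahl and Gustafson? Amdahl's law bounds the speedup of p processors by 1 / ((1 - f) + f / p), where f is the parallelisable fraction of the work, while Gustafson's observation is that f tends to grow with problem size, which is exactly what happened when the 12-node graph finally let the parallel codes win. A tiny illustrative calculation (the fraction here is an assumption of mine, not measured from the contest entries):

```python
# Amdahl's law: upper bound on speedup with parallel fraction f
# of the work spread over p processors.

def amdahl_speedup(f, p):
    """Best-case speedup; f in [0, 1], p >= 1 processors."""
    return 1.0 / ((1.0 - f) + f / p)

# Even at a generous f = 0.9, eight processors can give at most
# about 4.7x, so a sequential code that is already 5-10x faster
# per step still wins on small inputs.
```

With f = 1.0 the bound becomes exactly p, which is the idealised case the parallel entries were chasing.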
This is just the power of parallel computing! You don't need to be wise enough to write a very efficient code for a single processor. It is simpler to write a stupid code that is able to run on multiple machines. Welcome to the real world!
Technorati Tags: English, Cluster Computing, Grid Computing, Utility Computing, IT, Programming, Tips and Tricks, MPI, Parallel Computing
- sugree's blog