Reputation: 1093
I ran a Hadoop MapReduce job on a 1.1 GB file multiple times with different numbers of mappers and reducers (e.g. 1 mapper and 1 reducer, 1 mapper and 2 reducers, 1 mapper and 4 reducers, ...).
Hadoop is installed on a quad-core machine with hyper-threading.
These are the top 5 results, sorted by shortest execution time:
+----------+----------+----------+
|   time   | mappers  | reducers |
+----------+----------+----------+
|  7m 50s  |    8     |    2     |
|  8m 13s  |    8     |    4     |
|  8m 16s  |    8     |    8     |
|  8m 28s  |    4     |    8     |
|  8m 37s  |    4     |    4     |
+----------+----------+----------+
The results for 1-8 mappers and 1-8 reducers (columns = # of mappers, rows = # of reducers; times in mm:ss):
+---------+---------+---------+---------+---------+
| red\map |    1    |    2    |    4    |    8    |
+---------+---------+---------+---------+---------+
|    1    |  16:23  |  13:17  |  11:27  |  10:19  |
+---------+---------+---------+---------+---------+
|    2    |  13:56  |  10:24  |  08:41  |  07:52  |
+---------+---------+---------+---------+---------+
|    4    |  14:12  |  10:21  |  08:37  |  08:13  |
+---------+---------+---------+---------+---------+
|    8    |  14:09  |  09:46  |  08:28  |  08:16  |
+---------+---------+---------+---------+---------+
(1) It looks like the program runs slightly faster when I have 8 mappers, but why does it slow down as I increase the number of reducers? (e.g. 8 mappers/2 reducers is faster than 8 mappers/8 reducers)
(2) When I use only 4 mappers, it's a bit slower simply because I'm not utilizing the other 4 cores, right?
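(For context: counts like the ones above can be set as in the following minimal sketch. This is not my actual job; the class name and paths are placeholders. Note that the map count is only a hint, since Hadoop ultimately derives it from the input splits, while the reducer count is authoritative.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapReduceTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hint only: the actual number of map tasks follows from the input splits.
        conf.setInt("mapreduce.job.maps", 8);
        Job job = Job.getInstance(conf, "tuning run");
        job.setJarByClass(MapReduceTuning.class);
        // Mapper/reducer classes omitted; Hadoop's identity defaults apply.
        job.setNumReduceTasks(2); // this setting is authoritative
        FileInputFormat.addInputPath(job, new Path("/input/file-1.1gb"));    // placeholder
        FileOutputFormat.setOutputPath(job, new Path("/output/run-8m-2r")); // placeholder
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}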
Upvotes: 10
Views: 16626
Reputation: 5157
Quoting from "Hadoop: The Definitive Guide", 3rd edition, page 306:
Because MapReduce jobs are normally I/O-bound, it makes sense to have more tasks than processors to get better utilization.
The amount of oversubscription depends on the CPU utilization of jobs you run, but a good rule of thumb is to have a factor of between one and two more tasks (counting both map and reduce tasks) than processors.
A processor in the quote above is equivalent to one logical core, so your hyper-threaded quad-core counts as 8 processors, which suggests roughly 8 to 16 concurrent tasks in total.
But this is just theory, and each use case will likely differ from the next; as Niels' detailed explanation shows, some tests need to be performed.
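To make the rule of thumb concrete, here is a minimal sketch (illustrative only, not from the book) that turns it into numbers:

public class RuleOfThumb {
    public static void main(String[] args) {
        // availableProcessors() reports logical cores: 8 on the asker's
        // hyper-threaded quad-core machine.
        int processors = Runtime.getRuntime().availableProcessors();
        int lowEnd  = processors;      // 1x: 8 tasks (maps + reduces combined)
        int highEnd = 2 * processors;  // 2x: 16 tasks
        System.out.println("Aim for " + lowEnd + "-" + highEnd + " concurrent tasks");
    }
}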
Upvotes: 0
Reputation: 10652
The optimal number of mappers and reducers depends on a lot of things.
The main thing to aim for is a balance between the CPU power used, the amount of data transported (into the mappers, between the mappers and reducers, and out of the reducers), and the disk head movements.
Each task in a MapReduce job works best if it can read/write its data with minimal disk head movements, usually described as "sequential reads/writes". But if the task is CPU-bound, the extra disk head movements do not impact the job.
It seems to me that in this specific case you have:
- a mapper that burns quite a few CPU cycles (more mappers make the job faster because the CPU is the bottleneck and the disks can keep up supplying the input data);
- a reducer that uses almost no CPU and is mostly IO-bound (with one or two reducers you are still CPU-bound, yet with 4 or more reducers you seem to be IO-bound, and the disk heads move 'too much').
Possible ways to handle this kind of situation:
First, do exactly what you did: run some tests and see which setting performs best for this specific job on your specific cluster.
Then you have three options:
- Accept the situation as it is.
- Shift load from the CPU to the disks, or the other way around.
- Get a bigger cluster: more CPUs and/or more disks.
Suggestions for shifting the load:
- If CPU bound and all CPUs are fully loaded, reduce the CPU load:
  - Check for needless CPU cycles in your code.
  - Switch to a compression codec with lower CPU impact, e.g. go from GZip to Snappy, or to no compression at all.
  - Tune the number of mappers/reducers in your job.
- If IO bound and you have some CPU capacity left:
  - Enable compression: the CPUs work a bit harder and the disks have less to do (a minimal code sketch follows below).
  - Try various compression codecs (Snappy and GZip are both reasonable choices).
  - Tune the number of mappers/reducers in your job.
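As an illustration of the compression knobs mentioned above, here is a minimal sketch that enables Snappy for the intermediate map output. The property names are the Hadoop 2 mapreduce.* ones; older releases use mapred.* equivalents, so check them against your version:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;

public class CompressionDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Compress the map output that is spilled to disk and shuffled to
        // the reducers; this trades some CPU for less disk and network IO.
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                      SnappyCodec.class, CompressionCodec.class);
        Job job = Job.getInstance(conf, "compression demo");
        // ... mapper/reducer/input/output setup as usual ...
    }
}

Compressing the map output shrinks exactly the bytes that cause the extra disk head movements described above, which is why it helps when the reducers are IO-bound.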
Upvotes: 19