re3el

Reputation: 785

Block Size in hadoop

I am currently working on a four-node cluster. Can anyone suggest an appropriate block size for working on a 22 GB input file? Thanks in advance.

Here are my performance results: 64 MB - 32 min, 128 MB - 19.4 min, 256 MB - 15 min.

Now, should I consider making it much larger, say 1 GB or 2 GB? Kindly explain if there are any issues with doing so.

Edit: Also, if performance increases with increasing block size for a 20 GB input file, why is the default block size 64 MB or 128 MB? Kindly answer the similar question over here as well.

Upvotes: 0

Views: 300

Answers (2)

WestCoastProjects

Reputation: 63062

How heavy is the per-line processing? If it were simply a kind of "grep", then you should be fine increasing the block size up to 1 GB. Why not simply try it out? Your performance numbers already indicate a positive result from increasing the block size.

The case for smaller block sizes would be if each line required significant ancillary processing, but that seems doubtful given the performance trend you have already established.
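
If you want to try it without touching the cluster-wide default, you can write just this one file with a larger block size via the HDFS FileSystem API. Here is a minimal sketch, assuming a local copy of the input; the 1 GB value, the paths, and the replication factor are placeholders, not anything from your setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    import java.io.FileInputStream;
    import java.io.InputStream;

    public class UploadWithLargeBlock {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            long oneGb = 1024L * 1024 * 1024;           // block size to test
            Path dst = new Path("/data/input-22g.txt"); // hypothetical HDFS path

            // create(path, overwrite, bufferSize, replication, blockSize)
            try (InputStream in = new FileInputStream("input-22g.txt"); // hypothetical local copy
                 FSDataOutputStream out = fs.create(dst, true, 4096, (short) 3, oneGb)) {
                IOUtils.copyBytes(in, out, 4096, false);
            }
        }
    }

The same effect is available from the shell by passing -D dfs.blocksize=1073741824 to hdfs dfs -put when copying the file in.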

Upvotes: 0

Makubex

Reputation: 419

What split size are you going to use for processing this file? If it's slightly more than the default block size, then I'd suggest changing the block size to the split size value. This should increase the chance of data locality for the mappers, thereby improving job throughput.

The split size is computed by the input format:

    protected long computeSplitSize(long blockSize, long minSize,
                                    long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

minSize and maxSize can be manipulated using the following configuration parameters:

mapreduce.input.fileinputformat.split.minsize

mapreduce.input.fileinputformat.split.maxsize

You can find the detailed data flow in the FileInputFormat class.
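
As a rough sketch of how those two knobs drive the formula above (the 256 MB target and the job name are placeholder values, not anything from the question): pinning both bounds to the same value fixes the split size regardless of the stored block size.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SplitSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "split-size-test"); // hypothetical job name

            long target = 256L * 1024 * 1024; // assumed 256 MB target split size

            // Equivalent to setting mapreduce.input.fileinputformat.split.minsize
            // and mapreduce.input.fileinputformat.split.maxsize on the job conf.
            FileInputFormat.setMinInputSplitSize(job, target);
            FileInputFormat.setMaxInputSplitSize(job, target);

            // computeSplitSize(blockSize, minSize, maxSize)
            //   = max(256 MB, min(256 MB, blockSize)) = 256 MB for any block size,
            // so a 22 GB input yields roughly 22 * 1024 / 256 ≈ 88 splits (mappers).
        }
    }

Keep in mind that splits larger than the block size sacrifice some data locality, which is why matching the block size to the split size, as suggested above, is usually the better option.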

Upvotes: 1
