Zach

Reputation: 1273

Hadoop Reducer Values in Memory?

I'm writing a MapReduce job that may end up with a huge number of values in the reducer. I am concerned about all of these values being loaded into memory at once.

Does the underlying implementation of the Iterable<VALUEIN> values load values into memory as they are needed? Hadoop: The Definitive Guide seems to suggest this is the case, but doesn't give a "definitive" answer.

The reducer output will be far more massive than the values input, but I believe the output is written to disk as needed.

Upvotes: 9

Views: 5090

Answers (3)

Ravindra babu

Reputation: 38910

As noted in other answers, the entire data set is not loaded into memory. Have a look at some of the mapred-site.xml parameters from the Apache documentation:

mapreduce.reduce.merge.inmem.threshold

Default value: 1000. The threshold, in terms of the number of files, for the in-memory merge process.

mapreduce.reduce.shuffle.merge.percent

Default value is 0.66. The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapreduce.reduce.shuffle.input.buffer.percent.

mapreduce.reduce.shuffle.input.buffer.percent

Default value is 0.70. The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle.

mapreduce.reduce.input.buffer.percent

Default value is 0. The percentage of memory, relative to the maximum heap size, to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin.

mapreduce.reduce.shuffle.memory.limit.percent

Default value is 0.25. The maximum percentage of the in-memory limit that a single shuffle can consume.
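These properties can be tuned in mapred-site.xml. A minimal sketch of what such an override might look like (the values shown are just the documented defaults, not recommendations for your workload):

```xml
<!-- mapred-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
    <value>0.70</value>
  </property>
  <property>
    <name>mapreduce.reduce.input.buffer.percent</name>
    <value>0.0</value>
  </property>
  <property>
    <name>mapreduce.reduce.merge.inmem.threshold</name>
    <value>1000</value>
  </property>
</configuration>
```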

Upvotes: 0

Ion C. Olaru

Reputation: 21

It's not entirely in memory; some of it comes from disk. Looking at the code, it seems the framework breaks the Iterable into segments and loads them from disk into memory one by one. See:

org.apache.hadoop.mapreduce.task.ReduceContextImpl
org.apache.hadoop.mapred.BackupStore

Upvotes: 2

Girish Rao

Reputation: 2669

You're reading the book correctly. The reducer does not store all values in memory. Instead, when looping through the Iterable value list, each Object instance is re-used, so it only keeps one instance around at a given time.

For example, in the following code, the objs ArrayList will have the expected size after the loop, but every element will be the same because the Text val instance is re-used on every iteration.

public static class ReducerExample extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context) {
        ArrayList<Text> objs = new ArrayList<Text>();
        for (Text val : values) {
            objs.add(val);
        }
    }
}

(If for some reason you did want to take further action on each val, you should make a deep copy and then store it.)
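To see the reuse effect without pulling in Hadoop, here is a plain-Java sketch. The ReusingIterable class is an invention for illustration (it is not part of any Hadoop API): it hands back the same mutable StringBuilder on every call to next(), mimicking how the reducer's iterator reuses its value instance, and shows why storing references fails while storing copies works.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for Hadoop's value iterator: one mutable
// object is overwritten and returned on every next() call.
class ReusingIterable implements Iterable<StringBuilder> {
    private final String[] data;
    private final StringBuilder reused = new StringBuilder();

    ReusingIterable(String... data) { this.data = data; }

    public Iterator<StringBuilder> iterator() {
        return new Iterator<StringBuilder>() {
            private int i = 0;
            public boolean hasNext() { return i < data.length; }
            public StringBuilder next() {
                reused.setLength(0);       // overwrite the same instance,
                reused.append(data[i++]);  // like the reader's next() does
                return reused;
            }
        };
    }
}

class ReuseDemo {
    public static void main(String[] args) {
        Iterable<StringBuilder> values = new ReusingIterable("a", "b", "c");

        // Storing references: every element is the same object, so all
        // of them end up holding the final contents.
        List<StringBuilder> refs = new ArrayList<StringBuilder>();
        for (StringBuilder v : values) refs.add(v);
        System.out.println(refs.get(0)); // prints "c", not "a"

        // Storing copies keeps each value intact.
        List<String> copies = new ArrayList<String>();
        for (StringBuilder v : values) copies.add(v.toString());
        System.out.println(copies.get(0)); // prints "a"
    }
}
```

With Hadoop's Text, the equivalent copy is the copy constructor the book mentions: objs.add(new Text(val)).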

Of course, even a single value could be larger than memory. In that case it is recommended that the developer take steps to pare the data down in the preceding Mapper so that the value is not so large.

UPDATE: See pages 199-200 of Hadoop: The Definitive Guide, 2nd Edition.

This code snippet makes it clear that the same key and value objects are used on each invocation of the map() method -- only their contents are changed (by the reader's next() method). This can be a surprise to users, who might expect keys and values to be immutable. This causes problems when a reference to a key or value object is retained outside the map() method, as its value can change without warning. If you need to do this, make a copy of the object you want to hold on to. For example, for a Text object, you can use its copy constructor: new Text(value).

The situation is similar with reducers. In this case, the value objects in the reducer's iterator are reused, so you need to copy any that you need to retain between calls to the iterator.

Upvotes: 15
