Hernan

Reputation: 1148

How to configure hadoop's mapper so that it takes <Text,IntWritable>

I'm using two mappers and two reducers. I'm getting the following error:

java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

This is because the first reducer writes <Text, IntWritable> and my second mapper receives <Text, IntWritable> but, as I read, mappers take <LongWritable, Text> by default.

So, I have to set the input format with something like:

job2.setInputFormatClass(MyInputFormat.class);

Is there a way to set the InputFormat class to receive <Text,IntWritable>?
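For reference, this is roughly what I want my second mapper to look like (the class name and the pass-through logic are just placeholders), with <Text, IntWritable> as its input types:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Second mapper: should consume the <Text, IntWritable> pairs written by the first reducer
public class SecondMapper extends Mapper<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void map(Text key, IntWritable value, Context context)
            throws IOException, InterruptedException {
        // placeholder: just forward the pair unchanged
        context.write(key, value);
    }
}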

Upvotes: 3

Views: 420

Answers (2)

Binary Nerd

Reputation: 13937

The input types to your mapper are set by the InputFormat as you suspect.

Generally, when you're chaining jobs together like this, it's best to use SequenceFileOutputFormat in the first job and SequenceFileInputFormat in the next one. That way the types are handled for you; you just set them to match, i.e. the second mapper's input types are the same as the previous reducer's output types.
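A minimal sketch of that wiring, assuming the first reducer emits Text keys and IntWritable values and that "intermediate" is a path of your choosing:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// First job: store the reducer output as a sequence file of <Text, IntWritable>
Job job1 = Job.getInstance(conf, "job1");
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(IntWritable.class);
job1.setOutputFormatClass(SequenceFileOutputFormat.class);
FileOutputFormat.setOutputPath(job1, new Path("intermediate"));
job1.waitForCompletion(true);

// Second job: read the sequence file back, so the mapper receives <Text, IntWritable>
Job job2 = Job.getInstance(conf, "job2");
job2.setInputFormatClass(SequenceFileInputFormat.class);
FileInputFormat.addInputPath(job2, new Path("intermediate"));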

Upvotes: 2

vefthym

Reputation: 7462

You don't need your own input format. All you need is to set SequenceFileOutputFormat for the first job and SequenceFileInputFormat for the second job.

TextInputFormat produces LongWritable keys and Text values, but SequenceFileInputFormat produces whatever key and value types the previous job stored.
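Concretely, assuming the first job emits Text/IntWritable, the only changes needed are these (the job variable names are illustrative):

// First job: write its output as a sequence file
job1.setOutputFormatClass(SequenceFileOutputFormat.class);
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(IntWritable.class);

// Second job: read it back; the mapper then gets <Text, IntWritable> instead of <LongWritable, Text>
job2.setInputFormatClass(SequenceFileInputFormat.class);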

Upvotes: 2
