In my MapReduce job, I just want to output some lines.
But if I write: context.write(data, null);
the program throws a java.lang.NullPointerException.
I don't want to write context.write(data, new Text(""));
because then I have to trim the trailing whitespace from every line in the output files.
Is there a good way to solve this? Thanks in advance.
Sorry, it was my mistake. After checking the program carefully, I found the cause: I had set the Reducer as the combiner.
If I don't use the combiner, the statement context.write(data, null); in the reducer works fine, and the output file contains just the data lines. (Presumably the combiner's output is serialized again as intermediate map output, and a null value cannot be serialized there, hence the NullPointerException.)
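The cleaner fix is to declare the reducer's output value type as NullWritable and call context.write(data, NullWritable.get()), which works with or without a combiner. To see why a null (or NullWritable) value gives a clean line while new Text("") leaves a trailing tab, here is a simplified stand-in for the line formatting that TextOutputFormat performs (an illustration, not Hadoop's actual code):

```java
// Simplified sketch of TextOutputFormat's line writing (illustrative,
// not Hadoop's actual implementation): the key/value separator (tab by
// default) is only emitted when both a key and a value are written.
public class LineFormat {
    private static final String SEPARATOR = "\t"; // Hadoop's default separator

    // Returns the line that would be written for the given key and value;
    // a null part is skipped entirely, including its separator.
    static String formatLine(String key, String value) {
        String k = (key == null) ? "" : key;
        String v = (value == null) ? "" : value;
        String sep = (key != null && value != null) ? SEPARATOR : "";
        return k + sep + v;
    }

    public static void main(String[] args) {
        System.out.println(formatLine("data", null)); // just "data"
        System.out.println(formatLine("data", ""));   // "data\t" -- trailing tab
    }
}
```

This is why writing an empty Text value forces you to trim every line: the separator is still emitted before the empty string.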
Sharing the NullWritable explanation from Hadoop: The Definitive Guide:
NullWritable is a special type of Writable, as it has a zero-length serialization. No bytes are written to, or read from, the stream. It is used as a placeholder; for example, in MapReduce, a key or a value can be declared as a NullWritable when you don’t need to use that position—it effectively stores a constant empty value. NullWritable can also be useful as a key in SequenceFile when you want to store a list of values, as opposed to key-value pairs. It is an immutable singleton: the instance can be retrieved by calling NullWritable.get().
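To make the "zero-length serialization" point concrete, here is a stdlib-only sketch of a NullWritable-style singleton (an illustration, not Hadoop's actual class): serializing it writes zero bytes, so pairing it with a key adds nothing to the output stream.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a NullWritable-style value (illustrative, not Hadoop's class):
// an immutable singleton whose serialization is zero-length.
public class NullValue {
    private static final NullValue INSTANCE = new NullValue();

    private NullValue() {}          // no public constructor

    public static NullValue get() { // retrieved like NullWritable.get()
        return INSTANCE;
    }

    // Serialization writes no bytes at all.
    public void write(DataOutput out) throws IOException {}

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        NullValue.get().write(new DataOutputStream(buffer));
        System.out.println(buffer.size()); // 0 -- nothing was serialized
    }
}
```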