Reputation: 1255
I am writing a MapReduce application that takes input in (key, value) format and simply emits the same data from the reducer.
This is the sample input:
1500s 1
1960s 1
Aldus 1
In the code below, I am specifying the input format as KeyValueTextInputFormat and setting the delimiter to tab in main(). When I run the code, I get the following error:
java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.LongWritable
at cscie63.examples.WordDesc$KVMapper.map(WordDesc.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
I have tried different things to debug this, but nothing has helped. Here is my code:
public class WordDesc {

    public static class KVMapper
            extends Mapper<Text, LongWritable, Text, LongWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Text key, LongWritable value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static class KVReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, LongWritable value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word desc");
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        job.setJarByClass(WordDesc.class);
        job.setMapperClass(KVMapper.class);
        job.setCombinerClass(KVReducer.class);
        job.setReducerClass(KVReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Upvotes: 2
Views: 775
Reputation: 7462
I guess that this line:

job.setInputFormatClass(KeyValueTextInputFormat.class);

tells your program to treat your input as key/value pairs of Text. Therefore, when you declare your input value as a LongWritable, you get this exception.

A quick fix would be to read your input value as Text and then, if you want a LongWritable, parse it yourself:
public static class KVMapper
        extends Mapper<Text, Text, Text, LongWritable> {

    private final static LongWritable val = new LongWritable();

    public void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        val.set(Long.parseLong(value.toString()));
        context.write(key, val);
    }
}
What this does is the following: value is a Text, so value.toString() gives the String representation of that Text; Long.parseLong() then parses this String as a long; and finally val.set() stores it in a LongWritable.
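To make the conversion chain concrete, here is a small plain-Java sketch of what KeyValueTextInputFormat plus the mapper effectively do with one record of the sample input. The class and helper below (ParseDemo, splitKeyValue) are hypothetical stand-ins for illustration; no Hadoop types are involved, only the same splitting and parsing steps.

```java
// Plain-Java illustration of the conversion chain (hypothetical demo class,
// not part of the Hadoop job above).
public class ParseDemo {

    // Mimics KeyValueTextInputFormat: split each line at the first tab
    // into a (key, value) pair of Strings.
    static String[] splitKeyValue(String line) {
        int tab = line.indexOf('\t');
        return new String[] { line.substring(0, tab), line.substring(tab + 1) };
    }

    public static void main(String[] args) {
        String line = "1500s\t1";            // one record from the sample input
        String[] kv = splitKeyValue(line);
        // This is what Long.parseLong(value.toString()) does in the mapper:
        long count = Long.parseLong(kv[1]);
        System.out.println(kv[0] + " -> " + count); // prints "1500s -> 1"
    }
}
```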
By the way, I don't think you need a Reducer for this at all. You could make the job faster by setting the number of reduce tasks to 0, which turns it into a map-only job whose mapper output is written directly to HDFS.
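For reference, the driver change for a map-only job might look like the sketch below. This is an untested configuration fragment assuming the same WordDesc driver and KVMapper as above; the combiner and reducer registrations are simply dropped.

```java
// Driver fragment for a map-only job (assumes the WordDesc classes above).
Job job = Job.getInstance(conf, "word desc");
job.setInputFormatClass(KeyValueTextInputFormat.class);
job.setJarByClass(WordDesc.class);
job.setMapperClass(KVMapper.class);
job.setNumReduceTasks(0);   // no shuffle/reduce; map output is the job output
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(LongWritable.class);
```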
Upvotes: 1