Reputation: 41
After successfully setting up my Hadoop environment, I tried to run the WordCount example on Hadoop version 0.19.1, but the job fails with the error below. How can I solve this?
11/12/30 06:46:13 INFO mapred.FileInputFormat: Total input paths to process : 1
11/12/30 06:46:14 INFO mapred.JobClient: Running job: job_201112300255_0019
11/12/30 06:46:15 INFO mapred.JobClient: map 0% reduce 0%
11/12/30 06:46:20 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000003_0, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
11/12/30 06:46:24 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000003_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
11/12/30 06:46:28 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000003_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
11/12/30 06:46:35 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000002_0, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
11/12/30 06:46:39 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000002_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
11/12/30 06:46:44 INFO mapred.JobClient: Task Id : attempt_201112300255_0019_m_000002_2, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
at word.count.WordCount.main(WordCount.java:53)
Please help me resolve this error.
Upvotes: 0
Views: 930
Reputation: 10642
The Hadoop version you are using has a job tracker that manages the entire job and, for each part of that job (called a task), a task tracker that actually does the work. The output you posted comes from the job tracker, and it essentially just says: a task failed. To find out what actually went wrong inside that task, you have to look at the log files that belong to the failed task attempt (for example attempt_201112300255_0019_m_000003_0).
You can reach those logs via the MapReduce web interface of your cluster.
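As a hedged sketch of where to look: in Hadoop 0.19.x the JobTracker web UI listened on port 50030 by default, and per-attempt logs were written under the `userlogs` directory on the worker node. The host name `master` below is a placeholder; substitute your own JobTracker host.

```shell
# Host name is an assumption; port 50030 was the default JobTracker
# web UI port in Hadoop 0.19.x.
JT_HOST=master
JOB_ID=job_201112300255_0019

# Job overview page in the web UI, which links to each failed task attempt:
JOB_URL="http://$JT_HOST:50030/jobdetails.jsp?jobid=$JOB_ID"
echo "$JOB_URL"

# On the worker node itself, the per-attempt logs (stdout, stderr, syslog)
# usually live under the userlogs directory of the Hadoop log dir:
ATTEMPT=attempt_201112300255_0019_m_000003_0
echo "\$HADOOP_HOME/logs/userlogs/$ATTEMPT/stderr"
```

The `stderr` file of the failed attempt is usually the most informative one; a nonzero exit status of 1 from the child JVM often points to a problem visible there (for example a misconfigured `JAVA_HOME` or a classpath issue).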
Upvotes: 2