Shawn

Reputation: 1471

What can cause Hadoop to kill a reducer task and retry it?

My Hadoop job shows a very high 'Killed Task Attempts' count on its reducer tasks. I checked the status of a killed task:

Request received to kill task 'attempt_201308122006_41526_r_000030_1' by user
-------
Task has been KILLED_UNCLEAN by the user

and there are no stdout or stderr logs.

What could cause this, and how can I solve it?

Upvotes: 0

Views: 2416

Answers (2)

user3204916

Reputation: 36

If it's not speculative execution, it could be that the Fair Scheduler kicked in, preempting task trackers to reclaim slots for a pool with minMaps and minReduces configured; see the sketch below.
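For illustration, here is a minimal sketch of a Fair Scheduler allocations file (fair-scheduler.xml in Hadoop 1.x / MR1) that guarantees slots to a pool; the pool name "production" and the slot counts are hypothetical:

<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical pool: the scheduler may kill task attempts from
       other pools to satisfy these guaranteed slot minimums -->
  <pool name="production">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
  </pool>
</allocations>

When another pool is holding this pool's guaranteed slots past the preemption timeout, the Fair Scheduler kills task attempts from it to free slots, and those attempts show up as killed rather than failed.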

Upvotes: 2

Chris White

Reputation: 30089

If you have speculative execution turned on, then you will potentially see a number of map / reduce tasks that are 'killed'. This is because Hadoop runs long-running tasks speculatively on more than one task tracker; the first attempt to complete 'wins' and the others are killed off.

In general I would only worry about the task attempts that 'failed' in the job tracker.

Try turning speculative execution off:

mapred.map.tasks.speculative.execution = false
mapred.reduce.tasks.speculative.execution = false
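You can also set these per job in the driver code instead of cluster-wide; a minimal sketch using the classic org.apache.hadoop.mapred API, where MyJob is a hypothetical job class:

import org.apache.hadoop.mapred.JobConf;

// Disable speculative execution for this job only; equivalent
// to setting the two mapred-site.xml properties shown above
JobConf conf = new JobConf(MyJob.class);  // MyJob is hypothetical
conf.setMapSpeculativeExecution(false);
conf.setReduceSpeculativeExecution(false);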

Upvotes: 3
