Shekhar

Reputation: 11788

# of failed Map Tasks exceeded allowed limit

I am trying my hand at Hadoop streaming using Python. I have written simple map and reduce scripts by taking help from here.

The map script is as follows:

#!/usr/bin/env python

import sys, urllib, re

# Match the contents of the <title> tag, across newlines and case-insensitively.
title_re = re.compile("<title>(.*?)</title>", re.MULTILINE | re.DOTALL | re.IGNORECASE)

for line in sys.stdin:
    url = line.strip()
    # Fetch the page and search it for a title.
    match = title_re.search(urllib.urlopen(url).read())
    if match:
        # Note: with comma-separated arguments, print inserts a space
        # on each side of the tab separator.
        print url, "\t", match.group(1).strip()

and the reduce script is as follows:

#!/usr/bin/env python

import sys

# Identity reducer: pass each input line through unchanged.
for line in sys.stdin:
    line = line.strip()
    print line
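
For reference, the two scripts can be smoke-tested locally, outside Hadoop, by piping a file of URLs through them. A minimal sketch, assuming an input file urls.txt with one URL per line (the file name is illustrative; the script paths are the ones from the question):

# Local smoke test for the streaming scripts, run outside Hadoop.
# Hadoop would sort between the two stages; with an identity reducer
# the sort is omitted here. Both scripts must be executable.
import subprocess

with open("urls.txt") as infile:
    mapper = subprocess.Popen(["/host/Shekhar/HadoopWorld/MultiFetch.py"],
                              stdin=infile, stdout=subprocess.PIPE)
    reducer = subprocess.Popen(["/host/Shekhar/HadoopWorld/reducer.py"],
                               stdin=mapper.stdout)
    mapper.stdout.close()  # let the mapper see SIGPIPE if the reducer exits
    reducer.communicate()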

After running these scripts with the Hadoop streaming jar, the map tasks finish and I can see that they are 100% complete, but the reduce job gets stuck at 22%. After a long period of time it fails with: ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1.

I am not able to find the exact reason behind this.

My terminal output is as follows:

shekhar@ubuntu:/host/Shekhar/Softwares/hadoop-1.0.0$ hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar -mapper /host/Shekhar/HadoopWorld/MultiFetch.py -reducer /host/Shekhar/HadoopWorld/reducer.py -input /host/Shekhar/HadoopWorld/urls/* -output /host/Shekhar/HadoopWorld/titles3
Warning: $HADOOP_HOME is deprecated.

packageJobJar: [/tmp/hadoop-shekhar/hadoop-unjar2709939812732871143/] [] /tmp/streamjob1176812134999992997.jar tmpDir=null
12/05/27 11:27:46 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/05/27 11:27:46 INFO mapred.FileInputFormat: Total input paths to process : 3
12/05/27 11:27:46 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-shekhar/mapred/local]
12/05/27 11:27:46 INFO streaming.StreamJob: Running job: job_201205271050_0006
12/05/27 11:27:46 INFO streaming.StreamJob: To kill this job, run:
12/05/27 11:27:46 INFO streaming.StreamJob: /host/Shekhar/Softwares/hadoop-1.0.0/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201205271050_0006
12/05/27 11:27:46 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201205271050_0006
12/05/27 11:27:47 INFO streaming.StreamJob:  map 0%  reduce 0%
12/05/27 11:28:07 INFO streaming.StreamJob:  map 67%  reduce 0%
12/05/27 11:28:37 INFO streaming.StreamJob:  map 100%  reduce 0%
12/05/27 11:28:40 INFO streaming.StreamJob:  map 100%  reduce 11%
12/05/27 11:28:49 INFO streaming.StreamJob:  map 100%  reduce 22%
12/05/27 11:31:35 INFO streaming.StreamJob:  map 67%  reduce 22%
12/05/27 11:31:44 INFO streaming.StreamJob:  map 100%  reduce 22%
12/05/27 11:34:52 INFO streaming.StreamJob:  map 67%  reduce 22%
12/05/27 11:35:01 INFO streaming.StreamJob:  map 100%  reduce 22%
12/05/27 11:38:11 INFO streaming.StreamJob:  map 67%  reduce 22%
12/05/27 11:38:20 INFO streaming.StreamJob:  map 100%  reduce 22%
12/05/27 11:41:29 INFO streaming.StreamJob:  map 67%  reduce 22%
12/05/27 11:41:35 INFO streaming.StreamJob:  map 100%  reduce 100%
12/05/27 11:41:35 INFO streaming.StreamJob: To kill this job, run:
12/05/27 11:41:35 INFO streaming.StreamJob: /host/Shekhar/Softwares/hadoop-1.0.0/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201205271050_0006
12/05/27 11:41:35 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201205271050_0006
12/05/27 11:41:35 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205271050_0006_m_000001
12/05/27 11:41:35 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

Can anyone please help me?

EDIT: The job tracker details are as follows:

Hadoop job_201205271050_0006 on localhost

User: shekhar
Job Name: streamjob1176812134999992997.jar
Job File: file:/tmp/hadoop-shekhar/mapred/staging/shekhar/.staging/job_201205271050_0006/job.xml
Submit Host: ubuntu
Submit Host Address: 127.0.1.1
Job-ACLs: All users are allowed
Job Setup: Successful
Status: Failed
Failure Info: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205271050_0006_m_000001
Started at: Sun May 27 11:27:46 IST 2012
Failed at: Sun May 27 11:41:35 IST 2012
Failed in: 13mins, 48sec
Job Cleanup: Successful
Black-listed TaskTrackers: 1
Kind     % Complete   Num Tasks   Pending   Running   Complete   Killed   Failed/Killed Task Attempts
map      100.00%      3           0         0         2          1        4 / 0
reduce   100.00%      1           0         0         0          1        0 / 1

Upvotes: 1

Views: 14009

Answers (4)

Akash

Reputation: 11

Add the following line at the beginning of your Mapper and Reducer:

#!/usr/bin/python
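
A minimal sketch of the placement (the body here is just an illustrative identity pass-through, not the actual scripts):

#!/usr/bin/python
# The interpreter line must be the very first line of the file, with no
# blank line before it; the script also needs the executable bit set
# (chmod +x) so Hadoop streaming can run it directly.
import sys

for line in sys.stdin:
    sys.stdout.write(line)  # identity pass-through, just to show placement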

Upvotes: 0

Tuo Lei

Reputation: 101

Check your stderr first. The information you posted is not enough to decide what the error is; stderr is typically in: {your hadoop temp dir here}/mapred/local/userlogs/{your job id}/{your attempt id}/stderr

Sean's answer covers the most common case when you first use Hadoop, so I guess you might get an 'env: python\r: No such file or directory' error. If so, simply convert the CR line endings to LF to solve this problem, i.e. run a script that replaces \r\n with \n.
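
A minimal sketch of such a script; the path below is the mapper from the question (repeat for reducer.py):

# Convert Windows (CRLF) line endings to Unix (LF) in place, so the
# shebang line no longer ends with a stray carriage return.
path = "/host/Shekhar/HadoopWorld/MultiFetch.py"
with open(path, "rb") as f:
    data = f.read()
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))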

Upvotes: 0

Sean

Reputation: 1064

This error is just a generic error indicating that too many map tasks failed to complete:

# of failed Map Tasks exceeded allowed limit

You can use the EMR Console to navigate to the logs for the individual map/reduce tasks. Then you should be able to see what the issue is.

In my case, I got this error when I made small mistakes, like setting the path to the map script incorrectly.

Steps to view the logs of the tasks:

http://antipatterns.blogspot.nl/2013/03/amazon-emr-map-reduce-error-of-failed.html

Upvotes: 3

gagansekhon

Reputation: 21

I just had the same error show up. In my case it turned out to be a parsing error: there were "unexpected" newlines in places, and stdin split the records at them. I would suggest checking your data file. Once I removed the column that contained these newlines, it worked fine.
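
A quick way to spot such broken records is to check that every line has the expected number of fields. A minimal sketch, assuming a tab-separated data.txt with two fields per record (both the file name and the field count are illustrative):

# Report lines whose field count differs from what a record should have,
# which usually indicates a record split by a stray embedded newline.
EXPECTED_FIELDS = 2

with open("data.txt") as f:
    for lineno, line in enumerate(f, 1):
        fields = line.rstrip("\n").split("\t")
        if len(fields) != EXPECTED_FIELDS:
            print("line %d has %d fields: %r" % (lineno, len(fields), line))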

Upvotes: 2
