Anna Ivanova

Reputation: 61

Spark-streaming application hangs when I use yarn-mode

I have a problem with Spark Streaming on YARN.

When I start my script in local mode, it works fine: I can receive and print events from Flume.

from pyspark.streaming.flume import FlumeUtils
from pyspark.streaming import StreamingContext
from pyspark.storagelevel import StorageLevel
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("Test").setMaster('local[*]')
sc = SparkContext(conf=conf)

ssc = StreamingContext(sc, 1)
hostname = 'myhost.com'
port = 6668
addresses = [(hostname, port)]

# StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication)
flumeStream = FlumeUtils.createPollingStream(
    ssc, addresses,
    storageLevel=StorageLevel(True, True, False, False, 2),
    maxBatchSize=1000, parallelism=5)

flumeStream.pprint()

ssc.start() # Start the computation
ssc.awaitTermination() # Wait for the computation to terminate

Output:

-------------------------------------------
Time: 2017-03-03 00:49:34
-------------------------------------------

-------------------------------------------
Time: 2017-03-03 00:49:35
-------------------------------------------

17/03/03 00:49:35 WARN storage.BlockManager: Block input-0-1488476966735 replicated to only 0 peer(s) instead of 1 peers
-------------------------------------------
Time: 2017-03-03 00:49:36
-------------------------------------------
({u'timestamp': u'1488476971253', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.262000]; 2; 3341678; 3279.39; 97')
({u'timestamp': u'1488476971265', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.274000]; 4; 2690399; 69.24; 27')
({u'timestamp': u'1488476971276', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.285000]; 6; 7266957; 514.57; 25')
({u'timestamp': u'1488476971286', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.296000]; 8; 9220339; 3189.55; 5')
({u'timestamp': u'1488476971298', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.307000]; 10; 2897030; 1029.84; 56')
({u'timestamp': u'1488476971308', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.317000]; 12; 4710976; 1125.88; 35')
({u'timestamp': u'1488476971340', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.349000]; 14; 4894562; 707.43; 50')
({u'timestamp': u'1488476971371', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.380000]; 16; 7370409; 1056.91; 1')
({u'timestamp': u'1488476971402', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.411000]; 18; 6669529; 2868.7; 56')
({u'timestamp': u'1488476971433', u'Severity': u'4', u'Facility': u'3'}, u'<28>[2017-03-03 00:49:31.442000]; 20; 7823207; 791.02; 15')
...

-------------------------------------------
Time: 2017-03-03 00:49:37
-------------------------------------------

But if I try to start it in yarn mode, for example:

conf = SparkConf().setAppName("Test")

or

conf = SparkConf().setAppName("Test").set("spark.executor.memory", "1g").set("spark.driver.memory", "2g")

then my application seems to hang, probably waiting for something. The on-screen output freezes like:

-------------------------------------------
Time: 2017-03-03 00:59:34
-------------------------------------------

Spark jobs without Spark Streaming work fine in both yarn and local modes.

My infrastructure is:

I start my spark-streaming application from node 2. From the logs I can see that the connections between node 3 and the resource manager work normally.

Any ideas or help would be appreciated. Thanks!

Upvotes: 1

Views: 300

Answers (1)

Anna Ivanova

Reputation: 61

The problem was solved by adding one more node to the cluster.
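For context: a common cause of this symptom (not confirmed in the original post, but consistent with the fix) is receiver core starvation. The Flume polling receiver permanently occupies executor cores, and if YARN grants the application too few cores, none remain for processing batches, so the output freezes while batches queue forever. Adding a node frees up cores, which matches the observed fix. A minimal sketch of requesting enough resources at submit time instead; the script name stream_test.py and the exact executor counts are assumptions to adapt to your cluster:

```shell
# Hypothetical submit command: request enough executors/cores so that,
# after the polling receiver takes its core(s), cores remain for
# actually processing the received batches.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 3 \
  --executor-cores 2 \
  --executor-memory 1g \
  stream_test.py
```

The same reasoning explains why local mode worked: local[*] gives the driver all local cores, so the receiver and the processing tasks never compete.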

Upvotes: 1
