Reputation: 1645
I have a Spark job where I do the following. After some iterations I get a Container killed by YARN for exceeding memory limits error:
Container killed by YARN for exceeding memory limits. 14.8 GB of 6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead
I am unable to understand why the error says 14.8 GB of 6 GB physical memory used?
I have tried increasing spark.yarn.executor.memoryOverhead, using the following command:
spark-submit --master yarn --deploy-mode cluster --num-executors 4 --executor-cores 2 --executor-memory 2G --conf spark.yarn.executor.memoryOverhead=4096 --py-files test.zip app_main.py
I am using Spark 2.3. The relevant YARN settings are:
yarn.scheduler.minimum-allocation-mb = 512 MB
yarn.nodemanager.resource.memory-mb = 126 GB
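If I understand YARN container sizing correctly, the limit in the error message should just be the executor memory plus the overhead I configured, i.e.

executor-memory + memoryOverhead = 2 GB + 4 GB = 6 GB

(I believe YARN also rounds the request up to a multiple of yarn.scheduler.minimum-allocation-mb, but 6144 MB is already a multiple of 512 MB, so that changes nothing here.) That matches the 6 GB in the message, but I still don't see how the process can grow to 14.8 GB.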
Upvotes: 0
Views: 4545
Reputation: 637
This is one of the common errors seen when the memoryOverhead option is used; it is usually better to use other options to tune the job.
The post at http://ashkrit.blogspot.com/2018/09/anatomy-of-apache-spark-job.html talks about this issue and how to deal with it.
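As one illustrative adjustment (the numbers below are just an example, not taken from the post), instead of pushing more memory into the overhead you can give the executor heap itself more room and reduce per-executor concurrency, keeping the same flags you already use:

spark-submit --master yarn --deploy-mode cluster --num-executors 4 --executor-cores 1 --executor-memory 8G --conf spark.yarn.executor.memoryOverhead=2048 --py-files test.zip app_main.py

With a 126 GB NodeManager this still fits comfortably (4 x (8 GB + 2 GB) = 40 GB), and each task gets far more headroom than with 2 GB executors.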
Upvotes: 1