Reputation: 1506
I am working on Spark 1.3 and my application is a Spark Streaming application. I use YARN as the resource manager. The application runs fine for a few days and then the Spark job starts losing executors periodically. When I look at the NodeManager logs, I find this exception:
containerId XXXX is running beyond physical memory limits. Current usage: 11.1 GB of 11 GB physical memory used; 13.4 GB of 23.1 GB virtual memory used. Killing container.
My questions about this exception are as follows:
I understand that 11 GB would be the memory of the running executor, but I set 10 GB as the executor memory in spark-defaults.conf. How does the executor end up with 11 GB, and what is the virtual memory mentioned here?
Are there any tools or ways to see an on-heap and off-heap memory dump when the container runs out of memory, or is there a way to connect remotely to the container's JVM and see which objects are causing the memory leak?
thanks
Upvotes: 1
Views: 3155
Reputation: 113
The extra 1 GB comes from the spark.yarn.executor.memoryOverhead property. From the Spark documentation:

executorMemory * 0.10, with minimum of 384. The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).

In your case that is approximately 1 GB on top of the 10 GB you configured (10 GB * 0.10 = 1 GB), which is why YARN enforces an 11 GB limit on the container.
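If the default overhead turns out to be too small for your workload, it can be raised explicitly. A minimal sketch for spark-defaults.conf; the 2048 MB value below is only an illustrative assumption, not a recommendation:

spark.executor.memory                10g
# Off-heap allowance YARN adds on top of the executor heap, in megabytes.
# The default per the quoted formula would be max(10240 * 0.10, 384) = 1024 MB; 2048 is just an example.
spark.yarn.executor.memoryOverhead   2048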
To see which objects are consuming memory, add the JVM options -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=... to the executors via spark.executor.extraJavaOptions, so a heap dump is written when the container's JVM runs out of memory.
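For example, in spark-defaults.conf (the dump path below is a placeholder; pick a directory that exists on the YARN nodes):

# Write an .hprof heap dump when the executor JVM hits an OutOfMemoryError.
spark.executor.extraJavaOptions   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/executor-dumps

The dump file is written on the node that hosted the executor, so it has to be collected from that machine and can then be opened with a heap analyzer such as Eclipse MAT or jhat.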
Upvotes: 1