liyong

Reputation: 407

The value of "spark.yarn.executor.memoryOverhead" setting?

Should the value of spark.yarn.executor.memoryOverhead in a Spark job on YARN be allocated to the app, or is it just the max value?

Upvotes: 37

Views: 97224

Answers (1)

Indrajit Swain

Reputation: 1483

spark.yarn.executor.memoryOverhead

is just the max value. The goal is to calculate the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.

--executor-memory/spark.executor.memory

controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384), i.e. 10% of the executor memory, with a minimum of 384 MB.
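A minimal sketch of that default calculation (all values in MB; the object and method names are hypothetical, but the 0.10 factor and the 384 MB floor are the defaults described above):

    object MemoryOverheadSketch {
      // Defaults for spark.yarn.executor.memoryOverhead:
      // 10% of executor memory, with a 384 MB floor.
      val overheadFactor = 0.10
      val overheadMinMb  = 384L

      def defaultOverheadMb(executorMemoryMb: Long): Long =
        math.max((executorMemoryMb * overheadFactor).toLong, overheadMinMb)

      def totalContainerRequestMb(executorMemoryMb: Long): Long =
        executorMemoryMb + defaultOverheadMb(executorMemoryMb)

      def main(args: Array[String]): Unit = {
        // Example: --executor-memory 4g (4096 MB)
        println(defaultOverheadMb(4096L))       // 409  -> overhead in MB
        println(totalContainerRequestMb(4096L)) // 4505 -> full YARN request in MB
      }
    }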

The executors will use a memory allocation based on spark.executor.memory plus an overhead defined by spark.yarn.executor.memoryOverhead.
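If the default overhead is too small, it can be set explicitly. A sketch, assuming a Spark version where the property is still named spark.yarn.executor.memoryOverhead (newer versions renamed it spark.executor.memoryOverhead); the values shown are examples only:

    import org.apache.spark.SparkConf

    object OverheadConfExample {
      // Example values only: a 4 GB heap plus an explicit 1024 MB overhead,
      // so the full request to YARN is 4096 + 1024 = 5120 MB per executor.
      val conf: SparkConf = new SparkConf()
        .set("spark.executor.memory", "4g")
        .set("spark.yarn.executor.memoryOverhead", "1024") // value in MB
    }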

Upvotes: 47
