Fortunato

Reputation: 567

Why increase spark.yarn.executor.memoryOverhead?

I am trying to join two large spark dataframes and keep running into this error:

Container killed by YARN for exceeding memory limits. 24 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
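The job itself is just a straightforward join, roughly along these lines (paths, names, and the join key are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("large-join").getOrCreate()

// Two large inputs; paths and the join key are placeholders.
val left  = spark.read.parquet("hdfs:///data/left")
val right = spark.read.parquet("hdfs:///data/right")

// The join that eventually gets the executors killed by YARN.
val joined = left.join(right, Seq("id"))
joined.write.parquet("hdfs:///data/joined")
```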

This seems like a common issue among Spark users, but I can't find any solid description of what spark.yarn.executor.memoryOverhead actually is. In some cases it sounds like a kind of memory buffer before YARN kills the container (e.g. 10 GB was requested, but YARN won't kill the container until it uses 10.2 GB). In other cases it sounds like it's used for some kind of data-accounting tasks that are completely separate from the analysis I want to perform. My questions are: what is spark.yarn.executor.memoryOverhead actually used for, and why does increasing it (rather than, say, spark.executor.memory) make this error go away?

Upvotes: 17

Views: 10341

Answers (1)

Alper t. Turker

Reputation: 35229

Overhead options are nicely explained in the configuration document:

This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).

This also includes user objects if you use one of the non-JVM guest languages (Python, R, etc.).
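For example, a minimal way to raise it (this is the Spark 2.x property name, given in MiB; the sizes below are placeholders to tune for your cluster):

```scala
import org.apache.spark.sql.SparkSession

// Placeholder sizes: YARN then requests each executor container at
// roughly spark.executor.memory + the overhead (20g + 4g here).
val spark = SparkSession.builder()
  .appName("large-join")
  .config("spark.executor.memory", "20g")
  .config("spark.yarn.executor.memoryOverhead", "4096") // MiB, off-heap
  .getOrCreate()

// Equivalent at submit time:
//   spark-submit --conf spark.yarn.executor.memoryOverhead=4096 ...
```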

Upvotes: 3
