Shay

Reputation: 505

Spark performance tuning - number of executors vs number of cores

I have two questions around performance tuning in Spark:

  1. I understand that one of the key levers for controlling parallelism in a Spark job is the number of partitions in the RDD being processed, together with the number of executors and cores processing those partitions. Can I assume the following to be true:

    • # of executors * # of executor cores should be <= # of partitions, i.e., one partition is always processed by one core of one executor, so there is no point in having more executors * cores than partitions.
  2. I understand that having a high number of cores per executor can have a negative impact on things like HDFS writes, but here's my second question: purely from a data-processing point of view, what is the difference between the two? For example, if I have a 10-node cluster, what would be the difference between these two jobs (assuming there's ample memory per node to process everything):

    1. 5 executors * 2 executor cores

    2. 2 executors * 5 executor cores

    Assuming there's infinite memory and CPU, from a performance point of view should we expect the above two to perform the same? (A minimal sketch of the two layouts follows below.)
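For concreteness, here is a minimal Scala sketch of what I mean, assuming a YARN or Kubernetes cluster where spark.executor.instances and spark.executor.cores are honoured; the app name and input path are placeholders:

    import org.apache.spark.sql.SparkSession

    // Layout 1: 5 executors with 2 cores each; layout 2 would be
    // instances = 2, cores = 5. Both give executors * cores = 10 task slots.
    val spark = SparkSession.builder()
      .appName("parallelism-sketch")            // placeholder name
      .config("spark.executor.instances", "5")
      .config("spark.executor.cores", "2")
      .getOrCreate()

    val rdd = spark.sparkContext.textFile("hdfs:///data/input")  // placeholder path

    // One partition is processed by one task on one core, so only
    // min(number of partitions, executors * cores) tasks run at the same time.
    println(s"partitions = ${rdd.getNumPartitions}")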

Upvotes: 8

Views: 1598

Answers (2)

Rohit Karlupia

Reputation: 166

Most of the time, larger executors (more memory, more cores) are better. First, a larger executor with more memory can easily support broadcast joins and do away with the shuffle. Second, since tasks are not created equal, statistically larger executors have a better chance of surviving OOM issues. The main problem with large executors is GC pauses; G1GC helps.
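A rough sketch of the broadcast-join and GC points (the paths, table names, and memory size below are made up for illustration):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    val spark = SparkSession.builder()
      .appName("broadcast-join-sketch")                           // placeholder name
      .config("spark.executor.memory", "16g")                     // larger executors
      .config("spark.executor.extraJavaOptions", "-XX:+UseG1GC")  // reduce GC pauses
      .getOrCreate()

    val facts = spark.read.parquet("hdfs:///warehouse/facts")  // placeholder path
    val dims  = spark.read.parquet("hdfs:///warehouse/dims")   // placeholder path

    // Explicit broadcast hint: the small table is shipped to every executor,
    // so the large table is joined in place with no shuffle. Spark also does
    // this automatically below spark.sql.autoBroadcastJoinThreshold.
    val joined = facts.join(broadcast(dims), Seq("dim_id"))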

Upvotes: 1

J Maurer

Reputation: 1044

In my experience, with a cluster of 10 nodes I would go for 20 Spark executors. The details of the job matter a lot, so some testing will help determine the optimal configuration.
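As a rough illustration of that layout (2 executors per node on a 10-node cluster; the core and memory values are placeholders and assume YARN or Kubernetes resource management):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("two-executors-per-node")         // placeholder name
      .config("spark.executor.instances", "20")  // ~2 executors per node on 10 nodes
      .config("spark.executor.cores", "2")       // placeholder; depends on node size
      .config("spark.executor.memory", "8g")     // placeholder; depends on node size
      .getOrCreate()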

Upvotes: 0
