astro_asz

Reputation: 2318

Is it true that with Mesos I can start only one executor per node in spark-submit?

I would like to know whether it is true that on Mesos we can have only one executor per node.

Context: I am running a spark-submit (Spark 2.0.1) job in coarse-grained mode on a cluster of 5 nodes (workers), each with 80 CPUs and 512 GB of memory.

The official Spark documentation, Running Spark on Mesos, says in the Mesos Run Modes section that in coarse-grained mode (the default) I can set two parameters, spark.executor.memory and spark.executor.cores, and that spark.cores.max / spark.executor.cores will give me the number of executors.
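For reference, a minimal sketch of the kind of spark-submit invocation that paragraph describes; the master URL, class, and jar are placeholders, and the numbers are only illustrative:

    spark-submit \
      --master mesos://mesos-master:5050 \
      --conf spark.executor.memory=16g \
      --conf spark.executor.cores=16 \
      --conf spark.cores.max=64 \
      --class com.example.MyApp my-app.jar
    # If the documentation is right, this should give
    # spark.cores.max / spark.executor.cores = 64 / 16 = 4 executors.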

Question: Is this correct or not?

I have been playing with the spark-submit setup for a week now, and the maximum number of executors I was able to get on my cluster is 5 (1 for the driver and 4 for actual work). This is based on the Executors tab in the Spark UI.

I have seen this Stack Overflow question, Understanding resource allocation for spark jobs on mesos, where it says:

In coarse-grained mode, Spark launch only one executor per host

In Mastering Apache Spark, the Schedulers in Mesos section says:

In coarse-grained mode, there is a single Spark executor per Mesos executor with many Spark tasks.

I don't understand what this means. Is there always only one Mesos executor per node, and does that imply one Spark executor per node?

If all of this is not true and I can have more executors:

Question: Is there some Mesos setting that limits the number of executors?

Upvotes: 2

Views: 632

Answers (1)

user9295560

Reputation: 26

It is not true (anymore). SPARK-5095, "Support launching multiple mesos executors in coarse grained mesos mode", was resolved in Spark 2.0, and according to the merged PR:

This PR implements two high-level features. These two features are co-dependent, so they're implemented both here:

  • Mesos support for spark.executor.cores
  • Multiple executors per slave
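If that is correct, here is a minimal sketch of how it could play out on the cluster described in the question (5 nodes, 80 CPUs and 512 GB each); the values are illustrative, memory overhead is ignored, and the master URL and application are placeholders:

    # Spark 2.x, Mesos coarse-grained mode
    spark-submit \
      --master mesos://mesos-master:5050 \
      --conf spark.executor.cores=10 \
      --conf spark.executor.memory=32g \
      --conf spark.cores.max=200 \
      --class com.example.MyApp my-app.jar
    # Requested executors: spark.cores.max / spark.executor.cores = 200 / 10 = 20.
    # Per node, roughly min(80 / 10, 512 / 32) = 8 executors can fit, so more than
    # one executor can land on the same Mesos agent.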

Upvotes: 1
