Reputation: 332
I was looking through the Spark documentation for an answer, but couldn't find one.
When Spark runs in standalone mode with multiple slave (worker) nodes, does the master do part of the job as well, or does it only manage the other nodes?
I would like to know the answer for the standalone deploy mode, and also for the Mesos/YARN options.
Thank you for your time and help.
Upvotes: 2
Views: 707
Reputation: 74669
When Spark runs in standalone mode with multiple slave (worker) nodes, does the master do part of the job as well, or does it only manage the other nodes?
No.
In a clustering solution (e.g. Apache Mesos aka DC/OS, Hadoop YARN, Spark Standalone), the master is only responsible for managing the CPU (as vcores) and memory that the nodes in the cluster offer. The cluster manager itself does not offer any (CPU or memory) resources to an application (aka a framework), so it never executes any part of the job; that work is done entirely by executors on the worker nodes.
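To make this concrete for the standalone case, here is a minimal Scala sketch of an application submitted to a standalone master; the URL spark://master-host:7077 is a hypothetical placeholder for your own master, and the job itself is just an example:

    import org.apache.spark.sql.SparkSession

    object StandaloneDemo {
      def main(args: Array[String]): Unit = {
        // A minimal sketch, assuming a standalone master is already running
        // at spark://master-host:7077 (hypothetical host name).
        val spark = SparkSession.builder()
          .appName("where-do-tasks-run")
          .master("spark://master-host:7077")
          .getOrCreate()

        // The master only schedules this job; the tasks themselves execute
        // in executor JVMs launched on the worker (slave) nodes.
        val n = spark.sparkContext.parallelize(1 to 1000000).count()
        println(s"count = $n")

        spark.stop()
      }
    }

If you run something like this and open the master's web UI (port 8080 by default), you will see executors allocated only on the workers; the master process itself runs no tasks (unless a worker also happens to be started on the same machine).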
Upvotes: 3