Reputation:
We are trying to run our Spark cluster on YARN. We are having some performance issues, especially compared to standalone mode.
We have a cluster of 5 nodes, each with 16GB RAM and 8 cores. We have configured the minimum container size as 3GB and the maximum as 14GB in yarn-site.xml. When submitting the job in yarn-cluster mode we pass number of executors = 10 and executor memory = 14GB. According to my understanding, our job should be allocated 4 containers of 14GB each, but the Spark UI shows only 3 containers of 7.2GB each.
We are unable to control the number of containers or the resources allocated to them, and this hurts performance badly compared to standalone mode.
Can you give any pointers on how to optimize YARN performance?
This is the command I use for submitting the job:
$SPARK_HOME/bin/spark-submit --class "MyApp" --master yarn-cluster --num-executors 10 --executor-memory 14g target/scala-2.10/my-application_2.10-1.0.jar
Following the discussion I changed my yarn-site.xml file and also the spark-submit command.
Here is the new yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hm41</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>14336</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2560</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>13312</value>
</property>
And the new spark-submit command is:
$SPARK_HOME/bin/spark-submit --class "MyApp" --master yarn-cluster --num-executors 4 --executor-memory 10g --executor-cores 6 target/scala-2.10/my-application_2.10-1.0.jar
With this I am able to get 6 cores on each machine, but the memory usage of each node is still around 5GB. I have attached screenshots of the Spark UI and htop.
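For reference, here is the rough per-node budget I was assuming with these settings (a back-of-the-envelope sketch that ignores the YARN application master and any per-executor overhead):

// Rough budget behind the new settings (ignores overheads and the AM container).
object ExpectedBudget {
  def main(args: Array[String]): Unit = {
    val nodes = 5
    val nodeMemoryMb = 14336          // yarn.nodemanager.resource.memory-mb
    val executorMemoryMb = 10 * 1024  // --executor-memory 10g
    val executorsPerNode = nodeMemoryMb / executorMemoryMb
    println(executorsPerNode)         // 1
    println(nodes * executorsPerNode) // 5, covering the 4 executors requested
  }
}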
Upvotes: 9
Views: 6153
Reputation: 21
Just because YARN "thinks" it has 70GB (14GB x 5) doesn't mean there is 70GB available on the cluster at run time. You could be running other Hadoop components (Hive, HBase, Flume, Solr, your own apps, etc.) which consume memory. So the run-time decision YARN makes is based on what's currently available -- and it had only 42GB (3 x 14GB) available to you. By the way, the GB numbers are approximate because they are really computed as 1024MB per GB... so you will see decimals.
Use nmon or top to see what else is using memory on each node.
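For example (a tiny illustration of the unit math, nothing more):

// Memory is tracked in MB, so a "14 GB" node is 14 * 1024 = 14336 MB,
// and values that are not exact multiples of 1024 show up as decimals.
object MemoryUnits {
  def toGb(mb: Long): Double = mb / 1024.0
  def main(args: Array[String]): Unit = {
    println(toGb(14336))  // 14.0
    println(toGb(7373))   // about 7.2 -- the kind of decimal you see in the UI
  }
}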
Upvotes: 0
Reputation: 201
The memory (7.2GB) you see in the Spark UI is the storage memory, i.e. the fraction of the executor heap given by spark.storage.memoryFraction, which by default is 0.6. As for your missing executors, you should look in the YARN resource manager logs.
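As a rough sketch of where a figure in that ballpark comes from in Spark 1.x (the safety fraction and the reported heap size are my assumptions, since the JVM reports somewhat less than the -Xmx you request):

// Rough sketch of how Spark 1.x derives the storage memory shown in the UI.
// The fractions are the 1.x defaults; the heap figure is roughly what
// Runtime.getRuntime.maxMemory reports for --executor-memory 14g.
object StorageMemoryEstimate {
  def storageGb(reportedHeapGb: Double,
                memoryFraction: Double = 0.6,          // spark.storage.memoryFraction
                safetyFraction: Double = 0.9): Double = // spark.storage.safetyFraction
    reportedHeapGb * memoryFraction * safetyFraction

  def main(args: Array[String]): Unit = {
    println(storageGb(13.3))  // about 7.2, matching the figure in the UI
  }
}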
Upvotes: 3
Reputation: 5018
yarn.nodemanager.resource.memory-mb is set the right way. In my understanding of your cluster it should be set to 14GB. This setting is responsible for letting YARN know how much memory it can use on this specific node.
--num-executors is the number of YARN containers that would be started for executing on the cluster. You specify 10 containers with 14GB RAM each, but you don't have that many resources on your cluster! Second, you specify --master yarn-cluster, which means that the Spark driver runs inside the YARN Application Master, which requires a separate container.
In general, I would recommend trying to run the job in yarn-client mode (replace yarn-cluster with yarn-client) and checking how it is started: look at the WebUI and the JVMs that are started.
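To make the sizing concrete, here is a rough sketch of the container arithmetic under your new settings. The overhead value and the round-up-to-minimum-allocation behaviour are my assumptions about the Spark 1.x / YARN defaults, so verify them against your versions:

// Rough sketch: YARN rounds each container request up to a multiple of
// yarn.scheduler.minimum-allocation-mb, and Spark asks for the executor memory
// plus spark.yarn.executor.memoryOverhead (assumed 384 MB default here).
object ContainerSizing {
  val minAllocationMb = 2560       // yarn.scheduler.minimum-allocation-mb
  val nodeMemoryMb    = 14336      // yarn.nodemanager.resource.memory-mb
  val executorMb      = 10 * 1024  // --executor-memory 10g
  val overheadMb      = 384        // assumed spark.yarn.executor.memoryOverhead

  def containerMb(requestMb: Int): Int =
    math.ceil(requestMb.toDouble / minAllocationMb).toInt * minAllocationMb

  def main(args: Array[String]): Unit = {
    val perExecutor = containerMb(executorMb + overheadMb)
    println(perExecutor)                 // 12800 MB per executor container
    println(nodeMemoryMb / perExecutor)  // 1 executor fits per 14336 MB node
  }
}

On that arithmetic each 14336 MB node holds a single 12800 MB executor container, and the Application Master takes one more container on top of that, which is why you end up with fewer executors than you request.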
Upvotes: 1