Reputation: 648
Please help me find the error in either my understanding or my configuration.
I am running Spark on YARN, and have set the minimum container memory allocation to 8GB in yarn-site.xml:
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>8192</value>
</property>
I can see this setting reflected in the Resource Manager UI.
However, when I inspect the container's java process with ps on the worker node, the max heap size is set to 1024MB, i.e. -Xmx1024m:
root 542 535 1 16:18 ? 00:05:58 /usr/lib/jvm/jre-1.8.0-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1583021363029_0011/container_1583021363029_0011_03_000003/tmp ...
The container's java process id is 542:
Logs for container_1583021363029_0011_03_000003
20/03/02 16:18:57 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 542@<hostname>
20/03/02 16:18:57 INFO util.SignalUtils: Registered signal handler for TERM
20/03/02 16:18:57 INFO util.SignalUtils: Registered signal handler for HUP
20/03/02 16:18:57 INFO util.SignalUtils: Registered signal handler for INT
Upvotes: 1
Views: 578
Reputation: 191743
Java heap options and YARN container sizes are distinct properties. yarn.scheduler.minimum-allocation-mb only sets the smallest container YARN will hand out; it does not configure the JVM heap inside that container.

The driver's maximum heap size can be set with spark.driver.memory in cluster mode and through the --driver-memory command line option in client mode. The process in your ps output is an executor (CoarseGrainedExecutorBackend), whose heap is set by spark.executor.memory (or --executor-memory), which defaults to 1g; that default is exactly the -Xmx1024m you are seeing.
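
For example, to get 8GB heaps you would request the memory through Spark rather than through YARN's scheduler minimums. A minimal sketch of a submission (the class and jar names here are placeholders):

# Request 8g heaps via Spark's own settings; YARN then rounds the
# resulting container request (heap plus memory overhead) up against
# its scheduler minimums.
# com.example.MyApp and myapp.jar are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 8g \
  --executor-memory 8g \
  --class com.example.MyApp \
  myapp.jar

Equivalently, you can set spark.driver.memory and spark.executor.memory to 8g in spark-defaults.conf, and then verify with ps that the executor JVM now shows -Xmx8192m.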
Upvotes: 1