Bigo

Reputation: 113

Why is there no IllegalArgumentException even though the spark.executor.memory configuration is larger than the node's RAM size?

I'm using Spark 2.0.1 and testing in local mode.

I have a driver application like the following:

import org.apache.spark.sql.SparkSession

object AnnoSp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .config("spark.executor.memory", "2000g")
      .config("spark.driver.memory", "4000g")
      .master("local[*]")
      .appName("Anno BDG")
      .getOrCreate()
  }
}

The testing node's RAM is only 4 GB, yet I set spark.driver.memory to 4000g and spark.executor.memory to 2000g. I expected an IllegalArgumentException when this application was submitted to Spark, but it ran successfully. Why?

Upvotes: 0

Views: 76

Answers (1)

Artur Sukhenko

Reputation: 652

spark.executor.memory and spark.driver.memory represent the -Xmx value of a Java application, and it would be pointless to validate them against a node's RAM: each node in a cluster can have a different amount of memory, so the application would have to SSH to every node and check its RAM/swap. (Also, on YARN you don't know in advance on which nodes your executors/driver will be started.)

As to why it is possible to set it higher than the memory your node has, see: Why am I able to set -Xmx to a value greater than physical and virtual memory on the machine on both Windows and Solaris?
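The "no validation" point can be illustrated with a small JVM-only sketch (MemoryCheck and parseGigabytes are hypothetical names for illustration, not Spark API): a size string like "2000g" parses without complaint, and nothing ever compares the parsed value against the memory the running JVM actually has.

```scala
// Hypothetical sketch: parse a Spark-style size string like "2000g"
// into bytes, the way a config layer might, with no check against RAM.
object MemoryCheck {
  def parseGigabytes(s: String): Long = {
    // The only IllegalArgumentException here is for a malformed string,
    // never for requesting more memory than the machine has.
    require(s.endsWith("g"), "expected a value like '2000g'")
    s.dropRight(1).toLong * 1024L * 1024L * 1024L
  }

  def main(args: Array[String]): Unit = {
    val requested = parseGigabytes("2000g")          // ~2.1 TB requested
    val heapCap   = Runtime.getRuntime.maxMemory     // this JVM's -Xmx
    // The parsed value is never validated against actual memory,
    // so a request far beyond physical RAM is accepted silently.
    println(requested > heapCap)
  }
}
```

On any realistic machine the requested 2000 GB exceeds the JVM's heap cap, yet no exception is thrown, which mirrors why Spark accepts the oversized setting.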

Upvotes: 1

Related Questions