Reputation: 169
I am running a Spark cluster in local mode using Python (PySpark).
The following Spark configuration options are set:
"spark.executor.cores": "8"
"spark.cores.max": "8"
After setting all options:
SparkSession.builder.config(conf=spark_configuration)
I build the Spark session:
SparkSession.builder.master("local[*]").appName(application_name).getOrCreate()
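For reference, here is a minimal sketch of the setup described above (the configuration values are taken from the question; the application name string is just a placeholder):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    # Configuration values from the question; in local mode these executor
    # settings do not determine how many cores are actually used.
    spark_configuration = SparkConf().setAll([
        ("spark.executor.cores", "8"),
        ("spark.cores.max", "8"),
    ])

    # local[*] asks Spark for as many worker threads as there are logical cores.
    spark = (SparkSession.builder
             .master("local[*]")
             .appName("application_name")
             .config(conf=spark_configuration)
             .getOrCreate())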
My machine has 16 cores, and I see that the application consumes all available resources.
My question is: how do the option "local[*]"
and the setting "spark.executor.cores": "8"
influence the Spark driver (how many cores will the local executor consume)?
Upvotes: 2
Views: 2121
Reputation: 5487
This is what I observed on a system with 12 cores:
When I set executor cores to 4, a total of 3 executors are created with 4 cores each in standalone mode.
But this is not the case in local mode. Even if I pass the flag --num-executors 4
or change spark.driver.cores / spark.executor.cores / spark.executor.instances,
nothing changes the number of executors. There is always exactly one executor, with id "driver", and its number of cores equals whatever is passed in the master URL.
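A quick way to see this from PySpark (a minimal sketch; local[4] is just an example value, and defaultParallelism is used here as a proxy for the thread count the local "executor" gets):

    from pyspark.sql import SparkSession

    # Even with executor settings asking for 8 cores, the master URL wins in local mode.
    spark = (SparkSession.builder
             .master("local[4]")                       # 4 worker threads
             .config("spark.executor.cores", "8")      # effectively ignored in local mode
             .config("spark.executor.instances", "4")  # also ignored: only the driver exists
             .appName("local-cores-check")
             .getOrCreate())

    sc = spark.sparkContext
    # In local mode, defaultParallelism reflects the thread count from the master URL.
    print(sc.master)              # local[4]
    print(sc.defaultParallelism)  # 4, not 8
    spark.stop()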
Upvotes: 3