arunK

Reputation: 418

Dataproc ignoring Spark configuration

I am running the below spark-submit command on a Dataproc cluster, but I noticed that a few of the Spark configurations are being ignored. May I know why they are being ignored?

gcloud dataproc jobs submit spark --cluster=<Cluster> --class=<class_name> --jars=<list_of_jars> --region=<region> --files=<list_of_files> --properties=spark.driver.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",spark.executor.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties, spark.executor.instances=36, spark.executor.cores=4, spark.executor.memory=4G, spark.driver.memory=8G, spark.shuffle.service.enabled=true, spark.yarn.maxAppAttempts=1, spark.sql.shuffle.partitions=200, spark.executor.memoryOverhead=7680, spark.driver.maxResultSize=0, spark.port.maxRetries=250, spark.dynamicAllocation.initialExecutors=20, spark.dynamicAllocation.minExecutors=10"


Warning: Ignoring non-Spark config property:  spark.driver.maxResultSize
Warning: Ignoring non-Spark config property:  spark.driver.memory
Warning: Ignoring non-Spark config property:  spark.dynamicAllocation.minExecutors
Warning: Ignoring non-Spark config property:  spark.executor.cores
Warning: Ignoring non-Spark config property:  spark.port.maxRetries
Warning: Ignoring non-Spark config property:  spark.yarn.maxAppAttempts
Warning: Ignoring non-Spark config property:  spark.dynamicAllocation.initialExecutors
Warning: Ignoring non-Spark config property:  spark.executor.memory
Warning: Ignoring non-Spark config property:  spark.executor.memoryOverhead
Warning: Ignoring non-Spark config property:  spark.sql.shuffle.partitions
Warning: Ignoring non-Spark config property:  spark.executor.instances

Upvotes: 1

Views: 1615

Answers (2)

Amine Sagaama

Reputation: 146

Can you try this one?

gcloud dataproc jobs submit spark \
  --cluster=<Cluster> \
  --class=<class_name> \
  --jars=<list_of_jars> \
  --region=<region> \
  --files=<list_of_files> \
  --properties=^#^spark.driver.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties"#spark.executor.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties"#spark.executor.instances=36#spark.executor.cores=4#spark.executor.memory=4G#spark.driver.memory=8G#spark.shuffle.service.enabled=true#spark.yarn.maxAppAttempts=1#spark.sql.shuffle.partitions=200#spark.executor.memoryOverhead=7680#spark.driver.maxResultSize=0#spark.port.maxRetries=250#spark.dynamicAllocation.initialExecutors=20#spark.dynamicAllocation.minExecutors=10
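For context: the leading ^#^ is gcloud's delimiter-escaping syntax (see gcloud topic escaping). It switches the separator used to split --properties from a comma to #, so values that themselves contain spaces or commas, such as the extraJavaOptions strings, are passed through as single values. A minimal sketch of the same idea, with hypothetical cluster, class, and jar names:

# ^#^ tells gcloud to split --properties on '#' instead of ','; the
# space-separated Java options then survive as one property value.
# Cluster, region, class, and jar below are placeholders.
gcloud dataproc jobs submit spark \
  --cluster=my-cluster \
  --region=us-central1 \
  --class=com.example.Main \
  --jars=gs://my-bucket/app.jar \
  --properties='^#^spark.executor.memory=4G#spark.driver.extraJavaOptions=-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties'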

Upvotes: 0

mck

Reputation: 42332

Try the command below instead. Those settings are not extraJavaOptions; they belong to --properties. In your command the closing quote for spark.executor.extraJavaOptions is missing and the commas are followed by spaces, so everything after it is treated as part of the Java options string rather than as separate Spark properties, which is why spark-submit reports them as non-Spark config properties.

gcloud dataproc jobs submit spark --cluster=<Cluster> --class=<class_name> --jars=<list_of_jars> --region=<region> --files=<list_of_files> --properties=spark.driver.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",spark.executor.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",spark.executor.instances=36,spark.executor.cores=4,spark.executor.memory=4G,spark.driver.memory=8G,spark.shuffle.service.enabled=true,spark.yarn.maxAppAttempts=1,spark.sql.shuffle.partitions=200,spark.executor.memoryOverhead=7680,spark.driver.maxResultSize=0,spark.port.maxRetries=250,spark.dynamicAllocation.initialExecutors=20,spark.dynamicAllocation.minExecutors=10

in a more readable form:

gcloud dataproc jobs submit spark --cluster=<Cluster> --class=<class_name> --jars=<list_of_jars> --region=<region> --files=<list_of_files> 
--properties=spark.driver.extraJavaOptions="
    -Dconfig.file=application_dev.json
    -Dlog4j.configuration=log4j.properties
",spark.executor.extraJavaOptions="
    -Dconfig.file=application_dev.json
    -Dlog4j.configuration=log4j.properties
",
spark.executor.instances=36,
spark.executor.cores=4,
spark.executor.memory=4G,
spark.driver.memory=8G,
spark.shuffle.service.enabled=true,
spark.yarn.maxAppAttempts=1,
spark.sql.shuffle.partitions=200,
spark.executor.memoryOverhead=7680,
spark.driver.maxResultSize=0,
spark.port.maxRetries=250,
spark.dynamicAllocation.initialExecutors=20,
spark.dynamicAllocation.minExecutors=10
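If it is still unclear which settings actually reached the job, you can inspect what Dataproc recorded for the submission. A sketch, assuming the describe output in your gcloud version includes the sparkJob properties map (the job ID is a placeholder):

# List recent jobs to find the job ID.
gcloud dataproc jobs list --region=<region> --cluster=<Cluster>

# Describe the job; the properties map under sparkJob should show each
# key separately, so anything swallowed into extraJavaOptions will be
# visibly absent here.
gcloud dataproc jobs describe <job_id> --region=<region>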

Upvotes: 2
