Rajashekhar Meesala

Reputation: 329

Spark-submit on Kubernetes: executor pods keep running after the Spark job finishes, so resources are not freed for new jobs

We are submitting a Spark job to a Kubernetes cluster in cluster mode, with some additional memory configuration. The job finishes in about 5 minutes, but the executor pods keep running for another 30-40 minutes. Because of this, new jobs stay pending, since the resources are still bound to the running pods.

Below is the spark-submit command:

/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
  --deploy-mode cluster \
  --class com.Spark.MyMainClass \
  --driver-memory 3g \
  --driver-cores 1 \
  --executor-memory 12g \
  --executor-cores 3 \
  --master k8s://https://masterhost:6443 \
  --conf spark.kubernetes.namespace=default \
  --conf spark.app.name=myapp1 \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.driver.pod.name=myappdriver1 \
  --conf spark.kubernetes.container.image=imagePath \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.driver.container.image=imagePath \
  --conf spark.kubernetes.executor.container.image=imagePath \
  local:///opt/spark/jars/MyApp.jar

Upvotes: 2

Views: 1187

Answers (1)

Loic

Reputation: 3370

You need to call

sparkSession.stop()

at the end of your code. Without it, the SparkContext is never shut down, so Spark never asks Kubernetes to delete the executor pods, and they keep holding the cluster's resources.
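
For illustration, a minimal sketch of how the main class could be structured (the class name matches com.Spark.MyMainClass from the question, but the builder options and job body here are placeholders). Wrapping the job in try/finally also guarantees the session is stopped even if the job throws:

import org.apache.spark.sql.SparkSession

object MyMainClass {
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder()
      .appName("myapp1")
      .getOrCreate()

    try {
      // ... your job logic: read, transform, write ...
    } finally {
      // Stopping the session shuts down the SparkContext, which tells the
      // Kubernetes scheduler backend to delete the executor pods.
      sparkSession.stop()
    }
  }
}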

Upvotes: 3
