Spark_user

Reputation: 41

How to auto scale a Spark job in a Kubernetes cluster

Need some advice on running Spark on Kubernetes. I have Spark 2.3.0, which comes with native Kubernetes support. I am trying to run the Spark job using spark-submit, with the master set to "kubernetes-apiserver:port" and the other required parameters such as the Spark image, as mentioned here. How can I enable auto scaling / increase the number of worker nodes based on load? Is there a sample document I can follow? Some basic example/document would be very helpful. Or is there any other way to deploy Spark on Kubernetes that can help me achieve auto scaling based on load?
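For reference, my submit command looks roughly like the one below (the apiserver host/port, image name, and jar are placeholders along the lines of the 2.3.0 docs, not my exact values):

    # spark-submit against the Kubernetes API server; placeholder values
    bin/spark-submit \
      --master k8s://https://<kubernetes-apiserver>:<port> \
      --deploy-mode cluster \
      --name spark-pi \
      --class org.apache.spark.examples.SparkPi \
      --conf spark.executor.instances=3 \
      --conf spark.kubernetes.container.image=<spark-image> \
      local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

The problem is that spark.executor.instances fixes the executor count at submit time, and I would like it to grow and shrink with load instead.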

Upvotes: 0

Views: 816

Answers (1)

runzhliu

Reputation: 64

Basically, Apache Spark 2.3.0 does not officially support auto scaling (dynamic executor allocation) on a Kubernetes cluster; it is listed as future work after 2.3.0.

BTW, it is still a work-in-progress feature, but you can try it on the Kubernetes fork of Spark 2.2.
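If you do try that fork, the auto scaling there is Spark's dynamic allocation backed by an external shuffle service that you deploy into the cluster first. A rough sketch of the submit flags (the spark.kubernetes.shuffle.* keys come from the fork's docs and are not in mainline 2.3.0, so check them against the version you build; class, jar, labels, and namespace are placeholders):

    # Requires the fork's external shuffle service to be running in the cluster already
    bin/spark-submit \
      --master k8s://https://<kubernetes-apiserver>:<port> \
      --deploy-mode cluster \
      --class <your.main.Class> \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=1 \
      --conf spark.dynamicAllocation.maxExecutors=10 \
      --conf spark.kubernetes.shuffle.namespace=default \
      --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service" \
      local:///path/to/your-app.jar

Executors are then requested and released between the min/max bounds based on pending tasks, which is as close to auto scaling as you get with Spark on Kubernetes at this point.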

Upvotes: 1
