Rishikesh Teke

Reputation: 408

spark-shell on multinode Spark cluster fails to spawn executor on remote worker node

I installed a Spark cluster in standalone mode with 2 nodes: the Spark master runs on the first node and a Spark worker on the second. When I run spark-shell on the worker node with word count code it runs fine, but when I run spark-shell on the master node it gives the following output:

WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

No executor is started to run the job, even though there is a worker registered with the Spark master. Any help is appreciated, thanks.
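A typical launch against the standalone master looks roughly like this (the hostname master-node and the default port 7077 are placeholders, not taken from the question):

  # start the shell on the master node, pointing at the standalone master
  spark-shell --master spark://master-node:7077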

Upvotes: 2

Views: 362

Answers (1)

Alper t. Turker

Reputation: 35249

You are using client deploy mode, so the most likely cause is that the executor nodes cannot connect to the driver port on the local machine. It could be a firewall issue or a problem with the advertised IP / hostname. Please make sure that:

  • spark.driver.bindAddress
  • spark.driver.host
  • spark.driver.port

use the expected values. Please refer to the networking section of the Spark documentation.
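For example, assuming the master node's resolvable address is master-node and that port 7078 is reachable from the worker (both are assumptions), you could pin these properties explicitly when starting the shell:

  # bind to all interfaces, advertise a hostname the worker can resolve,
  # and fix the driver port so it can be opened in the firewall
  spark-shell \
    --master spark://master-node:7077 \
    --conf spark.driver.bindAddress=0.0.0.0 \
    --conf spark.driver.host=master-node \
    --conf spark.driver.port=7078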

Less likely, it is a lack of resources. Please check that you are not requesting more resources than the workers provide.
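To rule that out, you can request explicitly modest resources so the single worker can satisfy them, for example (the 1g / 2-core figures are arbitrary):

  # keep the request well below what the worker advertises in the master UI
  spark-shell \
    --master spark://master-node:7077 \
    --executor-memory 1g \
    --total-executor-cores 2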

Upvotes: 2
