Jes

Reputation: 2684

Spark - Add Worker from Local Machine (standalone spark cluster manager)?

When running Spark 1.4.0 on a single machine, I can add a worker with the command "./bin/spark-class org.apache.spark.deploy.worker.Worker myhostname:7077". The official documentation describes another way: add "myhostname:7077" to the "conf/slaves" file and then run "sbin/start-all.sh", which starts the master and all workers listed in the conf/slaves file. However, the latter method doesn't work for me (it fails with a time-out error). Can anyone help me with this?

Here is my conf/slaves file (assuming the master URL is myhostname:7077):

myhostname:7077

Upvotes: 0

Views: 1290

Answers (1)

Holden

Reputation: 7452

The conf/slaves file should just be a list of hostnames; you don't need to include the port that Spark runs on. (I think if you do include it, the startup script will try to ssh on that port, which is probably where the timeout comes from.)
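For example, a minimal sketch of that setup, assuming the only worker host is myhostname:

# conf/slaves -- one hostname per line, no port
myhostname

# start the master and every worker listed in conf/slaves
./sbin/start-all.sh

The port 7077 still matters, but only as part of the master URL (spark://myhostname:7077) that workers and applications connect to, not as an entry in conf/slaves.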

Upvotes: 0
