jakstack

Reputation: 2205

Starting a single Spark Slave (or Worker)

When I do this

spark-1.3.0-bin-hadoop2.4% sbin/start-slave.sh

I get this message

failed to launch org.apache.spark.deploy.worker.Worker:
                         Default is conf/spark-defaults.conf.

Even though I have this:

spark-1.3.0-bin-hadoop2.4% ll conf | grep spark-defaults.conf
-rw-rwxr--+ 1 xxxx.xxxxx ama-unix  507 Apr 29 07:09 spark-defaults.conf
-rw-rwxr--+ 1 xxxx.xxxxx ama-unix  507 Apr 13 12:06 spark-defaults.conf.template

Any idea why?

Thanks

Upvotes: 7

Views: 19963

Answers (2)

rauljosepalma

Reputation: 81

I'm using Spark 1.6.1, and you no longer need to indicate a worker number, so the actual usage is:

start-slave.sh spark://<master>:<port>
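For example, assuming a master is already running on the same machine with the default port 7077 (adjust the host and port for your cluster), a worker could be started like this:

# hypothetical example: master assumed to be at localhost:7077
sbin/start-slave.sh spark://localhost:7077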

Upvotes: 8

yjshen

Reputation: 6693

First of all, you should make sure you are using the command correctly:

Usage: start-slave.sh <worker#> <spark-master-URL>

where <worker#> is the worker number you want to launch on the machine on which you are running this script, and <spark-master-URL> has the form spark://localhost:7077.
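So on Spark 1.3, assuming a master is already running locally on the default port 7077, launching a single worker would look something like:

# hypothetical example: worker number 1, master assumed to be at localhost:7077
sbin/start-slave.sh 1 spark://localhost:7077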

Upvotes: 6
