Reputation: 1556
Hi, I'm new to Apache Spark and trying to learn it.
While creating a new standalone cluster, I ran into this error.
I started my master and it is active on port 7077; I can see that in the UI (port 8080).
While starting the worker using the command
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.0.56:7077
I get a connection refused error:
14/07/22 13:18:30 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@node-physical:55124] -> [akka.tcp://[email protected]:7077]: Error [Association failed with [akka.tcp://[email protected]:7077]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://[email protected]:7077]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /192.168.0.56:7077
Please help me with this error; I have been stuck here for a long time.
I hope the information is enough. Please help.
Upvotes: 9
Views: 25596
Reputation: 1
Don't overthink it. Just open each worker's UI on a separate page by manually specifying localhost:
I had two workers:
http://localhost:8081/
http://localhost:8082/
Upvotes: 0
Reputation: 555
Use the ifconfig command to find your private IP. Then use that IP with the start-master.sh script like this:
./start-master.sh --host 192.168.1.15 --port 7077
At this point, use the telnet command like this:
telnet 192.168.1.15 7077
and the output should confirm that you can connect without any issue:
Trying 192.168.1.15...
Connected to 192.168.1.15.
Finally, use the start-worker.sh script like this:
start-worker.sh spark://192.168.1.15:7077
Upvotes: 0
Reputation: 587
I do not have a DNS server, so I added entries in /etc/hosts on the master node for the IPs and hostnames of all master and worker nodes. On the worker nodes, I added the IP and hostname of the master node to /etc/hosts.
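For illustration, a minimal /etc/hosts sketch; the IPs and hostnames here are hypothetical, so substitute your own:
# on the master node: map every node in the cluster
192.168.0.56   spark-master
192.168.0.60   spark-worker-1
192.168.0.61   spark-worker-2
# on each worker node: at minimum, map the master
192.168.0.56   spark-master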
Upvotes: 0
Reputation: 11
I had a similar problem in a Docker container; I solved it by setting the host for both the master and the driver to localhost. Specifically, with SparkConf (PySpark shown):
from pyspark import SparkConf
conf = (SparkConf().set('spark.master.hostname', 'localhost')
                   .set('spark.driver.hostname', 'localhost'))
Upvotes: 1
Reputation: 3762
For Windows:
spark-class org.apache.spark.deploy.master.Master -h [Interface IP to bind to]
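For example, assuming the interface IP is 192.168.0.56 (the IP from the question; substitute your own), the master and a worker would be started as:
spark-class org.apache.spark.deploy.master.Master -h 192.168.0.56
spark-class org.apache.spark.deploy.worker.Worker spark://192.168.0.56:7077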
Upvotes: 0
Reputation: 980
Change SPARK_MASTER_HOST=<IP> in the spark-env.sh of the master node.
Then restart the master. If you grep the process, you will see it change from
java -cp /spark/conf/:/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host <HOSTNAME> --port 7077 --webui-port 8080
to
java -cp /spark/conf/:/spark/jars/* -Xmx1g org.apache.spark.deploy.master.Master --host <HOST IP> --port 7077 --webui-port 8080
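As a minimal sketch, assuming Spark is installed under /spark (as in the process listing above) and using the question's IP as a stand-in:
# /spark/conf/spark-env.sh
SPARK_MASTER_HOST=192.168.0.56
# restart the master so the setting takes effect
/spark/sbin/stop-master.sh
/spark/sbin/start-master.sh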
Upvotes: 4
Reputation: 71
Try "./sbin/start-master -h ". It works, when I specify the host name as IP address.
Upvotes: 6
Reputation: 2441
It seems like Spark is very picky about IPs and machine names. When starting the master, it uses your machine name to register the Spark master. If that name is not reachable from your workers, the master will be almost impossible to reach.
A way to solve it is to start your master like this:
SPARK_MASTER_IP=YOUR_SPARK_MASTER_IP ${SPARK_HOME}/sbin/start-master.sh
Then you will be able to connect your slaves like this:
${SPARK_HOME}/sbin/start-slave.sh spark://YOUR_SPARK_MASTER_IP:PORT
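Concretely, with the question's IP and the default master port 7077 (substitute your own values):
SPARK_MASTER_IP=192.168.0.56 ${SPARK_HOME}/sbin/start-master.sh
${SPARK_HOME}/sbin/start-slave.sh spark://192.168.0.56:7077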
I hope it helps!
Upvotes: 1
Reputation: 1526
In my case, I went to /etc/hosts and:
Upvotes: 8
Reputation: 193
Did you add the entries for the master and worker nodes in /etc/hosts? If not, add the IP and hostname mappings of every machine on all the machines.
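As a quick sanity check, you can verify that a mapping resolves on each machine; spark-master here is a hypothetical hostname, so substitute your own:
getent hosts spark-master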
Upvotes: 0
Reputation: 731
Check whether your firewall is blocking the worker connection. You can turn the firewall off either temporarily:
$ sudo service iptables stop
or permanently:
$ sudo chkconfig iptables off
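After stopping the firewall, you can re-check that the master port is reachable from the worker machine with telnet, as shown in an earlier answer (using the question's IP as a stand-in):
$ telnet 192.168.0.56 7077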
Upvotes: 1