Reputation: 31
I have created a cluster of four Ubuntu VMs with Vagrant in order to run some basic Spark code for test purposes. I set up passwordless SSH between all machines and disabled the firewall, but I am still getting connection errors when running
/usr/local/spark/bin/spark-submit --class "class.main" --deploy-mode client --master spark://<IP>:7077 /vagrant/.../class.main-assembly-1.0.jar "file:/vagrant/.../input.csv"
The same command works as expected with --master local[*]. The error, readable from the web UI of the workers, is:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/12/12 16:28:12 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 7415@node2
16/12/12 16:28:12 INFO SignalUtils: Registered signal handler for TERM
16/12/12 16:28:12 INFO SignalUtils: Registered signal handler for HUP
16/12/12 16:28:12 INFO SignalUtils: Registered signal handler for INT
16/12/12 16:28:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/12 16:28:13 INFO SecurityManager: Changing view acls to: root,vagrant
16/12/12 16:28:13 INFO SecurityManager: Changing modify acls to: root,vagrant
16/12/12 16:28:13 INFO SecurityManager: Changing view acls groups to:
16/12/12 16:28:13 INFO SecurityManager: Changing modify acls groups to:
16/12/12 16:28:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, vagrant); groups with view permissions: Set(); users with modify permissions: Set(root, vagrant); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:70)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:174)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:270)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:71)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:70)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
... 4 more
Caused by: java.io.IOException: Failed to connect to /10.0.2.15:48219
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /10.0.2.15:48219
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
Has anyone seen anything similar and found a solution? Any help or suggestion would be appreciated. Thank you in advance.
Upvotes: 1
Views: 1179
Reputation: 362
I'm guessing you're using VirtualBox and that all the nodes are on the same VirtualBox NAT network; I just ran into this myself. To make Spark work, you need to set up a host-only network in VirtualBox and make sure all your machines are on that network (for example, with two network adapters per VM: one for NAT and the other on the host-only network).
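Since you're on Vagrant, here is a minimal Vagrantfile sketch of that two-adapter layout; Vagrant keeps the default NAT adapter, and the private_network line adds the host-only adapter (the box name and 172.1.2.* addresses are just illustrative, not taken from your setup):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  # Adapter 1 (NAT) is created by Vagrant automatically.
  # Adapter 2: host-only network, one static IP per node.
  config.vm.define "master" do |m|
    m.vm.network "private_network", ip: "172.1.2.3"
  end
  config.vm.define "worker1" do |w|
    w.vm.network "private_network", ip: "172.1.2.4"
  end
end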
When that's set up, you'll need to set

SPARK_LOCAL_IP=172.1.2.*
SPARK_MASTER_IP=172.1.2.3

inside conf/spark-env.sh on every master and slave node, making sure the master IP is the same everywhere and the local IP is each node's own 172.1.2.* address.
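For example, on the worker at 172.1.2.4, conf/spark-env.sh would contain something like this (IPs illustrative, matching the sketch above):

# conf/spark-env.sh on the worker at 172.1.2.4
export SPARK_LOCAL_IP=172.1.2.4    # this node's own host-only address
export SPARK_MASTER_IP=172.1.2.3   # the master's host-only address, identical on every node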
To start the master, run something like:
~/spark-2.1.0-bin-hadoop2.7/sbin/start-master.sh -h 172.1.2.3
To start the slaves, run something like:
~/spark-2.1.0-bin-hadoop2.7/sbin/start-slave.sh -h 172.1.2.4 spark://172.1.2.3:7077
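If a slave still fails to register, a quick sanity check (assuming netcat is installed in the worker VM) is to verify the master's RPC port is reachable over the host-only network, and that the worker shows up on the master's web UI at port 8080, before submitting anything:

nc -zv 172.1.2.3 7077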
Finally, to run an application, do something like:
spark-submit --master spark://172.1.2.3:7077 --class org.apache.spark.examples.SparkPi ~/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100
Upvotes: 1