Reputation: 101
I started the PySpark shell locally on my MacBook and connected it to the master node on a remote server with:
$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark --master spark://[server-ip]:7077
I tried executing a simple Spark example from the website:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

df = spark.read.json("/path/to/spark-2.0.0-bin-hadoop2.7/examples/src/main/resources/people.json")
I got this error:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
There is enough memory on both the server and my local machine, but I keep getting this error again and again. My Spark cluster has 6G in total, and my script requests only 4 cores with 1G of memory per node.
I have Googled this error and tried different memory configurations, and also disabled the firewall on both machines, but nothing helped. I have no idea how to fix it.
Has anyone faced the same problem? Any ideas?
Upvotes: 3
Views: 1885
Reputation: 330073
You are submitting the application in client mode. That means the driver process is started on your local machine.
When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs: go to the application, select one of the workers with the status EXITED, and check its stderr. You "should" see the executor failing due to org.apache.spark.rpc.RpcTimeoutException.
There are two possible solutions:
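For example, one commonly suggested remedy along these lines is to make the driver explicitly reachable from the workers by pinning its advertised address and port. This is only a sketch: the address, port, and master URL below are placeholders you would substitute with values routable from your cluster, and the exact settings needed depend on your network setup.

```python
from pyspark.sql import SparkSession

# Sketch: start the driver on an address/port the executors can actually reach.
# "my-public-ip" and 5001 are placeholder assumptions, not values from the question.
spark = (
    SparkSession.builder
    .master("spark://[server-ip]:7077")
    .appName("Python Spark SQL basic example")
    # Address the executors should use to connect back to the driver:
    .config("spark.driver.host", "my-public-ip")
    # Fixed driver port, so you can open exactly this port in the firewall:
    .config("spark.driver.port", "5001")
    .getOrCreate()
)
```

The same properties can be passed on the command line instead, e.g. `pyspark --conf spark.driver.host=my-public-ip --conf spark.driver.port=5001 ...`.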
Upvotes: 4