vathan Lal

Reputation: 153

Apache Spark and Mesos running on a single node

I am interested in testing Spark running on Mesos. I created a single-node Hadoop 2.6.0 cluster in a VirtualBox VM and installed Spark on it. I can successfully process files in HDFS using Spark.

Then I installed the Mesos master and slave on the same node. I tried to run Spark as a framework on Mesos following these instructions, but I get the following error from Spark:

WARN TaskSchedulerImpl: Initial job has not accepted any resources;
check your cluster UI to ensure that workers are registered and have sufficient resources

spark-shell successfully registers as a framework in Mesos. Is there anything wrong with using a single-node setup, or do I need to add more Spark worker nodes?

I am very new to Spark; my aim is simply to test Spark, HDFS, and Mesos working together.
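For concreteness, here is a minimal sketch of how I am configuring the context against Mesos; the master address, memory, and core cap below are illustrative assumptions, not values from my setup:

    import org.apache.spark.{SparkConf, SparkContext}

    // All values here are illustrative: substitute your VM's actual IP and
    // resources that are actually free on the single node.
    val conf = new SparkConf()
      .setMaster("mesos://192.168.56.101:5050") // Mesos master URL (assumed address)
      .setAppName("MesosSmokeTest")
      .set("spark.executor.memory", "512m")     // keep the request small enough for one node's offer
      .set("spark.cores.max", "1")              // cap cores so a single slave can satisfy the job

    val sc = new SparkContext(conf)

    // Sanity check: this only prints a result if Mesos actually launched a task.
    println(sc.parallelize(1 to 100).sum())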

Upvotes: 0

Views: 306

Answers (1)

Fontaine007

Reputation: 597

If you have allocated enough resources for the Spark slaves, the cause might be a firewall blocking communication between the driver and the executors. Take a look at my other answer:

Apache Spark on Mesos: Initial job has not accepted any resources
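As a sketch of what I mean (the IP and port numbers below are placeholders to adapt), pinning the driver's ports lets you open exactly those ports in the firewall instead of disabling it entirely:

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder address and ports: use your node's real IP and any free ports,
    // then allow them through the firewall (or disable the firewall for the test).
    val conf = new SparkConf()
      .setMaster("mesos://192.168.56.101:5050")
      .setAppName("FirewallCheck")
      .set("spark.driver.host", "192.168.56.101") // address executors use to call back to the driver
      .set("spark.driver.port", "51000")          // fixed so the port can be whitelisted
      .set("spark.blockManager.port", "51001")    // likewise for block manager traffic

    val sc = new SparkContext(conf)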

Upvotes: 0
