Reputation: 387
I have a pseudo-distributed Hadoop 2.2.0 environment set up on my laptop. I can run MapReduce applications (including Pig and Hive jobs), and their status can be seen in the web UI at http://localhost:8088
I have downloaded the Spark library and use only the file system (HDFS) for Spark applications. When I launch a Spark application, it launches and completes successfully as expected.
But the web UI http://localhost:8088
does not list the Spark application as launched or completed.
Please suggest whether any additional configuration is required to see Spark applications in this web UI.
(Note: the web UI at http://localhost:50070
correctly shows the files when I write to HDFS from Spark applications.)
Upvotes: 1
Views: 12417
Reputation: 684
You might have figured it out already, but for others who are starting with Spark: you can see all the Spark jobs on http://localhost:4040
after your Spark context is initiated (the port can be different, e.g. 4041 if 4040 is already taken). With a standalone installation you can see the master and worker status on http://localhost:8080
(worker ports are usually 8081 onward). You need to spark-submit jobs to YARN (in cluster or client deploy mode) to see them in the Hadoop web UI.
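A minimal sketch of such a submission, assuming the Spark 1.x CLI that matches a Hadoop 2.2.0 setup (the `SPARK_HOME` path and the example-jar name are assumptions; adjust them to your installation):

```shell
# Point Spark at the Hadoop configuration so it can locate the ResourceManager.
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

# "yarn-cluster" is the Spark 1.x master string; on Spark 2.x and later use
# "--master yarn --deploy-mode cluster" (or "client") instead.
$SPARK_HOME/bin/spark-submit \
  --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-*.jar 10
```

Once the application runs on YARN rather than locally or on a standalone master, it appears in the ResourceManager UI at http://localhost:8088 like any MapReduce job.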
Upvotes: 5