PTDS

Reputation: 227

Install Spark on an existing Hadoop cluster

I am not a system administrator, but I may need to do some administrative tasks, so I need some help.

We have a (remote) Hadoop cluster, and people usually run MapReduce jobs on it.

I am planning to install Apache Spark on the cluster so that all of its machines can be utilized. This should be possible; the Spark standalone documentation (http://spark.apache.org/docs/latest/spark-standalone.html) says: "You can run Spark alongside your existing Hadoop cluster by just launching it as a separate service on the same machines..."

If you have done this before, please give me the detailed steps for setting up the Spark cluster.

Upvotes: 7

Views: 9131

Answers (1)

Nicomak

Reputation: 2339

If you already have Hadoop installed on your cluster and want to run Spark on YARN, it's very easy:

Step 1: Find the YARN master node (i.e. the node that runs the ResourceManager). The following steps are to be performed on the master node only.
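
If you are not sure which node that is, one quick way to check (assuming a standard yarn-site.xml; the property may be absent if your cluster relies on defaults or ResourceManager HA) is:

# Look up the ResourceManager hostname in the Hadoop configuration
grep -A1 "yarn.resourcemanager.hostname" $HADOOP_HOME/etc/hadoop/yarn-site.xml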

Step 2: Download the Spark tgz package and extract it somewhere.
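
For example, to fetch a pre-built package (the version and mirror below are only illustrative; pick the build that matches your Hadoop version):

# Download a pre-built Spark package and extract it (adjust version/URL as needed)
wget https://archive.apache.org/dist/spark/spark-1.5.1/spark-1.5.1-bin-hadoop2.6.tgz
tar -xzf spark-1.5.1-bin-hadoop2.6.tgz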

Step 3: Define these environment variables, in .bashrc for example:

# Spark variables
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop   # where Spark finds the YARN/Hadoop config files
export SPARK_HOME=<extracted_spark_package>    # path to the extracted Spark package
export PATH=$PATH:$SPARK_HOME/bin              # makes spark-submit, spark-shell, etc. available
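
After editing .bashrc, reload it and make sure Spark is picked up (a quick sanity check, assuming the variables above point to the right locations):

# Reload the shell configuration and confirm spark-submit is on the PATH
source ~/.bashrc
spark-submit --version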

Step 4: Run your Spark job, setting the --master option to yarn-client or yarn-cluster:

spark-submit \
--master yarn-client \
--class org.apache.spark.examples.JavaSparkPi \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar \
100

This particular example uses a pre-compiled example job which comes with the Spark installation.
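
If you prefer Python, the distribution also ships example scripts; a roughly equivalent submission would be the following (the script path is the usual location in a pre-built package, so verify it in your extracted directory):

# Submit the bundled Python Pi example to YARN
spark-submit \
--master yarn-client \
$SPARK_HOME/examples/src/main/python/pi.py \
100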

You can read this blog post I wrote for more details on Hadoop and Spark installation on a cluster.

You can read the follow-up post to see how to compile and run your own Spark job in Java. If you want to write jobs in Python or Scala, it's convenient to use a notebook like IPython or Zeppelin. Read more about how to use those with your Hadoop-Spark cluster here.
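
Once you have compiled your own job into a jar, submission follows the same pattern (the jar path and main class below are placeholders for your own project):

# Submit your own compiled Spark job (placeholder jar and class names)
spark-submit \
--master yarn-client \
--class com.example.MySparkJob \
/path/to/my-spark-job.jar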

Upvotes: 9
