Reputation: 159
I installed Hadoop 2.7 on my Mac and now I want to install Spark on top of it, but I can't find any documentation for this. Can anybody explain, step by step, how to install Spark on Hadoop?
Upvotes: 1
Views: 1699
Reputation: 226
Here are the steps I took to install Apache Spark on a CentOS Linux system with Hadoop:
1) sudo yum install java-11-openjdk
2) Download the Spark release from the downloads page, then extract it: tar xvf spark-2.4.5-bin-hadoop2.7.tgz
3) sudo mv spark-2.4.5-bin-hadoop2.7/ /opt/spark
4) Run /opt/spark/bin/spark-shell if you wish to work with Scala, or /opt/spark/bin/pyspark if you want to work with Python
Upvotes: 0
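To run spark-shell or pyspark without typing the full path each time, you can put the Spark bin directory on your PATH. A minimal sketch, assuming the /opt/spark location used in that answer (add the export lines to ~/.bashrc to make them permanent):

```shell
# Assumes Spark was moved to /opt/spark, as in the steps above
export SPARK_HOME=/opt/spark
export PATH="$SPARK_HOME/bin:$PATH"

# Sanity check: confirm the Spark bin directory is now on PATH
echo "$PATH" | grep -q "/opt/spark/bin" && echo "spark on PATH"
```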
Reputation: 3692
There are quite a few steps to install Hadoop and Spark and run Spark on a YARN cluster, so I wrote a step-by-step blog post covering all of it. Follow the link below to install everything and run the Spark shell on YARN.
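In short, Spark finds the YARN cluster through the Hadoop configuration directory. A minimal sketch of launching the shell on YARN, assuming a hypothetical /opt/hadoop install path (adjust to your own layout):

```shell
# Point Spark at the Hadoop/YARN configuration (path is hypothetical)
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop

# Launch the Spark shell against the YARN cluster defined there
/opt/spark/bin/spark-shell --master yarn --deploy-mode client
```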
Upvotes: 0
Reputation: 919
Steps to Install Apache Spark
1) Open the Apache Spark website: http://spark.apache.org/
2) Click the Downloads tab; a new page will open
3) Choose "Pre-built for Hadoop 2.7 and later"
4) Choose Direct Download
5) Click "Download Spark: spark-2.0.2-bin-hadoop2.7.tgz" and save it to your desired location
6) Go to the downloaded tar file and extract it
7) Extract the resulting spark-2.0.2-bin-hadoop2.7.tar again [the file name will differ as the version changes] to produce the spark-2.0.2-bin-hadoop2.7 folder
8) Now open a shell prompt and go to the bin directory of the spark-2.0.2-bin-hadoop2.7 folder [the folder name will differ as the version changes]
9) Run ./spark-shell
You will now be in the Spark shell and can execute Spark commands.
https://spark.apache.org/docs/latest/quick-start.html <-- Quick start Guide from spark
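The manual steps above can also be scripted. A minimal sketch, assuming the Apache archive URL pattern for the 2.0.2 release (verify the current version and URL on the Downloads page); the download and extraction lines are commented out so you can review them first:

```shell
VERSION=2.0.2
TARBALL="spark-${VERSION}-bin-hadoop2.7.tgz"

# Download and extract (URL pattern is an assumption; check the Downloads page):
# curl -O "https://archive.apache.org/dist/spark/spark-${VERSION}/${TARBALL}"
# tar xzf "$TARBALL"    # one tar command replaces the two manual extraction steps

# The extracted folder name is the tarball name without the .tgz suffix
echo "Run: ${TARBALL%.tgz}/bin/spark-shell"
```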
Hope this Helps!!!
Upvotes: 2