ruchi

Reputation: 996

Multi Node Cluster Hadoop Setup

Pseudo-Distributed single node cluster implementation

I am using Windows 7 with Cygwin and have installed hadoop-1.0.3 successfully. I can start the JobTracker, TaskTracker, and NameNode services (web UIs on localhost:50030, localhost:50060, and localhost:50070). I have completed the single-node implementation.

Now I want to implement a multi-node cluster. I don't understand how to split the setup into master and slave machines using their network IPs.

Upvotes: 4

Views: 1332

Answers (2)

Naresh

Reputation: 5397

For your SSH problem, just follow this single-node cluster tutorial:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

And yes, you need to specify the IPs of the master and slaves in the conf files. For that, you can refer to this URL: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
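In Hadoop 1.x, pointing every node at the master comes down to two conf files that must be identical on all nodes. A minimal sketch, assuming the master is reachable by the hostname `master` (replace with its real IP or hostname):

```
<!-- conf/core-site.xml (same on every node) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml (same on every node) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```

The ports 9000 and 9001 are conventional choices from the tutorial above, not requirements; any free ports work as long as all nodes agree.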

I hope this helps.

Upvotes: 1

Alok Tripathi

Reputation: 1

Create the number of VMs you want in your cluster. Make sure those VMs run the same Hadoop version. Figure out the IP of each VM. You will find files named masters and slaves in $HADOOP_HOME/conf: add the IP of the VM you want to treat as the master to conf/masters, and do the same in conf/slaves with the slave nodes' IPs.
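The two files above can look like this, with hypothetical IPs (note that in Hadoop 1.x, conf/masters actually names the host that runs the SecondaryNameNode, while conf/slaves lists the DataNode/TaskTracker hosts; both files are only read on the node where you run the start scripts):

```
# conf/masters on the master node
192.168.1.10

# conf/slaves on the master node -- one DataNode/TaskTracker per line
192.168.1.11
192.168.1.12
```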

Make sure these nodes have passwordless SSH access to each other. Format your NameNode and then run start-all.sh.
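The passwordless-SSH step can be sketched as follows. The slave IP is an assumption, and for the runnable part the keys go to a scratch directory; on a real cluster you would use $HOME/.ssh of the hadoop user on the master:

```shell
# Scratch directory stands in for $HOME/.ssh on a real master node.
SSH_DIR=$(mktemp -d)

# 1. Generate an RSA key pair with an empty passphrase.
ssh-keygen -t rsa -P "" -f "$SSH_DIR/id_rsa" -q

# 2. Authorize the key on every node, normally with:
#      ssh-copy-id hadoop@192.168.1.11   # one line per slave IP (hypothetical)
#    Locally that amounts to appending the public key:
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"

# 3. Once `ssh 192.168.1.11` logs in without a password prompt,
#    on the master only:
#      hadoop namenode -format    # erases any existing HDFS data!
#      start-all.sh               # launches master and slave daemons
```

The empty passphrase (`-P ""`) is what lets start-all.sh reach every slave without prompting; if you want passphrase-protected keys instead, you would need an ssh-agent running when the scripts execute.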

Thanks,

Upvotes: 0
