ByanJati

Reputation: 83

Why Hadoop using namenode and datanode?

We know that servers used for big data processing should be tolerant of hardware failure. I mean, if we had 3 servers (A, B, C) and suddenly server B went down, A and C could take over its role. But Hadoop uses a NameNode and DataNodes, and when the NameNode is down we can't process the data anymore, which sounds like a lack of tolerance for hardware failure.

Is there any reason for this design architecture in Hadoop?

Upvotes: 0

Views: 82

Answers (1)

Rajesh N

Reputation: 2574

The issue you have mentioned is known as a single point of failure, and it exists in older Hadoop versions.

Try a newer version of Hadoop such as 2.x.x. Starting with version 2.0.0, Hadoop avoids this single point of failure by running two NameNodes: an active and a standby. When the active NameNode fails due to hardware or power issues, the standby NameNode takes over as the active one.
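For illustration, here is a minimal sketch of what such an HA setup might look like in hdfs-site.xml. The nameservice name mycluster, the host names, and the NameNode IDs nn1/nn2 below are hypothetical placeholders, not values from your cluster:

    <!-- hdfs-site.xml: one logical nameservice backed by two NameNodes -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>machine1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>machine2.example.com:8020</value>
    </property>
    <!-- have ZooKeeper trigger failover automatically instead of manually -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

Because clients address the logical nameservice (hdfs://mycluster) rather than a single host, a failover is transparent to them. You can check which NameNode is currently active with hdfs haadmin -getServiceState nn1.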

Check this link: Hadoop High Availability for further details.

Upvotes: 3
