Sitakant Mishra

Reputation: 73

High-Availability not working in Hadoop cluster

I am trying to move my non-HA namenode to HA. After setting up all the configurations for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.

I tried recovering the edit logs, re-initializing the shared edits, and so on, but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
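For context, the relevant HA settings in my hdfs-site.xml look roughly like this (the nameservice name, namenode IDs, hostnames, and paths below are placeholders, not the actual values):

```xml
<!-- Logical name for the HA nameservice (placeholder) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

<!-- The namenodes participating in HA (placeholder IDs) -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

<!-- Quorum of JournalNodes holding the shared edit log (placeholder hosts) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
</property>

<!-- Local directory where each JournalNode stores its edits (placeholder path) -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop/journal</value>
</property>
```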

Any help is greatly appreciated. Thanks in advance.

Upvotes: 0

Views: 363

Answers (1)

Sitakant Mishra

Reputation: 73

The problem was the limit on open files on the Linux machine. Once I increased the open-file limit, the initialization of the shared edits worked.
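Concretely, the fix was along these lines (the limit values and the `hdfs` user name below are illustrative; pick values appropriate for your own cluster):

```shell
# Check the current soft limit on open file descriptors for this shell
ulimit -Sn

# Check the hard limit (the ceiling the soft limit may be raised to)
ulimit -Hn

# Raise the soft limit for this session, up to the hard limit
ulimit -Sn "$(ulimit -Hn)"

# For a permanent change, add lines like these to /etc/security/limits.conf
# (user name and values are illustrative), then log in again:
#   hdfs  soft  nofile  64000
#   hdfs  hard  nofile  64000

# With the higher limit in effect, re-run the shared-edits initialization
# on the active namenode:
#   hdfs namenode -initializeSharedEdits
```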

Upvotes: 2
