msknapp

Reputation: 1685

ERROR namenode.FSNamesystem: FSNamesystem initialization failed

I'm running Hadoop in pseudo-distributed mode on an Ubuntu VM. I recently decided to increase the RAM and number of cores available to the VM, and that seems to have completely screwed up HDFS. First, it was stuck in safe mode, which I released manually using:

    hadoop dfsadmin -safemode leave
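
As far as I know, the safe mode state can be confirmed at any time with:

    hadoop dfsadmin -safemode get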

Then I ran:

    hadoop fsck -blocks

and practically every block was corrupt or missing. Since this setup is just for my own learning, I deleted everything in "/user/msknapp" and everything in "/var/lib/hadoop-0.20/cache/mapred/mapred/.settings". After that, the block errors were gone.
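
For what it's worth, I believe fsck can also report which files the bad blocks belong to, with something like:

    hadoop fsck / -files -blocks -locations

Then I tried: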

    hadoop fs -put myfile myfile

and got (abridged):

    12/01/07 15:05:29 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    12/01/07 15:05:29 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
    12/01/07 15:05:29 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/msknapp/myfile" - Aborting...
    put: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
    12/01/07 15:05:29 ERROR hdfs.DFSClient: Exception closing file /user/msknapp/myfile : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
        at ...

    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/msknapp/myfile could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1490)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:653)
        at ...
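
From what I've read, "could only be replicated to 0 nodes" generally means the namenode cannot see any live datanodes; a quick check (assuming a standard setup) is:

    hadoop dfsadmin -report

which should list every datanode that has registered with the namenode.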

So I tried to stop and restart the namenode and datanode. No luck:

    hadoop namenode

    12/01/07 15:13:47 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
    java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
        at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
        at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)
    12/01/07 15:13:47 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:683)
        at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:690)
        at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:60)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:469)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:297)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1248)

    12/01/07 15:13:47 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/
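
The trace points at a permissions problem on the fsimage file, so I assume the ownership can be inspected with (path taken from the trace):

    ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/

though I don't know what the owner is supposed to be.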

Would somebody please help me out here? I have been trying to fix this for hours.

Upvotes: 1

Views: 13158

Answers (2)

Somnath Kadam

Reputation: 6357

The following error means the fsimage file does not have the right permissions:

    namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/image/fsimage (Permission denied)

So grant permissions on the fsimage file:

    $ chmod -R 777 fsimage
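
A less blunt alternative, assuming the CDH-style hadoop-0.20 packaging from the question, where the daemons normally run as the hdfs user, would be to restore ownership rather than open the files to everyone:

    # hdfs:hadoop is an assumption; check the actual daemon user with: ps -ef | grep namenode
    sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs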

Upvotes: 0

inquire

Reputation: 761

Go into the directory where you have HDFS configured to store its data, delete everything there, format the namenode, and you are good to go. It usually happens if you don't shut down your cluster properly! See the sketch below.
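
A minimal sketch of that recipe, assuming the CDH-style hadoop-0.20 packaging from the question (the service names and the storage path are assumptions; the real storage locations come from dfs.name.dir and dfs.data.dir in hdfs-site.xml):

    # stop the daemons before touching the storage directories
    sudo /etc/init.d/hadoop-0.20-namenode stop
    sudo /etc/init.d/hadoop-0.20-datanode stop

    # WARNING: this destroys everything stored in HDFS
    sudo rm -rf /var/lib/hadoop-0.20/cache/hadoop/dfs/*

    # re-format the namenode, then restart
    sudo -u hdfs hadoop namenode -format
    sudo /etc/init.d/hadoop-0.20-namenode start
    sudo /etc/init.d/hadoop-0.20-datanode start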

Upvotes: 6
