user1574688

Reputation: 11

Cannot write a file into HDFS - getting error: Name node is in safe mode

When I try to copy a file from my local directory into the HDFS I get the following error:

[cloudera@localhost ~]$ hadoop fs -copyFromLocal hello.txt /user/cloudera/my_data


copyFromLocal: Cannot create file/user/cloudera/my_data/hello.txt._COPYING_. Name node is in safe mode.

Then I executed the command :

[cloudera@localhost ~]$ su
Password: 
[root@localhost cloudera]# hdfs dfsadmin -safemode leave
safemode: Access denied for user root. Superuser privilege is required

Then I ran the copy command again to store the file in HDFS, and I got the same error.

Again I executed the command :

[cloudera@localhost ~]$ su - root
Password: 
[root@localhost ~]# hdfs dfsadmin -safemode leave

I am getting the same error.

I am using the Cloudera distribution of Hadoop.

Upvotes: 1

Views: 3312

Answers (3)

Gyanendra Dwivedi

Reputation: 5538

From the Apache documentation:

During start up the NameNode loads the file system state from the fsimage and the edits log file. It then waits for DataNodes to report their blocks so that it does not prematurely start replicating the blocks though enough replicas already exist in the cluster. During this time NameNode stays in Safemode. Safemode for the NameNode is essentially a read-only mode for the HDFS cluster, where it does not allow any modifications to file system or blocks. Normally the NameNode leaves Safemode automatically after the DataNodes have reported that most file system blocks are available. If required, HDFS could be placed in Safemode explicitly using bin/hadoop dfsadmin -safemode command.
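Before forcing anything, you can check whether the NameNode is actually in safemode. A minimal check, assuming a node with the Hadoop client configured:

```shell
# Ask the NameNode for its current safemode state.
# Prints "Safe mode is ON" or "Safe mode is OFF".
hdfs dfsadmin -safemode get
```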

In most cases, the process completes within a reasonable time after HDFS starts. However, you can force HDFS to come out of safemode with the following command:

hadoop dfsadmin -safemode leave

It is strongly recommended to run fsck afterwards to recover from any inconsistent state.
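A sketch of that fsck check, assuming the hdfs user exists and sudo is available (as on a typical Cloudera install):

```shell
# Check the health of the entire filesystem as the HDFS superuser.
# fsck reports missing, corrupt, and under-replicated blocks; it does
# not modify anything unless you pass repair options explicitly.
sudo -u hdfs hdfs fsck /
```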

Upvotes: 3

Aman

Reputation: 3261

Try with

hadoop dfsadmin -safemode leave

This should work...

Upvotes: 0

SachinJose

Reputation: 8522

The NameNode stays in safemode for some time after a restart. If you wait a while (how long depends on the number of blocks), the NameNode will leave safe mode automatically.

You can force it to leave safemode using the hdfs dfsadmin -safemode leave command. Only the HDFS superuser can execute this command, so switch to the hdfs user before running it:

su hdfs
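Alternatively, the whole thing can be done in one step without opening a shell as hdfs, assuming sudo is configured (as it is on the Cloudera quickstart VM):

```shell
# Run the command directly as the hdfs superuser, so it is not
# rejected with "Access denied" the way it is for root or cloudera.
sudo -u hdfs hdfs dfsadmin -safemode leave
```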

Upvotes: 1
