Marc

Reputation: 16512

HBase can't create its directory in HDFS

I'm following this tutorial to install HBase and Hadoop, but I'm facing a problem.

Everything is fine until the last step

HBase creates its directory in HDFS. To see the created directory, browse to Hadoop bin and type the following command.

$ ./bin/hadoop fs -ls /hbase

If everything goes well, it will give you the following output.

Found 7 items
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/.tmp

...

But when I run this command, I get /hbase: No such file or directory

This is my config

Hadoop configuration

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
<name>dfs.replication</name>
      <value>1</value>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/namenode</value>
   </property>

   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/datanode</value>
   </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

Hbase configuration hbase-site.xml

<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8030/hbase</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/marc/zookeeper</value>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
</configuration>

I can browse to http://localhost:50070 and http://localhost:8088/cluster.

How can I troubleshoot this?

EDIT

Based on Saurabh Suman's answer, I created the /hbase folder manually, but it stays empty.

In hbase-marc-master-marc-pc.log, I have the following exception. Is it related?

2017-07-01 20:31:59,349 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:748)
2017-07-01 20:31:59,351 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    ...(same stack trace as above)

Upvotes: 1

Views: 4975

Answers (3)

Paul Cao

Reputation: 1

I met the same issue, and I noticed a mismatch: core-site.xml uses hdfs://localhost:9000, but hbase-site.xml uses hdfs://localhost:8030/hbase.

I changed the port to 9000, and it worked.
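
For reference, a minimal sketch of the corrected property (assuming the rest of hbase-site.xml stays as in the question), so that hbase.rootdir points at the same host and port as fs.defaultFS in core-site.xml:

<property>
   <name>hbase.rootdir</name>
   <value>hdfs://localhost:9000/hbase</value>
</property>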

Upvotes: 0

Tom G.

Reputation: 71

The log indicates that HBase fails to become an active master and therefore starts to shut down.

My assumption is that HBase was never able to start properly and therefore it didn't create the /hbase directory on its own. Further, this would be the reason why the /hbase directory is still empty.

I reproduced your error on my virtual machine and fixed it with this modified setup.


OS: CentOS Linux release 7.2.1511

Virtualization software: Vagrant and VirtualBox

Java

java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

core-site.xml (HDFS)

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:8020</value>
   </property>
</configuration>
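
(Note: fs.default.name is the deprecated alias of fs.defaultFS; Hadoop 2.x accepts either name, so apart from the port this is equivalent to the fs.defaultFS setting in the question.)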

hbase-site.xml (HBase)

<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8020/hbase</value>
   </property>

   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hadoop/zookeeper</value>
   </property>

   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
</configuration>

Directory owner and permission adjustments

sudo su # Become root user
cd /usr/local/

chown -R hadoop:root hadoop   # Hadoop installation directory
chmod -R 755 hadoop

chown -R hadoop:root Hbase    # HBase installation directory
chmod -R 755 Hbase
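
To verify that the ownership change took effect (assuming the installations live under /usr/local as above):

ls -ld /usr/local/hadoop /usr/local/Hbase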

Result

After starting HBase with this setup, it automatically created the /hbase directory and filled it with contents.

[hadoop@localhost conf]$ hdfs dfs -ls /hbase
Found 7 items
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/.tmp
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/MasterProcWALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/WALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/data
-rw-r--r--   1 hadoop supergroup         42 2017-07-03 14:26 /hbase/hbase.id
-rw-r--r--   1 hadoop supergroup          7 2017-07-03 14:26 /hbase/hbase.version
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/oldWALs

Upvotes: 4

Saurabh Suman

Reputation: 26

We only need to set up by hand the things that can't be created automatically. So you need to manually create the directory in HDFS:

hdfs dfs -mkdir /hbase
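
A quick check afterwards, to confirm the directory exists:

hdfs dfs -ls /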

Upvotes: 1
