fair_data

Reputation: 118

hadoop hdfs points to file:/// not hdfs://

So I installed Hadoop via Cloudera Manager (CDH3u5) on CentOS 5. When I run the command

hadoop fs -ls /

I expected to see the contents of hdfs://localhost.localdomain:8020/

However, it returned the contents of file:///

Now, it goes without saying that I can still access HDFS explicitly with

hadoop fs -ls hdfs://localhost.localdomain:8020/
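
One way to confirm what the client thinks the default filesystem is, is to inspect the deployed core-site.xml (a hedged check: /etc/hadoop/conf is the usual Cloudera Manager client configuration location, so adjust the path if yours differs):

# /etc/hadoop/conf is an assumption; adjust for your install
grep -A1 'fs.default.name' /etc/hadoop/conf/core-site.xml

If the property is missing or empty, the client falls back to file:///.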

But when it came to installing other applications such as Accumulo, Accumulo would automatically detect the Hadoop filesystem as file:///

Question is, has anyone run into this issue, and how did you resolve it?

I had a look at HDFS thrift server returns content of local FS, not HDFS, which was a similar issue, but it did not solve mine. Also, I do not get this issue with Cloudera Manager CDH4.

Upvotes: 6

Views: 11939

Answers (2)

Radhakrishnan Rk

Reputation: 561

You should specify the DataNode data directory and the NameNode metadata directory, then format the NameNode. Set

dfs.name.dir (or its newer name, dfs.namenode.name.dir),

dfs.data.dir (or dfs.datanode.data.dir)

in hdfs-site.xml, and

fs.default.name

in core-site.xml.
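
For example, a minimal hdfs-site.xml sketch (the directory paths below are hypothetical placeholders, not values from this cluster; pick locations that suit your layout):

<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value> <!-- hypothetical path -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/lib/hadoop/dfs/data</value> <!-- hypothetical path -->
</property>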

To format the HDFS NameNode:

hadoop namenode -format

Enter 'Yes' to confirm formatting the NameNode, then restart the HDFS service and deploy the client configuration so clients can access HDFS.

If you have already done the above steps, ensure the client configuration is deployed correctly and points to the actual cluster endpoints.
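
As a sanity check once the service is back up, the plain listing should now agree with the explicit hdfs:// listing from the question:

hadoop fs -ls /
hadoop fs -ls hdfs://localhost.localdomain:8020/

If the first command still shows the local filesystem, the client configuration was not picked up.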

Upvotes: 0

Donald Miner

Reputation: 39943

By default, Hadoop is going to use local mode. You probably need to set fs.default.name to hdfs://localhost.localdomain:8020/ in $HADOOP_HOME/conf/core-site.xml.

To do this, you add this to core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost.localdomain:8020/</value>
</property>

The reason Accumulo is confused is that it's using the same default configuration to figure out where HDFS is... and it's defaulting to file:///
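
On the Accumulo side, a hedged sketch of the corresponding fix, assuming an Accumulo 1.4-style conf/accumulo-env.sh (the file name and path below are typical for that era, not confirmed for your install): point Accumulo at the Hadoop installation whose core-site.xml carries the property above.

# In conf/accumulo-env.sh -- /usr/lib/hadoop-0.20 is a typical CDH3
# location, an assumption here; adjust for your install
export HADOOP_HOME=/usr/lib/hadoop-0.20

Accumulo typically includes $HADOOP_HOME/conf on its classpath, so once fs.default.name is set there it resolves hdfs:// instead of file:///.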

Upvotes: 11
