Saurabh Gokhale

Reputation: 46425

Mounting HDFS to a local directory failing

I'm currently trying to mount HDFS to a local directory on an Ubuntu machine, using the hadoop-fuse-dfs package.

So I'm executing the command below:

ubuntu@dev:~$ hadoop-fuse-dfs dfs://localhost:8020 /mnt/hdfs

Output

INFO /var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.5.0-Packaging-Hadoop-2013-11-20_14-31-53/hadoop-2.0.0+1518-1.cdh4.5.0.p0.24~precise/src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs

But when I try to access the mounted HDFS locally, I see this error message:

ls: cannot access /mnt/hdfs: No such file or directory
total 4.0K
d????????? ? ?      ?         ?            ? hdfs

PS: I've already executed the following commands, but I still get the same output.

$ sudo adduser ubuntu fuse
$ sudo addgroup ubuntu fuse
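
As a sanity check (group changes only take effect in a new login session), membership can be verified with something like:

```shell
# Sketch: confirm the user really is in the fuse group yet.
# Group membership only shows up after logging out and back in.
in_group() {
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

if in_group "$(id -un)" fuse; then
  echo "in fuse group"
else
  echo "not in fuse group yet -- log out and back in"
fi
```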

Am I missing something? Please suggest a workaround.

Upvotes: 2

Views: 3008

Answers (2)

Adnan Yaqoob

Reputation: 66

You need to use the hostname instead of localhost. I faced the same issue; after changing localhost to the machine's hostname (which is also defined in the hosts file), it was fixed.

hadoop-fuse-dfs dfs://{hostname}:8020 /mnt/hdfs

According to Cloudera

In an HA deployment, use the HDFS nameservice instead of the NameNode URI; that is, use the value of dfs.nameservices in hdfs-site.xml.
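
For example, if hdfs-site.xml defines an HA nameservice (the name mycluster below is just an illustrative placeholder), the mount would target the nameservice rather than a single NameNode:

```xml
<!-- hdfs-site.xml excerpt; "mycluster" is a placeholder nameservice name -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
```

The mount command would then presumably be hadoop-fuse-dfs dfs://mycluster /mnt/hdfs (no port, since the nameservice resolves to the active NameNode).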

Upvotes: 1

JariOtranen

Reputation: 126

This happens, at least, when hadoop-fuse-dfs cannot connect to the NameNode's filesystem metadata service, which listens on port 8020 by default, e.g. due to network configuration issues.

You can test from your host that the connection works before running hadoop-fuse-dfs, e.g. with:

telnet your-name-node 8020

GET /
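
The same check can also be scripted, without an interactive telnet session. This is a sketch using bash's /dev/tcp redirection (your-name-node and 8020 are placeholders for your NameNode host and RPC port):

```shell
# Sketch of a pre-mount connectivity check for the NameNode RPC port.
check_port() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device opens a TCP connection; timeout
  # bounds the wait if the host is unreachable.
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK: ${host}:${port} is reachable"
  else
    echo "FAIL: cannot reach ${host}:${port}"
  fi
}

# Example: check_port your-name-node 8020
```

Note that 8020 is the NameNode's RPC port, not HTTP, so a telnet GET / won't get a meaningful reply; a successful TCP connection is all this check establishes.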

Upvotes: 0
