user3930942

Reputation: 41

Not able to mount HDFS using Hadoop-Fuse

I have a 2 node Hadoop cluster running on AWS EC2. I am trying to mount the HDFS on a different EC2 instance. The client is running Ubuntu 12.04.4 LTS and I have installed Hadoop-Fuse.

# apt-cache policy hadoop-0.20-fuse
hadoop-0.20-fuse:
Installed: 0.20.2+923.479-1~maverick-cdh3
Candidate: 0.20.2+923.479-1~maverick-cdh3
Version table:
*** 0.20.2+923.479-1~maverick-cdh3 0
   500 http://archive.cloudera.com/debian/ maverick-cdh3/contrib amd64 Packages
   100 /var/lib/dpkg/status

When I try to mount it, I get only the following output:

# hadoop-fuse-dfs dfs://10.0.0.160:9000 /mnt/tmp
INFO fuse_options.c:165 Adding FUSE arg /mnt/tmp

When I run the "df" command, the mount doesn't appear and I get an input/output error:

# df -h
df: `/mnt/tmp': Input/output error
Filesystem                Size  Used Avail Use% Mounted on
/dev/xvda1                 30G  3.5G   25G  13% /

The mount point itself is also inaccessible:

# ls -alh /mnt
ls: cannot access /mnt/tmp: Input/output error
total 8.0K
drwxr-xr-x  3 root root 4.0K Aug 11 19:42 .
drwxr-xr-x 25 root root 4.0K Aug 11 17:35 ..
d?????????  ? ?    ?       ?            ? tmp

Is there any way I can mount it?

Upvotes: 4

Views: 1724

Answers (2)

tk421

Reputation: 5957

Unfortunately, hadoop-fuse-dfs doesn't produce helpful error messages, and its documentation is sparse.

For hadoop-fuse-dfs to work properly, you need to point it at the NameNode's RPC port, which is set by dfs.namenode.servicerpc-address in hdfs-site.xml.

# hadoop-fuse-dfs dfs://NAMENODE:RPCPORT /mnt/tmp
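For reference, the relevant entry in hdfs-site.xml on the NameNode might look something like this (the port 8022 here is a common choice but purely an example, not a value taken from the question):

```xml
<!-- hdfs-site.xml on the NameNode: hypothetical example values -->
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value>10.0.0.160:8022</value>
</property>
```

Whatever port is configured there (not the 9000 from fs.default.name) is the one to use in the dfs:// mount URI.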

Upvotes: 2

deepdive

Reputation: 10982

Remove the OpenJDK Java version and install the Oracle JRE instead.

Upvotes: -1
