PintoUbuntu

Reputation: 41

Unable to connect to HDFS data node from remote client

I'm currently experimenting with a legacy application built on Hadoop 2.3.0 (I know... don't ask). Everything was working fine as long as I ran the client on the same machine as the single-node Hadoop deployment. Now that I've moved the client application to another machine on the local network, I'm unable to connect to the data nodes.

2018-04-02 14:33:29.661/IST WARN  [hadoop.hdfs.BlockReaderFactory] I/O error constructing remote block reader.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3044)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:744)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:659)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.PushbackInputStream.read(PushbackInputStream.java:186)
at java.util.zip.ZipInputStream.readFully(ZipInputStream.java:403)
at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:278)
at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:122)
at opennlp.tools.util.model.BaseModel.loadModel(BaseModel.java:220)
at opennlp.tools.util.model.BaseModel.<init>(BaseModel.java:181)
at opennlp.tools.tokenize.TokenizerModel.<init>(TokenizerModel.java:125)

And a little further down:

2018-04-02 14:33:29.666/IST WARN  [hadoop.hdfs.DFSClient] Failed to connect to localhost/127.0.0.1:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3044)
at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:744)
at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:659)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.PushbackInputStream.read(PushbackInputStream.java:186)
at java.util.zip.ZipInputStream.readFully(ZipInputStream.java:403)
at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:278)
at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:122)
at opennlp.tools.util.model.BaseModel.loadModel(BaseModel.java:220)
at opennlp.tools.util.model.BaseModel.<init>(BaseModel.java:181)
at opennlp.tools.tokenize.TokenizerModel.<init>(TokenizerModel.java:125)

Note that I'm able to monitor the Hadoop deployment from the client machine's web browser, and everything appears to be working fine there.

Hadoop monitoring UI screenshot

I've read the answers here and here, but I'm still getting the same error. I can't get the client to stop looking up localhost/127.0.0.1:50010 instead of the correct IP address (or hostname) of the data node.

My first concern is whether I'm missing some configuration on the client application side. My application uses a variable named HADOOP_URL to connect to the cluster, and its value is correctly set to the cluster's hostname, which in turn resolves to the remote IP via /etc/hosts. It may be that I'm missing some additional client-side configuration, so ideas here would be welcome.
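For context, the client sets up the connection roughly like this (a simplified sketch, not the exact application code; the hostname, port, and path below are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        // HADOOP_URL holds the cluster hostname; /etc/hosts maps it to the remote IP
        String hadoopUrl = System.getenv("HADOOP_URL");  // e.g. "hdfs://hadoop-master:8020" (placeholder)
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", hadoopUrl);
        FileSystem fs = FileSystem.get(URI.create(hadoopUrl), conf);
        // Opening a file is what triggers the block read that fails with "Connection refused"
        fs.open(new Path("/models/en-token.bin")).close();  // path is a placeholder
    }
}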

However, this answer suggests that the NameNode informs the client of the DataNode's hostname. That supports the idea that my client can connect to the NameNode, and therefore that the client-side configuration is working fine.

So, finally, I need to find a way for the NameNode to return the hostname I set instead of localhost/127.0.0.1. How do I go about fixing this?

Upvotes: 1

Views: 6004

Answers (2)

Ken

Reputation: 33

So, finally, I need to find a way for the NameNode to return the hostname I set instead of localhost/127.0.0.1. How do I go about fixing this?

According to this article, this may be the configuration you need:

By default HDFS clients connect to DataNodes using the IP address provided by the NameNode. Depending on the network configuration this IP address may be unreachable by the clients. The fix is letting clients perform their own DNS resolution of the DataNode hostname. The following setting enables this behavior.

<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether clients should use datanode hostnames when
    connecting to datanodes.
  </description>
</property>
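If editing hdfs-site.xml on the client is inconvenient, the same flag can also be set programmatically on the client-side Configuration. A minimal sketch (the hostname and port are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class UseDatanodeHostname {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the DFS client to resolve DataNode hostnames itself instead of
        // using the IP address handed back by the NameNode
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(URI.create("hdfs://hadoop-master:8020"), conf);
        System.out.println(fs.getUri());
    }
}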

Upvotes: 2

Pratyush Kulwal

Reputation: 165

  • Find the IP address that your HDFS file location resolves to and add it to /etc/hosts on your host machine (where Spark resides); see the sample entry after this list.

NOTE: If you are using a virtual machine, change your VM's network settings to Host-only and restart the machine.

  • Just to be sure, verify that password-less SSH is set up between the two machines. There is a good article here: SSH passwordless

  • When using the Spark command, make sure you use user@HDFS-hostname

Example: lines = sc.textFile("hdfs://user@HDFS-hostname:8020/user/jack/ulysses10.txt")
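For the first point, the /etc/hosts entry on the client machine would look something like this (the IP address and hostname are placeholders for your own setup):

192.168.1.25    hadoop-master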

Upvotes: 1
