shaft

Reputation: 2229

How to configure HDFS to listen on 0.0.0.0

I have an HDFS cluster listening on 192.168.50.1:9000, which means it only accepts connections via that IP. I would like it to listen on 0.0.0.0:9000. When I add the entry 127.0.0.1 localhost master to /etc/hosts, it starts on 127.0.0.1:9000 instead, which prevents all nodes from connecting.
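
For reference, a sketch of what I assume the relevant core-site.xml entry looks like in this setup (the exact value of fs.defaultFS is an assumption; master matches the hostname in /etc/hosts above):

      <!-- core-site.xml (sketch; value assumed from the symptoms described above) -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
      </property>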

This question is similar to How to make Hadoop servers listening on all IPs, but for HDFS, not YARN.

Is there an equivalent setting for core-site.xml, like yarn.resourcemanager.bind-host, or any other way to configure this? If not, what's the reasoning behind this? Is it a security feature?

Upvotes: 5

Views: 2494

Answers (2)

edi

Reputation: 937

Well, the question is quite old already; however, you usually do not need to configure the bind address, because 0.0.0.0 is the default value. The usual culprit is an entry like 127.0.0.1 hostname in /etc/hosts, which Hadoop resolves to 127.0.0.1. Remove that entry and Hadoop will bind to all interfaces (0.0.0.0) without any additional entries in the config files.
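
A sketch of the change, assuming the node's LAN address is 192.168.50.1 and the cluster hostname is master, as in the question:

      # /etc/hosts -- before: Hadoop resolves "master" to the loopback address
      127.0.0.1   localhost master

      # /etc/hosts -- after: loopback no longer claims the cluster hostname
      127.0.0.1     localhost
      192.168.50.1  master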

Upvotes: 1

tk421

Reputation: 5967

For the NameNode, you need to set these to 0.0.0.0 in your hdfs-site.xml (a sample snippet follows below):

  • dfs.namenode.rpc-bind-host
  • dfs.namenode.servicerpc-bind-host
  • dfs.namenode.lifeline.rpc-bind-host
  • dfs.namenode.http-bind-host
  • dfs.namenode.https-bind-host

The DataNodes use 0.0.0.0 by default.
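
A minimal hdfs-site.xml sketch for the first and fourth properties; the remaining bind-host properties from the list follow the same pattern:

      <!-- hdfs-site.xml: bind NameNode endpoints to all interfaces -->
      <property>
        <name>dfs.namenode.rpc-bind-host</name>
        <value>0.0.0.0</value>
      </property>
      <property>
        <name>dfs.namenode.http-bind-host</name>
        <value>0.0.0.0</value>
      </property>

Restart the NameNode after changing these so the new bind addresses take effect.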

If you ever need to find a config variable for HDFS, refer to hdfs-default.xml.

Also very useful: on any page of the official Hadoop docs, the bottom-left corner links to the default values for the various XML files.

So you can go to the Apache Hadoop 2.8.0 docs (or your specific version) and find the settings you're looking for.

Upvotes: 6
