cscan

Reputation: 3850

Hadoop cannot assign requested address

I'm having difficulty starting up a Hadoop cluster. The name node and job tracker are throwing an exception on startup saying that they cannot assign the requested address.

Here is my core-site.xml file:

<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://name.node.private.ip:9000</value>
     </property>
</configuration>

and here is my mapred-site.xml file:

<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>job.tracker.private.ip:9001</value>
     </property>
     <property>
         <name>mapreduce.job.counters.limit</name>
         <value>1000</value>
     </property>
     <property>
         <name>mapred.tasktracker.map.tasks.maximum</name>
         <value>50</value>
     </property>
     <property>
         <name>mapred.tasktracker.reduce.tasks.maximum</name>
         <value>50</value>
     </property>
</configuration>

Additionally, my job tracker's masters file contains its private IP and its slaves file contains the private IPs of the four slaves. The name node's masters and slaves files are set up the same way. Each slave node's masters file is blank and its slaves file contains only its own private IP.

The /etc/hosts files are unmodified and look like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost6 localhost6.localdomain6
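For context, "cannot assign requested address" is the OS-level EADDRNOTAVAIL error: the daemon resolves the hostname from its config and then tries to bind() a listening socket to that IP, and the kernel refuses because the address is not configured on any local interface. A minimal sketch (using a hypothetical address from the reserved TEST-NET range, which no machine should have assigned) reproduces the same failure:

```python
import errno
import socket

# Attempt to bind to an IP that is not on any local interface.
# 192.0.2.1 is from TEST-NET-1 (RFC 5737), reserved for documentation,
# so no real host should have it configured.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = None
try:
    sock.bind(("192.0.2.1", 9000))
except OSError as e:
    err = e.errno  # EADDRNOTAVAIL: "Cannot assign requested address"
finally:
    sock.close()

print(err == errno.EADDRNOTAVAIL)
```

This is why the fix ends up in /etc/hosts: if the hostname in fs.default.name or mapred.job.tracker resolves to an address the machine doesn't actually own, the bind fails with exactly this error.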

Upvotes: 0

Views: 960

Answers (1)

cscan

Reputation: 3850

You'll need to modify the /etc/hosts file. The following format works:

127.0.0.1   privateDns subdomain localhost
privateIp   privateDns subdomain

So, if the private IP is 172.31.0.1 and the private DNS name is ip-172-31-0-1.us-west-2.compute.internal, it would look like the following:

127.0.0.1  ip-172-31-0-1.us-west-2.compute.internal ip-172-31-0-1 localhost
172.31.0.1 ip-172-31-0-1.us-west-2.compute.internal ip-172-31-0-1
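One detail worth knowing about this layout: hosts-file lookups return the first line whose name fields match, so entry order matters. A small sketch (assuming simple first-match semantics, as in the glibc hosts lookup) shows how the two lines above resolve:

```python
# The two example /etc/hosts lines from the answer above.
HOSTS = """\
127.0.0.1  ip-172-31-0-1.us-west-2.compute.internal ip-172-31-0-1 localhost
172.31.0.1 ip-172-31-0-1.us-west-2.compute.internal ip-172-31-0-1
"""

def resolve(name):
    """Return the IP of the first hosts line listing `name`, else None."""
    for line in HOSTS.splitlines():
        fields = line.split()
        if name in fields[1:]:
            return fields[0]
    return None

print(resolve("ip-172-31-0-1.us-west-2.compute.internal"))  # 127.0.0.1 (first match)
print(resolve("localhost"))                                  # 127.0.0.1
```

Because the first line lists the private DNS name, a plain lookup of that name hits 127.0.0.1 first; the second line keeps the private IP mapped to the same names for anything that enumerates all entries. If you swap the lines, the name resolves to the private IP instead, so verify the ordering matches what your daemons expect.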

Upvotes: 1
