K.S.

Reputation: 9

org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

I have set up a Hadoop cluster on two machines. One machine runs both the master and slave-1; the second machine runs slave-2. When I started the cluster with start-all.sh, I got the following error in the SecondaryNameNode's .out file:

java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "ip-10-179-185-169/10.179.185.169"; destination host is: "hadoop-master":9000;

Following is my jps output:

98366 Jps
96704 DataNode
97284 NodeManager
97148 ResourceManager
96919 SecondaryNameNode

Can someone help me tackle this error?

Upvotes: 0

Views: 3904

Answers (2)

Vishnu Priyaa

Reputation: 149

It might be a problem with the port number you are using. Try this: https://stackoverflow.com/a/60701948/8504709
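
For what it's worth, a quick way to check is to compare the address the client config resolves to against what the NameNode is actually listening on (a sketch; it assumes the hdfs CLI is on your PATH, and takes hadoop-master:9000 from the error message in the question):

# Print the filesystem address the client-side config resolves to
hdfs getconf -confKey fs.defaultFS

# On hadoop-master, confirm something is listening on the RPC port
netstat -tln | grep 9000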

Upvotes: 0

Yue Hu

Reputation: 11

I also had this problem.

Please check core-site.xml (it should be under the directory where you installed Hadoop; for me the path is: /home/algo/hadoop/etc/hadoop/core-site.xml).

The file should look like this:

<configuration>
        <!-- Local directory Hadoop uses as the base for its temporary files -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/algo/hdfs/tmp</value>
        </property>
        <!-- NameNode RPC address; fs.default.name is the deprecated
             spelling of fs.defaultFS, but both are still honored -->
        <property>
                <name>fs.default.name</name>
                <value>hdfs://localhost:9000</value>
        </property>
</configuration>

Solution: use hdfs://localhost:9000 as the host:port value.
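
If you change the value, restart the daemons so the new address is picked up (a sketch, assuming the same scripts the question already uses):

# Stop everything, then start again with the updated core-site.xml
stop-all.sh
start-all.sh

# The NameNode should now answer on the configured address
hdfs dfsadmin -report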

Upvotes: 1
