Curious

Reputation: 41

Exception ": org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length" from java

I am trying to connect to a remote HDFS from a Java program running in Eclipse on my desktop. I am able to connect, but I get this exception while trying to read data:

Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Can someone please help with this?

I have very basic code for reading test data. The error is thrown from hdfs.open():

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

FileSystem hdfs = null;
String uriPath = "hdfs://" + Constants.HOST + ":" + Constants.PORT + "/test/hello_world.txt";
String hadoopBase = "hdfs://" + Constants.HOST + ":" + Constants.PORT;
Configuration conf = new Configuration();
conf.set("fs.default.name", hadoopBase);
InputStream inputStream = null;
try {
    URI uri = new URI(uriPath);
    hdfs = FileSystem.get(uri, conf);
    Path path = new Path(uri);
    inputStream = hdfs.open(path);    // <-- the exception is thrown here
    IOUtils.copyBytes(inputStream, System.out, 4096, false);
} catch (URISyntaxException | IOException e) {
    e.printStackTrace();
} finally {
    // close the stream first, then the FileSystem
    IOUtils.closeStream(inputStream);
    if (hdfs != null) {
        try {
            hdfs.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Here is the full Exception:

java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:785)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1485)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1337)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy11.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:826)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:815)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:804)
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:319)
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:281)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:270)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1115)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:325)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:321)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:333)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
at DataUtil.readData(DataUtil.java:29)
at main(Main.java:24)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1800)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1155)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1052)

Upvotes: 4

Views: 19830

Answers (4)

Michael

Reputation: 71

In my case, everything was very prosaic: I had been trying to connect to port 9870, where the NameNode web UI is served, while the NameNode's own port was 8020! That is probably what @Vishnu Priyaa meant here.

Connecting to 8020 helped, and I successfully downloaded a file from its filesystem. So make sure that the port you are trying to connect to is 8020, or the same as the port shown on the "Overview" tab of the NameNode's UI.

Upvotes: 0

Oleg Saltanovich

Reputation: 1

First of all, check the real response data length in the namenode.log of the active NameNode. The message will look like:

WARN org.apache.hadoop.ipc.Server: Large response size 786010791 for call Call#3

Once you know the response data length, you can set the ipc.maximum.response.length parameter according to that size.

Note that the issue may be on the client side, as it was in my case. In that case, add the parameter to the client's core-site.xml or pass it directly on the command line. For example:

hadoop distcp -D ipc.maximum.response.length=1073741824 ...
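
The same key can also be raised from code on the client Configuration before the FileSystem is created. A minimal sketch, assuming the 1 GiB value from the distcp example above and a placeholder NameNode address:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// Size this to the "Large response size ..." reported in the NameNode log.
conf.setInt("ipc.maximum.response.length", 1073741824); // 1 GiB, as in the distcp example
FileSystem fs = FileSystem.get(new URI("hdfs://namenode-host:8020"), conf);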

Upvotes: 0

Vishnu Priyaa

Reputation: 149

Check your core-site.xml:

<property>
    <name>fs.default.name</name>
    <value>hdfs://host:port</value>
</property>

This port can be 9000 or 8020. Make sure that you are using the same port in your code or command.
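
For example, if core-site.xml contains hdfs://host:8020, the client code from the question should use exactly that address. A minimal sketch (the host name and the 8020 port here are placeholders, not values from the question):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
// Must match fs.default.name / fs.defaultFS from core-site.xml:
// the NameNode RPC port (8020 or 9000), not the web UI port (50070/9870).
conf.set("fs.default.name", "hdfs://namenode-host:8020");
try (FileSystem fs = FileSystem.get(new URI("hdfs://namenode-host:8020"), conf)) {
    System.out.println(fs.exists(new Path("/test/hello_world.txt")));
}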

Upvotes: 7

Moon.Hou

Reputation: 45

Try this solution: add this property to hdfs-site.xml:

<property>
     <name>ipc.maximum.data.length</name>
     <value>134217728</value>
</property>
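
For what it's worth, the same key can also be set programmatically on a Configuration object. This is only a sketch of the equivalent of the XML above; it takes effect only if that Configuration is the one loaded by the process that enforces the limit:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Same property and value as the hdfs-site.xml snippet above (128 MB).
conf.setInt("ipc.maximum.data.length", 134217728);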

Upvotes: 1
