leon

Reputation: 10395

Hadoop Datanode Xcievers Error

I have built a storage system using the HDFS API and am now running some performance tests against it. I generated a large number of concurrent file-retrieval requests with SIEGE (for example: siege -c 500 -r 1 "http://bucket1.s3.bigdatapro.org/1.jpg"). However, I ran into the following problem on the datanodes:

2013-06-17 21:08:56,987 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(146.169.35.109:50010, storageID=DS-1983611132-146.169.35.109-50010-1350751070203, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: xceiverCount 4097 exceeds the limit of concurrent xcievers 4096
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:156)

I have already setup this:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

Should I increase this to a higher value? Are there any side effects to setting it really high, say 10000000? I have also increased the maximum number of open files to 50000. Am I still missing something, or have I done something wrong?

Thanks

Upvotes: 0

Views: 6389

Answers (1)

Raviteja Chirala

Reputation: 49

You can increase the xcievers count further based on how many concurrent threads your application needs. In my experience, though, setting it very high is counterproductive: we took a significant performance hit after raising it aggressively.
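For illustration, a moderate bump in hdfs-site.xml might look like the one below (8192 is an arbitrary example value, not a recommendation; note that on Hadoop 2.x and later this property was renamed to dfs.datanode.max.transfer.threads, with the old misspelled name kept as a deprecated alias):

<property>
  <!-- illustrative value only; the DataNode must be restarted for it to take effect -->
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>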

Also check the ulimit settings (in particular the open-file limit) on your DataNodes, in case they are causing the issue.
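As a quick sanity check, something like the following on a DataNode host shows both the shell's limit and the effective limit of the running DataNode process (the pgrep pattern assumes the DataNode was started with the standard main class and that only one such process is running):

# open-file limit of the current shell/user
ulimit -n
# effective limit of the running DataNode process
cat /proc/$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode)/limits | grep 'open files'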

Upvotes: 1
