Reputation: 2585
I don't yet fully understand HDFS, and I want to know how it keeps performance high when it writes many files at the same time.
For example, suppose 100 files are currently being read or written on one datanode. I assume it doesn't use only a few threads doing normal synchronous IO operations. Does HDFS create 100 worker threads to handle them, or does it use some asynchronous IO mechanism that avoids so many threads?
Upvotes: 0
Views: 193
Reputation: 11
Yes, the datanode will serve those requests with 100 threads, one per file. A Hadoop HDFS datanode does have an upper bound on the number of files it will serve at any one time, set by the dfs.datanode.max.xcievers parameter. The default upper bound is 256.
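To make the model concrete, here is a minimal sketch of the thread-per-stream pattern described above: a blocking accept loop that hands each incoming transfer to its own worker thread and refuses new streams once a configured ceiling is reached. The class and method names are illustrative, not the actual Hadoop code, and the constant merely stands in for whatever dfs.datanode.max.xcievers is set to in hdfs-site.xml.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only, not Hadoop's real DataXceiver classes.
public class BoundedXceiverServer {
    private static final int MAX_XCEIVERS = 256;  // stands in for dfs.datanode.max.xcievers
    private final AtomicInteger active = new AtomicInteger();

    public void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket peer = server.accept();
                if (active.incrementAndGet() > MAX_XCEIVERS) {
                    active.decrementAndGet();
                    peer.close();                 // over the ceiling: reject the new stream
                    continue;
                }
                new Thread(() -> {                // one worker thread per open stream
                    try (Socket s = peer) {
                        handleTransfer(s);        // blocking read/write of one block
                    } catch (IOException ignored) {
                    } finally {
                        active.decrementAndGet();
                    }
                }).start();
            }
        }
    }

    private void handleTransfer(Socket s) throws IOException {
        // placeholder for the synchronous block read/write loop
    }
}
```

Blocking IO with one thread per open stream keeps the per-stream code simple; the configured ceiling is what keeps the thread count from growing without bound under load.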
Upvotes: 1