flypen

Reputation: 2585

How does HDFS write many files to the underlying local file system at the same time?

Before I understand HDFS completely, I want to know how it keeps performance high when it writes many files at the same time.

For example, suppose 100 files are currently being read or written on one datanode. I assume it doesn't use just a few threads doing normal synchronous IO operations. Does HDFS create 100 worker threads to handle them, or does it use some asynchronous IO mechanism that avoids needing so many threads?

Upvotes: 0

Views: 193

Answers (1)

Liyin Liang

Reputation: 11

Yes, the datanode will serve the requests with 100 threads. A Hadoop HDFS datanode has an upper bound on the number of files it will serve at any one time, controlled by the dfs.datanode.max.xcievers parameter. The default upper bound is 256.
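
If you expect more concurrent readers and writers than the default allows, you can raise the limit in hdfs-site.xml. A minimal sketch is below; the value 4096 is only an illustrative choice, not a recommendation, and note that the misspelling "xcievers" is how the property is actually named in Hadoop:

    <!-- hdfs-site.xml: raise the cap on concurrent transfer threads
         (DataXceivers) per datanode. 4096 is an illustrative value;
         tune it to your workload and check the default for your version. -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
    </property>

The datanode reads this setting at startup, so it must be restarted for the change to take effect.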

Upvotes: 1
