Adrian Seungjin Lee

Reputation: 1666

What exactly does "Non DFS Used" mean?

This is what I saw on Web UI recently

 Configured Capacity     :   232.5 GB
 DFS Used    :   112.44 GB
 Non DFS Used    :   119.46 GB
 DFS Remaining   :   613.88 MB
 DFS Used%   :   48.36 %
 DFS Remaining%  :   0.26 %

and I'm so confused that Non DFS Used takes up more than half of the capacity,

which I think means half of my Hadoop storage is being wasted.

After spending a pointless amount of time searching, I just formatted the namenode and started from scratch.

Then I copied one huge text file (about 19 gigabytes) from local to HDFS (successfully).

Now the UI says

Configured Capacity  :   232.5 GB
DFS Used     :   38.52 GB
Non DFS Used     :   45.35 GB
DFS Remaining    :   148.62 GB
DFS Used%    :   16.57 %
DFS Remaining%   :   63.92 %

Before copying, DFS Used and Non DFS Used were both 0.

Because DFS Used is approximately double the original text file size, and I configured a replication factor of 2,

I guess that DFS Used is composed of 2 copies of the original plus metadata.
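
(A quick sanity check on that guess: 19 GB × 2 replicas ≈ 38 GB, which lines up with the 38.52 GB of DFS Used shown above.)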

But I still have no idea where Non DFS Used came from, and why it takes up so much more capacity than DFS Used.

What happened? Did I make a mistake?

Upvotes: 26

Views: 30954

Answers (5)

world watera

Reputation: 3

One more thing.
Non DFS Used = 100 GB (Total) - 30 GB (Reserved) - 10 GB (DFS Used) - 50 GB (Remaining) = 10 GB
Because ext3/ext4 reserves 5% by default (see the filesystem's reserved block count), it should really be:
Non DFS Used = 100 GB (Total) - 30 GB (Reserved by App) - 5 GB (Reserved by FS) - 10 GB (DFS Used) - 50 GB (Remaining) = 5 GB

Get the "Reserved block count" from sudo tune2fs -l /dev/sdm1.
BTW, use tune2fs -m 0.2 /dev/sdm1 to tune the reserved space down to 0.2%.
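
A quick sketch of turning those tune2fs fields into a byte count (the device name is just the one from the example above; substitute your DataNode's data disk):

    sudo tune2fs -l /dev/sdm1 | grep -E 'Reserved block count|Block size'
    # Reserved space in bytes = Reserved block count * Block size
    # e.g. 1310720 reserved blocks * 4096-byte blocks ≈ 5 GiB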

Upvotes: 1

Sumukh

Reputation: 749

Some of the non-DFS usage will be cache files stored by the NodeManager. You can check the path under the yarn.nodemanager.local-dirs property in yarn-site.xml.

You can refer to the default yarn-site.xml for details.
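
A quick way to check this on a node (a sketch, assuming a stock install; the default value in yarn-default.xml is ${hadoop.tmp.dir}/nm-local-dir, which normally lands under /tmp):

    # Show the value if your cluster overrides it:
    grep -A1 'yarn.nodemanager.local-dirs' "$HADOOP_CONF_DIR/yarn-site.xml"
    # Otherwise measure the default location:
    du -sh /tmp/hadoop-*/nm-local-dir 2>/dev/null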

Upvotes: 0

haridsv

Reputation: 9693

The correct simplified definition is: "any data that is not written by HDFS but lives in the same filesystem(s) as dfs.data.dirs". In other words, if you use hdfs dfs commands to copy data, it ends up under dfs.data.dirs and is counted as "DFS usage", but if you use the regular cp command to copy files onto the same volume as dfs.data.dirs, it becomes "non-DFS usage".
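
A minimal illustration of the two paths (the file names and mount points here are hypothetical):

    # Written through HDFS -> counted as DFS Used:
    hdfs dfs -put /local/big.txt /user/adrian/big.txt
    # Written straight onto the same volume that hosts dfs.data.dirs -> counted as Non DFS Used:
    cp /local/big.txt /data/hadoop/scratch/big.txt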

Upvotes: 2

Tim Fei

Reputation: 426

"Non DFS used" is calculated by following formula:

Non DFS Used = Configured Capacity - Remaining Space - DFS Used
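
Plugging in the question's first snapshot: 232.5 GB - 0.6 GB (Remaining) - 112.44 GB (DFS Used) ≈ 119.46 GB, which is exactly the Non DFS Used reported there.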

It is still confusing, at least for me.

Because Configured Capacity = Total Disk Space - Reserved Space.

So Non DFS Used = (Total Disk Space - Reserved Space) - Remaining Space - DFS Used

Let's take an example. Assume I have a 100 GB disk, and I set the reserved space (dfs.datanode.du.reserved) to 30 GB.

On the disk, the system and other files use up 40 GB and DFS Used is 10 GB. If you run df -h, you will see that the available space is 50 GB for that disk volume.

In the HDFS web UI, it will show

Non DFS Used = 100 GB (Total) - 30 GB (Reserved) - 10 GB (DFS Used) - 50 GB (Remaining) = 10 GB

So what it actually means is: you initially configured 30 GB reserved for non-DFS usage and 70 GB for HDFS. However, it turns out that non-DFS usage exceeds the 30 GB reservation and eats up 10 GB of space that should belong to HDFS!

The term "Non DFS Used" should really be renamed to something like "how much of the configured DFS capacity is occupied by non-DFS use".

And one should stop trying to figure out why non-DFS usage is so high from inside Hadoop alone.

One useful command is lsof | grep delete, which helps you identify open files that have been deleted. Sometimes Hadoop processes (like hive, yarn, mapred, and hdfs) hold references to already-deleted files, and those references keep occupying disk space.

Also, du -hsx * | sort -rh | head -10 lists the ten largest folders in the current directory.
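
A compact diagnostic pass along those lines might look like this on a DataNode (the mount point /data/hadoop is an assumption; use your own dfs.data.dir volume):

    df -h /data/hadoop                              # usage as the OS sees it
    du -hsx /data/hadoop/* | sort -rh | head -10    # ten largest consumers on the volume
    sudo lsof +L1 | head                            # deleted-but-still-open files (link count 0)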

Upvotes: 38

highlycaffeinated

Reputation: 19867

Non DFS Used is any data in the filesystem of the data node(s) that isn't in dfs.data.dirs. This includes log files, MapReduce shuffle output, and local copies of data files (if you put them on a data node). Use du or a similar tool to see what's taking up the space in your filesystem.
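
One way to scope that du to just the non-DFS portion (a sketch assuming the volume is mounted at /data and the block directory is named dfs; adjust both to your layout):

    du -shx --exclude=dfs /data    # everything on the data volume except the HDFS block directory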

Upvotes: 7
