serg

Reputation: 1023

How to delete files from the HDFS?

I just downloaded the Hortonworks sandbox VM; inside it there is Hadoop version 2.7.1. I added some files using the

hadoop fs -put /hw1/* /hw1

...command. After that I deleted the added files with the

hadoop fs -rm /hw1/*

...command, and then emptied the trash with the

hadoop fs -expunge

...command. But the DFS Remaining space did not change after the trash was emptied, even though I can see that the data really was removed from /hw1/ and from the trash. I have the fs.trash.interval parameter set to 1.

Actually I can still find all my data split into blocks in the /hadoop/hdfs/data/current/BP-2048114545-10.0.2.15-1445949559569/current/finalized/subdir0/subdir2 folder, and this really surprises me, because I expected them to be deleted.

So my question is: how do I delete the data so that it is really gone? After a few rounds of adding and deleting I ran out of free space.
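
For reference, the whole sequence looks like this (a minimal sketch; the /hw1 paths are the ones above, and hdfs dfsadmin -report is only used to watch the DFS Remaining figure):

hadoop fs -put /hw1/* /hw1                      # upload the files
hadoop fs -rm /hw1/*                            # delete them (they go to the trash)
hadoop fs -expunge                              # empty the trash
hdfs dfsadmin -report | grep "DFS Remaining"    # check whether the space came back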

Upvotes: 32

Views: 167193

Answers (6)

Karol

Reputation: 51

If you also need to skip the trash, the following command works for me:

hdfs dfs -rm -R -skipTrash /path/to/HDFS/file
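
A small sketch of the difference (the two -rm calls are alternatives, not a sequence; /hw1 is the path from the question, and /user/root/.Trash assumes you run as root, since trash lives under /user/<username>/.Trash):

hdfs dfs -rm -R /hw1                  # without -skipTrash: the data is moved to the trash first...
hdfs dfs -ls /user/root/.Trash        # ...and sits here until it is expunged
hdfs dfs -rm -R -skipTrash /hw1       # with -skipTrash: the blocks are scheduled for deletion immediately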

Upvotes: 5

Giorgos Myrianthous

Reputation: 39950

You can use

hdfs dfs -rm -R /path/to/HDFS/file

since hadoop dfs has been deprecated.
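
Both spellings run the same shell under the hood; in Hadoop 2.x the older entry point just prints a deprecation notice first (a quick sketch, with /hw1 taken from the question):

hadoop dfs -rm -R /hw1    # deprecated entry point, warns before running
hdfs dfs -rm -R /hw1      # current entry point, same effect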

Upvotes: 21

maxteneff

Reputation: 1531

Your problem comes from how HDFS works at its core. In HDFS (and in many other file systems) physically deleting a file isn't the fastest operation. Since HDFS is a distributed file system and usually keeps at least 3 replicas of each file on different servers, each replica (which may consist of many blocks on different hard drives) must be deleted in the background after your request to delete the file.

The official Hadoop documentation tells us the following:

The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.
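
If you want to watch the space actually being reclaimed, one way (a small sketch; the local path is the datanode directory from the question) is to poll the NameNode report until the background deletion catches up:

watch -n 30 'hdfs dfsadmin -report | grep "DFS Remaining"'   # cluster-wide free space
du -sh /hadoop/hdfs/data                                     # size of the local block files on the sandbox datanode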

Upvotes: 14

Flowra

Reputation: 1418

What works for me:

hadoop fs -rmr <your Directory>

(-rmr is the older, always-recursive shorthand for -rm -r, so it needs no extra -R flag.)

Upvotes: 5

BruceWayne

Reputation: 3374

Try hadoop fs -rm -R URI

The -R option deletes the directory and any content under it recursively.

Upvotes: 17

serg

Reputation: 1023

Durga Viswanath Gadiraju is right: it is a question of time. Maybe my PC is slow (and it also runs a VM), but after about 10 minutes the files are physically deleted if you use the procedure from my question. Note: set the fs.trash.interval parameter to 1, otherwise by default files won't be deleted sooner than after 6 hours.
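
To double-check what your cluster is actually using, hdfs getconf prints the effective value of the parameter (in minutes):

hdfs getconf -confKey fs.trash.interval
# 1   -> trash checkpoints are removed after about a minute
# 360 -> the 6-hour default mentioned above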

Upvotes: 1
