user2650277

Reputation: 6729

Most efficient way to delete old files using bash

I am currently executing this command every 30 minutes via a bash script (on CentOS 6) to delete files that are around 1 hour old. The problem is that the find command is using 45% of my CPU at all times. Is there a way to optimise it? There are about 200k items in the cache at any point in time.

find /dev/shm/cache -type f -mmin +59 -exec rm -f {} \;

Upvotes: 4

Views: 126

Answers (2)

janos

Reputation: 124648

You can try to run the process at a lower priority using nice:

nice -n 19 find ...
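For example, applied to the full command from the question:

nice -n 19 find /dev/shm/cache -type f -mmin +59 -exec rm -f {} \;

Note that nice only lowers the CPU scheduling priority; find still does the same amount of work, it just yields to other processes more readily.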

Another thing: it might not make a difference in performance, but a simpler way to delete matching files with find is -delete instead of -exec:

find /dev/shm/cache -type f -mmin +59 -delete

... that is, if your version of find supports it (thanks @chepner for pointing this out); modern versions do.
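If you're not sure whether your find supports -delete, a quick check (assuming GNU findutils, which is what CentOS 6 ships) is:

find --version

or look for the -delete action in man find.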

Upvotes: 5

Barmar

Reputation: 780724

Your command starts a new invocation of rm for each file that's found, which can be very expensive. You can use an alternate syntax that passes multiple arguments to rm, in batches as large as the OS allows. This is done by ending the command with + instead of ;.

find /dev/shm/cache -type f -mmin +59 -exec rm -f {} +

You can also use the -delete option, as in janos's answer; it should be even more efficient because it doesn't have to run an external process at all. I'm showing this approach because it generalizes to other commands that may not have an equivalent option, e.g.

-exec grep foo {} +
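To get a feel for how much the + batching helps, here is a rough, non-destructive benchmark sketch on a throwaway directory (the path and file count are made up for illustration):

mkdir -p /tmp/cache-test && touch /tmp/cache-test/file{1..1000}
time find /tmp/cache-test -type f -exec true {} \;   # one process per file
time find /tmp/cache-test -type f -exec true {} +    # one process per batch
rm -r /tmp/cache-test

Using true instead of rm keeps the two runs comparable, since no files are deleted between them.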

Upvotes: 2
