Reputation: 11
I want to fill ALL files, RECURSIVELY from the root directory, with random bytes. ONLY FILES; folders are to be left untouched.
I DON'T WANT TO DELETE the files, just fill them with random bytes.
From my research, the command to fill a single file could be something like:
dd if=/dev/random of=target-files bs=1M
And to find all files recursively I should use:
find . -name "*.*"
My questions are:
Is it possible to achieve my goal by joining these two commands? (Pipe them? How?)
Is there another, easier way to achieve the same result?
Thanks for any help ;-)
Upvotes: 1
Views: 561
Reputation: 1689
I love dd (really: I love it!), but in this case you can use shred, which is designed for exactly this kind of task. It's part of the GNU coreutils package (installed on [almost] all Linux systems).
find rootdirectory -type f -print0 | xargs --null --max-args=1 shred --force --iterations=4 --random-source=/dev/urandom --verbose
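If your find is GNU find, you can get the same effect without xargs by letting find batch the file names itself. A minimal variant, assuming the same shred options and that your find supports -exec ... {} +:
# assumes GNU find; {} + passes many files to each shred invocation instead of one per call
find rootdirectory -type f -exec shred --force --iterations=4 --random-source=/dev/urandom --verbose {} +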
Upvotes: 0
Reputation: 15204
And a third approach :)
find / -type f -printf "%s\t%p\n" | while read size name ; do dd if=/dev/urandom of="${name}" bs=1c count=${size}; done
Be prepared to wait a LONG time :}
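Part of the reason it is slow: bs=1c makes dd copy one byte per block, so every byte of every file costs a separate read/write. A sketch of a faster variant that still preserves each file's exact size, assuming bash (for $'\t') and GNU head (for head -c):
# assumes bash and GNU head; overwrites each file with exactly its original number of random bytes
find / -type f -printf "%s\t%p\n" | while IFS=$'\t' read -r size name; do head -c "$size" /dev/urandom > "$name"; done
It has the same limitation as the original: file names containing newlines will still break the loop.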
Upvotes: 0
Reputation: 3827
Find + xargs + dd + wc would do that.
find / -type f -print0 | xargs -0 -Ifilename sh -c 'dd if=/dev/urandom of="filename" bs=`wc -c "filename" | cut -d" " -f1` count=1'
How it works:
- find / -type f -print0 lists every regular file and terminates each name with a NUL byte, so unusual file names survive.
- xargs -0 -Ifilename runs the sh -c command once per file, substituting the file name for filename.
- wc -c "filename" | cut -d" " -f1 reports the file's size in bytes, which becomes dd's block size.
- dd then writes exactly one block of that size from /dev/urandom over the file, so the original length is preserved.
Upvotes: 0
Reputation: 199
You may try piping the find result into a while loop and dd-ing /dev/urandom into each file.
$ find myfolder -type f | while read -r fd; do dd if=/dev/urandom of="$fd" bs=1M count=1; done
If you want to retain the file size, you can calculate the number of 1K blocks and pass it to dd as the count:
$ find myfolder -type f | while read -r fd; do FSIZ=$(stat -c %s "$fd"); CNT=$(expr "$FSIZ" / 1024); dd if=/dev/urandom of="$fd" bs=1k count=$CNT; done
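One caveat with the 1K-block version: expr rounds down, so the last partial kilobyte of each file is left as-is, and files smaller than 1K are not overwritten at all (the count comes out as 0). A sketch that also overwrites the remainder with a second dd call, assuming a dd that supports seek= and conv=notrunc (GNU dd does):
# first dd overwrites the whole 1K blocks, second dd overwrites the leftover bytes at the end;
# conv=notrunc stops dd from truncating the file, so the original size is preserved
find myfolder -type f | while read -r fd; do
  FSIZ=$(stat -c %s "$fd")
  CNT=$((FSIZ / 1024))
  REM=$((FSIZ % 1024))
  [ "$CNT" -gt 0 ] && dd if=/dev/urandom of="$fd" bs=1k count="$CNT" conv=notrunc
  [ "$REM" -gt 0 ] && dd if=/dev/urandom of="$fd" bs=1 count="$REM" seek=$((CNT * 1024)) conv=notrunc
done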
Upvotes: 1