Reputation: 41
I tried recovering a disk image from an NTFS hard drive that is 50% unreadable. A side effect of that is that many of the recovered files have the correct filename, type, and size as the originals, but instead of containing any useful data they are just filled with 00 00 00 00 etc. when viewed in a hex editor. Since these files are of no use but still take up disk space, is there a way to automate finding and deleting all of them?
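For reference, one way to confirm that a single suspect file really is all zero bytes is to compare it against /dev/zero, limited to the file's own length. This is a sketch assuming GNU cmp and stat; file.bin is a placeholder name:

cmp -n "$(stat -c %s file.bin)" file.bin /dev/zero && echo "all zeros"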
Upvotes: 2
Views: 832
Reputation: 1
You can find the files that consist entirely of NUL bytes using the approach from https://stackoverflow.com/a/20226139/969504 (tr deletes every NUL byte, and read then fails if nothing is left, so only all-zero files are echoed):
find 2>/dev/null -type f -size +0c -exec bash -c '<"$0" tr -d "\0" | read -n 1 || echo "$0"' {} ';'
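If you want the same pass to delete the zero-filled files rather than just list them, the echo can be swapped for rm. This is a sketch; run the listing version above first to preview what would be removed:

find . 2>/dev/null -type f -size +0c -exec bash -c '<"$0" tr -d "\0" | read -n 1 || rm -v -- "$0"' {} ';'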
Upvotes: 0
Reputation: 1
I tested the [^0] and [^\000] patterns on zero-filled files of length 1000B, 2000B, 3000B, ..., 5000000B.
Both appear to work correctly.
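For anyone who wants to reproduce the test, zero-filled files of a given size are easy to generate from /dev/zero. This is a sketch using a small subset of the sizes above; the filenames are arbitrary:

for size in 1000 2000 3000 5000000; do
    head -c "$size" /dev/zero > "zeros_${size}.bin"
done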
Upvotes: 0
Reputation: 41
After some research, I came up with the following (the pattern is quoted so the shell can't glob-expand it, searching . instead of * also catches hidden files, and xargs -r avoids running rm when nothing matched):
grep --ignore-case -r -L --null '[^0]' . | xargs -0 -r rm --
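Since this deletes files irreversibly, it's worth previewing the list first; the same pipeline can feed a harmless command instead of rm. A sketch, assuming GNU xargs:

grep --ignore-case -r -L --null '[^0]' . | xargs -0 -r ls -l --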
Upvotes: 1