Reputation: 2745
I want to develop an application that opens a file and overwrites all of its bytes at the same physical location, in order to make recovery hard (some kind of data wiper).
So how can I be sure that, if I open an n MB file with QFile
and write n MB of dummy data over it, the data will be overwritten in the same physical location (on both Windows and Linux)?
Upvotes: 1
Views: 133
Reputation: 98495
This smells of an XY problem: what you really want is to make the data inaccessible. Overwriting the data of the file itself is only one of the possible approaches.
Another approach is to make the problem smaller: instead of merely overwriting the file, never store it as plaintext in the first place, but encrypted, e.g. using AES. As soon as the key is inaccessible, the data becomes inaccessible. The key is small - 16 to 32 bytes in size.
Losing such a small key is much easier than losing the whole file.
An approach that I've found works quite well is to distribute the bytes of the key across several key files that are long enough for the filesystem to use dedicated blocks for them. 128 kB seems to be sufficient, i.e. use 16 key files of 128 kB each to store 16-byte keys. Memory-map the key files so that the filesystem is likely to allocate dedicated blocks for them, rather than coalescing them with other data. On first use, fill the key files with random data.
For each key you store, distribute it across the key files, putting one byte of the key at the same offset in each file, i.e. key[key_no][key_offset] <-> key_file[key_offset][key_no]. To lose the protected file, overwrite its key with random data. Each protected file has its own key - do not share keys between files.
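A minimal sketch of the scheme (the file names, the use of plain file streams instead of memory-mapping, and the helper names are my assumptions for illustration - a real implementation would memory-map the key files as described above):

```cpp
#include <array>
#include <fstream>
#include <random>
#include <string>
#include <vector>

constexpr int kNumKeyFiles = 16;                  // one key file per key byte
constexpr std::size_t kKeyFileSize = 128 * 1024;  // 128 kB so the FS uses dedicated blocks

// Hypothetical naming scheme for the key files.
std::string keyFileName(int i) { return "keyfile_" + std::to_string(i) + ".bin"; }

// On first use, create the key files and fill them with random data.
void initKeyFiles() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> byte(0, 255);
    for (int i = 0; i < kNumKeyFiles; ++i) {
        std::vector<char> buf(kKeyFileSize);
        for (auto& b : buf) b = static_cast<char>(byte(rng));
        std::ofstream f(keyFileName(i), std::ios::binary);
        f.write(buf.data(), buf.size());
    }
}

// Store byte i of the key in key file i, all at the same offset:
// key[key_no][key_offset] <-> key_file[key_offset][key_no].
void storeKey(std::size_t offset, const std::array<unsigned char, kNumKeyFiles>& key) {
    for (int i = 0; i < kNumKeyFiles; ++i) {
        std::fstream f(keyFileName(i), std::ios::binary | std::ios::in | std::ios::out);
        f.seekp(offset);
        f.put(static_cast<char>(key[i]));
    }
}

std::array<unsigned char, kNumKeyFiles> loadKey(std::size_t offset) {
    std::array<unsigned char, kNumKeyFiles> key{};
    for (int i = 0; i < kNumKeyFiles; ++i) {
        std::ifstream f(keyFileName(i), std::ios::binary);
        f.seekg(offset);
        key[i] = static_cast<unsigned char>(f.get());
    }
    return key;
}

// "Lose" a key: overwrite its slot in every key file with fresh random data.
void wipeKey(std::size_t offset) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> byte(0, 255);
    std::array<unsigned char, kNumKeyFiles> junk{};
    for (auto& b : junk) b = static_cast<unsigned char>(byte(rng));
    storeKey(offset, junk);
}
```

Each protected file would get its own `offset` slot; wiping that slot destroys exactly one key.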
The adversary would need to recover the prior contents of all the key files as they were at the same point in time. Even if they succeed in recovering a few of them, each recovered key file provides only 1/16th of the key and reduces the brute-force effort by a factor of just 256.
Upvotes: 1
Reputation: 945
For "usual" file systems on HDD drives, it should be enough to just seek to the start of the file and write the right number of bytes. They will be put in the same physical location.
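A sketch of that approach with the standard library (QFile would work the same way; the key point is opening the file for update rather than truncating it, so the existing blocks are reused - the chunk size and zero-fill pattern are my choices for illustration):

```cpp
#include <algorithm>
#include <fstream>
#include <string>
#include <vector>

// Overwrite a file's contents in place without truncating it first.
// Opening with in|out keeps the existing allocation, so on a classic
// filesystem on an HDD the new bytes land in the same blocks.
bool overwriteInPlace(const std::string& path) {
    std::fstream f(path, std::ios::binary | std::ios::in | std::ios::out);
    if (!f) return false;
    f.seekg(0, std::ios::end);
    std::streamoff size = f.tellg();          // total number of bytes to overwrite
    f.seekp(0, std::ios::beg);
    std::vector<char> junk(4096, '\0');       // overwrite pattern; could be random data
    for (std::streamoff written = 0; written < size;) {
        std::streamoff chunk =
            std::min<std::streamoff>(junk.size(), size - written);
        f.write(junk.data(), chunk);
        written += chunk;
    }
    f.flush();  // a real wiper should also fsync() / FlushFileBuffers()
    return f.good();
}
```

Note that flushing only hands the data to the OS; forcing it to the platter needs an explicit sync call as the comment says.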
However, it's quite hard to do this on an SSD, because of write amplification: the data is not actually written to the same physical location, even if the operating system thinks it is. Instead, for an SSD the TRIM command should be used, which marks the blocks as free so that the SSD controller can erase them and reuse them later. Modern file systems such as ext4 or NTFS already do this for deleted files.

In summary: on an HDD your method is good and applicable. On an SSD it would only create a few extra copies of the data, so I would avoid it and simply delete the file, hoping that the file system driver sends TRIM to the SSD controller for me.
Upvotes: 1