Reputation: 3099
I'm surprised by the performance of ftruncate and fsync on a big file. I wrote a program that creates an empty file on a 64-bit Linux system, truncates it to 0xffffffff bytes, and then fsyncs it.
After all the operations, the file is correctly created with that length.
I see that the ftruncate costs about 1442 microseconds and the fsync costs only 4 microseconds.
Is such high performance normal? Have all the bytes really been written to disk? If not, how can I ensure they are synced?
#include <sys/time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <iostream>
#include <string>

static const size_t __tamFile__ = 0xffffffff;

int main(int, char **)
{
    std::string fichero("./testTruncate.dat");
    unlink(fichero.c_str());
    int fd = open(fichero.c_str(), O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
    if (fd != -1)
    {
        struct timeval t1, t2;
        timerclear(&t1);
        timerclear(&t2);

        // Time the ftruncate() that extends the empty file to ~4 GiB.
        gettimeofday(&t1, NULL);
        ftruncate(fd, __tamFile__);
        gettimeofday(&t2, NULL);
        unsigned long long msecTruncate = static_cast<unsigned long long>((((t2.tv_sec * 1E6) + t2.tv_usec) - ((t1.tv_sec * 1E6) + t1.tv_usec)));

        // Time the sync (fdatasync() flushes the file data to disk).
        gettimeofday(&t1, NULL);
        fdatasync(fd);
        gettimeofday(&t2, NULL);
        unsigned long long msecFsync = static_cast<unsigned long long>((((t2.tv_sec * 1E6) + t2.tv_usec) - ((t1.tv_sec * 1E6) + t1.tv_usec)));

        std::cout << "Total microsec truncate: " << msecTruncate << std::endl;
        std::cout << "Total microsec fsync: " << msecFsync << std::endl;
        close(fd);
    }
    return 0;
}
Upvotes: 0
Views: 704
Reputation: 37248
Which Linux kernel version do you have, which filesystem, and which mount options (in particular, are barriers enabled)?
On 64-bit Linux 2.6.32, ext4 with barriers enabled (the default), I get:
$ ~/src/cpptest/truncsync
Total microsec truncate: 32
Total microsec fsync: 266
Total microsec close: 14
Otherwise the same, but on an NFS-mounted filesystem, I get:
$ ./truncsync
Total microsec truncate: 38297
Total microsec fsync: 6
Total microsec close: 6
$ ./truncsync
Total microsec truncate: 3454967
Total microsec fsync: 8
Total microsec close: 330
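(For reference, the extra "close" line in the output above comes from timing the close() call the same way as the other two operations. A minimal sketch of that addition, reusing the question's timing style and assuming the same fd, t1 and t2 variables, might look like:

// Time the close() call in the same way as ftruncate()/fdatasync();
// assumes fd, t1 and t2 from the question's code are in scope.
gettimeofday(&t1, NULL);
close(fd);
gettimeofday(&t2, NULL);
unsigned long long msecClose = static_cast<unsigned long long>(
    (((t2.tv_sec * 1E6) + t2.tv_usec) - ((t1.tv_sec * 1E6) + t1.tv_usec)));
std::cout << "Total microsec close: " << msecClose << std::endl;

This matters on NFS, where writes may only be committed to the server when the file is closed.)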
Upvotes: 0
Reputation: 182764
I wrote a program that creates an empty file on a 64-bit Linux system, truncates it to 0xffffffff bytes, and then fsyncs it.
Unless you write something to it, it is very likely that the file contains holes.
From TLPI:
What happens if a program seeks past the end of a file, and then performs I/O? A call to read() will return 0, indicating end-of-file. Somewhat surprisingly, it is possible to write bytes at an arbitrary point past the end of the file.
The space in between the previous end of the file and the newly written bytes is referred to as a file hole. From a programming point of view, the bytes in a hole exist, and reading from the hole returns a buffer of bytes containing 0 (null bytes).
File holes don’t, however, take up any disk space. The file system doesn’t allocate any disk blocks for a hole until, at some later point, data is written into it.
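One way to see this for the truncated file is to compare its logical length with the space actually allocated on disk. A minimal sketch (the path ./testTruncate.dat is taken from the question's code; on Linux, st_blocks is counted in 512-byte units):

#include <sys/stat.h>
#include <iostream>

int main()
{
    struct stat sb;
    if (stat("./testTruncate.dat", &sb) == 0)
    {
        // st_size is the logical length; st_blocks counts 512-byte units
        // actually allocated on disk. A file that is all holes has
        // (close to) zero allocated space.
        std::cout << "Logical size:    " << sb.st_size << " bytes\n";
        std::cout << "Allocated space: " << sb.st_blocks * 512LL << " bytes\n";
    }
    return 0;
}

For a file created only by ftruncate(), this will typically report a multi-gigabyte logical size but almost no allocated space until data is actually written into the hole, which is why the fsync has so little to do.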
Upvotes: 7