Reputation: 545
In general, once the handle to the file is open, the file is open, and nobody changing the directory structure can change that. The file can be moved, renamed, or something else can be put in its place; it remains open by construction, because in Linux/Unix there is no real delete for files, only unlink(2), which doesn't necessarily delete the file - it just removes the link from the directory. Result: the file handle will stay valid whatever happens to the file.
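For illustration, a minimal sketch of that behaviour (the file name and its contents are made-up placeholders): the data stays readable through the open handle even after the name has been unlinked.
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *fp = fopen("demo.txt", "w+");   /* "demo.txt" is just an example path */
    if (fp == NULL)
        return 1;

    fputs("hello\n", fp);
    rewind(fp);

    unlink("demo.txt");   /* removes the directory entry, not the open file */

    char buf[32];
    if (fgets(buf, sizeof buf, fp) != NULL)
        printf("still readable after unlink: %s", buf);

    fclose(fp);   /* only now is the storage actually released */
    return 0;
}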
However, if the underlying device disappears (e.g. the file is on a USB stick that is removed from the system) then the file won't be accessible any longer.
I have a program that opens a huge binary file (> 4 GB) of some other application at startup. Afterwards, it watches the file for changes by querying
fseek(filepointer, 0L, SEEK_END);
long int pos = ftell(filepointer);
quite often (every few milliseconds) and reverts to the previous position if the result differs from pos_before. In that case, fgets is used to read the new data from the file.
Hence, only the tail of the file is scanned for changes, making the whole process fairly lightweight. However, it is affected by the potential problem that the permanently open file pointer may become invalid if the file system changes (see above).
The code does not need to be portable to any non-Linux/Unix systems.
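A rough sketch of the polling step described above (filepointer, pos_before, the buffer size and the processing are assumptions of the sketch, not the actual program; the file is assumed to only grow):
#include <stdio.h>

/* Check once whether the file has grown and, if so, read the appended data.
 * Returns the new end-of-file position, or -1 on error. */
long poll_tail(FILE *filepointer, long pos_before)
{
    char line[4096];

    if (fseek(filepointer, 0L, SEEK_END) != 0)
        return -1;                         /* seek failed - report/handle */

    long pos = ftell(filepointer);
    if (pos == pos_before)
        return pos;                        /* nothing new */

    /* go back to the previously known end and read the new data */
    if (fseek(filepointer, pos_before, SEEK_SET) != 0)
        return -1;
    while (fgets(line, sizeof line, filepointer) != NULL) {
        /* process the newly appended line(s) here */
    }
    return pos;
}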
Question: How can I detect if the file pointer is still valid after having opened the file successfully, e.g. by using fcntl(fileno(filepointer), F_GETFD) for testing?
Alternative question: which of the following is the cheapest way to determine the current file size?
- fseek(filepointer, 0L, SEEK_END); (might be very slow and cause a lot of I/O), or
- _filelength(fileno(filepointer)); (unclear if this will cause lots of I/O), or
- stat(filename, &st); st.st_size; (unclear if this will cause any I/O)
Upvotes: 2
Views: 1249
Reputation: 70981
"How can I detect if the file pointer is still valid after having opened the file successfully?"
If the FILE* was not explicitly fclose()d by the process that opened (or inherited) it, and if the process in question did not invoke Undefined Behaviour, the FILE* is valid by definition.
In case any underlying layer cannot fulfill requests issued on the FILE* fp (typically made indirectly via calls into libc like fread(), fwrite() or fseek(), or directly by doing, for example, read(fileno(fp))), the failing functions should return an error indication and set errno accordingly; typically this would be EIO.
Just implement complete error checking and handling and you won't run into any issues.
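A hedged sketch of what such error checking around the tail reads might look like (fp and the buffer size are placeholders):
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Try to read more data; distinguish "no new data yet" from a real error. */
int read_more(FILE *fp)
{
    char buf[4096];

    errno = 0;
    if (fgets(buf, sizeof buf, fp) == NULL) {
        if (ferror(fp)) {
            /* e.g. EIO when the underlying device has gone away */
            fprintf(stderr, "read failed: %s\n", strerror(errno));
            clearerr(fp);
            return -1;
        }
        return 0;          /* plain end of file - nothing new yet */
    }

    /* process buf here ... */
    return 1;
}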
Upvotes: 2
Reputation: 6537
Well, usually an open file will prevent unmounting the filesystem, so it shouldn't just disappear from under you. Though with USB disks etc., there is of course the possibility of the user pulling the device without asking the system.
But it would be nice of the process to not prevent clean unmounting. That requires two things:
Running stat(2) periodically on the path would be the way to do this. You can detect changes to the file from modifications to mtime, ctime, or the file size. Errors, or changes in the inode number or the containing device (st_dev), might indicate the file is no longer accessible or isn't the same file any more. React depending on application requirements.
(That is, assuming you're interested in the file currently pointed to by that name, and not in the same inode you opened.)
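A sketch of such a check (path, orig_dev and orig_ino are assumed to have been recorded when the file was first opened):
#include <sys/types.h>
#include <sys/stat.h>

/* Returns 1 if the path still refers to the file we opened, 0 otherwise.
 * On success *out is filled in, so st_size/st_mtime can be compared with
 * the previously seen values to decide whether new data must be read. */
int file_still_same(const char *path, dev_t orig_dev, ino_t orig_ino,
                    struct stat *out)
{
    if (stat(path, out) != 0)
        return 0;                              /* gone or inaccessible */
    if (out->st_dev != orig_dev || out->st_ino != orig_ino)
        return 0;                              /* same name, different file */
    return 1;
}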
As for I/O, it's likely that periodically stat-ing something would keep the inode cached in memory, so the issue would be more about memory use than I/O. (Until you do this on enough files to not be able to cache them, which leads to thrashing, an issue of both memory and I/O...) Seeking to the end of the file would also similarly require loading the length of the file, so I can't see why that would cause any significant I/O.
Another choice would be to use inotify(7)
on the file or the whole directory to detect changes without polling. It can also detect unmount events.
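An illustrative sketch of that approach (the event handling is reduced to printing, and path is whatever file is being followed):
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

/* Block until one event arrives for the watched file, then report it. */
int watch_once(const char *path)
{
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    int fd = inotify_init1(0);
    if (fd < 0)
        return -1;

    if (inotify_add_watch(fd, path,
                          IN_MODIFY | IN_MOVE_SELF | IN_DELETE_SELF) < 0) {
        close(fd);
        return -1;
    }

    ssize_t len = read(fd, buf, sizeof buf);   /* blocks until an event */
    if (len > 0) {
        const struct inotify_event *ev = (const struct inotify_event *) buf;
        if (ev->mask & IN_MODIFY)
            printf("file was modified\n");
        if (ev->mask & (IN_MOVE_SELF | IN_DELETE_SELF))
            printf("file was moved or deleted\n");
        if (ev->mask & IN_UNMOUNT)             /* delivered automatically */
            printf("filesystem containing the file was unmounted\n");
    }

    close(fd);
    return 0;
}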
Upvotes: 3