Reputation: 1041
I am currently writing a C application for an embedded system with limited disc space. On this system, several processes access files which my application has to delete on certain events (e.g. when running out of disc space). But since the other processes can still write to these files, the disc space situation doesn't improve.
Is there any way to actually delete the file and make the write accesses of the other processes fail?
I have only limited control over the behavior of the other processes, so it would be nice if no cooperation from them were needed.
Upvotes: 5
Views: 162
Reputation: 4926
If I were the owner of the "other processes", I would simply have them close, rename, and reopen their log files as soon as their size reaches a given threshold. That way, log rotation can be performed in a "standard" way (e.g. with logrotate).
However, it seems that you already have to deal with legacy applications.
In that case, I would suggest using mkfifo to create named pipes in place of the log files before the legacy applications are started.
Your C application would then read from those special files and could, for example, write real log files capped at a specified size, which in turn can be rotated with logrotate.
Drawback: if your C application is not running, the legacy applications will get stuck in the write() system call, trying to write to a pipe with no reader.
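A minimal sketch of that approach, assuming the legacy applications open /tmp/app.log for writing and that a 1 MiB cap is acceptable (the paths and the limit are illustrative placeholders, not anything mandated by your setup):

```c
/* Sketch: drain a named pipe into a size-capped log file.
 * FIFO_PATH, OUT_PATH and MAX_BYTES are illustrative placeholders. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define FIFO_PATH "/tmp/app.log"      /* the path the legacy apps write to */
#define OUT_PATH  "/tmp/app.log.real" /* the real, size-limited log file */
#define MAX_BYTES (1024 * 1024)       /* 1 MiB cap, adjust as needed */

int main(void)
{
    /* Create the FIFO before the legacy applications are started;
     * EEXIST just means it is still there from a previous run. */
    if (mkfifo(FIFO_PATH, 0644) == -1 && errno != EEXIST) {
        perror("mkfifo");
        return EXIT_FAILURE;
    }

    /* Blocks until at least one legacy writer opens the pipe. */
    int in = open(FIFO_PATH, O_RDONLY);
    if (in == -1) { perror("open fifo"); return EXIT_FAILURE; }

    int out = open(OUT_PATH, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (out == -1) { perror("open log"); return EXIT_FAILURE; }

    char buf[4096];
    off_t written = 0;
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (written + n > MAX_BYTES) {
            /* Cap reached: truncate here, or rotate the file
             * (rename + reopen / logrotate) instead. */
            if (ftruncate(out, 0) == -1) perror("ftruncate");
            written = 0;
        }
        if (write(out, buf, (size_t)n) == -1) { perror("write"); break; }
        written += n;
    }

    /* read() returned 0: every writer closed the pipe.
     * A real daemon would reopen the FIFO and continue. */
    close(in);
    close(out);
    return EXIT_SUCCESS;
}
```

For brevity the sketch exits once every writer has closed the pipe and simply truncates the capped file; a real daemon would reopen the FIFO and keep draining, and would rotate the file (rename + reopen, or hand it to logrotate) instead.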
Upvotes: -1
Reputation: 6866
Two ideas come to mind to work around the fact that the file doesn't actually get deleted until all references to it are closed:
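As a short illustration of that fact (the file name is just an example; in your case the open descriptor would belong to one of the other processes):

```c
/* Sketch: unlink() removes the name, not the data, while a
 * descriptor is still open. The file name is illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/demo.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    /* The directory entry disappears immediately... */
    if (unlink("/tmp/demo.dat") == -1) { perror("unlink"); return EXIT_FAILURE; }

    /* ...but this write still succeeds and still consumes disc space,
     * because the open descriptor keeps the inode alive. */
    if (write(fd, "still here\n", 11) == -1) perror("write");

    /* Only when the last reference goes away does the
     * filesystem actually free the blocks. */
    close(fd);
    return EXIT_SUCCESS;
}
```

This is exactly why unlink()ing the files from your application does not reclaim any space as long as the writers keep their descriptors open.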
Upvotes: 4