Reputation: 17142
I have an application (video stream capture) which constantly writes its data to a single file. The application typically runs for several hours, creating a ~1 gigabyte file. Soon (within a few seconds) after it quits, I'd like to have two copies of the file it was writing - say, one in /mnt/disk1 and another in /mnt/disk2 (the latter is a USB flash drive with a FAT32 filesystem).
I don't really like the idea of modifying the application to write two copies simultaneously, so I thought of:

- periodically copying the growing /mnt/disk1/file.mkv to /mnt/disk2/file.mkv;
- writing a program which follows the file like `tail -f` does, copying everything it gets from /mnt/disk1/file.mkv to /mnt/disk2/file.mkv;
- running `rsync /mnt/disk1/file.mkv /mnt/disk2/file.mkv` afterwards, just to make sure they're the same. If they already match, it should just run a quick check and quit fairly soon.

What is the best approach for syncing the two files, preferably using simple Linux shell-available utilities? Maybe I could use some clever trick with FUSE / an md device / `tee` / `tail -f`?
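The "follow the file" idea can be sketched with GNU `tail` alone: `-c +1 -f` streams the file byte-by-byte from the start, including data appended while the writer is still running. The demo below uses temp files as stand-ins for the /mnt/disk1 and /mnt/disk2 paths so it runs anywhere; the sleeps and the simulated writer are illustrative, not part of the real setup.

```shell
# Stand-ins for /mnt/disk1/file.mkv and /mnt/disk2/file.mkv.
src=$(mktemp)
dst=$(mktemp)

# Simulate the capture application appending data over time.
( for i in 1 2 3; do printf 'chunk %d\n' "$i" >> "$src"; sleep 1; done ) &
writer=$!

# Follow the growing file from its first byte and mirror it.
tail -c +1 -f "$src" > "$dst" &
copier=$!

wait "$writer"   # capture "application" has exited
sleep 2          # give tail time to drain the last bytes
kill "$copier"

cmp "$src" "$dst" && echo "copies match"
```

The weak spot is the shutdown: you have to decide when the copier has drained the tail of the file before killing it, which is why the pipe-based answers below avoid the problem entirely.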
The best possible solution for my case seems to be

```shell
mencoder ... -o >(
  tee /mnt/disk1/file.mkv |
  tee /mnt/disk2/file.mkv |
  mplayer -
)
```

This uses bash/zsh-specific magic called "process substitution", which eliminates the need to create named pipes manually with `mkfifo`, and it displays what's being encoded as a bonus :)
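Note that `tee` also accepts several output files at once, so the two chained `tee`s can be collapsed into one. A minimal runnable sketch (with a placeholder function standing in for `mencoder`, and temp files standing in for the /mnt paths):

```shell
# Hypothetical stand-in for the real "mencoder ..." pipeline.
mencoder_standin() { printf 'fake encoded stream\n'; }

out1=$(mktemp)   # /mnt/disk1/file.mkv in the real setup
out2=$(mktemp)   # /mnt/disk2/file.mkv in the real setup

# One tee writes both copies; > /dev/null replaces "| mplayer -".
mencoder_standin | tee "$out1" "$out2" > /dev/null

cmp "$out1" "$out2" && echo "identical"
```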
Upvotes: 1
Views: 1156
Reputation: 77129
Hmmm... the file is not usable while it's being written anyway, so why don't you "trick" your program into writing through a pipe/fifo and use a second, very simple program to create the two copies?
This way, you have both copies as soon as the original process ends.
Upvotes: 4
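A minimal sketch of this FIFO idea, assuming the capture program can be pointed at an arbitrary output path (the pipe name and the printf stand-in for the capture application are illustrative; temp files stand in for the /mnt paths):

```shell
fifo=$(mktemp -u)   # reserve a name, then turn it into a named pipe
mkfifo "$fifo"

copy1=$(mktemp)     # /mnt/disk1/file.mkv in the real setup
copy2=$(mktemp)     # /mnt/disk2/file.mkv in the real setup

# The "2nd, very simple program": tee fanning the pipe out to both files.
tee "$copy1" "$copy2" < "$fifo" > /dev/null &

# Stand-in for the capture application writing its stream to the pipe.
printf 'video data\n' > "$fifo"

wait                # tee sees EOF and exits; both copies are complete
cmp "$copy1" "$copy2" && echo "copies match"
rm -f "$fifo"
```

Because `tee` only exits once the writer closes the pipe, the copies are guaranteed complete the moment the capture process terminates, with no draining or polling needed.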