Reputation: 185
I have a command that potentially outputs a lot of data to stdout, and I need to upload that data via FTP to a remote location.
I found the question Upload output of a program directly to a remote file by ftp and I really liked the idea of redirecting the output into a named pipe and then reading chunks from it. However, as soon as I read the first chunk via dd, the command writing into the pipe exits and there is no more data to read from the pipe.
To test this I created a fifo:
#> mkfifo fifo
Then I wrote into the fifo in one shell:
#> echo bla > fifo
and in another shell I read from it:
#> dd if=fifo of=spool.1 bs=1 count=1
It outputs the first byte into spool.1, but then the command writing into the pipe exits and I can't read the remaining data from the pipe.
I would like to read the next chunk from that pipe, but I can't figure out what I am doing wrong.
Any idea how to keep the pipe open until all data has been read from it?
Upvotes: 1
Views: 1672
Reputation: 531345
dd needs to read from standard input, rather than opening and closing the pipe itself, to keep the write end open for echo. Once the write end is closed, you can't open the read end again.
For example:
{
dd of=spool.1 bs=1 count=1
dd of=spool.2 bs=2 count=2
dd of=spool.3 bs=2 count=2
} < fifo
fifo is opened once for the compound command {...}, and each call to dd inherits the same open file descriptor without closing it.
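To see the whole round trip in one place, here is a minimal self-contained sketch (the temp directory, spool file names, and the 5-byte payload are made up for illustration): a background printf plays the writer's role, and both dd calls share the single read descriptor opened for the compound command.

```shell
#!/bin/sh
# Demonstration: one open of the fifo serves several dd reads.
tmp=$(mktemp -d)
mkfifo "$tmp/fifo"

# Writer: send 5 bytes into the pipe, then close the write end.
printf 'abcde' > "$tmp/fifo" &

# Reader: the fifo is opened once for the whole { ...; } group,
# so each dd inherits the same file descriptor and reads the
# next chunk instead of reopening a closed pipe.
{
    dd of="$tmp/spool.1" bs=1 count=1 2>/dev/null
    dd of="$tmp/spool.2" bs=2 count=2 2>/dev/null
} < "$tmp/fifo"
wait

first=$(cat "$tmp/spool.1")   # first byte: "a"
rest=$(cat "$tmp/spool.2")    # next four bytes: "bcde"
printf '%s %s\n' "$first" "$rest"
rm -rf "$tmp"
```

Had each dd opened the fifo itself (dd if="$tmp/fifo" ...), the first close would have torn down the pipe and the second dd would have found nothing to read.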
Upvotes: 3