Reputation: 41241
I have a script that writes to a named pipe and another that reads from the pipe. Occasionally, when starting the script I have noticed that the contents of the pipe exist from a previous run of the script. Is there a way to flush out the pipe at the beginning of the script?
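A stripped-down sketch of the setup (the pipe path and script contents here are just illustrative):

mkfifo /tmp/mypipe                # named pipe shared by both scripts

# writer.sh
echo "some data" > /tmp/mypipe

# reader.sh
while IFS= read -r line; do
    echo "read: $line"
done < /tmp/mypipe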
Upvotes: 15
Views: 15533
Reputation: 181
Although this question is quite old, I got to it while trying to solve the same problem.
Since the question asks for a bash solution, and the currently accepted answer uses standard Unix tools rather than bash alone, let me share the solution I found.
Requirements: bash with support for read -N and read -t 0 (bash 4.1 or newer).
As a one-liner (this flushes stdin; redirect input from the pipe as/if needed):
while read -r -N 1 -t 0 ; do read -r -N 1 || break ; done
As a function (testing/debugging code left commented out):
function devflush {
    # local FLUSHED=0
    while read -r -N 1 -t 0
    do
        read -r -N 1 || break
        # FLUSHED=$(( FLUSHED + 1 ))
    done
    # [ $FLUSHED -gt 0 ] && echo "Flushed ${FLUSHED} characters from input" >&2
    return 0
}
How it works:
With a timeout of 0, read -t 0 returns success only if input is already waiting on the descriptor, and it does not consume anything; the inner read -r -N 1 then consumes exactly one character. The loop therefore drains pending input one character at a time and stops as soon as nothing is left (or the inner read hits end of file).
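For the named pipe from the question, a sketch of how it might be used (the pipe path is a placeholder, and this assumes the function above is pasted into a reader script that gets the pipe on stdin):

#!/bin/bash
# started e.g. as:  ./reader.sh < /tmp/mypipe
devflush                          # discard anything left over from a previous run
while IFS= read -r line; do
    printf 'got: %s\n' "$line"    # normal processing sees only fresh data
done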
Upvotes: 0
Reputation: 11
Try this:
"Opening the FD read/write rather than read-only when setting up the pipeline prevents blocking."
from:
Setting up pipelines reading from named pipes without blocking in bash
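A minimal sketch of that idea (the FIFO path and descriptor number are illustrative, not taken from the linked post):

[ -p /tmp/mypipe ] || mkfifo /tmp/mypipe   # create the FIFO if it doesn't exist yet
exec 3<>/tmp/mypipe                        # read/write open returns at once, even with no writer attached
# build the rest of the pipeline on fd 3; the open itself can no longer block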
Upvotes: 1
Reputation: 47104
I think dd is your friend:
dd if=myfifo iflag=nonblock of=/dev/null
strace shows open("myfifo", O_RDONLY|O_NONBLOCK), and indeed it doesn't even block on an empty fifo.
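For example, at the top of the reader script (the pipe name is illustrative; on an empty pipe with no writer, dd may exit with a "Resource temporarily unavailable" read error, which the redirection and || true paper over):

pipe=myfifo
dd if="$pipe" iflag=nonblock of=/dev/null 2>/dev/null || true   # throw away any stale contents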
Upvotes: 16
Reputation: 98519
You can read from the pipe until it is empty. This will effectively flush it.
Before you attempt this daring feat, call fcntl(mypipe, F_SETFL, O_NONBLOCK) (I don't know the shell-scripting equivalent) so that reading from an empty pipe doesn't hang your program.
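A rough shell approximation of that advice, using read's timeout in place of fcntl/O_NONBLOCK (the pipe path and the 0.1-second timeout are placeholders):

exec 3<>/tmp/mypipe                    # read/write open, so opening never blocks
while IFS= read -r -t 0.1 -u 3 junk; do
    :                                  # discard stale data; stop after 0.1s of silence
done
exec 3<&-                              # close the descriptor again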
Upvotes: 2