kipkoan

Reputation: 172

How can I have output from one named pipe fed back into another named pipe?

I'm adding some custom logging functionality to a bash script, and can't figure out why it won't take the output from one named pipe and feed it back into another named pipe.

Here is a basic version of the script (http://pastebin.com/RMt1FYPc):

#!/bin/bash

PROGNAME=$(basename $(readlink -f $0))
LOG="$PROGNAME.log"
PIPE_LOG="$PROGNAME-$$-log"
PIPE_ECHO="$PROGNAME-$$-echo"

# program output to log file and optionally echo to screen (if $1 is "-e")
log () {
  if [ "$1" = '-e' ]; then 
    shift
    $@ > $PIPE_ECHO 2>&1 
  else 
    $@ > $PIPE_LOG 2>&1 
  fi
}

# create named pipes if not exist
if [[ ! -p $PIPE_LOG ]]; then 
  mkfifo -m 600 $PIPE_LOG
fi
if [[ ! -p $PIPE_ECHO ]]; then 
  mkfifo -m 600 $PIPE_ECHO
fi

# cat pipe data to log file
while read data; do
  echo -e "$PROGNAME: $data" >> $LOG 
done < $PIPE_LOG &

# cat pipe data to log file & echo output to screen
while read data; do
  echo -e "$PROGNAME: $data"
  log echo $data   # this doesn't work
  echo -e $data > $PIPE_LOG 2>&1   # and neither does this
  echo -e "$PROGNAME: $data" >> $LOG   # so I have to do this
done < $PIPE_ECHO &

# clean up temp files & pipes
clean_up () {
  # remove named pipes
  rm -f $PIPE_LOG
  rm -f $PIPE_ECHO
}
#execute "clean_up" on exit
trap "clean_up" EXIT 

log echo "Log File Only"
log -e echo "Echo & Log File"

I thought the two commands marked with comments in the second loop would take the $data from $PIPE_ECHO and feed it back into $PIPE_LOG. But it doesn't work; instead I have to send that output directly to the log file, without going through $PIPE_LOG.

Why is this not working as I expect?

EDIT: I changed the shebang to "bash". The problem is the same, though.

SOLUTION: A.H.'s answer helped me understand that I wasn't using named pipes correctly. I have since solved my problem by not even using named pipes. That solution is here: http://pastebin.com/VFLjZpC3
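(The pastebin isn't reproduced here. For readers without access to it, one common named-pipe-free pattern for the same log() helper simply appends to the log file and uses tee(1) when output should also echo to the screen; this is a sketch of that pattern, not necessarily what the linked solution does, and "script.log" is an illustrative name.)

```shell
#!/bin/bash
# Sketch: logging without named pipes. "script.log" is an
# illustrative name, not taken from the original script.
LOG="script.log"

log () {
  if [ "$1" = '-e' ]; then
    shift
    "$@" 2>&1 | tee -a "$LOG"   # echo to screen AND append to log
  else
    "$@" >> "$LOG" 2>&1         # append to log file only
  fi
}

log echo "Log File Only"
log -e echo "Echo & Log File"
```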

Upvotes: 3

Views: 4235

Answers (2)

A.H.

Reputation: 66263

It seems to me that you do not understand what a named pipe really is. A named pipe is not one stream like a normal (anonymous) pipe. It is a series of normal pipes, because a named pipe can be opened and closed repeatedly, and a close on the producer side might be seen as a close on the consumer side.

The "might" part is this: the consumer will read data until there is no more data. "No more data" means that, at the time of the read call, no producer has the named pipe open. So multiple producers can feed one consumer only if there is no point in time without at least one producer. Think of it as a door which closes automatically: if there is a steady stream of people keeping the door open, either by handing the doorknob to the next person or by squeezing several people through at the same time, the door stays open. But once the door is closed, it stays closed.

A little demonstration should make the difference a little clearer:

Open three shells. First shell:

1> mkfifo xxx
1> cat xxx

No output is shown, because cat has opened the named pipe and is waiting for data.

Second shell:

2> cat > xxx 

No output, because this cat is a producer which keeps the named pipe open until we tell it explicitly to close.

Third shell:

3> echo Hello > xxx
3>

This producer returns immediately.

First shell:

Hello

The consumer received the data, wrote it and, since one more producer keeps the door open, continues to wait.

Third shell

3> echo World > xxx
3> 

First shell:

World

The consumer received the data, wrote it and, since one more producer keeps the door open, continues to wait.

Second shell: type into the cat > xxx window:

And good bye!
(control-d key)
2>

First shell

And good bye!
1>

The Ctrl-D closed the last producer, the cat > xxx, and hence the consumer exited as well.
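The three-shell session can also be condensed into a single script that shows the same close-on-last-writer behaviour (file names here are illustrative):

```shell
#!/bin/bash
# The reader of a named pipe sees EOF as soon as the last
# writer closes its end of the pipe.
fifo=$(mktemp -u)          # a fresh path for the fifo
mkfifo "$fifo"

# Consumer in the background: print each line, then note EOF.
( while read -r line; do echo "got: $line"; done < "$fifo"
  echo "READER-EOF" ) > reader.out &

echo hello > "$fifo"       # one-shot producer: open, write, close
wait                       # reader exits: no producer left
rm -f "$fifo"
cat reader.out
```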


For your case this means:

  • Your log function tries to open and close the pipes multiple times. Not a good idea.
  • Both your while loops exit earlier than you think. (Check this with (while ... done < $PIPE_X; echo FINISHED) &.)
  • Depending on the scheduling of your various producers and consumers, the door might slam shut sometimes and sometimes not: you have a race condition built in. (For testing you can add a sleep 1 at the end of the log function.)
  • Your test cases only try each possibility once. Try using them multiple times (you will block, especially with the sleeps), because your producer might not find any consumer.

So I can explain the problems in your code, but I cannot give you a solution, because it is unclear what the edges of your requirements are.
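(Not part of the answer, but for completeness: the usual way to stop the door from slamming shut between intermittent producers is to hold one writer file descriptor open for the reader's whole lifetime. A sketch of that technique, with illustrative names:)

```shell
#!/bin/bash
# Keep fd 3 open for writing so the named pipe always has at
# least one producer, even while one-shot writers come and go.
fifo=$(mktemp -u)
mkfifo "$fifo"

# Consumer in the background, output captured in kept.out.
( while read -r line; do echo "log: $line"; done < "$fifo" ) > kept.out &
reader=$!

exec 3> "$fifo"            # persistent producer: door stays open

echo first  > "$fifo"      # one-shot writers open and close...
echo second > "$fifo"      # ...but fd 3 keeps the pipe alive

exec 3>&-                  # closing the last producer ends the reader
wait "$reader"
rm -f "$fifo"
cat kept.out
```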

Upvotes: 10

huelbois

Reputation: 7012

It seems the problem is in the "cat pipe data to log file" part.

Let's see: you use a & to put the loop in the background; I guess you mean it to run in parallel with the second loop.

But the problem is that you don't even need the &, because as soon as no more data is available in the fifo, the while..read stops. (Still, you've got to have some data at first for the first read to work.) The next read doesn't hang if no more data is available (which would pose another problem: how does your program stop?).

I guess the while read checks whether more data is available in the file before doing the read, and stops if that's not the case.

You can check with this sample:

mkfifo foo
while read data; do echo $data; done < foo

This script will hang until you write something to the fifo from another shell (or background the first one). But it ends as soon as one read succeeds.

Edit: I've tested on RHEL 6.2 and it behaves as you describe (i.e. badly!).

The problem is that, after running the script (let's say script "a"), you've got an "a" process remaining. So, yes, in some way the script hangs as I wrote before (not as stupid an answer as I thought at the time :) ). Except if you write only one log entry (be it log-file-only or echo; in that case it works).

(It's the read loop from PIPE_ECHO that hangs when writing to PIPE_LOG and leaves a process running each time.)

I've added a few debug messages, and here is what I see:

  • only one line is read from PIPE_LOG, and after that the loop ends
  • then a second message is sent to PIPE_LOG (after being received from PIPE_ECHO), but the process no longer reads from PIPE_LOG, so the write hangs

When you ls -l /proc/[pid]/fd, you can see that the fifo is still open (but deleted). In fact, the script exits and removes the fifos, but there is still one process using them. If you don't remove the log fifo at cleanup and cat it, it will free the hanging process.
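That hanging write is easy to reproduce in isolation, since a write to a fifo blocks until some process has the read end open (names below are illustrative):

```shell
#!/bin/bash
# A writer on a fifo blocks until a reader opens the other end.
fifo=$(mktemp -u)
mkfifo "$fifo"

echo data > "$fifo" &      # blocks: nothing is reading yet
writer=$!
sleep 1
if kill -0 "$writer" 2>/dev/null; then
  echo "writer is still blocked" > hang.out
  cat "$fifo" > /dev/null  # opening the read end releases the writer
fi
wait "$writer"
rm -f "$fifo"
```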

Hope it will help...

Upvotes: 0
