naveen

Reputation: 1

Unix FIFO: concurrent read from a named pipe leaves one of the processes unterminated

I have created a FIFO (named pipe) on Solaris and write the contents of a file into it, line by line, as below:

$ mkfifo namepipe

$ cat books.txt
"how to write unix code"
"how to write oracle code"

$ cat books.txt >> namepipe &

I have a readpipe.sh script which reads the named pipe in parallel like this:

# readpipe.sh

while IFS=',' read -r var
do
  echo "$var" >> log.txt
done < namepipe

I am calling readpipe.sh like this:

readpipe.sh &
sleep 2
readpipe.sh &

I have introduced a sleep 2 to avoid a race condition, i.e. to stop the two processes from each getting part of the same line, e.g. process 1 gets

"how to"

and process 2 gets

"write unix code"

The problem I am facing is that when all the contents of namepipe have been consumed, the first background process exits, while the second keeps on running without completing.

The logic in the script is kept simple here for clear understanding. The actual readpipe.sh does a lot more work.

Kindly help me understand this behaviour.

Upvotes: 0

Views: 466

Answers (1)

Andrew Henle

Reputation: 1

I have introduced a sleep 2 to avoid a race condition

First, that's not going to work. sleep does not synchronize anything - at best it makes the race less likely.

Second, you can't "share" reading from a pipe unless each reading process knows exactly how much to try to read at a time. And read in an sh script isn't going to give you any control over how many bytes the sh binary actually pulls from the pipe with the low-level read() call it uses. The sh process likely tries to read PIPE_BUF bytes, but it can try to read any amount, since it's really just feeding a stream - you have no control over how much it reads.

For example, each process would need to know that the next read from the pipe has to be exactly 143 bytes, and then issue a low-level read(fd, buffer, 143); call to consume only those 143 bytes.
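Purely as an illustration - this is not something you can do from a shell script - a fixed-size reader in C might look like the sketch below. RECORD_SIZE is an assumed, known-in-advance record length, the writer would have to emit exactly such records with single write() calls, and a robust version would also have to handle short reads:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define RECORD_SIZE 143   /* assumed, known-in-advance record length */

int main(void)
{
    char buffer[RECORD_SIZE];
    int fd = open("namepipe", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (;;) {
        /* exactly one low-level read() per fixed-size record */
        ssize_t n = read(fd, buffer, RECORD_SIZE);
        if (n <= 0)   /* 0 = all writers closed the pipe, <0 = error */
            break;
        /* "process" the record - here, just copy it to stdout */
        write(STDOUT_FILENO, buffer, (size_t)n);
    }

    close(fd);
    return 0;
}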

And even if you can control at the lowest level how much each process reads, each process needs to know how much to read, which in this case you can't know.

Having a known-in-advance read size is necessary to do what you want, and you don't have that when using scripts. Note, though, that it may not be sufficient - even if you solve the fixed-size-read problem, you may run into other issues trying to share readers.

What's almost certainly happening in this case is that the first script to run consumes all the data passed into the pipe, sees end-of-file once the writer is done, and exits. THEN the second script tries to open the pipe and wait for data - but opening a FIFO for reading blocks until some process opens it for writing, and there is no writer left, so it just blocks forever.
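You can reproduce that at the prompt with cat standing in for the two scripts, using the books.txt from the question:

$ cat books.txt > namepipe &   # writer: waits for a reader, writes, exits
$ cat namepipe                 # "reader 1": gets all the data, sees EOF, exits
"how to write unix code"
"how to write oracle code"
$ cat namepipe                 # "reader 2": hangs in open() - no writer left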

Upvotes: 1
