Reputation: 2625
In The Linux Programming Interface (p. 1367):
Starvation considerations can also apply when using signal-driven I/O, since it also presents an edge-triggered notification mechanism. By contrast, starvation considerations don’t necessarily apply in applications employing a level-triggered notification mechanism. This is because we can employ blocking file descriptors with level-triggered notification and use a loop that continuously checks descriptors for readiness, and then performs some I/O on the ready descriptors before once more checking for ready file descriptors.
I don't understand what this 'blocking' part means. I think it's irrelevant whether we use blocking I/O or nonblocking I/O. (The author also says earlier in the chapter that nonblocking I/O is usually employed regardless of whether level-triggered or edge-triggered notification is used.)
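For concreteness, here is a rough sketch (my own, not from the book) of the pattern I take the passage to be describing: blocking descriptors, a level-triggered poll() loop, and one bounded read per ready descriptor per pass. The fds array and nfds are assumed to be set up elsewhere.

```c
#include <poll.h>
#include <unistd.h>

void serve_level_triggered(struct pollfd *fds, nfds_t nfds)
{
    char buf[4096];

    for (;;) {
        if (poll(fds, nfds, -1) == -1)       /* level-triggered readiness check */
            break;

        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                /* The descriptor is ready, so this blocking read() returns
                 * at once. One bounded read per descriptor per pass: any
                 * leftover data keeps the descriptor ready, so poll()
                 * simply reports it again on the next pass. */
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n <= 0) {
                    close(fds[i].fd);
                    fds[i].fd = -1;          /* EOF or error: stop watching it */
                }
                /* ... otherwise process the n bytes just read ... */
            }
        }
    }
}
```

As far as I can tell, nothing here would break if the descriptors were made nonblocking instead, which is why I don't see what the 'blocking' part adds.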
Upvotes: 3
Views: 638
Reputation:
So, IO, eh? Well, IO is "handling things", so we can go with a human metaphor. Imagine you are a process on a system getting stuff done for your boss.
Blocking IO is like going to the dentist or having a face-to-face meeting with a customer. In both scenarios, once you set off you're away from your desk and totally unable to do anything else until you get back. Chances are you'll also waste some time in the waiting room, or in idle chatter while waiting for people to turn up.
Blocking IO "sacrifices" the thread (I say "sacrifices" because you effectively lose it) to the task in question. You can't use that thread for any other purpose while it's blocked - it's sitting there waiting for the IO to happen.
Non-blocking IO, by contrast, is like being on the telephone. When you're on the phone, you can engage in that IO while also writing an answer on Stack Overflow! Such IO is often loosely described as asynchronous - you accept an IO request and start processing it, but can handle other requests while it completes.
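To put the metaphor into code, here's a minimal sketch (using stdin as a stand-in descriptor; error handling trimmed) of what the difference looks like at the syscall level: a blocking read() parks the thread until data turns up, whereas a non-blocking read() returns straight away with EAGAIN and leaves the thread free to do something else.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* Put stdin into non-blocking mode. Without O_NONBLOCK, the read()
     * below would block ("sacrifice") this thread until input arrived. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no data yet - free to go do other work\n");  /* the "phone" case */
    else if (n >= 0)
        printf("read %zd bytes without waiting\n", n);       /* data (or EOF if 0) */
    else
        perror("read");

    return 0;
}
```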
Now, my favourite resource for this kind of thing is the c10k problem page here. I'd say you're right - 99% of the time you are going to use non-blocking IO (in fact, your OS is conducting non-blocking IO for you all the time), mostly because using a whole thread for each incoming IO task is incredibly inefficient, even in Linux where threads and processes are the same thing (tasks) and are fairly lightweight.
The difference between edge-triggered and level-triggered notification probably matters more for non-blocking connections, since it's largely irrelevant in the blocking case anyway. As I understand it, an edge-triggered notification only marks a descriptor as ready when new data has arrived since the last time you asked for a status update, whereas level-triggered marks a descriptor as ready whenever there is data available. This means edge-triggered interfaces are considered a bit more tricky: you have to handle the incoming data when you see it, because you won't necessarily be notified about it again until more data arrives. In theory, that should be more efficient (fewer notifications).
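To make the "handle it when you see it" point concrete, here's a rough sketch of the usual edge-triggered pattern with epoll. It assumes the descriptor is non-blocking and already registered with EPOLLIN | EPOLLET, and trims error handling: after each notification you drain the descriptor until read() reports EAGAIN, because you won't be told again about data that was already there.

```c
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

void drain_edge_triggered(int epfd)
{
    struct epoll_event ev;
    char buf[4096];

    while (epoll_wait(epfd, &ev, 1, -1) == 1) {
        /* Edge-triggered: consume everything now, or lose track of it. */
        for (;;) {
            ssize_t n = read(ev.data.fd, buf, sizeof(buf));
            if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;            /* buffer drained - wait for the next edge */
            if (n <= 0)
                break;            /* EOF or a real error */
            /* ... process the n bytes just read ... */
        }
    }
}
```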
So, tl;dr - edge- vs level-triggered readiness is a slightly different consideration from blocking vs non-blocking design; namely, there are several ways to do non-blocking IO and only really one way to do blocking IO.
Upvotes: 1