Reputation: 2277
I have a memory that when we want to use select() on a socket descriptor, the socket should be set non-blocking in advance.
But today I read a source file in which there seems to be no line that sets the socket to non-blocking. Is my memory correct or not?
Thanks!
Upvotes: 4
Views: 4397
Reputation: 121
duskwuff has the right idea when he says
In general, you do not need to set a socket as non-blocking to use it in select().
This is true if your kernel is POSIX compliant with regard to select(). Unfortunately, some people use Linux, which is not, as the Linux select() man page says:
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.
There was a discussion of this on lkml on or about Sat, 18 Jun 2011. One kernel hacker tried to justify the non-POSIX compliance. They honor POSIX when it's convenient and desecrate it when it's not.
He argued that "there may be two readers and the second will block." But such an application flaw is a non sequitur; the kernel is not expected to prevent application flaws. The kernel has a clear duty: in all cases, the first read() after select() must return at least 1 byte, EOF, or an error, but NEVER block. As for write(), you should always test whether select() reports the socket writable before writing. That guarantees you can write at least one byte, or get an error, but NEVER block. Let select() help you; don't write blindly hoping you won't block. The Linux hackers' grumbling about corner cases, etc., is a euphemism for "we're too lazy to work on hard problems."
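The defensive pattern the Linux man page suggests can be sketched in Python, whose select and socket modules are thin wrappers over the same system calls (the socketpair demo below is my own illustration, not from the thread):

```python
import select
import socket

def read_when_ready(sock, timeout=1.0):
    """recv() after select(); with the socket non-blocking, a spuriously
    'ready' descriptor raises BlockingIOError instead of hanging forever."""
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return None                 # timed out, nothing to read
    try:
        return sock.recv(4096)      # cannot block: O_NONBLOCK is set
    except BlockingIOError:
        return None                 # the kernel lied; go around the loop again

a, b = socket.socketpair()
a.setblocking(False)                # sets O_NONBLOCK on the descriptor
b.send(b"hello")
```

On a POSIX-compliant kernel the try/except is dead code; on Linux it is the safety net the man page is asking for.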
Suppose you read a serial port set for:
min N; with -icanon, set N characters minimum for a completed read
time N; with -icanon, set read timeout of N tenths of a second
min 250 time 1
Here you want blocks of 250 characters, or a one-tenth-second timeout. When I tried this on Linux in non-blocking mode, the read returned for every single character, hammering the CPU. It was NECESSARY to leave it in blocking mode to get the documented behavior.
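For reference, that min/time setup is the VMIN/VTIME pair in termios. A sketch in Python (demonstrated on a pseudo-terminal, since a real serial port needs hardware; VMIN/VTIME only take effect with ICANON off and the descriptor in blocking mode):

```python
import os
import termios

def set_min_time(fd, vmin, vtime):
    """Configure a tty for 'min N time M' reads: a blocking read() returns
    after vmin bytes, or vtime tenths of a second after the first byte."""
    attrs = termios.tcgetattr(fd)
    attrs[3] &= ~(termios.ICANON | termios.ECHO)   # lflag: raw input, like -icanon
    attrs[6][termios.VMIN] = vmin                  # cc array
    attrs[6][termios.VTIME] = vtime
    termios.tcsetattr(fd, termios.TCSANOW, attrs)

master_fd, slave_fd = os.openpty()                 # stand-in for a serial device
set_min_time(slave_fd, 250, 1)                     # min 250 time 1
```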
So there are good reasons to use blocking mode with select() and expect your kernel to be POSIX compliant.
But if you must use Linux, Jeremy's advice may help you cope with some of its kernel flaws.
Upvotes: 3
Reputation: 73041
Usually when you are using select(), you are using it as the basis of an event loop; and when using an event loop you want the event loop to block only inside select() and never anywhere else. (The reason is that your program should wake up whenever there is something to do on any of the sockets it is handling -- if, for example, your program were blocked inside recv() on socket A, it would be unable to handle any data coming in on socket B until it got some data from socket A first to wake it up, and vice versa.)
Therefore it is best to set all sockets non-blocking when using select(). That way there is no chance of your program getting blocked on a single socket and ignoring the other ones for an extended period of time.
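One iteration of such a loop can be sketched like this (Python here for brevity; its select module wraps the same call, and the two socket pairs stand in for two clients):

```python
import select
import socket

def serve_ready(socks, handler, timeout=1.0):
    """One pass of a select()-based event loop: block only in select(),
    then service every readable socket; none of them can stall us,
    because they are all non-blocking."""
    readable, _, _ = select.select(socks, [], [], timeout)
    for s in readable:
        handler(s, s.recv(4096))    # safe: s was reported readable
    return len(readable)

# demo: two local socket pairs standing in for two clients
pairs = [socket.socketpair() for _ in range(2)]
for server_end, _client_end in pairs:
    server_end.setblocking(False)
pairs[0][1].send(b"from A")
pairs[1][1].send(b"from B")
received = {}
serve_ready([p[0] for p in pairs], lambda s, d: received.update({d: True}))
```

Neither socket can hold the loop hostage: data arriving on either one is handled on the next pass through select().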
Upvotes: 0
Reputation:
It depends. Setting a socket as non-blocking does several things:

Makes read() / recv() return immediately with no data, instead of blocking, if there is nothing available to read on the socket. If you are using select(), this is probably a non-issue. So long as you only read from a socket when select() tells you it is readable, you're fine.
Makes write() / send() return partial (or zero) writes, instead of blocking, if not enough space is available in kernel buffers. This one is tricky. If your application is written to handle this situation, it's great, because it means your application will not block when a client is reading slowly. However, it means that your application will need to temporarily store writable data in its own application-level buffers, rather than writing directly to sockets, and selectively place sockets with pending writes in the writefds set. Depending on what your application is, this may either be a lifesaver or a huge added complication. Choose carefully.
If set before the socket is connected, makes connect() return immediately, before a connection is actually made. Similarly, this is sometimes useful if your application needs to make connections to hosts that may respond slowly while continuing to respond on other sockets, but it can cause issues if you aren't careful about how you handle these half-connected sockets. It's usually best avoided (by only setting sockets as non-blocking after they are connected, if at all).
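The half-connected handshake, if you do take it on, looks roughly like this (a Python sketch against a throwaway local listener; connect_ex and SO_ERROR mirror the C-level connect()/getsockopt() dance):

```python
import errno
import select
import socket

def connect_nonblocking(addr, timeout=5.0):
    """Start connect() without blocking; select() writability signals
    completion, and SO_ERROR says whether it actually succeeded."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    err = s.connect_ex(addr)
    if err not in (0, errno.EINPROGRESS):
        s.close()
        raise OSError(err, "connect failed immediately")
    _, writable, _ = select.select([], [s], [], timeout)
    if not writable:
        s.close()
        raise TimeoutError("connect timed out")
    err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err:
        s.close()
        raise OSError(err, "connect failed")
    return s

# demo against a throwaway local listener
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
conn = connect_nonblocking(listener.getsockname())
```

Forgetting the SO_ERROR check is the classic bug here: a failed connect also reports the socket writable.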
In general, you do not need to set a socket as non-blocking to use it in select(). The system call already lets you handle sockets in a basic non-blocking fashion. Some applications will need non-blocking writes, though, and that's what the flag is still needed for.
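The application-level write buffer described above can be sketched like so (Python; the BufferedWriter name and the one-shot flush are my own illustration -- the point is that writefds membership is driven by whether anything is pending):

```python
import select
import socket

class BufferedWriter:
    """Queue outgoing bytes and write only when select() says the socket is
    writable; a non-blocking send() may accept fewer bytes than offered."""
    def __init__(self, sock):
        sock.setblocking(False)
        self.sock = sock
        self.pending = b""

    def queue(self, data):
        self.pending += data

    def flush_once(self, timeout=1.0):
        if not self.pending:
            return                  # keep the socket out of writefds when idle
        _, writable, _ = select.select([], [self.sock], [], timeout)
        if writable:
            sent = self.sock.send(self.pending)   # partial writes are fine
            self.pending = self.pending[sent:]

a, b = socket.socketpair()
w = BufferedWriter(a)
w.queue(b"hello")
w.flush_once()
```

A real event loop would call flush_once() as part of every pass, merging the pending-write sockets into the same select() that watches for reads.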
Upvotes: 1
Reputation: 310840
send() and write() block if you provide more data than fits into the socket send buffer. Normally in select() programming you don't want to block anywhere except in select(), so you use non-blocking mode.
With certain Windows APIs it is indeed essential to use non-blocking mode.
Upvotes: 0