ggaaooppeenngg

Reputation: 1383

Why is sleep(1) needed to allow the socket to drain?

I downloaded the source code for a simple static web server from http://www.ibm.com/developerworks/systems/library/es-nweb/sidefile1.html

However, I'm confused by line 130:

#ifdef LINUX
sleep(1);       /* to allow socket to drain */    
#endif

exit(1);

Since there is no close() for the socket, does that mean I need to wait for the client to close the socket?

Upvotes: 5

Views: 3547

Answers (2)

mrjoltcola

Reputation: 20842

Regardless of the author's intent, it is needless and incorrect; exit() is sufficient. When close() is called on a TCP socket, or exit() is called to terminate the process, then unless the SO_LINGER socket option has been set to a non-default setting, the kernel will keep the socket(s) in a wait state and attempt to deliver any undelivered / buffered data. You can see this with netstat, and it is the reason that quickly restarting a TCP server that isn't written for fast restart will have a problem reopening the port (there is a proper way to accomplish this too; see the sketch below).
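The "proper way" alluded to above is usually the SO_REUSEADDR socket option. A minimal sketch of a listener that sets it before bind(), so a restarted server can rebind the port while old connections sit in TIME_WAIT (the port 8080 here is just an illustration, not anything from the nweb source):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int one = 1;
    struct sockaddr_in addr;
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);

    if (listenfd < 0) { perror("socket"); return 1; }

    /* allow bind() to succeed while old sockets are still in TIME_WAIT */
    if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) < 0)
        perror("setsockopt(SO_REUSEADDR)");

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(listenfd, 64);
    /* ... accept() loop would go here ... */
    close(listenfd);
    return 0;
}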

I disagree with a couple of things in the accepted answer.

close() and exit() should have the same effect on the socket; traditionally it has been only a matter of style whether to close sockets explicitly when you were about to exit anyway.

It should have nothing to do with overflowing a TCP send buffer, since the sleep happens after all the writes. A full send buffer shows up immediately in the write() return code, at the time of the write; sleeping at the end is irrelevant to that, as the sketch below illustrates.
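To make that concrete, here is a minimal sketch (write_all is a hypothetical helper, not something from the nweb source) showing how send-side problems surface at write time rather than at exit:

#include <errno.h>
#include <unistd.h>

/* Hypothetical helper: write an entire buffer, handling short writes.
 * Any send-buffer or connection problem is visible in write()'s
 * return value here, long before the process exits. */
static ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR)
                continue;      /* interrupted by a signal: retry */
            return -1;         /* real error, e.g. EPIPE */
        }
        sent += (size_t)n;     /* short write: send the remainder */
    }
    return (ssize_t)sent;
}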

sleep(1) should have no effect on the socket buffer or reliable data delivery. If anything, this code throttles the web server's child processes after the writes, so it has no good effect and could actually increase the potential for a denial-of-service attack.

I am describing default operation; the defaults can be changed via the many socket options (SO_LINGER, sketched below, is the relevant one here).
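For example, a minimal sketch of how SO_LINGER changes the default: with l_onoff=1 and l_linger=0, close() discards unsent data and sends an RST instead of the graceful shutdown described above (abortive_close is a hypothetical helper name, used only for illustration):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

/* Hypothetical helper: abortive close of a connected TCP socket. */
static void abortive_close(int connfd)
{
    struct linger lin;
    lin.l_onoff = 1;    /* enable linger behavior */
    lin.l_linger = 0;   /* 0 seconds: drop pending data, send RST */
    if (setsockopt(connfd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0)
        perror("setsockopt(SO_LINGER)");
    close(connfd);      /* connection is reset immediately */
}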

For the "bible" on socket programming, see W. Richard Stevens's UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, where he covers this in detail.

Upvotes: 5

Sam Varshavchik

Reputation: 118352

That looks like a bit of sloppy code to me.

If a process with an open socket terminates, and the socket has some unwritten data, the kernel is going to tear down the socket without flushing out the unsent data.

When you write something to a socket, the written data will not necessarily be transmitted immediately. The kernel maintains a small buffer that collects the data written to a socket (the same applies to a pipe). It's more efficient to let the process go on, and have the kernel take care of actually transmitting the written data when it has time to do so.

A process can obviously write data to a socket much faster than it can be transmitted over a typical network interface, and the size of the internal socket buffer is limited. So if the process keeps writing, at some point it will fill up the internal buffer and will have to wait until the kernel actually transmits the data[*] and removes it from the internal buffer, before there's room to write more.
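That internal buffer is something a program can inspect; a minimal sketch of querying its size with getsockopt() (print_sndbuf is a hypothetical helper, not part of the nweb code):

#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical helper: report the kernel send-buffer size for fd. */
static void print_sndbuf(int fd)
{
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
        printf("send buffer: %d bytes\n", size);
    else
        perror("getsockopt(SO_SNDBUF)");
}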

[*] I am omitting some technical details, such as the fact that the data isn't removed from the buffer until the receiver ACKs it.

Anyway, the intent of that sleep() call appears to be to allow some time for the internal buffer to actually be transmitted before the process terminates, because if the process exits before the data actually goes out, the kernel won't bother sending it and will just tear down the socket, as I mentioned.

This is a bad example. This is not the right way to do these kinds of things. The socket should simply be close()d; that will correctly flush things out and make sure that everything goes where it's supposed to go. I can't see any valid reason why that example didn't simply close the socket properly, instead of engaging in this kind of hackery. A sketch of the fix follows below.
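For illustration, a minimal sketch of what the quoted lines could become; socketfd is a placeholder name, and the actual nweb source may call the descriptor something else:

close(socketfd);   /* graceful close: the kernel still delivers buffered data */
exit(1);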

Upvotes: 1
