Reputation: 51
I am currently implementing a simple client-server program with just the basic read/write functionality.
However, I noticed that if, for example, my server calls write() to reply to my client and my client does not have a corresponding read() call, my server program will just hang there.
I am currently thinking of using a simple timer to define a timeout count and then disconnecting the client after a certain count, but I am wondering if there is a more elegant or standard way of handling such errors?
Upvotes: 1
Views: 1362
Reputation: 74365
There are two general approaches to prevent server blocking and to handle multiple clients by a single server instance:

1. Set a send timeout on the socket (the SO_SNDTIMEO option of setsockopt(2)), so that a blocked write operation will fail with an error once the timeout expires instead of hanging forever.

2. Use non-blocking sockets together with select(2) or poll(2). It is quite a bit harder to program using polling calls, though. Network sockets are made non-blocking using fcntl(2), and in cases where a normal write(2) or read(2) on the socket would block, an EAGAIN error is returned instead. You can use select(2) or poll(2) to wait for something to happen on the socket with an adjustable timeout period. For example, waiting for the socket to become writable means that you will be notified when there is enough socket send buffer space, i.e. previously written data has been flushed to the client machine's TCP stack.
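A rough sketch of the second approach (the helper names, the timeout value, and the error-handling policy here are illustrative choices, not part of the answer): make the socket non-blocking with fcntl(2), then whenever write(2) reports EAGAIN, use poll(2) with a timeout to wait for the socket to become writable again.

    #include <errno.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Put the socket into non-blocking mode with fcntl(2). */
    static int make_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Write len bytes, waiting at most timeout_ms for the socket to become
     * writable each time the send buffer fills up.  Returns 0 on success and
     * -1 on error or timeout, at which point the caller can disconnect the
     * client. */
    static int write_with_timeout(int fd, const char *buf, size_t len, int timeout_ms)
    {
        size_t off = 0;

        while (off < len) {
            ssize_t n = write(fd, buf + off, len - off);
            if (n > 0) {
                off += (size_t)n;
                continue;
            }
            if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* Send buffer full: wait until the socket is writable again. */
                struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                int rc = poll(&pfd, 1, timeout_ms);
                if (rc == 0)
                    return -1;          /* timed out: the client is not reading */
                if (rc == -1 && errno != EINTR)
                    return -1;
                continue;
            }
            if (n == -1 && errno == EINTR)
                continue;
            return -1;                  /* real error, e.g. EPIPE */
        }
        return 0;
    }

For the first approach, a single setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv) call with a suitable struct timeval is enough; a write that stays blocked past the timeout then fails with EAGAIN or EWOULDBLOCK.

Upvotes: 3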
Reputation: 56038
If the client side isn't going to read from the socket anymore, it should close down the socket with close. And if you don't want to do that because the client still might want to write to the socket, then you should at least close the read half with shutdown(fd, SHUT_RD).

This will set it up so the server gets an EPIPE on the write call.
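To actually see EPIPE on the server rather than being killed by the default SIGPIPE handler, the server has to ignore SIGPIPE (or pass MSG_NOSIGNAL to send(2) on platforms that have it). A minimal sketch, with the function names and the "just close the connection" policy being my own illustration rather than something from the answer:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Call once at server start-up so that writing to a dead connection
     * returns -1 with errno == EPIPE instead of killing the process. */
    static void ignore_sigpipe(void)
    {
        signal(SIGPIPE, SIG_IGN);
    }

    /* Try to send a reply; on EPIPE the client has closed the socket or
     * shut down its read half, so just drop the connection. */
    static void reply_client(int client_fd, const char *msg)
    {
        /* MSG_NOSIGNAL is Linux-specific; elsewhere rely on SIG_IGN above. */
        ssize_t n = send(client_fd, msg, strlen(msg), MSG_NOSIGNAL);
        if (n == -1 && errno == EPIPE) {
            fprintf(stderr, "client went away, closing connection\n");
            close(client_fd);
        }
    }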
If you don't control the clients, that is, if random clients you didn't write can connect, the server should handle clients that actively attempt to be malicious. One way for a client to be malicious is to try to force your server to hang. You should use a combination of non-blocking sockets and the timeout mechanism you describe to keep this from happening.

In general you should design the protocol the server and client use to communicate so that neither the server nor the client is trying to write to the socket when the other side isn't going to be reading. This doesn't mean you have to synchronize them tightly or anything. But, for example, HTTP is defined in such a way that it's quite clear to either side whether or not the other side is really expecting it to write anything at any given point in the protocol.
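As a purely illustrative example of such a protocol (none of this comes from the answer): if the client always sends exactly one newline-terminated request and then reads exactly one newline-terminated reply, neither side ever writes while the other isn't prepared to read.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Client side of a strict request/reply protocol: every request is
     * answered by exactly one reply line, so the server never writes
     * anything the client is not already waiting to read. */
    static int do_request(int fd, const char *request, char *reply, size_t replysz)
    {
        char line[512];
        int len = snprintf(line, sizeof line, "%s\n", request);

        if (len < 0 || (size_t)len >= sizeof line)
            return -1;
        if (write(fd, line, (size_t)len) != len)    /* send one request line */
            return -1;

        /* Read byte by byte until the reply's terminating newline. */
        size_t used = 0;
        while (used + 1 < replysz) {
            ssize_t n = read(fd, reply + used, 1);
            if (n <= 0)
                return -1;
            if (reply[used++] == '\n')
                break;
        }
        reply[used] = '\0';
        return 0;
    }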
Upvotes: 1