Reputation: 23
I am writing a C++ server application running on Linux. An iOS client app opens a TCP connection to the server, and the server starts sending messages on the connection. When the iOS app is put in the background, e.g. by the user switching to another app, the app stops reading data from the socket, but the socket stays open on iOS. The server keeps sending until the socket's send buffer and the peer's TCP window fill up, and then it hangs in the send() call. Is there a way to detect that send() would hang before calling it? Or a way to get out of the call when it does hang, e.g. by setting a timeout?
Upvotes: 1
Views: 639
Reputation: 118300
You can put the sending socket in non-blocking mode: use fcntl() with F_SETFL to set the O_NONBLOCK flag on the socket.
Your properly-written code should already be checking the return value from send(), since even with blocking sockets send() may indicate that fewer bytes were written than were requested. In non-blocking mode, send() returns immediately instead of blocking; when nothing can be written it returns -1 with errno set to EAGAIN or EWOULDBLOCK. Then use poll() or select() to find out when the socket can be written to next.
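A minimal sketch of that approach, assuming an already-connected descriptor named sock; the helper names and the timeout policy are illustrative, not part of the answer:

```cpp
#include <cerrno>
#include <cstddef>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

// Put an already-connected socket into non-blocking mode.
bool make_nonblocking(int sock) {
    int flags = fcntl(sock, F_GETFL, 0);
    return flags != -1 && fcntl(sock, F_SETFL, flags | O_NONBLOCK) != -1;
}

// Try to send len bytes; on EAGAIN/EWOULDBLOCK, wait with poll() until the
// socket becomes writable again (or the timeout expires) before retrying.
bool send_with_timeout(int sock, const char* data, size_t len, int timeout_ms) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n > 0) {
            sent += static_cast<size_t>(n);
            continue;
        }
        if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            // Send buffer is full, e.g. the backgrounded iOS client stopped reading.
            struct pollfd pfd = {sock, POLLOUT, 0};
            int ready = poll(&pfd, 1, timeout_ms);
            if (ready <= 0)
                return false;  // timed out (or poll failed): caller can drop the client
            continue;          // writable again, retry send()
        }
        return false;          // real send error (connection reset, etc.)
    }
    return true;
}
```

With this, a client that stops reading shows up as a timeout after timeout_ms milliseconds instead of a server thread stuck in send() indefinitely.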
Upvotes: 1
Reputation: 28872
You can use a simple select() or poll() with a zero timeout, passing your single socket as a write file descriptor/handle.
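For instance, a sketch of such a check (the descriptor name and return convention are illustrative): it asks whether the socket can currently accept more outgoing data, i.e. whether send() would be able to write without blocking on a full buffer:

```cpp
#include <poll.h>

// Returns true if sock can accept more outgoing data right now.
bool can_send_now(int sock) {
    struct pollfd pfd = {sock, POLLOUT, 0};
    // Zero timeout: poll() reports the current state and returns immediately.
    return poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLOUT) != 0;
}
```

Note that this is only a point-in-time check: writable means some buffer space is free, so a large blocking send() could still block partway through. It is therefore usually combined with the non-blocking approach above.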
Upvotes: 0
Reputation: 310869
Put the socket into non-blocking mode and detect EAGAIN/EWOULDBLOCK.
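A compact sketch of the same idea; here the per-call MSG_DONTWAIT flag (a Linux alternative to setting O_NONBLOCK on the whole socket) stands in for switching the socket's mode:

```cpp
#include <cerrno>
#include <cstddef>
#include <sys/socket.h>

// Attempt a single non-blocking send. Sets would_block to true when the
// kernel send buffer is full and a blocking send() would have hung.
ssize_t try_send(int sock, const char* data, size_t len, bool& would_block) {
    ssize_t n = send(sock, data, len, MSG_DONTWAIT);
    would_block = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));
    return n;
}
```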
Upvotes: 0