Reputation: 35
I am not quite clear on some of the detailed mechanisms of TCP and sockets.
One client connects to a server through TCP and sends data to the server. What happens if the speed of sending is far greater than the speed of processing? For example, if the client sends 1 MiB per second, but the server can only process 1 KiB per second, will it cause a system memory crash?
I know there's a receive buffer size setting in the sockets API:
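A minimal sketch of what I mean (sockfd is a placeholder for my connected socket):

```c
#include <sys/socket.h>

/* sockfd is an already-created TCP socket */
void set_rcvbuf(int sockfd) {
    int rcvbuf = 64 * 1024;  /* 64 KiB, an arbitrary value */
    setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}
```

- What if I set the buffer size but the data is flooding?
- What if I don't set the receive buffer size?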
Upvotes: 3
Views: 3313
Reputation: 7467
In short, when the server can handle only 1 KiB per second, the client will not get the opportunity to send 1 MiB per second. This is for two reasons:
It would not make sense for the client to send more data than the advertised window size: the server is going to drop any segments that fall outside its window anyway, so the client would just have to resend them if it ignored the window size.
Buffers are in place at different layers. The buffers you can set via socket options are socket-level buffers; they have no direct control over the TCP window size. If you don't set them, you get default buffer sizes at the socket level. TCP queues segments depending on whether out-of-order packets within the window are received. When data arrives in order, TCP hands it up to the socket level, and only then is it pushed into the socket-level buffers, and only for the amount of space available there. If TCP is not able to push all the data it has received into the socket buffer, this influences the algorithm that calculates the TCP window size: that calculation is based on the number of bytes left free in the TCP buffers.
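As a small illustration (a sketch of my own, with error handling mostly omitted), you can query the effective socket-level receive buffer size with getsockopt(); the advertised TCP window can never exceed the free space in this buffer:

```c
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);

    /* The kernel reports the effective socket-level receive buffer.
     * TCP never advertises a window larger than the free space here. */
    if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("socket receive buffer: %d bytes\n", rcvbuf);
    return 0;
}
```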
Nothing will crash, neither the server nor the client.
At the client side, the one that could send at high speed, similar things happen: based on the server's advertised window size, TCP sees that it cannot put much data on the wire, so the client's send buffer fills up. Once it is full, the client socket will either block in send() (blocking mode) or return an error indicating that the call would block (non-blocking mode).
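As an illustration of the non-blocking case, a minimal sketch (the pump() helper and its setup are my own, hypothetical): when the kernel send buffer is full because the receiver's window is exhausted, send() fails with EAGAIN/EWOULDBLOCK and the client can back off:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Push n bytes on a non-blocking socket, backing off when the
 * kernel send buffer is full (i.e. the receiver can't keep up). */
static void pump(int sockfd, const char *buf, size_t n) {
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_NONBLOCK);
    size_t off = 0;
    while (off < n) {
        ssize_t sent = send(sockfd, buf + off, n - off, 0);
        if (sent > 0) {
            off += (size_t)sent;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            usleep(1000);   /* buffer full: back off, or wait with poll()/select() */
        } else {
            break;          /* real error */
        }
    }
}
```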
So this does not just say what happens; I have also tried to explain why those things happen and how it is implemented.
Upvotes: 6
Reputation: 19168
There are congestion control algorithms at the network layer for handling these cases, and at the transport layer (TCP) certain policies, such as sliding-window flow control, are used to avoid this scenario.
Taking these points together, I hope you have got an idea of what can happen if the client tries to swamp the server.
Actions such as adjusting/sliding the window (at the transport layer), jitter control, and load shedding (at the network layer) take place if: (i) the network/router is flooded with requests, or (ii) the flow of data between two processes at the transport layer (TCP) is irregular or erroneous.
Upvotes: -1
Reputation: 310980
What happens if the speed of sending is far greater than the speed of processing? For example, if the client sends 1 MiB per second, but the server can only process 1 KiB per second …
If the sender is in blocking mode, it will block if it gets too far ahead of the receiver.
If it's in non-blocking mode, send() will return -1 with errno == EAGAIN/EWOULDBLOCK.
Will it cause a system memory crash?
No.
I know there's a receive buffer size setting in the sockets API:
- What if I set the buffer size but the data is flooding?
The receiving host will tell the sender to stop sending when the receive buffer is full.
- What if I don't set the receive buffer size?
You'll get a default size.
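To see this behaviour end to end, a sketch of a deliberately slow receiver (connfd is assumed to be an already-accepted connection): its kernel receive buffer fills up, TCP advertises a zero window, and the fast sender simply stalls.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Deliberately slow receiver: ~1 KiB per second. The kernel buffer
 * fills, the advertised window drops to zero, and the sender blocks
 * (or sees EWOULDBLOCK); memory use stays bounded on both sides. */
void slow_reader(int connfd) {
    char buf[1024];
    for (;;) {
        ssize_t n = recv(connfd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;      /* peer closed the connection, or error */
        sleep(1);       /* simulate 1 KiB/s processing */
    }
}
```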
Upvotes: 2