Reputation:
The AIMD (Additive Increase, Multiplicative Decrease) congestion-avoidance algorithm halves the congestion window when a loss is detected. But what experimental, statistical, or theoretical evidence is there to suggest that dividing by 2 is the most efficient choice (instead of, say, another numerical value), other than "intuition"? Can someone point me to a publication or journal paper that supports or investigates this claim?
Upvotes: 2
Views: 539
Reputation: 11638
All of the algorithms listed here
https://en.wikipedia.org/wiki/TCP_congestion_control#Algorithms
alter the congestion window in one form or another, and they all have varying results, which is to be expected.
Can someone point me to a publication or journal paper that supports this or investigates this claim?
Yang Richard Yang & Simon S. Lam investigate it in this paper:
http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf
We refer to this window adjustment strategy as general additive increase multiplicative decrease (GAIMD). We present the (mean) sending rate of a GAIMD flow as a function of α, β.
The authors parameterized the additive and multiplicative parts of AIMD and then studied them to see if they could be improved on for various TCP flows. The paper goes into a fair amount of depth on what they did and what the effects were. To summarize...
We found that the GAIMD flows were highly TCP-friendly. Furthermore, with β at 7/8 instead of 1/2, these GAIMD flows have reduced rate fluctuations compared to TCP flows.
If we believe the paper's conclusion, then there is no reason to believe that a factor of 2 is a magic bullet. Personally, I doubt there is a single best factor, because it depends on too many variables: protocol, types of flow, etc.
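To make the parameterization concrete, here is a minimal sketch (not from the paper's code, just an illustration of its notation) of the GAIMD window update: per loss-free RTT the window grows by α, and on loss it shrinks to β times its current size. Standard TCP AIMD is the special case α = 1, β = 1/2; the paper's TCP-friendly alternative uses β = 7/8 with a correspondingly smaller α.

```python
def gaimd_update(cwnd, loss, alpha=1.0, beta=0.5, min_cwnd=1.0):
    """Return the new congestion window (in segments) after one RTT.

    alpha: additive increase per RTT; beta: multiplicative decrease
    factor applied on loss. alpha=1, beta=0.5 gives classic TCP AIMD.
    """
    if loss:
        # Multiplicative decrease, never below the minimum window.
        return max(min_cwnd, beta * cwnd)
    # Additive increase in the absence of loss.
    return cwnd + alpha
```

With β = 7/8, a loss shrinks a 16-segment window to 14 instead of 8, which is the reduced rate fluctuation the quote above refers to.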
Upvotes: 1
Reputation: 1666
Actually the factor of 2 also occurs in another part of the algorithm: slow start, where the window is doubled every RTT. Slow start is essentially a binary search for the optimal value of the congestion window, where the upper bound is infinity.
When you exit slow start due to packet loss, it is natural to halve the congestion window (since the value from the previous RTT did not cause congestion); in other words, you revert the last iteration of slow start and then fine-tune with a linear search. This is the main reason for halving when exiting slow start.
However, the 1/2 factor is also used in congestion avoidance when the transfer is in steady state, long after slow start has ended. There is no good justification for this. I see it also as a binary search, but downwards, with a finite upper bound equal to the current congestion window; one could say, informally, that it is the opposite of slow start.
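The binary-search reading of slow start can be sketched as follows (a toy simulation, not a real TCP stack; `capacity` is a hypothetical link limit in segments): the window doubles every RTT until it overshoots, is then halved back to the last value that did not cause congestion, and growth continues linearly from there.

```python
def simulate(capacity, rtts):
    """Toy cwnd trace: exponential slow start, then halve on loss,
    then linear congestion avoidance."""
    cwnd, in_slow_start, history = 1, True, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd > capacity:
            # Loss detected: halve, i.e. revert the last doubling.
            cwnd = cwnd // 2
            in_slow_start = False
        elif in_slow_start:
            cwnd *= 2   # exponential probe (binary search upwards)
        else:
            cwnd += 1   # linear fine-tuning
    return history
```

For example, `simulate(10, 8)` yields `[1, 2, 4, 8, 16, 8, 16, 8]`-style overshoot only once in a real stack; here the trace is `[1, 2, 4, 8, 16, 8, 9, 10]`: the window doubles to 16, overshoots the capacity of 10, is halved back to 8 (the last safe value), and then grows linearly.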
You can also read Van Jacobson's 1988 paper "Congestion Avoidance and Control" (he is one of the main designers of TCP's congestion control); appendix D discusses exactly how the halving factor was chosen.
Upvotes: 1