V. Jacobson, "Congestion avoidance and control," in Proc. SIGCOMM symp. Communications Architecture and Protocols, 1988, pp. 314-329.
This paper is mainly about a set of proposed solutions for avoiding congestion collapse, as first observed in 1986 on the LBL/UC Berkeley network path. The authors introduce seven new algorithms which, they claim, together considerably improve the robustness of networks against congestion collapse, and they describe five of them in some detail. The key goal of these algorithms is to make sure the network reaches its equilibrium state and that the "conservation of packets" principle is followed once it is there. This principle states that, under stable conditions, a new packet should not be injected into the network until an old one has left.
The slow-start algorithm used in today's TCP was first proposed in this paper, along with a load-dependent estimator of the round-trip time that performs much better than the old, load-agnostic version. The variability of the RTT increases with load, and ignoring this fact can cause spurious retransmission of packets. The congestion avoidance algorithm proposed here is again AIMD, as in our last paper, although the designers chose the additive-increase part heuristically; in the last paper we saw a more rigorous justification of why AIMD is a better candidate than other linear congestion avoidance policies.
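To make the window dynamics concrete, here is a minimal Python sketch of how slow start and AIMD congestion avoidance adjust a congestion window (cwnd), measured in units of one segment. The class and variable names and the initial ssthresh value are my own illustrative choices rather than the paper's code, and the loss reaction is simplified to the timeout case.

    # Illustrative sketch of slow start + AIMD congestion avoidance.
    # The window is tracked in units of one maximum-size segment; the
    # initial ssthresh value is arbitrary, chosen only for illustration.
    class CongestionWindow:
        def __init__(self, ssthresh=64.0):
            self.cwnd = 1.0           # slow start begins with one segment
            self.ssthresh = ssthresh  # threshold between the two regimes

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                # Slow start: one segment per ACK, so the window
                # roughly doubles every round trip.
                self.cwnd += 1.0
            else:
                # Congestion avoidance: additive increase of about one
                # segment per round trip.
                self.cwnd += 1.0 / self.cwnd

        def on_timeout(self):
            # Multiplicative decrease: halve the threshold and restart
            # slow start from one segment, as on a retransmit timeout.
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = 1.0

Calling on_ack once per acknowledged segment reproduces the familiar exponential-then-linear window growth, and on_timeout supplies the multiplicative-decrease half of AIMD.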
As seen in figures 8 and 9, different end-to-end users with slightly different TCP implementations can experience very different connection quality from the network. The consequences of this phenomenon, especially in terms of fairness, might be interesting to discuss. Interestingly, the authors also predicted one of the main problems TCP faces in today's Internet architecture: misinterpreting the binary feedback signal, for example over wireless (or other unreliable link-layer) connections. TCP assumes packets are being lost because of congestion and reduces its window size, when it should not. This paper is very interesting because it walks the reader through the initial proposed solutions to the congestion collapse problem, many of which still exist in the current stack.
Actually, the congestion collapse was observed on the ARPANET backbone, where TCP connections were not adequately adapting themselves in the presence of losses. The slow start and congestion window concepts, coupled with improved RTT estimation, are important contributions. That said, there isn't much justification for some of the decisions made, such as the factor of 4 on the deviation term in the RTT timeout calculation. There is always much to reconsider in using such parameter settings in new environments or as the network and its technology evolve.
P.S. Try to stay up to date on the blogging!
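For reference, the timeout estimator being discussed above can be sketched in a few lines. This is only an illustration assuming the commonly cited gains of 1/8 for the smoothed RTT and 1/4 for the deviation, with the retransmit timeout set to srtt + 4 * rttvar; the paper itself works in scaled integer arithmetic rather than floats, and the function name here is my own.

    # Rough sketch of the smoothed-RTT / mean-deviation estimator and the
    # retransmit timeout with the factor-of-4 safety margin. The gains of
    # 1/8 and 1/4 are assumed; the paper uses fixed-point arithmetic.
    def update_rto(srtt, rttvar, sample, alpha=1/8, beta=1/4):
        """Return updated (srtt, rttvar, rto) given a new RTT sample in seconds."""
        err = sample - srtt
        srtt = srtt + alpha * err                     # exponentially weighted mean
        rttvar = rttvar + beta * (abs(err) - rttvar)  # mean deviation, not variance
        rto = srtt + 4 * rttvar                       # the questioned factor of 4
        return srtt, rttvar, rto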
Hmm, I guess the second paragraph of the paper is not very clear:
ReplyDelete"In October of ’86, the Internet had the first of what became a series of ‘congestion col-
lapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated
by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps."
According to Wikipedia, the first congestion collapse was observed on the NSFNET phase-I backbone?