V. Vasudevan, A. Phanishayee, H. Shah, E. Krevat, D. G. Andersen, G. R. Ganger, G. A. Gibson, and B. Mueller. Safe and Effective Fine-grained TCP Retransmissions for Datacenter Communication. In SIGCOMM '09: Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, 2009.
The conventional minimum TCP retransmission timeout (RTOmin of roughly 200 ms) causes serious performance degradation in high-bandwidth, low-latency network environments such as data centers, where round-trip times are on the order of 10-100 microseconds. The so-called TCP incast collapse occurs when a client issues barrier-synchronized requests for small blocks of data to many servers in parallel. The authors propose a solution based on Linux high-resolution timers (hrtimers), which enables microsecond-resolution RTT and RTO estimation and significantly improves performance under incast workloads.
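To make the mismatch concrete, here is a back-of-the-envelope sketch (my own numbers, not taken from the paper) of how long a single RTOmin stall is relative to a datacenter-scale RTT:

```python
# Rough arithmetic (assumed values): a single 200 ms RTOmin stall idles the
# bottleneck link for thousands of datacenter round-trip times.
rtt_us = 100            # assumed datacenter round-trip time, in microseconds
rto_min_us = 200_000    # conventional RTOmin of 200 ms, in microseconds
print(f"one RTOmin stall = {rto_min_us / rtt_us:.0f} RTTs of idle time at the bottleneck")
```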
The authors first show clearly, through simulations, a real implementation, and traces gathered from a larger-scale storage node, that microsecond granularity is needed to solve the incast problem in future, faster data centers. They emphasize that RTO estimation should be done at the same timescale as the RTTs, and that a coarse minimum bound on the RTO results in reduced performance. I found it very interesting that, with minor modifications to the TCP stack and the currently available generic time-of-day (GTOD) high-resolution timing framework, the authors demonstrate the feasibility of achieving this precision.
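As a rough illustration of what RTO estimation at the RTT's own timescale looks like, the following is a minimal sketch of the standard RFC 6298 estimator kept in microseconds with no coarse RTOmin floor; the class name, constants, and sample RTTs are my own assumptions, and this is not the authors' kernel patch:

```python
# Minimal sketch of RFC 6298-style RTO estimation kept in microseconds,
# with the conventional RTOmin clamp removed so the timeout tracks
# datacenter-scale RTTs. Names and sample values are hypothetical.

class MicrosecondRtoEstimator:
    def __init__(self, k=4, alpha=1 / 8, beta=1 / 4):
        self.k = k          # RTTVAR multiplier from RFC 6298
        self.alpha = alpha  # gain for the smoothed RTT
        self.beta = beta    # gain for the RTT variance
        self.srtt = None    # smoothed RTT, in microseconds
        self.rttvar = None  # RTT variance, in microseconds

    def update(self, rtt_us):
        """Feed one RTT sample (microseconds); return the new RTO (microseconds)."""
        if self.srtt is None:
            self.srtt = rtt_us
            self.rttvar = rtt_us / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt_us)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt_us
        # No 200 ms floor here: the timeout stays on the same timescale as the RTT.
        return self.srtt + self.k * self.rttvar

est = MicrosecondRtoEstimator()
for sample in (120, 150, 90, 200):   # hypothetical datacenter RTTs in microseconds
    print(f"RTT {sample} us -> RTO {est.update(sample):.0f} us")
```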
The main hazards of removing RTOmin are spurious retransmissions and faulty interactions with clients that use delayed acknowledgments (typically around 40 ms), which can cause unnecessary timeouts. In two experimental setups the authors evaluate these possible drawbacks, showing that in their specific configuration removing RTOmin has no performance effect in wide-area networks, and only a limited but noticeable effect when delayed ACKs are not disabled.
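A small sketch of the delayed-ACK hazard, again with assumed numbers: with microsecond RTTs the estimated RTO sits far below a receiver's 40 ms delayed-ACK timer, so a sender waiting on a delayed ACK may time out and retransmit spuriously:

```python
# Hypothetical values illustrating the mismatch between a fine-grained RTO
# and a receiver's delayed-ACK timer.
srtt_us, rttvar_us = 150, 30          # assumed smoothed RTT and variance, microseconds
rto_us = srtt_us + 4 * rttvar_us      # RFC 6298 formula with no RTOmin floor
delayed_ack_us = 40_000               # common delayed-ACK timer of 40 ms, in microseconds
print(f"estimated RTO = {rto_us} us, but a delayed ACK may be held back for {delayed_ack_us} us")
if rto_us < delayed_ack_us:
    print("the sender would time out and retransmit spuriously while waiting for the delayed ACK")
```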
It would be interesting to discuss whether these experiments are enough to capture the full range of negative effects that such aggressive, fine-grained timeout calculation can introduce. For example, if a noticeable fraction of wide-area servers switched to this kind of fine-grained strategy, what would happen in terms of fairness and resource usage within the routers?
I vote for keeping this paper in the syllabus since it provides a clear overview of the TCP incast problem and the proposed solution.