Adaptive Data Transmission in the Cloud


Authors

Wu, Wenfei
Chen, Yizheng
Durairajan, Ramakrishnan
Kim, Dongchan
Anand, Ashok
Akella, Aditya

Type

Technical Report

Publisher

University of Wisconsin-Madison Department of Computer Sciences

Abstract

Data centers provide resources for a broad range of services, such as web search, email, and web sites, each with different delay requirements. For example, web search should serve users' requests quickly, while data backup has no particular requirement on completion time. Different applications also introduce flows with very different properties (e.g., size and duration). The default transport protocol in data centers, namely TCP, treats all flows equally, forcing an equal share of the bottleneck network bandwidth. This fairness property leads to poor outcomes for time-sensitive applications. A better solution is to allocate more bandwidth to time-sensitive applications. However, the state-of-the-art approaches that do this all require forklift changes to data center networking gear; in some cases, substantial changes must also be made to end-system stacks and applications. In this paper, we argue that a simple modification to TCP can better meet the requirements of latency-sensitive applications in the data center, without modifying end-systems, applications, or networking gear. We motivate our design using measurements of real data center traffic, analytically derive the parameters to use in our proposed modification to TCP, and finally use extensive simulations in NS2 to show the benefits of our approach.
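The contrast the abstract draws between TCP's equal sharing and a latency-aware allocation can be illustrated with a minimal sketch. This is not the paper's mechanism; the flow names, weights, and capacity below are hypothetical, and the weighted split simply stands in for any scheme that favors time-sensitive flows at a bottleneck.

```python
# Hypothetical illustration: splitting a bottleneck link's capacity under
# TCP-style fair sharing versus a weighted scheme favoring urgent flows.

def fair_share(capacity, flows):
    """TCP-like fairness: every competing flow gets an equal slice."""
    share = capacity / len(flows)
    return {name: share for name in flows}

def weighted_share(capacity, flows):
    """Allocate bandwidth in proportion to each flow's weight."""
    total = sum(flows.values())
    return {name: capacity * w / total for name, w in flows.items()}

# A 1000 Mbps bottleneck shared by a latency-sensitive web-search flow
# and a bulk backup flow (weights are made up for illustration).
flows = {"web_search": 4.0, "backup": 1.0}
print(fair_share(1000.0, flows))      # each flow gets 500.0 Mbps
print(weighted_share(1000.0, flows))  # web_search: 800.0, backup: 200.0
```

Under fair sharing the short, deadline-driven flow is held to half the link; the weighted split lets it finish sooner while the backup still makes progress, which is the outcome the paper's TCP modification aims for.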

Citation

TR1780
