To understand this article, readers should have a basic knowledge of the OSI model (see http://www.iebmedia.com/index.php?id=4582&parentid=63&themeid=255&showdetail=true). If you do not, I encourage you to skip the explanations I provided about the protocols and about how the IOmeter tests were performed, and jump straight to the “Conclusions after processing the IOmeter benchmark results” section.
There are multiple ways to test the performance of a computer network. I decided to take the hard way and not simply run a network benchmark tool with predefined tests (tests which possibly do not match the regular traffic of an enterprise network infrastructure).
The most common high-volume, data-intensive network operations are:
- web-based
- file sharing
- database connections
Please note I am referring to network operations that are intensive in terms of data volume, not to those that are most intensive in terms of frequency (DNS, ping, SNMP …).
Please also note I am referring to enterprise network traffic, not to residential broadband or ISP network traffic (BitTorrent, RTP, Skype, VPN …).
In a predominantly Microsoft infrastructure we basically deal with HTTP/HTTPS, SMB and MS SQL traffic. Each of these protocols has its own way of working and, of course, the way the data is handled (processed) before being passed to the network adapter has performance implications. However, at the lower layers the data behaves in pretty much the same way.
IOmeter is the unquestioned IOPS benchmark tool. Even though the project is no longer actively developed and the last updates are years old, it has reached maturity in terms of functionality. IOmeter effectively lets us put HTTP/HTTPS, SMB and MS SQL traffic on “steroids”, push it to the maximum and in this way find the limits of the computer network. This will help figure out what Microsoft means by low / moderate / high / very high / extremely high Azure network bandwidth.
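One practical detail to keep in mind when interpreting the numbers: IOmeter reports throughput in megabytes per second, while the Azure bandwidth classes below are expressed in megabits per second, so a factor-of-eight conversion is needed before comparing them. A minimal sketch of that conversion (the function name is mine, not part of IOmeter):

```python
def iometer_mb_per_sec_to_mbps(mb_per_sec: float) -> float:
    """Convert an IOmeter throughput reading (megabytes/second, decimal)
    into megabits/second, the unit used for the Azure bandwidth classes.
    Protocol and framing overhead are ignored."""
    return mb_per_sec * 8


# Example: a reading of about 61 MB/s corresponds to roughly 490 Mbps on the
# wire, which is in the range measured below for the Moderate class.
print(iometer_mb_per_sec_to_mbps(61.4))   # -> 491.2
```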
What sequence of steps did I perform to write this article?
- First, I had to analyze the HTTP/HTTPS, SMB and MS SQL protocols to see how exactly they behave over the network. To run such an analysis it was necessary to build an isolated network setup; I ended up with the following Hyper-V configuration:
- two Hyper-V VMs (named: OPIOA3_01 and OPIOA3_02);
- each VM was configured with 4 vCPUs, 7168 MB of vRAM and one 10 Gbps virtual network adapter;
- both VMs were connected to a Hyper-V internal network virtual switch.
(a detailed description of this step can be found here)
- I defined three IOmeter benchmark tests that reproduce the HTTP/HTTPS, SMB and MS SQL network traffic as closely as possible.
(a detailed description of this step can be found here)
- To run the benchmark tests, I created five sets of virtual machines in Microsoft Azure (one set for each network bandwidth size: low / moderate / high / very high / extremely high). Each set was composed of two VMs: one running IOmeter and acting as the server (sending data), the other running Dynamo.exe and acting as the client (receiving data). The two VMs in each set were placed in the same Azure availability set, to guarantee they run on separate Azure physical hosts and the network traffic passes through the physical Azure network infrastructure. The results of the network benchmark tests were collected using Performance Monitor and IOmeter. To minimize any influence on the results, the Windows Firewall was deactivated for the duration of all these tests and no additional applications were run in parallel.
(a detailed description of this step can be found here)
- At the end I put all the test numbers together, interpreted them and drew the final conclusions.
(a detailed description of this step can be found here, here and here)
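The exact IOmeter access specifications behind the three tests are given in the detailed step descriptions linked above; as a rough illustration of the idea, the sketch below models the three traffic types with assumed transfer sizes and read/write mixes (these values are my own illustrative assumptions, not the ones used in the actual tests):

```python
# Illustrative, assumed IOmeter-style access specifications approximating the
# three traffic types; the real values are in the linked step description.
ACCESS_SPECS = {
    # mostly reads of medium-sized sequential blocks, as when a web server streams responses
    "HTTP/HTTPS-like": {"transfer_size_kb": 64,  "read_pct": 90, "random_pct": 0},
    # large sequential transfers in both directions, as with file copies over SMB
    "SMB-like":        {"transfer_size_kb": 256, "read_pct": 50, "random_pct": 0},
    # small, mostly random I/O dominated by reads, as with OLTP database pages
    "MS SQL-like":     {"transfer_size_kb": 8,   "read_pct": 70, "random_pct": 100},
}

for name, spec in ACCESS_SPECS.items():
    print(f"{name:15s} {spec['transfer_size_kb']:>4} KB, "
          f"{spec['read_pct']}% read, {spec['random_pct']}% random")
```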
Conclusions after processing the IOmeter benchmark results
Azure Low network bandwidth:
- Maximum: ≈20 to 30 Mbps
- Average: ≈14 Mbps
- Latency: >1 ms
- Ideal for: DNS, LDAP and in general all protocols that are not data intensive (with smaller network packets the latency improves to <1 ms).
Azure Moderate network bandwidth:
- Maximum: ≈500 Mbps
- Average: ≈491 Mbps
- Latency: ≈1 ms for large and <0.5 ms for smaller network packets
- Ideal for: any type of network traffic and VMs with average compute workloads
Azure High network bandwidth:
- Maximum: ≈1 Gbps
- Average: ≈930 Mbps
- Latency: ≤0.5 ms
- Ideal for: any type of network traffic and production VMs with moderate to high usage
Azure Very High network bandwidth:
- Maximum: ≈3 Gbps
- Average: ≈2.1 to 2.5 Gbps
- Latency: <0.5 ms
- Ideal for: any type of network traffic and heavy production VMs
Azure Extremely High network bandwidth:
- Maximum: ≈3 Gbps
- Average: ≈2.1 to 2.5 Gbps
- Latency: <0.5 ms
- Ideal for: any type of network traffic and heavy production VMs
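To put these averages into perspective, here is a quick back-of-the-envelope calculation of how long pushing 1 GB of data would take at the average bandwidth measured for each class (protocol overhead ignored):

```python
# Rough time to transfer 1 GB of data at the average bandwidth measured for
# each Azure network bandwidth class (protocol overhead ignored).
PAYLOAD_MBIT = 8000   # 1 GB of data = 8000 megabits (decimal)

AVERAGE_MBPS = {
    "Low": 14,
    "Moderate": 491,
    "High": 930,
    "Very High / Extremely High": 2300,   # middle of the 2.1 to 2.5 Gbps range
}

for name, mbps in AVERAGE_MBPS.items():
    print(f"{name:27s} ~{PAYLOAD_MBIT / mbps:6.1f} seconds per GB")
```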
Azure Very High vs Azure Extremely High: even though the benchmark results indicated some differences, I am not convinced that an upgrade from Very High to Extremely High will actually make a difference if network bottlenecks are encountered with the Azure Very High network bandwidth. In reality, the performance differences will be too small to be noticeable.
In such a case I would advise upgrading to an Azure Very High RDMA-capable VM, but keep in mind that the second network interface for remote direct memory access (RDMA) does not provide general TCP/IP connectivity (it can be used only by applications that can “talk” over the Network Direct interface and the MPI protocol).
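To give a sense of what “talking” MPI means in practice, here is a minimal sketch of an MPI point-to-point exchange using the mpi4py Python bindings (my own illustration, not part of the article's setup; on the RDMA-capable Azure sizes you would normally use a native MPI stack such as Intel MPI or MS-MPI). The point is simply that the data has to flow through the MPI layer rather than through ordinary TCP/IP sockets:

```python
# Minimal MPI point-to-point exchange (illustrative sketch only).
# Run with something like: mpiexec -n 2 python mpi_ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # 0 on one process, 1 on the other

if rank == 0:
    # rank 0 pushes a 1 MB payload to rank 1 over the MPI fabric
    comm.send(b"x" * (1024 * 1024), dest=1, tag=0)
    print("rank 0: payload sent")
elif rank == 1:
    data = comm.recv(source=0, tag=0)
    print(f"rank 1: received {len(data)} bytes")
```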
If Azure network performance interests you, you certainly also understand the importance of storage performance, in which case I recommend reading the article “Microsoft Azure IaaS – storage benchmarks, comparison with on-premises”.