Hello,
Question: What throughput should I expect through a VMSwitch connected to a 10 Gbps NIC?
Problem: I'm configuring a new Hyper-V R3 cluster with dual 10 Gbps NICs (HP DL380p G8 / HP Ethernet 10Gb 2-port 560SFP+ Intel adapter) and testing throughput. A direct jperf test between the two physical servers reaches about 9.8 Gbps, and about 19 Gbps across the LACP/Hash team, so far so good. However, as soon as I create a VMSwitch on top of the team (or on a single NIC), throughput drops to about 5 Gbps. VMQ is in use (I also tried disabling it, with almost the same result), and I tested without the team as well, with the same poor performance. I can see one core (the NIC's RSS base CPU core) being overutilized (the Windows GUI even freezes at peak), but shouldn't VMQ have spread that load across cores? What is your experience? Can you utilize the full bandwidth through a VMSwitch?
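For reference, this is roughly how I've been inspecting the VMQ/RSS state on the host (a sketch only; the adapter name "Ethernet 1" and the processor values are placeholders for my setup, not a recommendation):

```powershell
# Show VMQ state on the physical NICs: enabled flag, base processor,
# processor budget and allocated receive queues
Get-NetAdapterVmq |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues

# Show which VMQ queues are actually allocated and to which VM network adapters
Get-NetAdapterVmqQueue

# RSS settings still matter for host-side traffic once the NIC is bound to a vSwitch
Get-NetAdapterRss

# Example adjustment (hypothetical values): move the VMQ base processor
# off core 0 so the default queue does not land on the same core as
# everything else
Set-NetAdapterVmq -Name "Ethernet 1" -BaseProcessorNumber 2 -MaxProcessors 4
```

On hyper-threaded hosts, note that `BaseProcessorNumber` counts logical processors, so even-numbered values correspond to physical cores.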
Thank you in advance!
Regards,
Vojtech Fiurasek