Running an HP DL380 G5 with a single quad-core CPU and 8 GB of RAM.
The OS is on a RAID-1 pair of 146 GB 10k RPM SAS drives.
VMs boot from an EqualLogic SAN (24 x 900 GB 10k RPM SAS drives) via iSCSI at 1 Gb/sec (this is a test lab).
The host OS is Server 2012 with the Hyper-V role (all updates applied).
Guest VMs are all Server 2012.
I have been running bandwidth tests between VMs on the same host and have noticed that, no matter what I do, I cannot get the vSwitch to push more than about 2.0 Gb/sec between VMs. I am using iperf to generate the test traffic.
I have tried 2 VMs, 3 VMs pushing to 1, and 2 pushing to 2, but no matter how I structure the test, the aggregate bandwidth across all VMs never exceeds ~2 Gb/sec over the vSwitch.
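For reference, here is roughly how I'm invoking it. A single iperf stream is often limited by one vCPU's worth of packet processing, so parallel streams help show whether the ceiling is per-stream or per-switch (the address and stream count below are illustrative):

    # On the receiving VM: start an iperf server
    iperf -s

    # On the sending VM: run 8 parallel TCP streams for 30 seconds
    iperf -c 192.168.1.10 -P 8 -t 30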
Supposedly the vSwitch presents itself as a virtual 10 Gb/sec adapter, so I'm wondering why I can only push about 20% of that. I'm using iperf because it tests bandwidth without any disk I/O, which rules out the guest storage stack and the SAN as the bottleneck.
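For completeness, the switch and vCPU configuration can be sanity-checked from the host with PowerShell (the VM names below are placeholders):

    # List virtual switches and their types (external/internal/private)
    Get-VMSwitch | Format-List Name, SwitchType, NetAdapterInterfaceDescription

    # Confirm which switch each test VM's adapter is connected to
    Get-VMNetworkAdapter -VMName TestVM1, TestVM2 | Format-Table VMName, SwitchName, Status

    # vCPU count per VM; a single TCP stream is typically processed on one vCPU
    Get-VMProcessor -VMName TestVM1, TestVM2 | Format-Table VMName, Count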
Can anyone suggest how to get more performance out of the vSwitch?