Hi All!
So I'm running into a bit of a problem with a new Server 2012 Core Hyper-V deployment. The VMs' disks are painfully slow. Here's a brief overview of the setup:
+ 2x HP BL460c Gen8 blades in a C3000 enclosure w/ 2x Flex-10 modules
- Each server has 256GB RAM
- Each server has a HP FlexFabric 10Gb 2-port 554FLB Adapter and a HP Flex-10 10Gb 2-port 530M Adapter
- Each server has 2x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
+ A 6 node HP P4500 cluster with each node connected via two 10Gb NICs bonded
- Servers are connected to the SAN via iSCSI using the MS iSCSI Initiator and the HP LeftHand DSM for MPIO multipathing.
- Jumbo frames are configured for the iSCSI NICs in the server, switch, and P4500 nodes (confirmed by "ping -f -l 8000").
+ The two Server 2012 boxes are clustered w/ MS Failover Clustering.
- A quorum disk is provided by the SAN.
- A 4TB LUN is presented to the cluster nodes as a CSV. (CSV Cache is enabled and set to 512MB)
- Jumbo frames are configured for the physical NICs and logical MS teamed NICs for Migration and Cluster networks (confirmed by "ping -f -l 8000").
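One note on the jumbo-frame check above: `ping -f -l 8000` only proves the path passes an 8000-byte payload, not the full 9000-byte MTU. A tighter check (the IP below is just a placeholder for one of the P4500 nodes' iSCSI addresses) would be:

```shell
:: An 8972-byte ICMP payload + 20-byte IP header + 8-byte ICMP header = 9000 bytes,
:: so with -f (don't fragment) this only succeeds if the entire path really
:: carries 9000-byte frames end to end.
ping -f -l 8972 10.0.0.10
```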
So, I'm experiencing severely degraded performance with the VMs that reside on the CSV. Read speeds are actually pretty great, and any performance tool I run (e.g. ATTO, IOMeter, etc.) returns decent marks. I suspect those numbers are a bit misleading, though, since the tools use rather small file sizes and so hit the cache quite a bit. The VMs themselves are suffering badly. Since this is a new deployment of Hyper-V (we're trying to move over from vSphere), we've only migrated a few
machines to test performance and validate the config before everything gets moved over. The boxes that have been moved aren't performing anywhere near where they should be, especially given the gear they're running on.
A repeatable test I use to gauge the performance is simply a file copy of a 2.5GB file. Inside a VM, it starts off at ~100MB/s for a bit, quickly drops to ~20MB/s, and then varies from ~20MB/s down to a few KB/s. These results
show up on both thin- and thick-provisioned VHDXs, whether copying over the network, locally on the same disk, or across disks. If I'm directly on the Hyper-V node, I can copy the same file to the CSV at a steady ~115MB/s for the whole copy. Likewise, if I deploy a VM
to the Hyper-V node's local disks, I see ~115MB/s as well. It's only from within the VMs that reside on the CSV that performance collapses.
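For anyone who wants to reproduce the test with fewer cache effects than Explorer's copy dialog, here's a minimal sketch of a chunked copy that fsyncs at the end so the OS write cache can't mask the real sustained rate (paths and chunk size are just assumptions, not anything from my setup):

```python
import os
import time

def copy_throughput(src_path, dst_path, chunk_size=4 * 1024 * 1024):
    """Copy src to dst in fixed-size chunks, then fsync so buffered
    writes are forced to disk, and return the average rate in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            total += len(chunk)
        dst.flush()
        os.fsync(dst.fileno())  # flush the page cache before stopping the clock
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed
```

Running it against the same 2.5GB file from inside a guest and then from the host against the CSV should show the same gap I'm describing, without the bursty ramp-up that cached copies show.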
Since I can copy the same file from the node to the same CSV the VMs reside on at full speed, I'd assume the iSCSI side of things is working, yet the VMs clearly can't write that same file to their VHDXs at anywhere near that rate.
I'm kind of at a loss here and pulling my hair out (plus a few bruises on my forehead from beating my head against the wall). Any help would be greatly appreciated!
Thanks in advance!
Louis