Dell R510 with PERC H700: 8 virtual disks in total. System drive/OS on RAID 1; the other 7 VDs sit on 4 x 600 GB SAS drives as RAID 10.
Host has 48 GB RAM and dual Xeon E5645 CPUs.
Read Policy: Adaptive Read Ahead
Write Policy: Write Back
Disk Cache Policy: Disabled
OS: W2K8 R2 ENT, Hyper-V + failover clustering
Each VM is hosted in a separate volume on the RAID 10 array, with SIOS DataKeeper Cluster Edition replicating the 7 volumes to two identical R510s.
VMs: one Exchange 2010, one SQL 2005, a Citrix server, and a file and print server (a mix of W2K8 and W2K3 64-bit guest OSes), plus a couple of other application servers.
Problem:
I am seeing very high average disk queue lengths in all 7 VMs (maxing out at 100 in Performance Monitor) whenever I do large-ish file transfers (2 GB plus). By contrast, if I transfer files between the Hyper-V hosts on their RAID 1 disks, outside the hypervisor layer, I see pretty respectable throughput and the average queue lengths never max out.
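For reference, this is roughly how I've been sampling the counter inside each guest. It's a quick Python wrapper around typeperf; the _Total instance, 1-second interval, and 60-sample window are just what I happened to use, nothing significant:

```python
# Sample "Avg. Disk Queue Length" via typeperf and summarize the run.
# Counter instance and sample count are arbitrary choices for illustration.
import csv
import subprocess

COUNTER = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"

proc = subprocess.run(
    ["typeperf", COUNTER, "-si", "1", "-sc", "60"],
    capture_output=True, text=True, check=True,
)

samples = []
for row in csv.reader(proc.stdout.splitlines()):
    if len(row) < 2:
        continue
    try:
        samples.append(float(row[1]))
    except ValueError:
        continue  # skip the CSV header and typeperf status lines

print(f"max queue length: {max(samples):.1f}")
print(f"avg queue length: {sum(samples) / len(samples):.1f}")
# Usual rule of thumb: sustained values much above ~2 per spindle
# suggest the underlying array is saturated.
```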
Other symptoms: the Exchange VM slows right down if anyone emails attachments of, say, 7 MB or more, and the file and print server VM takes a long time to enumerate folder structures.
I've tried updating the BIOS, the RAID firmware, and the Broadcom network adapters (against my better judgement I've got a Broadcom LACP team up to my switch for VM LAN access). I'm also seeing pretty sluggish throughput with iperf from other LAN computers into the Hyper-V VMs.
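For the iperf tests, I'm doing essentially this (a rough Python sketch; the hostnames and addresses below are placeholders, and it assumes an iperf server is already listening on each target):

```python
# Run the same iperf test against the physical host and against a guest
# so the two throughput numbers can be compared side by side.
# Target names/addresses are placeholders, not my real ones.
import subprocess

TARGETS = {
    "hyperv-host": "192.168.1.10",   # physical host, outside the hypervisor
    "exchange-vm": "192.168.1.20",   # guest behind the Broadcom LACP team
}

for name, addr in TARGETS.items():
    # -t 30 runs a 30-second TCP test; assumes "iperf -s" on the target
    result = subprocess.run(
        ["iperf", "-c", addr, "-t", "30"],
        capture_output=True, text=True,
    )
    print(f"--- {name} ({addr}) ---")
    print(result.stdout.strip())
```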
Edited to add: I have migrated one VM to a Hyper-V node in the cluster with no other VMs running and given it a lot more RAM, and I'm still seeing the same problem.
I'm kind of running out of ideas here; any help appreciated.