Hi
I'm looking at a setup that's as follows:
2 Cisco SG300-28 switches.
2 HP 2920 switches for iSCSI (dedicated switches used only for iSCSI).
Standalone Windows Server 2012 R2 Hyper-V host SH1 (hosts the DC for the cluster): 1 quad-port Broadcom NIC, with 2 ports teamed for VM traffic and 2 for management traffic. All NICs are 1 Gbps.
Cluster host 1, Windows Server 2012 R2 Hyper-V host CL1: 1 quad-port Broadcom NIC with 2 ports teamed for management, plus 2 quad-port Intel NICs with 2 ports for iSCSI (MPIO), 2 ports teamed for cluster traffic,
2 ports teamed for Live Migration traffic, and 2 ports teamed for VM traffic. All NICs are 1 Gbps.
Cluster host 2, Windows Server 2012 R2 Hyper-V host CL2: identical to CL1 - 1 quad-port Broadcom NIC with 2 ports teamed for management, plus 2 quad-port Intel NICs with 2 ports for
iSCSI (MPIO), 2 ports teamed for cluster traffic, 2 ports teamed for Live Migration traffic, and 2 ports teamed for VM traffic. All NICs are 1 Gbps.
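For context, the teams on the cluster hosts were created roughly along these lines (the team and adapter names below are placeholders, not the exact names used on the hosts):

# Rough example of how one of the CL1/CL2 teams was built.
# "VMTeam", "Intel1-Port3" and "Intel2-Port3" are placeholder names.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Intel1-Port3","Intel2-Port3" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic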
One team member from each host connects to each switch. So, for example, on CL1 one NIC from the management team connects to a port on SG300 switch 1 and the other connects to SG300 switch 2. Similarly,
on CL2, one NIC from the management team connects to a port on SG300 switch 1 and the other connects to SG300 switch 2.
The SG300s have 3 VLANs: VLAN 1 (management and VM data - no choice here), VLAN 6 (Live Migration), and VLAN 7 (cluster traffic). The SG300s are not connected to each other directly; each has one port connected
to the HP core switch (a 10/100 switch that I do not have access to), i.e. SG300-1 >> HP Core and SG300-2 >> HP Core.
VLANs 6 and 7 are not trunked across to the HP core.
NIC teaming on all adapters is set to Switch Independent and Dynamic.
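To confirm that, this is roughly how I check the team settings on each host:

# Show teaming mode and load balancing algorithm for every team on the host
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm, Status

# Show which physical NICs belong to which team
Get-NetLbfoTeamMember | Format-Table Name, Team, OperationalMode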
Everything works fine in terms of failover, etc. However, network speeds are pretty bad. For example, transferring a 3 GB file:
From SH1 to CL1: 30MB/s av.
From SH1 to CL2: 10MB/s av.
From DC on SH1 to CL1: 30 MB/s av. From DC on SH1 to CL2: 10 MB/s av.
From SH1 to VM on CL1: 30 MB/s av. From SH1 to VM on CL2: 10 MB/s av.
From CL1 to CL2: 800+ MB/s, but that uses SMB Multichannel, as data goes across the Cluster and Live Migration teams as well.
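For what it's worth, the CL1 to CL2 figure lines up with what SMB Multichannel reports; running this on one of the nodes while a large copy is in progress shows which interfaces it is spreading the traffic across:

# Run while a large copy is in progress to see which NICs/IPs SMB
# Multichannel has selected for the transfer
Get-SmbMultichannelConnection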
I've changed the load balancing algorithm to Hyper-V Port and to Address Hash and still experience poor speeds. In some cases one-way traffic goes up, but the other way it stays at a maximum of 30 MB/s.
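For testing, I've been switching the algorithm like this rather than recreating the teams ("VMTeam" is again just a placeholder name):

# Change the load balancing algorithm on an existing team without rebuilding it.
# Valid values include Dynamic, HyperVPort and TransportPorts (address hash).
Set-NetLbfoTeam -Name "VMTeam" -LoadBalancingAlgorithm HyperVPort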
If, however, I disconnect SG300 switch 1 from the HP core and connect it instead to SG300 switch 2, which remains connected to the HP core, my speeds on every transfer go up to 600-900
MB/s, i.e. SG300-1 >> SG300-2 >> HP Core.
The SG300s do not show any packet loss or errors. The cabling is Cat6 and tested. I'm aware of the Broadcom/VMQ issue and, for testing, have disabled VMQ on the physical NICs on the standalone host; this didn't make a difference. I have broken the teams
on the standalone host (SH1), disabled all NICs but one management NIC, and tested transfers to and from my laptop (plugged into the same switch), with no difference. I've tried with jumbo frames on and off, and with STP (Rapid) enabled and disabled on all interfaces.
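In case it's relevant, VMQ and jumbo frames on the standalone host were checked/disabled along these lines (adapter names are placeholders):

# List physical adapters and whether VMQ is currently enabled
Get-NetAdapterVmq

# Disable VMQ on a specific Broadcom port ("NIC1" is a placeholder name)
Disable-NetAdapterVmq -Name "NIC1"

# Check the jumbo frame setting (the display name varies by driver)
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo*" | Format-Table Name, DisplayName, DisplayValue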
Can anyone please offer some insight on the above? Additionally, is the setup correct, or do the SG300 switches need to be interlinked in addition to being connected to the HP core (I would imagine this would cause network loops unless they were stacked)? Would
I be correct in assuming that STP should be enabled on all interfaces except for the uplink to the HP core?
Thanks,
HA