Good day.
We are experiencing an issue on our VMM-based private cloud where pings are failing only for VMs running within the private cloud. We can ping servers on the rest of our network with no apparent issues, and we can even ping the cluster nodes that host the private cloud VMs without problems. When we ping the VMs themselves, however, we see between 1% and 3% packet loss.
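In case it helps clarify the symptom, here is roughly how the loss could be measured consistently across targets. This is only a sketch; the target names below are placeholders, not our actual hosts.

```python
"""
Rough sketch: compare ping loss across VM and host targets.
The target names are placeholders -- substitute your own VMs,
cluster nodes, and a physical server for comparison.
"""
import subprocess
import time

# Placeholder targets: a couple of private-cloud VMs, a cluster node,
# and a physical server elsewhere on the network as a baseline.
TARGETS = ["vm01", "vm02", "clusternode1", "physicalserver1"]
COUNT = 200          # pings per target
TIMEOUT_MS = 1000    # per-ping timeout

def ping_once(host):
    """Send a single echo request via the Windows ping utility.
    Return code 0 means a reply was received (note: 'destination host
    unreachable' replies also return 0, so check the output if in doubt)."""
    result = subprocess.run(
        ["ping", "-n", "1", "-w", str(TIMEOUT_MS), host],
        capture_output=True,
    )
    return result.returncode == 0

for host in TARGETS:
    lost = 0
    for _ in range(COUNT):
        if not ping_once(host):
            lost += 1
        time.sleep(0.5)  # pace the probes so this isn't a burst test
    print(f"{host}: {lost}/{COUNT} lost ({100.0 * lost / COUNT:.1f}%)")
```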
We've run a couple of tests to see whether this could be related to our older, non-IPv6-aware hardware switches, but haven't observed any significant change in behavior. One test was to take the two NIC connections on one of the nodes that feed the vSwitch the VMs run on and move them to a different physical switch, to rule out some kind of address space issue from the switch's perspective. This made no difference and the 1-3% ping loss continued.
At this point we suspect a configuration issue on our part, but we're not having much luck figuring out how best to troubleshoot it. Are there any statistics we should be looking at from the VM, vSwitch, or Hyper-V level that might help us identify the root cause?
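To make the question concrete, the kind of host-side sampling we have in mind is sketched below. The standard Network Interface counters are well known, but the Hyper-V virtual switch counter names here are assumptions from memory and would need to be confirmed with typeperf -q on the hosts.

```python
"""
Rough sketch: sample host-side counters that might show where the drops
occur, run on a cluster node while a long ping test is in progress.
"""
import subprocess

COUNTERS = [
    r"\Network Interface(*)\Packets Received Discarded",
    r"\Network Interface(*)\Packets Received Errors",
    r"\Network Interface(*)\Packets Outbound Discarded",
    # Assumed Hyper-V counter set/names -- verify first with:
    #   typeperf -q | findstr /i "virtual switch"
    r"\Hyper-V Virtual Switch(*)\Dropped Packets Incoming/sec",
    r"\Hyper-V Virtual Switch(*)\Dropped Packets Outgoing/sec",
]

# 60 samples, 5 seconds apart, written to CSV so runs from different
# hosts can be compared side by side afterwards.
subprocess.run(
    ["typeperf"] + COUNTERS
    + ["-si", "5", "-sc", "60", "-f", "CSV", "-o", "vswitch_counters.csv"],
    check=True,
)
```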
We are running an eight-node Windows Server 2008 R2 SP1 cluster with several post-SP1 patches applied, along with SCVMM 2012 Update 2. The VM networks run off a pair of teamed Broadcom NICs on each host, with a separate set of teamed NICs for all of our iSCSI and cluster traffic.
Any input or thoughts would be appreciated.