Hi,
We were running two 3-node Hyper-V 2012 clusters and added a 4th node to each. Most VM workloads have been running as expected on the 4th node of either cluster, and at one point I saw 120 VMs running on one of them with no reported issues. Here is where it gets strange: we have a couple of VMs that run in a guest failover cluster, and if any one of those guest cluster nodes runs on Hyper-V node 4, it is reported as down in Cluster Manager.
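To illustrate what we see, here is a rough sketch of the checks we run from inside the guest cluster while one of its nodes is hosted on Hyper-V node 4 (NodeB is a placeholder for whichever guest node shows as down):

```powershell
# Run inside one of the guest cluster VMs
Import-Module FailoverClusters

# State of each guest cluster node as this node sees it
Get-ClusterNode | Format-Table Name, State

# State of the cluster networks from this node's point of view
Get-ClusterNetwork | Format-Table Name, State, Role

# Basic reachability test to the node that shows as down (placeholder name)
Test-Connection -ComputerName NodeB -Count 4
```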
We have been running the two 3-node Hyper-V 2012 clusters since the release of SCVMM SP1 with no issues. We are using logical switches, and every switch has two network adapters assigned.
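For reference, this is roughly how we compare the switch and team configuration between node 4 and a known-good node (run on each Hyper-V host; on 2012 each logical switch uplink sits on an LBFO team):

```powershell
# Virtual switches and the adapters/teams backing them
Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription

# Each logical switch uplink is an LBFO team with two members
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm
Get-NetLbfoTeamMember | Format-Table Name, Team, OperationalMode
```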
The process used for adding both node 4s was:
- LUN cloned from the filer and attached to the new blade. This is the same sysprepped LUN the other 6 nodes were built from.
- The node was added into SCVMM, then compliance-scanned and remediated so it is at the same patch level as the other nodes.
- 6 logical switches configured in the same way as on the other cluster nodes.
- Attempted to add the node to the cluster via SCVMM; however, I believe we hit a known bug: http://social.technet.microsoft.com/Forums/en-US/virtualmachinemgrclustering/thread/d975cc4a-aa75-4ca9-9189-a41c1eb14edc
- Added the node to the cluster via Failover Cluster Manager instead (the PowerShell equivalent is sketched below).
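For completeness, this is roughly what that manual add amounts to; the cluster and node names are placeholders, and running the network validation tests against the new node first is just a suggestion for flushing out whatever the guest clusters are tripping over:

```powershell
Import-Module FailoverClusters

# Validate the new node against an existing one, network tests only
Test-Cluster -Node HV-Node1, HV-Node4 -Include "Network"

# Add the new node to the existing cluster (placeholder names)
Add-ClusterNode -Cluster HVCluster1 -Name HV-Node4
```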
Our hardware is Cisco UCS with B230 M2 blades. All hardware was purchased at the same time, and all 4 blades in each cluster run from the same service profile template, so there should be no discrepancies in the hardware presented to Windows from one blade to the next.
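That said, if anyone wants to double-check that assumption, something like this (host names are placeholders) would show any NIC or driver mismatch between the new blade and an existing one:

```powershell
# Compare NICs and driver versions across an existing node and the new one
Invoke-Command -ComputerName HV-Node1, HV-Node4 -ScriptBlock {
    Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverVersion
} | Format-Table PSComputerName, Name, InterfaceDescription, DriverVersion -AutoSize
```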
Thanks in advance for any help