We have a two-node Hyper-V cluster running Windows Server 2012 R2 on the hosts and on most of the VMs. For various reasons we had not previously used dynamic memory on any of these VMs, but we recently decided to try it.
We turned dynamic memory on for about 90% of the VMs (there are about 30 in total). We largely left startup memory unchanged from what it was before and set minimum memory to what we felt each VM could just about get by with, which was significantly lower than startup for most of them. We set maximum memory quite high for some of them, as their workloads vary a lot. We also identified a set of critical VMs and made sure that the sum of their maximum memory does not exceed the 128 GB each host has, so that we can always patch one host without interfering with the critical VMs. So maximum is typically a lot higher than startup.
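For reference, the failover sizing rule described above is just a sum check: the critical VMs' maximum memory must fit on a single 128 GB host while the other host is being patched. A minimal sketch (the VM names and per-VM figures are illustrative, not our real inventory):

```python
# Sanity-check the failover sizing rule: the critical VMs' combined
# maximum memory must fit on one 128 GB host during patching.
# VM names and sizes below are hypothetical examples.

HOST_RAM_GB = 128

critical_vm_max_gb = {
    "dc01": 8,
    "sql01": 48,
    "app01": 32,
    "file01": 24,
}

total = sum(critical_vm_max_gb.values())
assert total <= HOST_RAM_GB, f"over budget by {total - HOST_RAM_GB} GB"
print(f"critical maxima: {total} GB of {HOST_RAM_GB} GB")
```

(In practice some headroom should also be reserved for the host partition itself, so the usable budget is a little under 128 GB.)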
All in all, we expected better performance, since Hyper-V would have more freedom to hand out memory to the VMs that need it. Instead, several VMs became sluggish.
We also noted that many VMs were assigned memory below their startup value, while most of the host memory was left unused. This surprised us. Why would Hyper-V hog all that memory for itself?
A few numbers:
Host 1:
Total Startup = 30 GB
Total Assigned = 22 GB
Total Minimum = 20 GB
Total Maximum = 58 GB
Total Demand = 14 GB
Host 2:
Here there are 3 VMs which still use static memory; they have 14 GB between them. The following figures are for the other VMs:
Total Startup = 17 GB
Total Assigned = 21 GB
Total Minimum = 11 GB
Total Maximum = 53 GB
Total Demand = 20 GB
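As far as we understand it, the dynamic memory balancer targets roughly assigned ≈ demand plus the per-VM memory buffer (20% by default), floored at the VM's minimum. A quick sanity check against the aggregate numbers above (the 20% buffer is an assumption, and applying the floor to host-wide totals rather than per VM is only an approximation):

```python
# Rough model of Hyper-V dynamic memory: assigned memory should land
# near demand * (1 + buffer), but never below the configured minimum.
# The default buffer of 20% is an assumption; figures are the aggregate
# totals from this post, so per-VM clamping is only approximated.

def expected_assigned_gb(demand_gb, minimum_gb, buffer=0.20):
    """Approximate aggregate assigned memory in GB."""
    return max(demand_gb * (1 + buffer), minimum_gb)

host1 = expected_assigned_gb(demand_gb=14, minimum_gb=20)  # ≈ 20 GB
host2 = expected_assigned_gb(demand_gb=20, minimum_gb=11)  # ≈ 24 GB

print(host1, host2)  # reported Assigned: 22 GB and 21 GB
```

Both estimates land within a couple of GB of the reported Assigned totals, which makes us suspect the figures are "normal" for this model rather than a misconfiguration.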
Question 1: Are the figures for assigned memory about what is to be expected given the other figures and given that both hosts have 128 GB?
Question 2: What can we do to change this? Is there a general setting where we can tell Hyper-V to use more memory as long as it is available? Or do we have to increase minimum memory for all VMs?