Channel: Hyper-V forum
Viewing all 19461 articles

Hyper-V fails to enable live migration


I'm trying to enable live migration on my Windows 2012 R2 hyper-v server, and it keeps failing. When I try through the GUI, I get an error that says "Error applying Live Migration changes. Failed to modify service settings."

If I try to enable it via the Enable-VMMigration PowerShell cmdlet, I get the following result:

Enable-VMMigration : Failed to modify service settings. The operation cannot be performed while the virtual machine is in its current state.
At line:1 char:1
+ Enable-VMMigration
+ ~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [Enable-VMMigration], VirtualizationOperationFailedException
    + FullyQualifiedErrorId : InvalidState,Microsoft.HyperV.PowerShell.Commands.EnableVMMigrationCommand

I've removed and reinstalled the Hyper-V role on the server, but with no luck. Any suggestions on how to resolve this?
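For what it's worth, the "Failed to modify service settings" text usually points at the Hyper-V Virtual Machine Management service (VMMS) rather than at any particular VM. A minimal diagnostic sketch, assuming an elevated PowerShell prompt (the CredSSP choice below is just an example, not a recommendation):

```powershell
# Restart the Hyper-V management service, then retry enabling live migration.
Restart-Service vmms

# Re-apply the live migration settings step by step to see which one fails.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP

# Confirm the resulting state.
Get-VMHost | Format-List VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType
```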


Enable Hyper-V Replica for the VM with recovery checkpoint


The VM has a recovery checkpoint and I need to enable replication for it. Is it OK to enable Hyper-V Replica while the checkpoint exists? Please advise.

_ Ragav


Hyper-V 2019 Ignoring Live Migration "Performance Option" Setting | Bug?


Hi all,

I've spent days at this point trying to diagnose this one, so I'm opening it up for discussion/a bug report.

Windows Server 2019 / Hyper-V Server 2019 is ignoring the "Performance options" selection on hosts and using SMB regardless of any other administrative choice. This is using Kerberos authentication.

2 hosts, non-clustered, local storage. Hyper-V Server 2019 (I've installed 2019 Standard as well, with the same behavior)

I originally assumed that I must have a config problem. I blew two hosts away, clean installs of 2019, no configuration scripts. Just the LBFO network config and a converged fabric network design. Same problem.

Next I assumed it had to be networking. So drop the teaming, drop the config scripts, drop the VLAN configs, drop the LAGs. 1 physical NIC with access to domain controllers for non-migration traffic. Another single physical NIC - on a crossover cable - for live migration. Same problem.

After this, I tried clean installs with all Group Policy blocked from applying to the hypervisors. I've tried clean installs with Microsoft's in-box Intel NIC drivers and with Intel's v23 and v24 release NIC drivers. Same problem. I've tried the June 2019 re-release of Hyper-V Server 2019 and even a direct-from-DVD install (no language packs etc.), so a 100% vanilla source.

Here is the problem in its simplest form (the two physical 1GbE NIC setup):

Live Migration: Kerberos & Compression
A test VM with a VHDX with a couple of GB of junk data in it to push back and forwards
All configs are verified
Windows Firewall modes are domain/private - and it makes no difference if Windows Firewall is off
Windows Defender is uninstalled and no other software (let alone security software) is installed, period. These are fresh installs
The ONLY live migration network in the Incoming Live Migration networks list is 10.0.1.1/32 on HV1 and 10.0.1.2/32 on HV2

Migration LAN: 10.0.1.0/24 (point to point via a crossover cable)
All other traffic: 192.168.1.0/24 (via a flat LAN configured switch i.e. it's in dumb mode)


VMMS listens ON THE CORRECT LAN:

netstat -an | findstr 6600
  TCP    10.0.1.2:6600       0.0.0.0:0              LISTENING

When performing an online/offline migration, VMMS connects correctly over the correct LAN:

netstat -an | findstr 6600
  TCP    10.0.1.2:6600       0.0.0.0:0              LISTENING
  TCP    10.0.1.2:54397      10.23.103.1:6600       ESTABLISHED

All fine!

Using Packet Capture on the 10.0.1.0/24 migration LAN there is plenty of chatter to/from TCP 6600. You can see the VMCX configuration state being transmitted in XML over TCP 6600 and lots of successful back and forth activity for 0.35 seconds. Then traffic on TCP 6600 stops.

Traffic now starts up on the non-migration network, the 192.168.1.0 network, which is NOT in the migration networks list. A large block transfer occurs. Packet monitoring of this connection shows an SMB transfer occurring. This block transfer is, of course, the VHDX file.

As soon as the block transfer completes on the 192.168.1.0 network (~16 seconds) traffic picks-up again over TCP 6600 on the 10.0.1.0 network for about 0.5 seconds and the Live Migration completes.

The only way that I can get the hosts to transfer over the 10.0.1.0 network is to add their respective FQDN entries to the local server Hosts files.

Re-doing the transfer now uses the correct 10.0.1.0 network. You can clearly see the VMCX transfer over TCP 6600, then a SMB 2.0 session is established using the value from the hosts file between source and destination over 10.0.1.0. An SMB transfer of the VHDX occurs on the forced 10.0.1.0 network before finally the process is concluded via traffic on TCP 6600 (still on the 10.0.1.0 network) and the transfer completes successfully.

Without the hosts file entries, Hyper-V seems to be using NetBIOS to find the migration target; it can't, so it defaults to whatever network it can find an SMB connection on. However, I say again: the 192.168.1.0 network is not in the Live Migration networks list. Hyper-V should be failing the transfer, not unilaterally deciding to "use any available network for live migration". PowerShell on both hosts confirms that this is correctly configured:

get-vmhost | fl *

....
MaximumStorageMigrations                  : 2
MaximumVirtualMachineMigrations           : 2
UseAnyNetworkForMigration                 : False
VirtualMachineMigrationAuthenticationType : Kerberos
VirtualMachineMigrationEnabled            : True
VirtualMachineMigrationPerformanceOption  : Compression
...

Get-VMMigrationNetwork

Subnet       : 10.0.1.2/32
Priority     : 0
CimSession   : CimSession: .
ComputerName : HV2
IsDeleted    : False

Something is causing it to ignore the Compression setting, but only for VHDX transfers. Other VM data is being sent correctly over TCP 6600. As the 10.0.1.0 network isn't registered in DNS, Hyper-V isn't "aware" that it can find the destination host over that link. Of course, in this test I do not want it to use SMB to perform this transfer, so it should not be using SMB in the first place. What I want is migration traffic to occur over a private 9K Jumbo Frame network - as I've always used - and not bother the 1.5K frame management network.
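If the immediate goal is simply to pin the SMB leg of the transfer to the migration NIC (independent of whether the behaviour above is a bug), one hedged workaround is an SMB multichannel constraint. A sketch; the interface alias "Migration" is a placeholder for whatever Get-NetAdapter shows for the 10.0.1.x NIC:

```powershell
# On HV1, restrict SMB traffic destined for HV2 to the migration NIC only.
New-SmbMultichannelConstraint -ServerName HV2 -InterfaceAlias "Migration"

# Verify the constraint is in place.
Get-SmbMultichannelConstraint
```

The same constraint would be created on HV2 pointing back at HV1. This does not fix the network-selection logic, but it at least forces any SMB transfer onto the intended link.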

I've clean-installed Windows Server so many times chasing this that I've gone dizzy! Does anyone have any bright ideas?

Thanks


An error Occurred while attempting to checkpoint the selected Virtual machine(s).


Dear Support,

On one of the Hyper-V servers I receive the following error:

An error Occurred while attempting to checkpoint the selected Virtual machine(s).

Checkpoint operation failed.

'OOS-TIR-FS1' could not initiate a checkpoint operation.

The guest is a domain server running Windows Server 2016, just for your information.

For backups I use Veeam. The other servers are backed up correctly.
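Hard to say from the error alone, but when only one guest fails to checkpoint, a common first check is the state of that VM's integration services, since production checkpoints rely on VSS inside the guest. A read-only sketch, using the VM name from the error message:

```powershell
# Check whether the backup (volume shadow copy) integration service
# is enabled and healthy for the failing guest.
Get-VMIntegrationService -VMName 'OOS-TIR-FS1' |
    Select-Object Name, Enabled, PrimaryStatusDescription
```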

Please advise.


What could be the case?

Greetz

Jeffrey de Jong.



Remove-VMSnapshot : The operation failed because the file was not found


Hi all!

I need your help deleting orphaned DPM checkpoints.

This is the problem:

 - Weekend DPM backup of a server failed with error "DPM encountered a retryable VSS error"

 - I found that the error occurs because the Hyper-V server has older checkpoints:

 - I merged the disks with the parent, but this did not resolve the problem. The checkpoints still appear in the console and in PowerShell (the VM's disk folder contains only the two parent disks):

I tried to remove the snapshots with PowerShell and from the console, but the error is always the same: "The operation failed because the file was not found"

The command was executed with the VM both turned off and turned on.
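Before editing anything by hand, it may help to see exactly which snapshot objects and which parent chain Hyper-V still believes exist. A read-only inspection sketch; the VM name and disk path below are placeholders:

```powershell
# List the snapshot objects Hyper-V still knows about for this VM.
Get-VMSnapshot -VMName 'MyVM' | Select-Object Name, SnapshotType, CreationTime

# Inspect the differencing-disk chain of a given AVHDX without changing it.
Get-VHD -Path 'D:\VMs\MyVM\disk.avhdx' | Select-Object Path, VhdType, ParentPath
```

If Get-VMSnapshot still lists checkpoints whose .avhdx files no longer exist on disk, that mismatch is the thing to resolve, not the parent disks themselves.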

Can anyone help, please?

Maybe edit an XML file?

Note: it's a production server that does not have a backup.

Thanks in advance.

Regards!


Marcos Ferreira

NLB VMs 2008R2 stopped status on 2012 R2 Hyper-V cluster


Hello.

I'm having an issue with two 2008 R2 VMs running on my 2012 R2 Hyper-V cluster (two nodes). When I finish the configuration of NLB between my VMs, everything starts working as expected, but after a few seconds the NLB I've configured in my VMs stops working.

I already tried unicast, multicast and IGMP multicast modes in my NLB, changed the host priority, and uninstalled the latest updates from the VMs and the hosts, but the problem persists. I also tried to install KB4457133, but I get the error that it doesn't apply to my operating system.

The only thing I haven't tried yet is rebuilding my 2012 R2 Hyper-V cluster from scratch without applying any Windows updates.

I hope to find a light at the end of the tunnel here. Does someone have a tip?
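One Hyper-V-side setting worth verifying for unicast NLB guests is MAC address spoofing on the VMs' network adapters, since NLB in unicast mode replaces the adapter's MAC address. A sketch, run on each host; the VM name is a placeholder:

```powershell
# Allow the guest to present a MAC address other than the one Hyper-V
# assigned, which unicast NLB requires. Repeat for each NLB node VM.
Get-VMNetworkAdapter -VMName 'NLB-Node1' |
    Set-VMNetworkAdapter -MacAddressSpoofing On
```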

Thanks a lot!

Mauro.

Windows 10 Hyper-V Host - problems with RDP to VM


I am using hyper-v on my Windows 10 machine. I have created several VM's using the default switch. After getting the machine set up and verifying that it has network access; I can get out to the internet using the default switch; I change the machine name of the VM to an appropriate name.

I am not on a domain, so I have all machines in the same "Workgroup". I do NOT use static IP addresses for the VMs because I often travel and have to connect to other networks that have different IP configurations. I cannot use the hosts file on my host machine either, for the same reason.

My problem is that sometimes I can connect to a VM using its machine name, but sometimes I cannot connect that way and have to use its currently assigned IP address. The weird part is that this can happen within the same day, without moving my host machine to a different network: I can connect in the morning using the machine name, then later in the day, after restarting the VM, I have to use its IP address. The same goes for trying to ping the VM from the host machine.
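As a stopgap while name resolution is flaky, the VM's current IP can be read from the host without logging into the guest. This relies on the guest's integration services reporting its addresses; the VM name is a placeholder:

```powershell
# Show the IP addresses the guest currently reports to the host.
Get-VMNetworkAdapter -VMName 'MyVM' | Select-Object VMName, IPAddresses
```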

Am I missing something in my configuration?

   

Checkpoint Operation Failed


Hi guys, 

On Hyper-V 2016 I'm getting a "Checkpoint Operation Failed" error for just one of the virtual machines.

If someone has ever had a similar issue, please let me know how to resolve it.

Thanks. 


Windows 10 - Adding External Switch Failed


Does anyone know how to solve the error below? It happens when I'm trying to add an external switch.

With an internal or private switch there is no issue at all. My account is already an administrator on the computer.

Adding port to the switch "New External Switch" failed.

You do not have permission to perform the operation. Contact your administrator if you believe you should have permission to perform this operation.

Delegating permissions to manage Hyper-V machines


Hello,

the title says it all.

I would like to delegate permissions to manage Hyper-V machines, e.g. full control on 1 VM or right to start/stop/manage a set of VMs to one user or a group of local/AD users.
I've done some research and found Authorization Manager, but it has been deprecated since Windows Server 2012 R2.

Is there any simple method to achieve this goal that is supported in Windows Server versions newer than 2012?

I'd like to point out that I don't want to add users to "Hyper-V administrators" group on the Hyper-V server.
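There is no full built-in replacement for AzMan, but for the narrow case of console access to specific VMs a supported cmdlet exists; for start/stop delegation, a Just Enough Administration (JEA) endpoint wrapping Start-VM/Stop-VM is the usual post-2012 approach. A sketch of the console-access piece only; the VM and user names are placeholders:

```powershell
# Grant a single user VMConnect (console) access to one VM without making
# them a Hyper-V administrator. This does NOT grant start/stop rights.
Grant-VMConnectAccess -VMName 'App01' -UserName 'CONTOSO\jsmith'

# Review who has been granted console access.
Get-VMConnectAccess -VMName 'App01'
```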


Snapshot merge process stuck on Merge in Progress (20%)


Hi guys, 

On Hyper-V 2019 (a member of the failover cluster) I took a checkpoint of one of the machines.

3 hours later I deleted that checkpoint, and now I can see that the merge process is stuck at Merge in Progress (20%); it's been like this for 15-20 minutes.



However, Get-VM | Get-VMSnapshot doesn't show any snapshots.

What is this, and how do I resolve it?
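The snapshot object disappearing while the disk merge continues is expected, since deleting a checkpoint kicks off a background merge of the .avhdx into its parent. Progress can be watched from PowerShell rather than the GUI; a read-only sketch with placeholder VM name and path:

```powershell
# OperationalStatus shows e.g. MergingDisks while a background merge runs,
# and Status carries the percentage in Hyper-V Manager's view.
Get-VM -Name 'MyVM' | Select-Object Name, Status, OperationalStatus

# Check whether any .avhdx files still sit between the VM and its parent disk.
Get-VHD -Path 'C:\ClusterStorage\Volume1\MyVM\disk.avhdx' |
    Select-Object Path, VhdType, ParentPath
```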

Thank you.

How to remove VM if you delete the configuration file

Hi,

There are cases where we need to restore a VM that has a problem, but we do not want to lose the VM that is in trouble.

So we restore the VM under another name, and this is bad for the Hyper-V inventory.

Is there any option to remove only the VM from the Hyper-V inventory without losing the VM's settings?
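If I understand the goal correctly, Remove-VM may already do this: it deletes the VM's configuration (the inventory entry) but leaves the virtual hard disk files on disk, so they can be attached to a new VM later. A sketch; the VM name is a placeholder:

```powershell
# Removes the VM from the Hyper-V inventory (deletes its configuration)
# but deliberately leaves the VHD/VHDX files untouched on disk.
Remove-VM -Name 'OldVM' -Force
```

Note that the saved-state and checkpoint configuration go with it, so export the VM first if those need to survive.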

Thank you.

Boot to Network

We have a working USB drive.  Press F12 and it will connect the machine to our SCCM server. Then you can choose an OSD task sequence to build your machine with Windows 10.  

I have used a conversion tool to convert our USB drive into an .ISO.  I have imported this .ISO into our Hyper-V virtual machine manager.   I have built a new VM and I tell it to use the newly created .ISO.  

When I boot up my VM pressing F12 does not do anything.  It tries to PXE boot (even if I don't press F12) and it never connects.  

How do I make F12 work in this scenario?
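One thing worth checking is the VM generation: a Generation 1 VM can only PXE-boot through a Legacy Network Adapter, while a Generation 2 VM PXE-boots through its normal (synthetic) adapter. A sketch for the Generation 1 case; the VM and switch names are placeholders:

```powershell
# Give a Generation 1 VM a legacy NIC (required for PXE on Gen 1)
# and put network boot first in the BIOS startup order.
Add-VMNetworkAdapter -VMName 'PXETest' -IsLegacy $true -SwitchName 'External'
Set-VMBios -VMName 'PXETest' -StartupOrder @('LegacyNetworkAdapter','CD','IDE','Floppy')
```

If the VM is booting the converted .ISO instead, the F12 behaviour inside that boot image is governed by the image itself, not by Hyper-V.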

mqh7

Hyperv Guest Manual Backup Windows 2008 R2

Hello,

I have a Windows 2008 R2 server acting as a Hyper-V host. I want to back up the virtual machine. I know it can be done with wbadmin, as described here:

https://blogs.technet.microsoft.com/hugofe/2011/09/01/backup-hyper-v-virtual-machines-on-windows-server-2008-by-using-windows-server-backup/

https://www.petri.com/vm-backup-with-windows-server-backup.

My question is: what if I shut down the Hyper-V guest, copy the .vhd manually, and restore it to another server in case of a disaster? Will it work?
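Copying the .vhd of a cleanly shut-down guest generally preserves the disk contents, but it loses the VM configuration (vNIC, memory, BIOS settings), which would have to be recreated by hand on the other server. Hyper-V's export captures both in one step; on 2008 R2 this is done from Hyper-V Manager, while on Server 2012 and later it is also scriptable. A sketch for the scripted case, with placeholder names and paths:

```powershell
# Shut the guest down cleanly, then export configuration + disks together.
Stop-VM -Name 'Guest01'
Export-VM -Name 'Guest01' -Path 'D:\Exports'
# On the recovery host, use Import-VM (or Hyper-V Manager's Import) on the
# exported folder to recreate the VM with its settings intact.
```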

Thanks

http://www.arabitpro.com

Poor Performance in Cluster Shared Volumes


Hi All, 

I have noticed on several clusters I've worked on that there is a huge fall-off in expected performance after LUNs added to the hosts are turned into Cluster Shared Volumes. With the LUNs attached to the hosts before clustering, I typically see the full performance afforded by the network and the SAN.

However, when the CSV is created on the storage and local virtual machines are moved into the cluster, the guests suddenly become more sluggish. Tests then reveal that the LUNs are not delivering their full performance.
I had wondered if the management network performance comes into play, but I'm unable to verify this after restricting the associated ports to 100Mbs. It's strange that we typically see this kind of performance degradation once the CSVs are created, and none of the clustered VMs I work on seem to have stellar performance no matter what the storage backing actually is. I've checked QoS settings, but note that the load-balancing registry values mentioned in other articles are missing from 2019 Hyper-V Core, which is what's in use in my test scenario.

Test scenario setup is based upon my ageing test bed:
1 x DL360 Gen8
1 x DL360 Gen7 (CPU compatibility enabled on all guests)
1 x Proware IPS SAN, 4 x 1GbE in 2 x 2GbE LACP aggregated links
Hosts connected to the iSCSI network via 2x2 LACP teams each, with iSCSI in its own VLAN on a Cisco 2960G
Management network 1GbE only, unfortunately, due to lack of NICs and available switch ports in the test setup
Data link network not in use by the host OS. All management, data and iSCSI networks in their own VLANs on the 2960.

The SAN typically gives 700-900 MB/s when served up as attached LUNs. I am not sure how it manages these speeds over the iSCSI network mentioned above, but it does, and it maintains them over large files tested at 100 GB plus.
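One classic cause of exactly this pattern is the CSV running in redirected mode, where I/O from non-coordinator nodes is funneled over the cluster (management) network to the coordinator node instead of going straight to the SAN. That state can be checked directly; a read-only sketch:

```powershell
# "Direct" is the fast path. "FileSystemRedirected" or "BlockRedirected"
# means that node's I/O is being routed over the cluster network, and the
# *IOReason fields say why (backup in progress, filter driver, etc.).
Get-ClusterSharedVolume | Get-ClusterSharedVolumeState |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason
```

If the volumes show as redirected on the nodes running the VMs, the 1GbE management link would cap guest storage throughput regardless of what the SAN can deliver.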

I have video of what I'm seeing that I will link to when my account is verified. 


Powershell Hyper-V was unable to find a virtual machine with name


Hi,

I'm trying to execute a simple command on my cluster to start each VM hosted on the two nodes. I opened PowerShell on the first node and executed the command below:

Get-VM -ComputerName (Get-ClusterNode -Cluster MYCLUSTER) | ForEach {Start-VM -Name $_.Name}

Only VMs hosted on the first node start; for the VMs hosted on the second node I get the message:

Start-VM : Hyper-V was unable to find a virtual machine with name "MYVM10"

With the command below, all VMs are listed correctly:

Get-VM -ComputerName (Get-ClusterNode -Cluster MYCLUSTER) | Select VMName
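The likely cause: inside the ForEach, Start-VM -Name runs against the local host only, so the names of VMs living on the second node are not found there. Piping the VM objects themselves avoids the problem, because each object carries the host it came from. A sketch against the same cluster name:

```powershell
# The VM objects returned by Get-VM know their owning host, so Start-VM
# executes each start on the correct node instead of only on the local one.
Get-VM -ComputerName (Get-ClusterNode -Cluster MYCLUSTER).Name | Start-VM
```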

Any ideas?

Thank you for your help

Create VM


Hi Experts

I want to build an Oracle Linux virtual machine on Hyper-V 2016. Is it supported on Hyper-V or not? Please help me check whether this OS is supported.

Windows 2016 Virtual machine stuck at shutting down or restarting


A Windows 2016 virtual machine is having difficulty shutting down or restarting. It gets stuck at restarting or shutting down for hours, or overnight, and never comes back.

Host: Windows Server 2016 Datacenter.

Some facts:

1. VMs with other OSes (Windows 2012, 2008, etc.) on the same host (with the same vSwitch) do not have any problems.

2. Windows 2016 VMs work fine immediately after OS install with no added roles/features and no Windows updates (clean OS).

3. Problems appear after a few minutes of running (nothing added).

Does anyone have similar issues?



Hyper-V 2016 VMs keep rebooting automatically - VM uptime is around 6 days


Hi,

We have a Hyper-V 2016 traditional failover cluster, and most VMs' uptime is very low (around 6 days). Most VMs are activated using VMAC (except some where the VMAC command shows successful activation but Server Manager still shows them as not activated), although all Hyper-V hosts are activated with Windows Server 2016 Datacenter Edition. Application owners report that their VMs seem to reboot automatically and periodically, sometimes while they are working on them, without any notification.

The only event that seems related is an informational System event shown on most VMs: "Successfully scheduled Software Protection service for restart: Reason: RulesEngine", plus other events about the Software Protection service being scheduled to restart on 05-08-2019.

Agents on Server:

  • TrendMicro Antivirus
  • Dell Networker Backup

Any clue?

Regards,

Maged

hyper-v breaks PMTUD?


Hello everybody, 

As already posted in the "Windows 10 IT Pro > Windows 10 Networking" forum:

We're currently investigating some Networking Issues at our workplace.

After countless testing scenarios and network-dumps it seems that the problems we're facing are due to hyper-v breaking path mtu discovery.

System: Win 10 Enterprise Version 1709 (Build 16299.1146) - (We also tested Version 1809 in hopes it would fix this issue)

NW Card: Intel(R) Ethernet Connection I219-LM, Driver Version 12.15.25.6712 (It also happened using different NW-Cards and Driver Versions)

Requirements to reproduce the issue:

Some network service to talk to (e.g. a Webserver etc.)

A Win 10 Machine having hyper-v enabled/installed and PMTU discovery enabled

A network device in between which has a lower mtu size configured than the Client PC (1280 in our case, 1500 on the PC)

Symptoms:

As soon as the PC sends a packet with DF flag set > the configured mtu it gets ICMP responses (Type 3, Code4) telling it should lower the mtu to a given value. So far expected PMTUD behaviour...
However the Win 10 Machine keeps retransmitting the "oversized" packet without ever changing the MTU size until the connection is reset.
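The retransmission behaviour described above can be reproduced by hand with Windows ping, which sets the DF bit via -f; the host name and sizes below are placeholders for your own path with a 1280-byte hop:

```powershell
# 1452-byte payload + 28 bytes of IP/ICMP headers = 1480 bytes on the wire,
# above the 1280-byte hop. Healthy PMTUD reports "Packet needs to be
# fragmented but DF set" and succeeds at a smaller -l value; the broken
# case described here just keeps retransmitting until it times out.
ping -f -l 1452 server.example.com
```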

Crosscheck:

Uninstall/Deactivate hyper-v. (Also uninstall all created virtual switches!) 

Now try to retrieve the same resource -> Works flawlessly (NW dump shows 1 ICMP Type 3, Code 4 packet. After that packet size to this destination never exceeds the lowest MTU in the path)

Update:

We found out that the problem does not exist on Win 10 v1703 and that it has something to do with the hyper-v standard switch.

Does anyone of you experience the same issue or is able to confirm our findings? Maybe does even know a solution? ;)


Thanks in advance!
