Channel: Hyper-V forum
Viewing all 19461 articles

Convert existing Hyper-V host and two VMs to VMs for rescue purposes


I have an existing Server 2012 host with two Hyper-V VMs. The virtual RAID is failing, and I've been asked to reset the RAID and reinstall the server from an image. Unfortunately, I have been unable to get a reliable backup with my Barracuda backup device, Windows Backup, or even Acronis TrueImage. All of these products seem to have problems reading the disk.

The next thought was to convert the host and the two VMs to VMs on another server. In other words, convert the Server 2012 host to a VM, as well as the existing VMs, and put them on a completely new host on new hardware, so the old hardware RAID can be reconfigured and reinstalled. Is it possible to run a DC host and two VMs virtualized on a new host? I hope this makes sense.

Thanks! 


How to provide internet access to client VMs from a Windows Server 2012 VM?


Hi All

I have a virtual lab as below:

*Windows Server 2012 with two NICs: the first is connected to an external switch, the second to a private switch

The Windows Server is the DC, DNS, and DHCP server

*A client machine running Windows 8.1, joined to the domain through the private switch

Now, how can I give the client machine the ability to browse the internet through my server?

And what is the best practice in the real world?
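A common approach on Server 2012 (which predates the built-in New-NetNat of later Windows versions) is to install the Routing (RRAS) role on the server and enable NAT from the private network out through the external NIC. A minimal sketch, assuming "External" and "Internal" are the connection names of the two NICs (adjust to yours):

```powershell
# Install the Routing (RRAS) role; run from an elevated prompt
Install-WindowsFeature -Name Routing -IncludeManagementTools

# Enable NAT: the external NIC translates, the private NIC is internal.
# Interface names here are assumptions - match them to your setup.
netsh routing ip nat install
netsh routing ip nat add interface "External" full
netsh routing ip nat add interface "Internal" private
```

The same result can be reached through the Routing and Remote Access console by choosing the NAT configuration wizard; either way, clients on the private switch then reach the internet via the server's external interface.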

regards

Hyper-V lost iSCSI LUN during storage firmware update


Hi All,

Thanks in advance to anyone who can shed light on this issue.

We have an EMC storage array that hosts both ESXi and Hyper-V. During a firmware update (VAAI related), the ESXi hosts had no issues, but a Hyper-V host lost all LUNs except one. We have a Hyper-V cluster managed by Failover Cluster Manager. On one of the Hyper-V hosts, B01, only one LUN remained. After B01 was restarted manually, all VMs were back up and running.

The storage support people advised that the whole upgrade was clean per their logs, and the ESXi hosts were running fine. Now that the Hyper-V hosts are running OK, I would still like to know what happened. My guess is that maybe B01 locked the LUNs for some reason and couldn't provide service in the meantime. Any comments or ideas on this? Thanks.

B01 system log:

10:21 am

The description for Event ID 37 from source mpio cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

\Device\MPIODisk2

Microsoft DSM

The resource loader failed to find MUI file

10:30:42 am

Event ID: 70

Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.

10:42:54 am

Event ID: 5120

Cluster Shared Volume 'Volume5' ('Cluster Disk 4') is no longer available on this node because of 'STATUS_VOLUME_DISMOUNTED(c000026e)'. All I/O will temporarily be queued until a path to the volume is reestablished.

10:43:06 am

Event ID: 20

Connection to the target was lost. The initiator will attempt to retry the connection.

11:46:32 am

Event ID: 113

Failed to allocate VMQ for NIC 4B8980C7-7EAC-449D-B614-5B0B00993C8D--449703B5-1D3A-4F4D-86F4-AD1147583C35 (Friendly Name: VFILE) on switch 6813D4F6-D891-4A4C-8037-CDF2B5DF5219 (Friendly Name: Villa Cluster Logical Switch). Reason - Maximum number of VMQs supported on the Protocol NIC is exceeded. Status = Insufficient system resources exist to complete the API.

12:24:45 pm

Event ID:5120

Cluster Shared Volume 'Volume3' ('Cluster Disk 5') is no longer available on this node because of 'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the volume is reestablished.

12:27:36 pm

Event ID: 39

Initiator sent a task management command to reset the target. The target name is given in the dump data.

12:28:24 PM

Event ID: 153

The IO operation at logical block address 0xed8c5b50 for Disk 3 was retried.

12:28:24 pm

Event ID: 140

The system failed to flush data to the transaction log. Corruption may occur in VolumeId: LUN3, DeviceName: \Device\HarddiskVolume586.

(STATUS_DEVICE_NOT_CONNECTED)

12:28:24 pm

Event ID: 15

The device, \Device\Harddisk4\DR4, is not ready for access yet.

12:28:24 pm

Event ID: 5120

Cluster Shared Volume 'Volume4' ('Cluster Disk 6') is no longer available on this node because of 'STATUS_DEVICE_NOT_CONNECTED(c000009d)'. All I/O will temporarily be queued until a path to the volume is reestablished.

12:28:24 pm

Event ID: 5121

Cluster Shared Volume 'Volume5' ('Cluster Disk 4') is no longer directly accessible from this cluster node. I/O access will be redirected to the storage device over the network to the node that owns the volume. If this results in degraded performance, please troubleshoot this node's connectivity to the storage device and I/O will resume to a healthy state once connectivity to the storage device is reestablished.

12:28:24 PM

Event ID: 140

The system failed to flush data to the transaction log. Corruption may occur in VolumeId: LUN3, DeviceName: \Device\HarddiskVolume586.

(A device which does not exist was specified.)

12:30:16 pm

Event ID: 1230

Cluster resource 'SCVMM TSR1' (resource type 'Virtual Machine', DLL 'vmclusres.dll') did not respond to a request in a timely fashion. Cluster health detection will attempt to automatically recover by terminating the Resource Hosting Subsystem (RHS) process running this resource. This may affect other resources hosted in the same RHS process. The resources will then be restarted.

The suspect resource 'SCVMM TSR1' will be marked to run in an isolated RHS process to avoid impacting multiple resources in the event that this resource failure occurs again. Please ensure services, applications, or underlying infrastructure (such as storage or networking) associated with the suspect resource is functioning properly.

12:30:16

Event ID: 1146

The cluster Resource Hosting Subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually associated with recovery of a crashed or deadlocked resource.  Please determine which resource and resource DLL is causing the issue and verify it is functioning properly.

12:42:19 pm

Event ID: 21502

'SCVMM TSR1 Configuration' failed to unregister the virtual machine configuration during the initialization of the resource: The wait operation timed out. (0x00000102).

1:28:51 pm

Event ID: 1074

The process Explorer.EXE has initiated the restart of computer BL01 on behalf of user COMPANYABC\Pepole1 for the following reason: Other (Unplanned)

 Reason Code: 0x5000000

 Shutdown Type: restart

 Comment:

1:58:56 pm

Event ID: 6008

The previous system shutdown at 1:51:55 PM on 26/11/2015 was unexpected.

1:58:23 pm

Event ID: 41

The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.


How to recover a virtual machine that has been deleted


I have Hyper-V 2012.

By mistake, someone deleted a virtual machine completely from the SAN storage.

Please help me recover the VHDX file for this machine.

Time issue


Hi,

I had to perform some work on a Hyper-V host (2008 R2) at a remote site.

When I logged in, I found a weird time issue.

Two VMs (2008 R2) are running on the host: one is a DC and the other a print server.

The DC shows the correct time, while the host and the second VM (the print server) are 10 minutes ahead.

1. Time sync is unchecked in both VMs' settings.

2. The logon server for the host and the second VM (showing the wrong time) is the DC (running on the same host).

3. All three are on the same site and subnet.
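For reference, the effective time source on the host and the print server can be checked with the built-in w32tm tool (assuming the Windows Time service is running); comparing the output on the DC versus the machines that drift would show where the 10-minute offset comes from:

```powershell
# Show where this machine currently gets its time from
w32tm /query /source

# Show stratum, last successful sync and offset details
w32tm /query /status

# Force a resync against the configured source
w32tm /resync
```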

What is wrong?

Thanks.


--- When you hit a wrong note its the next note that makes it good or bad. --- Miles Davis

Converting a VMware host to Hyper-V


I presently have a VMware host running a Windows 2008 server and four Windows 2012 servers. I'm setting up a temporary server so that I can migrate from VMware to Hyper-V. My plan is to install a Windows Server 2012 trial with Hyper-V Manager on the temp server and migrate my VMware VMs to it. Once this is done, I will wipe my original VMware server and install Server 2012 with a GUI as the host, with Hyper-V Manager. Once my VMs are converted and moved back, I will switch my Server 2012 GUI host to the Server 2012 Core version. Does this sound practical? Are there any technical or licensing issues involved?

Is there a "native" process in Windows Server 2012 to convert my VMware VMs, or must I use a third-party program? Thanks!
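There is no in-box converter in Windows Server 2012 itself, but Microsoft's free Microsoft Virtual Machine Converter (MVMC) ships a PowerShell module that can convert VMware disks offline. A minimal sketch, with an assumed install location and placeholder paths (uninstalling VMware Tools from the guest first is generally advised):

```powershell
# Load the MVMC cmdlet module (default install path; adjust if needed)
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

# Convert a VMware VMDK to a dynamic VHDX (source/destination are placeholders)
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'D:\vm\disk1.vmdk' `
    -DestinationLiteralPath 'D:\converted\disk1.vhdx' `
    -VhdType DynamicHardDisk -VhdFormat Vhdx
```

The resulting VHDX can then be attached to a new VM created in Hyper-V Manager.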

 

Hyper-V doesn't want to connect to the host on Windows 10


I added Hyper-V from Windows Features. I have Windows 10 Enterprise. This is what happens when I start Hyper-V: I keep getting the message "An error occurred while attempting to connect to server ****. Check that the Virtual Machine Management service is running and that you are authorized to connect to the server."

Then, in another section, it says the computer *** couldn't be resolved; make sure you typed the machine name correctly and that you have network access.

This is although I'm the owner of the PC, and I checked the BIOS and it says that virtualization is enabled.
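As a first check, it may help to confirm that the Virtual Machine Management service (service name `vmms`) actually exists and is running locally, since that is the service the error refers to; a quick sketch from an elevated PowerShell prompt:

```powershell
# Check the state of the Hyper-V Virtual Machine Management service
Get-Service -Name vmms

# Start it if it is stopped
Start-Service -Name vmms
```

If the service is missing entirely, the Hyper-V platform components may not have finished installing, in which case re-enabling the feature and rebooting is worth trying.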

VMQ issues with NIC Teaming


Hi All

Apologies if this is a long one but I thought the more information I can provide the better.

We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Centre 2012 R2. However, since putting it into production, we are now seeing problems with Virtual Machine Queues (VMQ). These manifest as either very high latency inside virtual machines (we're talking 200 - 400 ms round-trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected; however, the problem does manifest on all hosts. I am aware of these issues having cropped up in the past with Broadcom NICs.

I'll give you a little bit of background into the problem...

First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012, and a decision was taken to bring it up to speed to R2. This was due to a number of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.

Prior to the redesign, each VM host had 12 NICs installed:

  • Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.

  • Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC-level fault tolerance, whilst the remaining 4 NICs for iSCSI MPIO are also balanced across the two NICs for the same reasons.

The initial driver for upgrading was that we were once again seeing VMQ issues in the old converged fabric design. The two vNICs in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible to the same designated NICs in each of the VM hosts).

In this setup, a similar issue was being experienced to our present one. Once again, the converged fabric vNICs in the host OS would, on occasion, either lose connectivity or exhibit very high round-trip times and packet loss. This seemed to correlate with a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration, and would then affect both vNICs' connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs, which in turn would trigger all sorts of horrid goings-on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously, disabling VMQ is something that we really don't want to resort to.

So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.

In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP daughter cards to keep the NICs identical across the board. The Cluster Heartbeat / Live Migration networks now use an SMB Multichannel configuration, utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the redesign is now working very well (Live Migrations now complete much faster, I hasten to add!).

However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.

The Production VM Switch is configured as follows:

  • Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).

  • In each host, the above 4 VM Switch NICs are formed into a switch independent, Dynamic team (Sum of Queues mode); each physical NIC has RSS disabled and VMQ enabled, and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap, and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:

Name                           InterfaceDescription              Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
                                                                                                        Queues
----                           --------------------              ------- ---------------- ------------- ---------------
VM_SWITCH_ETH01                Intel(R) Gigabit 4P I350-t A...#8 True    0:10             1             7
VM_SWITCH_ETH03                Intel(R) Gigabit 4P I350-t A...#7 True    0:14             1             7
VM_SWITCH_ETH02                Intel(R) Gigabit 4P I350-t Ada... True    0:12             1             7
VM_SWITCH_ETH04                Intel(R) Gigabit 4P I350-t A...#2 True    0:16             1             7
Production VM Switch           Microsoft Network Adapter Mult... True    0:0                            28

Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
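For reference, a Sum of Queues layout like the one shown above is typically pinned per NIC with Set-NetAdapterVmq; a sketch using this environment's adapter names and the core assignments from the table (these are this setup's values, adjust to taste):

```powershell
# Pin each VM switch team member to a single, non-overlapping core.
# With Hyper-Threading on, only even-numbered logical processors
# (i.e. physical cores) are used as base processors.
Set-NetAdapterVmq -Name VM_SWITCH_ETH01 -BaseProcessorNumber 10 -MaxProcessors 1
Set-NetAdapterVmq -Name VM_SWITCH_ETH02 -BaseProcessorNumber 12 -MaxProcessors 1
Set-NetAdapterVmq -Name VM_SWITCH_ETH03 -BaseProcessorNumber 14 -MaxProcessors 1
Set-NetAdapterVmq -Name VM_SWITCH_ETH04 -BaseProcessorNumber 16 -MaxProcessors 1

# Verify the resulting queue layout
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors
```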

The loss of connectivity / high latency (200 - 400 ms as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in the past (usually only a ping or two are lost; however, we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).

If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.

VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.

It goes without saying that we really don't want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can't find a resolution here, I will be left with no choice as, ironically, we see fewer issues with VMQ disabled compared to it being enabled.

I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.

I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.

Many thanks

Matt



Show virtual machine (Hyper-V) MAC addresses in the physical switch MAC table


Hi to all,

I have a little problem with my Hyper-V configuration. I have some physical PCs connected to the same physical switch, and one of these PCs (2012 R2) has the Hyper-V role. In the Hyper-V environment I have several VMs connected to a virtual switch in "internal" mode, so all the machines are on the same subnet as my physical LAN.

The problem is that if I display all the MAC addresses on the physical switch, I'm not able to see the MACs of the virtual machines. It seems that the virtual switch does not forward the VM MAC addresses to the physical switch.

Is there a way to have all the MAC addresses available in the physical switch's MAC table?
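For comparison, the MAC addresses Hyper-V has assigned to the VMs, and which virtual switch each one uses, can be listed on the host; a physical switch will only learn a given MAC once frames from that VM actually leave the host through an external virtual switch:

```powershell
# List each VM's network adapter, its MAC address,
# and the virtual switch it is connected to
Get-VMNetworkAdapter -VMName * |
    Format-Table VMName, MacAddress, SwitchName
```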



Hyper-V Architecture


Guys, I'm confused by some of the terms in the Hyper-V architecture.

We know that each environment created by the hypervisor in a Type 1 hypervisor is called a partition. Are partitions equal to VMs?

Is the parent partition a VM?

Is Hyper-V the hypervisor, or does it only make the installed OS the parent partition? If not, what is our hypervisor in Windows Server 2012?

Thanks

Hyper-V Requirements


Guys I have a bunch of questions.

Is SLAT (Second Level Address Translation) required for installing Hyper-V?

What is SLAT?

What is DEP (Data Execution Prevention)?
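For what it's worth, on Windows 8 / Server 2012 and later the built-in systeminfo tool reports the Hyper-V prerequisites, including SLAT and DEP support, when Hyper-V is not yet installed; a quick check from a PowerShell prompt:

```powershell
# Print the "Hyper-V Requirements" section of systeminfo, which includes
# "Second Level Address Translation: Yes/No" and
# "Data Execution Prevention Available: Yes/No"
systeminfo | Select-String -Context 0,4 'Hyper-V Requirements'
```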

Hyper-V Server disk full, can't access the VM


Hey guys,

We're in deep trouble right now. Basically, we have a VM that accidentally got set to a differencing disk. It was stored alone on a separate drive, and now that drive has filled up because of the AVHD file. What's the best way to fix this? We never really had a differencing disk before, so we don't know how to handle it.
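With the VM shut down, and ideally with copies of both files taken first, a differencing chain can be inspected and merged back into its parent with the Hyper-V PowerShell cmdlets; a sketch with placeholder paths (note that the merge needs free space wherever the parent disk lives):

```powershell
# Inspect the chain: ParentPath shows which VHD this AVHD differences against
Get-VHD -Path 'E:\VMs\disk.avhdx' | Select-Object Path, VhdType, ParentPath

# Merge the child's changes back into its immediate parent
# (destructive to the chain - take copies first; the VM must be off)
Merge-VHD -Path 'E:\VMs\disk.avhdx'
```

After a successful merge, the VM's disk configuration must point at the merged parent VHD(X) rather than the now-consumed AVHD.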

Thanks

MMC stopped working pop-up in Hyper-V Manager


Hi all.

My desktop is running Windows 10 Pro, and I turned on Hyper-V Manager.

Then I installed Windows 7 Enterprise in a virtual machine.

The VM itself is OK to use, but I get the error "Microsoft Management Console has stopped working" whenever I turn the VM on or off, which makes Hyper-V Manager close instantly.

I checked the Event Viewer and found that clr.dll causes the error.

Is there anyone who has the same issue? Please help me find a solution.

Thanks.

Faulting application path: C:\Windows\system32\mmc.exe

Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll

RemoteFX Preparation


Hi,

 We have a VDI environment:

1. 2 nodes running Server 2012 R2

2. 100 VMs (60 persistent and 40 non-persistent)

3. Remote users complained that they cannot watch YouTube videos; they barely load and are choppy.

Proposed solution: buy an NVIDIA GRID video card and enable RemoteFX.

Currently the 100 VMs run Windows 7 with RDP 8.0, and the thin clients run Windows Embedded Enterprise with RDP 8.0. Will this be an issue when we start to add the NVIDIA card? Does RemoteFX work only on Server 2012 R2 with Windows 8, and not Windows 7? Our remote users connect over WAN and T1 lines.

Thank you


Tuan

Connect to Hyper-V 2012 R2 standalone from Hyper-V Manager


Hyper-V 2012 R2 standalone is a workgroup machine.

My Win8.1 laptop is a domain-joined machine.

I named my workgroup similarly to the domain, then added my laptop's domain administrator account to the Hyper-V host's local Administrators group.

I configured my laptop following the steps here to add my Hyper-V 2012 R2 standalone machine as a trusted host:

https://technet.microsoft.com/en-us/library/jj647788.aspx

I added both the IP address and the computer name.

Now I manage to Enter-PSSession from my laptop to the Hyper-V standalone machine, but I fail to connect to the host from Hyper-V Manager (Win8.1 RSAT), both using the same credentials.

When connecting using the computer name in Hyper-V Manager, the error message is

"Cannot connect to the RPC service on computer xxxx. Make sure your RPC service is running."

When connecting using the IP address in Hyper-V Manager, the error message is

"You do not have the required permission to complete this task. Contact the administrator of the authorization policy for the computer <ip address>."


Creating custom Add-Ins and or Option for Hyper-V


All

I am interested in knowing how to create some custom add-ins or options for Hyper-V.

Is there an SDK that covers that?
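Hyper-V's documented programmatic surface is its WMI virtualization provider (namespace root\virtualization\v2 on Server 2012 and later), which custom management tools can be built on top of; a minimal taste, assuming it runs on the Hyper-V host:

```powershell
# Enumerate the host and its VMs through the Hyper-V WMI v2 provider
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_ComputerSystem |
    Format-Table ElementName, EnabledState
```

The same classes are reachable from C#/C++ via the standard WMI APIs, which is how most third-party Hyper-V tooling is written.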

Many thanks,

Phil Cox

Disk Queue Length from PerfMon


Hi,

 Can you please explain this picture?


Tuan

Live migration fails from one particular node to one other particular node


Hi

I have a 3-node Windows Server 2012 Hyper-V cluster.

Using CredSSP for live migration authentication.

For some reason, live migration of virtual machines won't work from one particular node to one other particular node using a dedicated live migration network. If I let the admin network be used for live migration, it works.

E.g., node:

1->2 ok
2->1 ok
1->3 ok
3->1 ok
2->3 ok
3->2 fails

Event log message:

The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host 'abc': A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (0x8007274C).

I have checked:

Networking (using Windows teaming), servers have been rebooted, restarted VMMS, ran netstat to confirm the live migration network is listening on port 6600, checked the binding order, ran cluster validation, and recreated the virtual switch used on node 2...
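One more comparison worth making between nodes 2 and 3 is the set of networks each host is actually willing to use for migrations, since a mismatch there forces traffic onto another network; a sketch to run on each node:

```powershell
# List the networks this host allows for live migration
Get-VMMigrationNetwork

# Confirm the host-level migration settings
Get-VMHost | Format-List VirtualMachineMigrationEnabled,
    VirtualMachineMigrationAuthenticationType, MaximumVirtualMachineMigrations
```

If node 2's migration network list differs from node 3's (or the dedicated network's subnet is missing on one side), that would match a one-directional 3->2 failure.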

Any other areas to have a look at?
Looks somewhat similar to:
https://social.technet.microsoft.com/Forums/en-US/9cc654a1-80d3-461d-ba79-920ccfd1bcd8/live-migration-failing-between-nodes-on-2012-r2-cluster-0x8007274c?forum=winserverhyperv


Is there a way to open VHD files without having a Virtual Server?

I was just wondering if there is a way to open VHD files without having to install Virtual Server.
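On Windows 7 / Server 2008 R2 and later, a VHD can be attached natively, with no virtualization product installed, either from Disk Management (Action > Attach VHD) or from the command line; a sketch with a placeholder path:

```powershell
# diskpart can attach a VHD natively; run these commands inside an
# elevated diskpart session (the path is a placeholder):
#   DISKPART> select vdisk file="C:\disks\example.vhd"
#   DISKPART> attach vdisk readonly
#
# Or, if the Hyper-V PowerShell module happens to be available:
Mount-VHD -Path 'C:\disks\example.vhd' -ReadOnly
```

Once attached, the VHD's volumes appear as ordinary drives in Explorer.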

thanks

Hyper-V Replica Broker stuck in "Failed" status.


2 Server 2012 Standard servers in a cluster. I'm trying to add the Replica Broker so I can have my HA VMs replicated off-site as a DR solution. I've tried to install the role over a dozen times, with a dozen different names; no dice.

I've changed the OU security to give the cluster's computer account the right to create computer objects. I've even tried to "stage" a computer object before adding the role, but it just complains that it is already in use in Active Directory. I'm at the end of my rope as to why this isn't working.

 

I've done everything that has been suggested in these forums and in the blogs I could find on Google.

 

Any help would be great. Thanks guys!




