Channel: Hyper-V forum
Viewing all 19461 articles

Mouse is missing in VM


Hi,

I have a parent server running Windows Server 2012 R2. There are 4 virtual machines (VMs) on it, each with Windows Server 2012 R2 installed.
I am accessing the VMs via Remote Desktop.

Three of the machines are OK. The last one I can connect to only via Hyper-V Manager,
but the mouse pointer is only a dot. I checked the devices; there is no mouse device.
In the event log there is this message:

The driver \driver\mouclass failed to load.

I tried to reinstall Integration Services, but the latest version (6.3.9600.16389) is already in place.

Can anybody help me?


JN


Dynamic memory not released to host


Dear Techies,

Ours is a small Hyper-V virtual server infrastructure with three Dell PowerEdge physical hosts (Windows Server 2012 Datacenter) and around 15 virtual machines running on top of it. The hosts are joined to a failover cluster, and each host has 95 GB of RAM. All the VMs run Windows Server 2012 Standard edition.

We have installed Terminal Services (TS Licensing, TS Connection Broker, TS Session Host) in four VMs with the following dynamic memory settings:

Start-up RAM: 2048 MB

Minimum RAM: 2048 MB

Maximum RAM: 8192 MB

The following applications are configured on the server:

  1. Nova application installed
  2. SQL Developer tool configured (ODBC connection established for database communication)
  3. FTPS communication allowed for file transfer
  4. McAfee Agent configured (virus scanner)
  5. Nimsoft Robot agent configured (monitoring)
  6. Terminal Services
  7. Multiple terminal sessions enabled based on customer requirements
  8. BGInfo tool configured through Group Policy for a customized desktop background

The average memory utilization in the terminal servers is 3.6 GB. Under dynamic allocation, the maximum RAM allocated to date is 4 GB. As seen in the Hyper-V console, the current RAM demand is 2300 MB and the assigned memory is 2800 MB.

However, the RAM assigned earlier is ballooned away from the VM as driver-locked memory; this is by design. Even though that memory has been released back to the host, the guest still reports the full 4 GB, which makes the memory utilization report from our monitoring tools show 80% (3.2 GB out of 4 GB).

As a result, the memory utilization report is always based on the currently assigned RAM, not on the maximum configured RAM (8 GB in this case). To make it clear: if the currently assigned RAM is 4 GB and utilization is 3.2 GB, utilization is reported as 80%. Calculated against the maximum RAM capacity of the server, it would be 40% ((3.2/8)*100).
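For what it's worth, the two percentages can be reproduced with the figures from this post; this is plain arithmetic, nothing Hyper-V specific, and it is all the monitoring tool is doing:

```python
# Utilization % depends entirely on which denominator the monitoring
# tool uses: the currently assigned RAM or the configured maximum.
used_gb = 3.2
assigned_gb = 4     # RAM the host currently assigns to the VM
maximum_gb = 8      # the dynamic memory "Maximum RAM" setting

against_assigned = used_gb / assigned_gb * 100
against_maximum = used_gb / maximum_gb * 100
print(f"{against_assigned:.0f}% vs {against_maximum:.0f}%")  # 80% vs 40%
```

So any tool that reads only the currently assigned RAM will report the higher figure by construction; the fix is in the report's denominator, not in the VM.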

Is there any way to release the driver-locked memory from the VM?

Regards, 

Auditya N

 

Does my home computer have Hyper-V ?


Does my home computer have Hyper-V ?

It's not listed in Turn Windows features on or off, and Windows 8 is fully updated.

Thanks.

ODX on Dell MD 3XXX Storage will corrupt the VHDX


I have found that the latest version of the Dell MD storage firmware has ODX on by default. When using storage migration with this turned on, the VHDX files transfer instantly between the LUNs, but the VHDX is corrupt, which causes the virtual server to blue screen and then never boot again. I have yet to find a way to recover any data.

My current workaround is to disable ODX on the array itself, which means that the data is transferred by the hosts but the VHDX files are not corrupted:

show storageArray odxsetting;

set storageArray odxEnabled=false;


Virtual SAN Manager Configuration


I am trying to configure the Virtual SAN Manager for the first time, and whenever I attempt to start the virtual machine that has the Fibre Channel adapter attached I get an error, shown below.

The configuration is as follows: I have a host system running Windows Server 2012 with Hyper-V. Attached to that host is an HP MSA 2040 connected through a single-path Fibre Channel connection with an 8 Gb/s PCI controller card.

Below is the error message I receive when I attempt to start the VM with the Fibre Channel adapter added.



Hyper-V 2012 blue screen


Currently I have three servers running Hyper-V 2012
which form a Hyper-V cluster.

One of the nodes gives me a blue screen and generates the minidump log attached below.

The server raises a hardware RAM alert, but according to HP the driver indicated is a Microsoft one. Could anyone help?

Mini Dump:

https://onedrive.live.com/embed?cid=645BFCC5DD53BC8A&resid=645BFCC5DD53BC8A%2115682&authkey=ADBBXRRfGk9Quy4

Image path: \SystemRoot\system32\DRIVERS\netft.sys

File location problem after moving vm's from vmware to hyper-v cluster


I migrated some VMs from VMware ESXi to Hyper-V 2012 R2. When I run the cluster validation report, a few of the VMs come up with this warning:

The following virtual machines have referenced paths that do not appear
accessible to all nodes of the cluster. Ensure all storage paths in use by
virtual machines are accessible by all nodes of the cluster.

Virtual Machine: Webserver
Storage Path: C:\ProgramData\Microsoft\Windows\Hyper-V
Nodes That Cannot Access the Storage Path: All

Due to this issue, live migration fails. I need to get them into the proper location, C:\ClusterStorage\volume1. What is the best way to fix this problem, as I don't see anything obvious?

Thanks.

Dynamic Disk Related Questions


Hi Members,

I have a Hyper-V server on Windows Server 2012 R2.

I have one VM to which I attached a dynamic VHD of size 600 GB.

Inside the VM, the attached disk shows 399 GB free out of 600 GB.

Hyper-V VHD inspection shows a 599 GB drive.

Question:

If the data on this disk is only about 200 GB, why does it take up the full space on the cluster even though it's a dynamic VHD?
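A dynamic VHD's on-disk size tracks the high-water mark of blocks ever written, not the data currently live in the guest, which is why deleting files doesn't shrink the file. A toy model of that behavior (the 2 MB block size here is an assumption for illustration; real VHD/VHDX block sizes differ):

```python
BLOCK_MB = 2  # assumed allocation block size, for illustration only

def vhd_file_size_mb(touched_block_ids):
    # On-disk size is driven by distinct blocks ever written,
    # not by how much data currently lives in the guest.
    return len(set(touched_block_ids)) * BLOCK_MB

# Guest writes files across 400 GB worth of blocks, then deletes half of them:
touched = range(400 * 1024 // BLOCK_MB)
print(vhd_file_size_mb(touched) // 1024, "GB on disk")  # still 400 GB
```

Until the disk is compacted (or, on VHDX with a guest OS that issues unmap/TRIM, the space is reclaimed), the file keeps its grown size, so the cluster sees something close to the full provisioned capacity.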


KB 2979256


Is there a workaround for this issue which will let us fix affected VMs in situ?

http://support.microsoft.com/kb/2979256

Also, if I understand the problem correctly, are we not meant to upgrade the Integration Components of 2012 R2 VMs via offline servicing at all? What if an upgrade is needed? What are the best-practice methods?

Cheers.

VHD for Hyper-V Guest on Server 2012 R2


Hi there

I have a guest Windows 2008 R2 machine on a Windows 2012 R2 Hyper-V host. It had some problems booting up, so I removed the VHD and am now trying to add the same VHD again. The VHD sits on a SAN, and I've used the iSCSI Initiator on the Windows 2012 R2 server to connect to the target disk. I can connect to the disk, browse it, and copy all of its contents to another disk.

But when I try to add the VHD located on this iSCSI target disk to a Hyper-V guest, I get an error message:

"Error applying Hard Drive changes. XXXX failed to modify device Virtual Hard Disk (Virtual machine ID 81E9D60A-......)

Failed to open attachment "D:\Hyper-V\XXXX.vhdx". Error: The media is write protected."

Any idea why this is happening when I have full read and write access to the iSCSI disk?

Enhanced session connection to a VM - no audio recording


I have a set of VMs that I'd like to connect to with enhanced session mode and audio redirection, but for some reason audio recording has stopped working through the enhanced session connection. The host is Windows 8.1, the guest is Server 2012 R2, and I have the correct policies enabled on both host and guest. I can connect the enhanced session with "Record from this computer" selected for audio capture, but on the guest I only have playback, not capture.

What's really strange is that if I connect a second virtual NIC with a public IP address to this VM, I can RDP directly into the machine (through a regular RDP session, not the Hyper-V enhanced session), and recording redirection works just fine. So far I've tried:

  • rebooting host and guest
  • disabling and re-enabling audio redirection
  • removing the "remote audio" device in the guest's device manager

I've also tried USB redirection using a USB audio device, and this works through enhanced session, but I'd also like to be able to get an audio connection using just the default audio device.  Anyone have any ideas?

Can I join my Server 2012 R2 Hyper-V server to a forest where the DC is one of its VMs?

I built a forest with a 2012 R2 Datacenter DC and Exchange 2013, and wanted to join my Hyper-V server to that domain from the workgroup it started in. Now I cannot RDP to the Hyper-V server, but I can reach the DC. The host appears to be in a hung state. Before I reboot the Hyper-V server, I've been researching what the issue may be. I know you need to restart to join a domain, but what if the DC's VHD is on that very server? Maybe the way out of this is to spin up another DC on another box?

Server 2012 R2 Hyper-V Cluster: possible config issue


Hi,

I've built a test cluster in my lab. The kit consists of:

2 Hosts (5 X 1GB nics each).

1 X 16 port switch

1 server with 2012 R2 (has the iSCSI target installed).

1 VM on another host which acts as a DC/ DNS for the cluster.

On each host server, NIC allocation is as follows:

nic1 (USB): Management network (host 1:192.168.2.40, Host2: 192.168.2.41)

nic2&3 (team): VM Access Network (no ip as it's just a virtual switch)

nic4: iSCSI1 (Host1: 192.168.50.10, Host2: 192.168.51.10)

nic5: iSCSI2 (Host1: 192.168.50.20, Host2: 192.168.51.20)

MPIO is installed and configured. 

On the iSCSI target, I have 1 nic for the network and 2 nics for iSCSI (192.168.51.30, 192.168.50.30).

All three servers run Windows Server 2012 R2 Enterprise, and the two hosts have Failover Clustering and the Hyper-V role installed. All the iSCSI traffic is VLANed on the same switch (VLAN 50).

There is a quorum disk on the iSCSI target and a CSV for virtual hard disks. The quorum disk shows on the owner as drive Q. The CSV drives can only be seen in Disk Management or Server Manager > Disks on both hosts.

To test, I've created 2 HA VMs and live migrated them between both hosts, and that works fine.

I've powered down each host and the VMs come up on the other host (for example, the VMs are on Host 1 and I power it down; if I log onto Host 2, they will then power up there).

I've tested the teaming by removing 1 nic cable at a time and that's fine as well. I've tested MPIO/ iSCSI by removing 1 cable at a time and everything is ok.

The issue is as follows:

Scenario 1: the quorum disk owner is Host 2, and I have a running VM on Host 1. If I unplug the cables from both iSCSI NICs on Host 1, Host 2 goes offline (I can't ping or connect to it, VMs on Host 1 go off, VMs on Host 2 go off, and Failover Cluster Manager on Host 1 cannot find the cluster). If I reboot Host 2, everything works fine and the VMs start up on Host 1. I'm not sure why Host 2 goes offline. Can anyone please advise?

Scenario 2: the quorum disk owner is Host 2 and the owner of the data CSV (Disk 2) is also Host 2; I have a running VM on Host 2. If I unplug the cables from both iSCSI NICs on Host 2, Host 2 does not go offline. Instead, all CSVs and the quorum move to Host 1 (VMs on Host 1 run as normal, VMs on Host 2 run as normal, Failover Cluster Manager on both hosts works as normal, and the CSVs are shown as mounted on Host 2). How is this possible / is this expected behavior?
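One possible angle on scenario 1 is quorum arithmetic. Here is a much simplified model of a 2-node cluster with a disk witness (it deliberately ignores 2012 R2's dynamic quorum, which adjusts votes on the fly, so treat it as a sketch only):

```python
def has_quorum(nodes_up, witness_reachable, total_nodes=2):
    # Each node and the witness disk carry one vote; a cluster partition
    # survives only with a strict majority of the total votes.
    total_votes = total_nodes + 1
    votes = nodes_up + (1 if witness_reachable else 0)
    return votes > total_votes // 2

print(has_quorum(2, True))    # True: 3 of 3 votes
print(has_quorum(2, False))   # True: 2 of 3 is still a majority
print(has_quorum(1, False))   # False: 1 of 3 loses quorum
```

If losing Host 1's iSCSI paths somehow also cost Host 2 its access to the witness LUN, Host 2 could drop below a majority and take the cluster service down with it; checking the cluster log for quorum-loss events right after the unplug test would confirm or rule this out.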

Thanks very much,

Regards,

HA

Generation 2 VMs PXE boot


Hi All,

Can anyone please confirm whether PXE boot works with Gen 2 VMs in Hyper-V? I have WDS installed in standalone mode. Whenever I create a new Gen 2 VM, it gets an IP, downloads some file, and then says:

Station IP address is x.x.x.x
Server IP address is x.x.x.x
NBP filename is boot\x64\wdsmgfw.efi

NBP filesize is 1459552 Bytes
Downloading NBP file...
PXE-E18: Server response timeout.
Boot Failed. EFI SCSI Device

Boot Failed. EFI Network.
No Operating System was Loaded. Press a key to retry the boot sequence

I have tried with and without the Secure Boot option enabled. I have WDS (standalone mode), MDT, and Gen 2 VMs. Can anyone please guide me on how to make Gen 2 VMs boot over PXE?

Chaitanya.




Hyper-V cluster slow network speeds / network config issue?



Hi

I'm looking at a setup that's as follows:

2 Cisco SG300-28 switches.

2 HP2920 switches for iSCSI (dedicated switches for only iSCSI)

Standalone Windows Server 2012 R2 Hyper-V host: SH1 (hosts the DC for the cluster). 1 quad-port Broadcom NIC: 2 ports teamed for VM traffic, 2 for management traffic. All NICs are 1 Gb.

Cluster host 1, Windows Server 2012 R2 Hyper-V host: CL1. 1 quad-port Broadcom NIC with 2 ports teamed for management. 2 quad-port Intel NICs with 2 ports for iSCSI (MPIO), 2 ports teamed for cluster traffic, 2 ports teamed for live migration traffic, and 2 ports teamed for VM traffic. All NICs are 1 Gb.

Cluster host 2, Windows Server 2012 R2 Hyper-V host: CL2. 1 quad-port Broadcom NIC with 2 ports teamed for management. 2 quad-port Intel NICs with 2 ports for iSCSI (MPIO), 2 ports teamed for cluster traffic, 2 ports teamed for live migration traffic, and 2 ports teamed for VM traffic. All NICs are 1 Gb.

One team member from each host connects to each switch. For example, on CL1 one NIC from the management team connects to a port on SG300 switch 1 and the other connects to SG300 switch 2. Similarly, on CL2 one NIC from the management team connects to a port on SG300 switch 1 and the other to SG300 switch 2.

The SG300s have 3 VLANs: VLAN 1 (management and VM data; no choice here), VLAN 6 (live migration), and VLAN 7 (cluster traffic). The SG300s are not connected to each other directly; each has one port connected to the HP core switch (a 10/100 switch that I do not have access to): SG300-1 >> HP Core and SG300-2 >> HP Core.

VLANs 6 and 7 are not trunked across to the HP Core.

NIC teaming on all adapters is set to Switch Independent and Dynamic.

Everything works fine in terms of failover, etc. However, network speeds are pretty bad. For example:

Data transfer (3 GB file):

From SH1 to CL1: 30MB/s av.

From SH1 to CL2: 10MB/s av.

From DC on SH1 to CL1: 30 MB/s av.

From DC on SH1 to CL2: 10 MB/s av.

From SH1 to VM on CL1: 30 MB/s av.

From SH1 to VM on CL2: 10 MB/s av.

From CL1 to CL2: 800+ MB/s, but this uses SMB Multichannel, as data goes across the cluster and live migration teams as well.
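As a sanity check on the figures above, here is roughly what the 1 Gb links described can deliver at best (simple arithmetic; the 94% efficiency figure is a rough assumption for TCP/Ethernet overhead):

```python
def link_ceiling_mb_s(gbps, links=1, efficiency=0.94):
    # line rate converted to MB/s, scaled by link count and protocol efficiency
    return gbps * 1000 / 8 * links * efficiency

single = link_ceiling_mb_s(1)   # one 1 GbE port: roughly 118 MB/s usable
print(round(single), "MB/s per link")
print(round(30 / single * 100), "% of one link used at 30 MB/s")
```

30 MB/s is only about a quarter of a single port's capacity, so the teams themselves are unlikely to be the ceiling. If inter-host traffic is hairpinning through the 10/100 HP core, that alone would cap throughput far below these figures, which would fit the improvement seen after daisy-chaining the SG300s.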

I've changed the teaming mode to Hyper-V Port and to Address Hash and still experience poor speeds. In some cases one-way traffic goes up, but the other way it stays at a maximum of 30 MB/s.

If, however, I disconnect SG300 switch 1 from the HP core and connect it instead to SG300 switch 2, and connect that to the HP core (SG300-1 >> SG300-2 >> HP Core), my speeds on every transfer go up to 600-900 MB/s.

The SG300s do not show any packet loss or errors. The cabling is Cat 6 and tested. I'm aware of the Broadcom/VMQ issue and for testing have disabled VMQ on the physical NICs on the standalone host; this didn't make a difference. I have broken the teams on the standalone host (SH1), disabled all NICs but one management NIC, and tested transfers to and from my laptop (plugged into the same switch) with no difference. I've tried with jumbo frames on and off, and with STP (Rapid) enabled and disabled on all interfaces.

Can anyone please offer some insight on the above? Additionally, is the setup correct, or do the SG300 switches need to be interlinked in addition to being connected to the HP core (I would imagine this would cause network loops unless they were stacked)? Would I be correct in assuming that STP should be enabled on all interfaces except for the uplink to the HP core?

Thanks,

HA





Hyper-V 2012 Failover Cluster Physical Switch Failure


So I have a 3-node cluster of Hyper-V 2012 servers.

They are connected to a stack of switches. All physical network connections run through this stack.

I need to take the switch stack down for a few minutes to update the firmware. The switches will be 100% down. 

Will the virtual machines be OK, other than being inaccessible during the firmware update?

Or will I have to shut down the failover cluster, update the switches, and then bring the cluster back up? I'd like to avoid that if possible...

VMware ESX VM migration to Microsoft Hyper-V cluster


All,

I have the following setup prepared to migrate VMs from a VMware ESX server to a Hyper-V cluster on Windows 2012 R2:

# The Hyper-V cluster has three nodes based on Cisco servers, each with 300 GB local storage used to install Windows 2012 R2.

# Storage: a 1 TB CSV file system used to store the VMs.

We have created the HyperV cluster and it is working fine.

Now the issue we are facing with VM migration using MVMC 3.0 is that during migration it asks for shared storage on the Hyper-V host, which we do not have. So my question is: how can I resolve this issue?

I checked, and I cannot share C:\ClusterStorage\Volume1 (where the CSV volume is).

How can I use the CSV file system for the migration?

The other option available is to present a new LUN to a specific Hyper-V host from storage, use it to convert the ESX VM to a Hyper-V VM, and then migrate the VHDX file of that VM to the CSV volume to make the VM highly available.

Thanks

Vijay Dalimkar

 




Very Strange Network Issue With Two Guests on 2012 R2 Hyper-V Failover Cluster


Hi all. We're having an odd issue with two guests on our 2012 R2 failover cluster.

In a nutshell, if we shut down a particular server (I'll call it Server A), another totally different server (Server B) on the same node loses its network connectivity to the domain. If we start Server A back up, network connectivity returns on Server B.

At first I thought Server A might be running a service that was somehow linked to Server B, so I decided to disable Server A's NIC. Interestingly, that had no effect on Server B's connectivity.

The next step I tried was pausing Server A, and again there was no adverse effect on Server B's connectivity.

Next I live migrated Server A to another node. This action did cause Server B to lose its network connection.

One other clue is that if I ping Server B from either of the Hyper-V hosts in the cluster, I never lose the network connection to Server B.

So I suspect this is some network issue on the cluster, but I'm kind of at a loss as to where to go from here.

Has anyone seen this behavior before or does anyone have any troubleshooting suggestions I can try?

Thanks! 


George Moore

Host vs Guest disk performance


I've got a disk performance issue in my Hyper-V cluster. I have 2 IBM x3850 X6s clustered, running Server 2012 R2, with an IBM XIV connected via 8 Gb Fibre Channel. When I run my SQLIO test from the host against an XIV drive (CSV or drive letter), I get 180,000 IOPS / 1300 MB/s throughput. When I run the same test from a guest VM with its VHDX on the same CSV, I get 30,000 IOPS and 250 MB/s throughput.

My guest VM is Server 2012 R2 (Gen 2, fresh install). Where it gets even more interesting is that when I use a pass-through disk on the same VM, I get 90,000 IOPS / 700 MB/s throughput. Everything I've read online claims pass-through and VHDX should perform very similarly, as should host and VM. There are no other VMs on the host.
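Just to quantify the gap from the numbers quoted (plain arithmetic on this post's figures, nothing more):

```python
host_iops = 180_000
tests = {"VHDX on CSV": 30_000, "pass-through disk": 90_000}

for name, iops in tests.items():
    # fraction of the host's native IOPS that survives virtualization
    print(f"{name}: {iops / host_iops:.0%} of host IOPS")
```

Retaining only ~17% through VHDX while pass-through retains 50% suggests something specific in the virtual storage path (storage QoS/IOPS limits, outdated integration services, or filter drivers on the CSV) rather than inherent VHDX overhead, which is normally expected to sit much closer to pass-through.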

Has anyone come across this before? 

MPIO Dual Controller ALUA BSOD in Cluster

Hello,

I'm having issues with iSCSI MPIO on an ALUA-enabled dual-controller SAN (Promise VessR2600tiD).
The device is 2012 R2 certified (http://www.windowsservercatalog.com/item.aspx?idItem=752f327a-05c8-ec2a-95c5-671460e6b7a9&bCatID=1282) with current firmware. We are running VM workloads on 2 LUNs at the moment over dedicated 10G iSCSI connections (Intel X540-T1/T2 NICs) in a 2-node Hyper-V cluster.

The NICs have stripped-down protocol support (IPv4 only, with jumbo frames) and a dedicated subnet.
Everything works fine until MPIO kicks in. When I try to fail over a controller on the SAN (change the preferred one), I get strange BSODs in msdsm.sys.

What's stranger is that I basically tried this recently:
2x Hyper-V servers in a cluster, no MPIO, on LUN1 and LUN2 on CTRL1
1x standalone server WITH MPIO on LUN3 (LUN masking turned on) on CTRL2

If I change the ALUA preference on LUN3 -> CTRL1, the Hyper-V cluster servers get a BSOD, not the standalone server. Also,
on the standalone server the path state stays Active/Optimized, although the TPG state changes to Standby and the second portal becomes active.

The vendor wasn't able to help me and I'm kind of at my wit's end here. It seems that no matter which server I set up MPIO on, the cluster servers BSOD, even if the server with MPIO is not part of the cluster and doesn't share a LUN with it. There's nothing in the server or SAN logs.
