Channel: Hyper-V forum

Windows Server 2019 Hyper-V Guest on Windows Server 2019 Hyper-V Host


Hi,

Short version: the Hyper-V role is installed on Windows Server 2019 with a gen 2 Windows Server 2019 VM. A file share was created on that Windows Server 2019 VM. Opening or copying a file (200 MB) from the share with a Windows 10 client machine is really slow.

Please see the detailed issue description here: https://social.msdn.microsoft.com/Forums/en-US/3248a691-d32f-4ac6-9175-94524afc35ba/windows-server-2019-hyperv-guest-on-windows-server-2019-hyperv-host-networking-performance-error?forum=servervirtualization

I guess we posted it in the wrong place.
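For reference, a quick way to dump the host-side offload settings that commonly come up with slow vSwitch file copies on 2019; this is only a hedged diagnostic sketch, and the adapter name is an assumption:

# Run on the Hyper-V host; "Ethernet 1" is a placeholder for the NIC bound to the vSwitch
Get-NetAdapterVmq -Name "Ethernet 1"
Get-NetAdapterRsc -Name "Ethernet 1"
Get-NetAdapterRss -Name "Ethernet 1"
# Software RSC state on the 2019 virtual switch itself
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled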



Server 2019 - SET - NDIS reset Network card


Dear all

Environment:

3-node Hyper-V failover cluster

each node:

- 2x Intel Xeon Gold 6138
- 512 GB RAM
- 1x 2 Port Intel X722
- 3x 2 Port Intel X710
- latest NIC driver and firmware
- Windows Server 2019 17763.805

SET:

Load-balancing algorithm: Dynamic

1x SET with the 2 ports of the Intel X722 for Cluster and Live Migration traffic
1x SET with the first port of all 3 Intel X710 network cards for iSCSI traffic
1x SET with the second port of all 3 Intel X710 network cards for data traffic
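For reference, a hedged sketch of how one of these SET switches could be created and its load-balancing mode verified (switch and adapter names are placeholders, not our real names):

# Create a SET switch from three X710 ports (adapter names are assumptions)
New-VMSwitch -Name "SET iSCSI" -NetAdapterName "SLOT 4 Port 1","SLOT 5 Port 1","SLOT 6 Port 1" -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Dynamic load balancing, as in the configuration above
Set-VMSwitchTeam -Name "SET iSCSI" -LoadBalancingAlgorithm Dynamic
# Verify team members and algorithm
Get-VMSwitchTeam -Name "SET iSCSI" | Format-List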

After some time (sooner with traffic than without), I see the following events:

----------
Failed to move RSS queue 2 from VMQ 1 of switch 95AD556E-D325-46BB-87EC-AFDEA4ACE8F7 (Friendly Name: SET iSCSI), ndisStatus = -1073741823 .
----------
The network interface "Intel(R) Ethernet Controller X710 for 10GbE SFP+ #6" has begun resetting.  There will be a momentary disruption in network connectivity while the hardware resets. Reason: The network driver detected that its hardware has stopped responding to commands. This network interface has reset 1 time(s) since it was last initialized.
----------
NIC /DEVICE/{F4B0FE78-B659-4D0E-AB0A-83E52D3A5419} (Friendly Name: Intel(R) Ethernet Controller X710 for 10GbE SFP+ #4) is no longer operational.
----------
Intel(R) Ethernet Controller X710 for 10GbE SFP+ #6
 Network link is disconnected.
----------

Then the network card comes back for a minute, and the last 3 messages above are written again every minute.

The only way to get back to normal is to evacuate the VMs to the other hosts and reboot the node.

Any idea what could be wrong here?

Thanks


Hyper-V Cluster - iSCSI and MPIO / TEAM Interface


Hi,

We have two HP servers, each with 2x 10G network ports. Can anyone provide the best practice for using both 10G NICs in a redundant configuration? Can we use NIC teaming or MPIO? Can both interfaces be configured with IPs in the same VLAN?
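For what it's worth, for iSCSI traffic the usual guidance is MPIO on the individual NICs rather than a team; a minimal, hedged sketch of enabling it (it assumes the iSCSI initiator sessions already exist and the round-robin policy is just an example):

# Install and enable MPIO for iSCSI devices
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Round-robin across both 10G paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
# Check which hardware MPIO has claimed
Get-MPIOAvailableHW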

Thanks,


suhag

Accessing a mobile device via Hyper-V


Greetings,

I am new to Hyper-V and have set up a Windows 10 guest VM on it; however, I need to access an Android device that is physically connected to the PC. My hope is to be able to see the device from the guest VM and access it via ADB (Android Debug Bridge).

I have done this previously with VMware Player, but being new to Hyper-V, it does not seem as trivial here.

Can anyone guide me on what I need to do to successfully communicate with my mobile device?
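For reference, since Hyper-V has no built-in USB passthrough for client devices, one common workaround is ADB over TCP/IP; a hedged sketch (the device IP is an assumption, and it assumes the guest VM can reach the Android device over the network):

# On the physical PC, with the device attached over USB:
adb tcpip 5555
# In the Windows 10 guest VM (with platform-tools installed):
adb connect 192.168.1.50:5555    # placeholder IP of the Android device
adb devices                       # the device should now be listed over the network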

Appreciate any help.

Regards

SDDC Management cluster role Error


I am trying to use Windows Admin Center with my Microsoft hyperconverged cluster. Using PowerShell, I installed the SDDC Management role, but it doesn't start, and I get these errors:

SDDC Management cluster resource failed to come online.

Resource: SDDC Management
Status: There are no more endpoints available from the endpoint mapper.

Cluster resource 'SDDC Management' of type 'SDDC Management' in clustered role 'Cluster Group' failed. The error code was '0x6d9' ('There are no more endpoints available from the endpoint mapper.').

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
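For reference, the resource type is usually registered with the command below (a hedged sketch based on the common Windows Admin Center hyperconverged setup steps; the sddcres.dll path is the documented default), and the resource state can then be checked with Get-ClusterResource:

# Register the SDDC Management resource type (run on one cluster node)
Add-ClusterResourceType -Name "SDDC Management" -dll "$env:SystemRoot\Cluster\sddcres.dll" -DisplayName "SDDC Management"
# Check its state, owner node and type
Get-ClusterResource -Name "SDDC Management" | Format-List Name, State, OwnerNode, ResourceType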

Which way should I look?

Slow VM Performance on certain Hyper-V nodes compared to other nodes in the cluster

On some Hyper-V nodes, we have observed significant performance slowness compared to other nodes running in the same cluster. The slowness is most noticeable and reproducible on our SQL VMs, due to the long-running nature of some of their queries, but it is observed under all kinds of loads, even regular monthly patching.

The performance difference most evident from what we can see (which may not be the only issue, though) is disk access speed between the different nodes. For example:
We have a SQL query that runs on one of our SQL VMs. On the fast nodes in the cluster, that particular query finishes execution in 7 minutes on average, and we observe disk access speed in the range of 18 MB/s for each sqlserver process thread. On the other hand, when that same query on the same VM runs on the slower nodes, it finishes in around 12 minutes and we observe disk access speed of about 8 MB/s per thread.

This speed difference can even be observed in real time. If we Live Migrate the VM while it is executing the query, we can immediately see the speed difference depending on whether we migrated that VM to a fast node or slow node.

Other resources are not under pressure, memory and CPU resources seem fine. The nodes in the cluster run comparable hardware and drivers and we have been unable to find a difference between them.

Suspecting bad hardware at first, we opened a support case with Cisco, since the nodes are Cisco UCS blades. After extensive troubleshooting with them, they determined that the issue is not hardware related and lies in the host operating system itself. Changing driver versions did not help, and moving server profiles onto supposedly slow hardware made that hardware fast. As a last attempt, we sysprepped the operating system on a fast node and reimaged a slow node with that sysprepped image; the reimaged node then became as fast as the fast nodes.

At this time, we *suspect/guess* that this is something related to iSCSI. Some of the tests we did: we verified settings across all nodes for offload settings, Jumbo frames and every other imaginable setting. We even installed drivers for an old Datacore SAN to see if they optimize the stack for iSCSI. We assigned iSCSI LUNs directly to VMs to bypass CSVs. None of that helped.

Note all our CSV storage (including boot volumes) is on iSCSI LUNs. The CSVs themselves are fine and run fast, data copy speed is around 1.3GB/s when copying between CSVs. All our nodes are configured the same using Cisco server profiles. Servers run Windows 2012 R2 Core.
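For reference, this is the kind of per-node comparison we have been doing, expressed as a hedged PowerShell sketch (the adapter name filter is a placeholder); the output from a fast and a slow node can then be diffed:

# Run on both a fast and a slow node, then compare the output
Get-NetAdapterAdvancedProperty -Name "iSCSI*" | Sort-Object DisplayName | Format-Table Name, DisplayName, DisplayValue
Get-NetAdapterRss -Name "iSCSI*" | Format-List Name, Enabled, Profile
Get-IscsiSession | Format-Table InitiatorNodeAddress, TargetNodeAddress, IsConnected, NumberOfConnections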

Thanks to anyone who can provide some insight on this. I already have a support case open but just wanted to see if anyone has some input.

CSV STATUS_IO_TIMEOUT When starting a specific VM


Hi, 

I have the strangest problem with a customer's infrastructure that I can't seem to get to the bottom of.

Server OS Windows Server 2019 Standard

Two servers in the cluster access the CSV on a Dell EMC SC277984 via iSCSI over a 10 Gb backbone.

Long story short: when a specific VM is rebooted (specifically the act of powering it on), it causes the CSV to drain and produces a myriad of errors and mayhem. It takes the entire estate down for about 15 minutes while everything resumes and powers up again.

This error is found in the event logs.

Cluster Shared Volume 'Volume1' ('Cluster Disk 2') has entered a paused state because of 'STATUS_IO_TIMEOUT(c00000b5)'. All I/O will temporarily be queued until a path to the volume is reestablished.

I have tried rebooting the SAN, rebooting the hosts, and running a Cluster Validation Test (it passes with no errors).

I'm keen to try removing the role from the cluster and recreating the VM in Hyper-V from scratch, re-attaching its original disks, but I haven't had the opportunity to try this yet.
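For reference, one way to check the CSV state per node and pull the cluster log around the time of the pause (a hedged sketch; the time span and destination path are just examples):

# CSV state as seen from every node (Direct vs FileSystemRedirected/BlockRedirected)
Get-ClusterSharedVolumeState | Format-Table Name, Node, StateInfo, VolumeFriendlyName
# Collect the cluster log for the last 15 minutes from all nodes
Get-ClusterLog -TimeSpan 15 -Destination C:\Temp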

WS2K19 Essentials Hyper-V powers down VMs each week


For the last month, something strange has been happening with our new Hyper-V server. During the summer holidays we bought and installed a new physical server with Windows Server 2019 Essentials, on which we also enabled and configured the Hyper-V role. On that physical Hyper-V host, we have three VMs running:

1. Domain controller;
2. File sharing server, part of the domain;
3. App server, also part of the domain.

All of the three VM's also have Windows Server 2019 Essentials as OS. Each WS2K19 Essentials server has its own dedicated license.

Each Monday between 9:30 AM and 10:30 AM, Hyper-V powers down both the file and app servers. When we look in the logs, there isn't anything that explains the power-down of the VMs. Updates are also disabled, to make sure they aren't the culprit.

Does anyone have an idea of what might be wrong, or in which direction to look? I've been looking for over a month now and can't seem to find anything.
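For reference, one hedged way to pull the Hyper-V worker/VMMS events around the Monday window to see what initiated the stop (the date below is just a placeholder for the Monday in question):

# VM state-change events from the Monday morning window
$start = Get-Date "2019-11-04 09:00"
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-Hyper-V-Worker-Admin','Microsoft-Windows-Hyper-V-VMMS-Admin'
    StartTime = $start
    EndTime   = $start.AddHours(2)
} | Format-Table TimeCreated, Id, Message -AutoSize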

Thanks!


Hyper-V Manager mobile guide

The Hyper-V Virtual Machine Management service was unable to establish a connection for the virtual machine migration because the correct protocol was not supported by the target host.

VLAN tagging not working in Hyper-V


I have a Dell server running Windows Server 2012 R2 Standard with Hyper-V. The server is equipped with 4 NIC interfaces, all of them built-in Broadcom NetXtreme Gigabit NICs. These 4 NICs are teamed using Windows NIC teaming, which gives me a Microsoft Network Adapter Multiplexor Driver NIC. This teamed NIC is then used for the virtual switch in Hyper-V, configured as an External Network with "Allow management operating system to share..." ticked. The VLAN ID box in Virtual Switch Manager is unticked.

The 4 ports on the switches are aggregated and configured as trunk ports with a native VLAN of 16. The host server has a static IP in the VLAN 16 range, works fine, and can connect across the network. All 4 physical NICs have Priority and VLAN enabled. The teamed connection has 'Virtual Machine Queues - VLAN ID' enabled.

Now my issue is that if I enable VLAN tagging on a virtual machine, that machine won't work at all, even if I give it a static IP; if I turn off VLAN tagging, it works.
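For reference, a hedged sketch of how the VLAN can be set and verified on the VM's network adapter, plus a check of the team underneath the vSwitch (the VM name and VLAN ID are placeholders):

# Tag the VM's vNIC for VLAN 16 (access mode)
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 16
Get-VMNetworkAdapterVlan -VMName "TestVM"
# Confirm the LBFO team configuration under the virtual switch
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members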

Failover Clustering - Hyper-V - WS 2019 Issues Migrating VMs (Flickering States)


Hello,

We are currently seeing the following issues when migrating VMs from Windows Server 2016 clusters or a Windows Server 2019 cluster to a newly built Windows Server 2019 cluster. We are building out these clusters as usual and have been running and migrating from cluster to cluster on 2016 without any issues (3000+ VMs). The new 2019 clusters validate with 100% success on every single parameter. We are using Mellanox RDMA cards in 4-node HCI or non-HCI clusters.

When VMs are moved from a 2016 cluster to a 2019 cluster, or from 2019 to 2019, some VMs start flickering between states in Failover Cluster Manager. They cycle between Starting/Stopping/Failed, and the Failover Cluster Manager console crashes. However, Hyper-V on the owner node shows them as running fine. We have tried migrating these VMs via VMM, and by un-clustering them and bringing them in via Move in Hyper-V, and once they are back in the failover cluster, the flickering starts.

It does not matter whether the storage is moved along with the VM or not; the problem still arises.

Is there something happening in Server 2019 that we are not aware of that results in such behavior? The fix has essentially been to drain all roles except the faulty one, stop the cluster service, reboot the node, and bring the VM back; it then stops behaving this way.
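For reference, a hedged sketch of the comparison that shows the mismatch we see: the cluster resource state versus what Hyper-V reports on the owner node:

# What the cluster thinks
Get-ClusterResource | Where-Object ResourceType -eq 'Virtual Machine' | Format-Table Name, State, OwnerNode
# What Hyper-V on the owner node thinks
Get-VM | Format-Table Name, State, Status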

Thanks for your help!

Hyper-V S2D Cluster and Network Interface usage


Hi,

I have a 2-node Hyper-V S2D cluster with 2 interfaces in a team for the OS and 2 FC interfaces for storage replication and VM networks. I would like to be sure that both FC interfaces are used for storage and VM networks, and I would like to know the amount of traffic on each interface for each service (storage, replication, VM networks, ...).

Is it possible to get a chart with PerfMon or PowerShell?
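For reference, one hedged way to sample per-adapter throughput with PowerShell and chart it afterwards (the counter names below are the standard ones; instance names will differ per host, and the output path is just an example):

# Physical and virtual adapter throughput, sampled every 2 seconds for one minute
$counters = '\Network Interface(*)\Bytes Total/sec',
            '\Hyper-V Virtual Network Adapter(*)\Bytes/sec'
Get-Counter -Counter $counters -SampleInterval 2 -MaxSamples 30 |
    Export-Counter -Path C:\Temp\nic-usage.blg -FileFormat BLG   # open the .blg in PerfMon for a chart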

Thank you for your help

Cannot allocate desired amount of RAM to Hyper-V machine, although the host has more than enough free memory

Hi!

I recently bought a dedicated server, and I wanted to try Windows Server 2016 (evaluation version).

To keep it short: my host server has 8 GB of RAM and uses about 1.3 GB when idle, but when I try to allocate 5500 MB to a Hyper-V machine, it throws the classic error "Not Enough Memory in the system to start the virtual machine".

I tried messing with the settings a bit, such as NUMA spanning and Dynamic Memory (the VM starts with Dynamic Memory, but it won't allocate more than 4.6 GB of RAM to the VM when it demands more, even while the host still has more than 1.5 GB of RAM free).

In the end nothing worked.

One thing I was able to notice is that while the RAM on the host is free, some amount of RAM is cached (technically that is free RAM; it clears up if needed), and here is the catch: I cannot allocate RAM to the Hyper-V machine if doing so would require freeing up that cached RAM.

So here is an example: if 1.5 GB of RAM on the host machine is cached, and the host uses 1.5 GB at idle, that means that of the total 8 GB only 5 GB remains. And what I am able to achieve is to allocate only up to that "totally free" 5 GB.


This is just a hypothesis, but I tested it, and it is really weird how the limit is always the cache size.
But maybe I am wrong...
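For reference, a hedged way to watch the numbers behind this hypothesis while starting the VM (the standby-cache counters correspond to the "cached" RAM shown in Task Manager):

# Free vs standby (cached) memory on the host
Get-Counter '\Memory\Available MBytes',
            '\Memory\Standby Cache Normal Priority Bytes',
            '\Memory\Standby Cache Core Bytes'
# What Hyper-V has actually assigned and what the guest demands
Get-VM | Format-Table Name, State, MemoryAssigned, MemoryDemand, MemoryStatus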

I would be extremely happy if someone has a clue or knows how to deal with this problem, as I am getting desperate.

Thanks :)

Windows Boot Manager appears when restarting VM with Windows Server 2019 Guest OS


Hi,

I've seen this problem for a while now - first in existing deployments (which made me think it was an issue with the infrastructure), but now on new Hyper-V deployments as well:

When I try to restart a VM by actually pressing the restart button in the Windows Server 2019 guest OS, the VM from time to time boots into the Windows Boot Manager:

There is absolutely no issue with the attached VHDX files; the problem can occur on the simplest deployments with one local disk. And if I select the "Windows Server" entry, everything boots like a charm.

My problem right now is that this boot manager does not have a countdown. That's a real problem for our staff, who have no access to the Hyper-V console but are able to restart a VM by accessing it via RDP.

My question now is: am I the only one out here with this problem? I did some research, but I either used the wrong search phrases or I am missing something big here.
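For reference, in case it helps with the missing countdown: the boot menu timeout can be set inside the guest OS with bcdedit (a hedged workaround, not a fix for the underlying issue; 5 seconds is just an example):

# Run in an elevated prompt inside the Windows Server 2019 guest
bcdedit /timeout 5            # give the boot manager a 5-second countdown
bcdedit /enum '{bootmgr}'     # verify the timeout value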

Thanks in advance for your time and effort - and sorry for my bad English, I'm not a native speaker.


Carsten Brenner - IT-Engineer @ cloud4you AG (Germany)


Hyper-V VM export


Hello,

We have one Hyper-V VM with a 100 GB C: drive and an 8 TB D: drive. We are planning to migrate this VM to Azure with Azure ASR. Since Azure ASR does not support this VM due to the data disk size, we want to export the VM locally with only the boot drive and then upload the VHD to Azure.

Is it possible to export the Hyper-V VM with only the C: drive and exclude the D: drive?
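For reference, one hedged approach (VM name, paths and controller location are assumptions): detach the D-drive VHDX from the VM configuration, export the VM, then convert the OS disk to a fixed-size VHD, which is what Azure expects for an uploaded disk:

# Note the data disk's controller location so it can be re-attached later, then detach it
Get-VMHardDiskDrive -VMName "MyVM" | Format-Table ControllerType, ControllerNumber, ControllerLocation, Path
Remove-VMHardDiskDrive -VMName "MyVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
# Export only what is still attached (the C: drive)
Export-VM -Name "MyVM" -Path "E:\Export"
# Azure expects a fixed-size VHD for upload
Convert-VHD -Path "E:\Export\MyVM\Virtual Hard Disks\MyVM-C.vhdx" -DestinationPath "E:\Upload\MyVM-C.vhd" -VHDType Fixed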


suhag


Hyper-V cluster: Unable to fail VM over to secondary host


I am working on a Server 2012 Hyper-V cluster. I am unable to fail my VMs over from one node to the other using either live or quick migration.

A forced shutdown of VMHost01 will force a migration to VMHost02. And once we are on VMHost02, we can migrate back to VMHost01, but once that is done, we can't move the VMs back to VMHost02 without a forced shutdown.

The following error pops up:

Event ID: 21502 The Virtual Machine Management Service failed to establish a connection for a Virtual machine migration with host.... The connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because connected host has failed to respond (0X8007274C)

Here's what I noticed:

VMMS.exe is running on VMHost02; however, it is not listening on port 6600. I confirmed this after a reboot by running netstat -a. We have tried setting this service to a delayed start.

I have checked Firewall rules and Anti-Virus exclusions, and they are correct. I have not run the cluster validation test yet, because I'll need to schedule a period of downtime to do so.

We can start and stop the VMMS.exe service just fine and without errors, but I am puzzled as to why it will not listen on port 6600 anywhere. Does anyone have any suggestions on how to troubleshoot this particular issue?
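For reference, a hedged sketch of the live-migration settings and port check I would compare on both hosts (the host name is a placeholder, and Test-NetConnection assumes at least 2012 R2 on the node you run it from):

# Migration configuration on each host
Get-VMHost | Format-List VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType, MaximumVirtualMachineMigrations
Get-VMMigrationNetwork
# From the other node: is anything answering on the migration port?
Test-NetConnection -ComputerName "VMHost02" -Port 6600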

Thanks,

Tho H. Le


No network connection between multiple VMs


Hi folks, I hope you can help me.

I have searched different forums but could not find a solution related to my problem.

Situation:
I have 3 Server 2016 VMs, all connected to an external network. I have 2 network cards in my machine: 1 Broadcom used by the machine itself (not used by Hyper-V).

I used the second network card (Intel i219-V Gigabit) as my "external network" in the Virtual Switch Manager so I could communicate between the VMs and a laptop hooked up directly to that network card with a crossover cable.

The reason being that I'm studying for SCCM 2016 and I need to deploy to a real machine for testing BIOS WMI queries.

Now my problem is that the connection between the VMs shows "no connection", but once I power on the laptop, and thus also the external network, the virtual network cards of the VMs show as connected.

This is really annoying, since of course I want them to be "connected" without relying on the external network card.
There is also no hardware switch being used.

I hope someone can give me a permanent fix or a workaround.
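For reference, one hedged workaround if the goal is only VM-to-VM traffic that survives the physical link going down: add a second, private switch (no physical binding, so it never reports "no connection") and connect the VMs to it as well; names below are placeholders:

# A switch with no binding to a physical NIC
New-VMSwitch -Name "LabPrivate" -SwitchType Private
# Give each VM an extra adapter on that switch
Add-VMNetworkAdapter -VMName "DC01" -SwitchName "LabPrivate"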

Thanks


Nvidia Quadro P4000 passed through with DDA, error code 43


Hi

I've installed a server with Windows Server 2016 Core and the Hyper-V role. On this host I've set up a virtual machine, installed Windows Server 2012 R2 on it, and also successfully passed through an NVIDIA graphics card, a Quadro P4000, with DDA.

The problem I encounter is that after installing the driver for the card, it shows up in Device Manager as expected, but with the text "Windows has stopped this device because it has reported problems. (Code 43)".

Obviously it doesn't work, and I've asked NVIDIA if they can point me in any direction as to where I've gone wrong, but their response was that this should work and that the P4000 supports running in virtual machines. I've tested and made sure I've followed every instruction I could find, but still no change.

I've tested different drivers as well, but still the same error. How can I solve this?
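For reference, Code 43 with DDA often comes down to the MMIO space settings from the generic DDA deployment steps; a hedged sketch of the values commonly suggested for a single GPU (the VM name and sizes are assumptions, and the device must be disabled on the host and dismounted before assignment):

# MMIO space for the VM (values from the generic DDA guidance, adjust for your GPU)
Set-VM -VMName "GpuVM" -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
# Typical assignment flow, for comparison with what was done here:
# $locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId <GPU instance id>).Data[0]
# Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
# Add-VMAssignableDevice -LocationPath $locationPath -VMName "GpuVM"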

Appreciate all help I can get, thank you.

iWARP RDMA not working on vNICs

$
0
0

In preparation for setting up S2D, I want to test RDMA (iWARP) communication.
We use QLogic QL41164HMRJ NICs (Dell R640).
RDMA communication works great (diskspd.exe) if I use the pNICs directly.
The counters in Performance Monitor give me positive results.

When using a SET switch with vNICs, it only works in the vNIC-to-pNIC direction (SET and vNICs on the sending server, no SET on the receiving server), not in the opposite direction (pNIC to vNIC). So the vNIC does send RDMA traffic but can never receive it (the firewall is disabled on both servers).

In the end, both servers must have a SET switch with vNICs, of course.
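For reference, a hedged sketch of the host-vNIC RDMA settings and checks usually involved with SET (the switch, vNIC and pNIC names are placeholders):

# RDMA must be enabled on the host vNIC itself ("vEthernet (SMB1)"), not only on the pNIC
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"
Get-NetAdapterRdma | Format-Table Name, Enabled
# Pin the host vNIC to a specific SET team member
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "SLOT 3 Port 1"
# Verify SMB sees the interface as RDMA capable
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable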

Any ideas?


Mount an Ubuntu 8.04.3 LTS VM on Server 2016 Hyper-V


Hi, 

We have an Ubuntu 8.04.3 LTS VM running on VMware ESXi 6.5u2 at the moment. However, we would like to get rid of ESXi and move this VM to Windows Server 2016. Is there any method to mount this old VM on Hyper-V? Also, as the VMware VM uses a .vmdk file, is there anything I need to be aware of if I convert it to a .vhdx file?
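For reference, one hedged approach is to convert the .vmdk to .vhdx offline with a third-party tool such as qemu-img (the paths and VM settings below are placeholders); a guest this old has no Hyper-V integration services, so a Generation 1 VM with the legacy network adapter is the usual choice:

# Convert the VMware disk to VHDX (qemu-img is a separate, third-party download)
qemu-img convert -f vmdk -O vhdx "ubuntu804.vmdk" "C:\VMs\ubuntu804.vhdx"
# Attach it to a new Generation 1 VM
New-VM -Name "Ubuntu804" -MemoryStartupBytes 1GB -Generation 1 -VHDPath "C:\VMs\ubuntu804.vhdx"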

Thanks

Thomas

