Channel: Hyper-V forum

Reducing Hyper-V disk size


Hello Support,

I have a Windows Server 2019 Standard host. I converted a Windows Server 2008 R2 physical server to a virtual machine (P2V) and now run it as a VM on this host. The virtual hard disk allocated to this VM is 350 GB. I use an online backup service and am exceeding the 500 GB limit for the host and VM combined. Is there any way I can reduce the 350 GB VM disk without causing any issues?
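For reference, this is the sequence I was thinking of running on the host with the VM shut down (just a sketch; it assumes the disk is a dynamically expanding VHDX and that the partition inside the guest has already been shrunk first):

# Path is an example only.
$disk = 'D:\VMs\SRV2008R2\disk0.vhdx'

# If the P2V conversion produced a .vhd, it would need converting first, e.g.:
# Convert-VHD -Path 'D:\VMs\SRV2008R2\disk0.vhd' -DestinationPath $disk -VHDType Dynamic

# Compact the dynamic disk to reclaim space that was freed inside the guest
Optimize-VHD -Path $disk -Mode Full

# Shrink the virtual disk itself down to the smallest size the partition layout allows
Resize-VHD -Path $disk -ToMinimumSize

Would that be safe, or am I missing something?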

I appreciate your help.

Thanks,

Jamshid


Stop port 5355 traffic from a vEthernet NIC


I am running Server 2016 with Hyper-V. I have multiple internal vSwitches but am working on one at a time. The IP on the server's vEthernet adapter for this vSwitch is 10.1.1.1, as all I need is the connection. One side of the vSwitch is pfSense; the other side is VyOS (a virtual router). Everything works fine, but pfSense shows that this IP keeps reaching out to 224.0.0.252:5355. I understand this is LLMNR, but why is the vEthernet adapter sending it, and how do I stop it from being sent?

I am blocking it in pfSense but am tired of seeing it. Yes, I could just hide it, but I would rather make it not happen at all.

The bindings in the vEthernet adapter's properties are all unchecked except Internet Protocol Version 4 (TCP/IPv4).
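For reference, the only machine-wide switch I have found so far (not yet applied) is the "Turn off multicast name resolution" policy, which maps to this registry value:

# Disable LLMNR for the whole host, and therefore for every vEthernet adapter.
# Equivalent to the GPO: Computer Configuration > Administrative Templates >
# Network > DNS Client > Turn off multicast name resolution = Enabled.
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient' `
    -Name 'EnableMulticast' -PropertyType DWord -Value 0 -Force | Out-Null

Would that be the right approach here, or is there a per-adapter way to stop it?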

Thank you for any help

Hyper-V VM volume/LUN isolation or single volume/LUN


Hi guys, I'm doing a new Hyper-V setup and have an external HPE MSA 2052 SAN. When provisioning storage for the VMs, which is the recommended way to host my VMs (VHDs and config files): a single large LUN/volume, or separate LUNs/volumes for each VM?

Are there any disadvantages to either option?

Hyper-V 2019 Newbie Questions


Hello,

I am trying out Hyper-V to compare it to KVM. I installed a bare-metal server with the free Hyper-V Server 2019. It is not on any domain, it has a working IP address, and I am able to RDP into it. I have a Windows 10 desktop to manage it, but I cannot get Hyper-V Manager on Windows 10 to connect to it. What are the exact steps needed? I have created a guest VM on my desktop, and Hyper-V Manager can connect to and manage that one just fine. Help would be greatly appreciated!
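For reference, these are the workgroup steps I have pieced together from the docs so far (the host name 'hv01' is just a placeholder), in case I am missing one:

# On the Hyper-V Server 2019 host (console or RDP session):
Enable-PSRemoting -Force

# On the Windows 10 management PC (elevated PowerShell):
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'hv01' -Force
Enable-WSManCredSSP -Role Client -DelegateComputer 'hv01'
# Then, in local Group Policy, allow delegating fresh credentials with NTLM-only
# server authentication to WSMAN/hv01, and connect in Hyper-V Manager
# using "Connect as another user" with hv01\Administrator.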

Error managing Hyper-V 2012 R2 Core with the Server 2019 desktop Hyper-V Manager


Hello,

I cannot manage my Hyper-V 2012 R2 server with Hyper-V Manager on a 2019 server (GUI). (There are VMs on this Hyper-V core server.)

From a 2012 R2 GUI server I could manage my Hyper-V 2012 R2 server without a problem.

Using a new 2019 GUI server, I cannot connect to my Hyper-V server.

I then upgraded the working 2012 R2 GUI server to 2019, and its Hyper-V Manager now has the same problem when connecting to the Hyper-V 2012 R2 core server.

When I connect to the server I get this error (I cannot insert a picture):

Hyper-V Manager popup message with a red cross:

"An error occurrd while attempting to connect to server

'fqdn.ofthe.server'. Check that the Virtual Machine Management service is running and that you are authorized to connect to the server.

Hyper-V encountered an error trying to access an object on computer 'fqdn.ofthe.server' because the object was not found. The object might have been deleted. Verify that the Virtual Machine Management service on the computer is running."

I have disabled the firewall on the 2012 R2 core server, and I can RDP to the server without any problem.
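Is there anything more I should check beyond the obvious, for example (run from the 2019 server; the FQDN is the same placeholder as above):

# Verify WinRM connectivity from the 2019 server to the 2012 R2 core host
Test-WSMan -ComputerName 'fqdn.ofthe.server'

# Verify that the Virtual Machine Management service is running on the core host
Get-Service -ComputerName 'fqdn.ofthe.server' -Name vmms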

Is there a compatibility issue between Hyper-V 2012 R2 and Server 2019 (GUI) with Hyper-V Manager?

Thanks for helping.

Why doesn't Hyper-V support .img floppies

There is not one default format for floppies, but if there were, it would be .img. Adding support for this is essential. You may not care, but people like me have no idea what a VFD is and work with .img files all the time. Why would this be hard to implement? Thank you a lot for reading.

SQL 2012 Cluster Storage on Hyper-V 2016


Hi,

I'm migrating from VMware to Hyper-V and have a question regarding the storage for our SQL 2012 cluster VMs. We would like to do a side-by-side SQL migration to retain the current SQL failover cluster config.

Current setup:
- 3x VMware ESXi 5.5 hosts
- 2x Windows Server 2012 VMs with SQL 2012 Standard
- Failover cluster for 6x SQL 2012 clustered instances
- HP 3PAR Fibre Channel SAN with 17x RDMs for cluster storage

New Setup:
- 3x Windows Server 2016 Hyper-V hosts
- 2x Windows Server 2012 VMs with SQL 2012 Standard (converted from VMware)
- Failover cluster for 6x SQL 2012 clustered instances (existing)
- HP 3PAR Fibre Channel SAN (1x 5 TB disk or multiple smaller disks)


Would it be supported to set up the new environment as follows?
- Failover cluster of the 3x Hyper-V hosts
- Add shared storage from the SAN to the Hyper-V cluster (either 1x 5 TB disk or multiple smaller disks)
- Convert the disks to Cluster Shared Volumes (CSVs) ***is this supported for a SQL 2012 workload?? (PowerShell sketch below)
- Convert the SQL VMs from VMware to Hyper-V (retaining the existing failover cluster config for SQL)
- Attach new disks to mirror the previous setup in VMware
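For the CSV step I expect to use something along these lines once the 3PAR disks have been presented to all three hosts (the disk name is a placeholder):

# Add the SAN disks that are visible to all cluster nodes as cluster disks
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a cluster disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 1'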

I've found this link, which has good info, but it mentions that "CSVs don't support the Microsoft SQL Server clustered workload in SQL Server 2012 and earlier versions of SQL Server":
https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-cluster-csvs

Are we able to use CSVs on the Hyper-V failover cluster?

Thanks!

Help needed to write a script to create a VM from a VHD file


Hello Folks,

I need assistance writing a script that creates a VM from an unmanaged disk (an existing VHD file).
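To be clear, something along these lines is what I am after (Hyper-V PowerShell; the name, memory size, switch, and path are placeholders):

# Create a VM that boots from an existing virtual hard disk and start it
New-VM -Name 'VM01' `
       -MemoryStartupBytes 4GB `
       -Generation 1 `
       -VHDPath 'D:\VHDs\existing-disk.vhd' `
       -SwitchName 'External'
Start-VM -Name 'VM01'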

Thanks,

Salman



VM Crash without any event info.


Yesterday one of our VM file servers crashed some time between 19:00 and 22:15, and all file changes/additions made on this server during the entire day are gone. We know that some files were saved on the file shares right around 19:00, so in my mind those files should be available, but they are not. There are no event-log entries between 11:00 (midday) and this morning, when I restarted the server at 07:34.

Could it be that every transaction to this server during the entire day was held in memory or a temporary file and never committed into the VM's hard disk file, and that these files were lost when the server crashed? It is the only logic that comes to my mind. But how can this happen? There is no checkpoint on the server.

It is just as if the entire server was restored one day back, or as if the day never happened?!? (I checked my restore logs, and that is not it. :-) )

One thing to mention:

Our Veeam backup starts at 22:00; it failed for this server, then started automatically again at 22:30 and succeeded (although the server went through Startup Repair this morning). But everything in this backup is one day old.

W2019 vs Hyper-V Server 2019 for VM hosting (plus bonus W2019 licensing question)


So I'm getting contradictory answers regarding installing Windows Server 2019 Standard on bare metal and running VMs on it, so I'm also investigating the possibility of using Hyper-V Server 2019 on bare metal and deploying VMs on that.

It's my understanding that Hyper-V Server is basically Windows Server Core with Hyper-V functionality, but I've also heard there are ways of adding GUI components onto it? Does that include the Hyper-V management console? Does anyone have an idea of the general pros and cons of running Hyper-V Server 2019 instead of Windows Server 2019 with the Hyper-V role?

Otherwise, how much of a licensing apocalypse would occur if we had a single license of Windows Server 2019 Standard and it was running on both the bare metal (strictly in a Hyper-V capacity) and two VMs, each running Windows Server 2019? Would I use the provided license key on the host and AVMA keys on the VMs? Or the provided license key on all three systems? Any help on any of these subjects would be appreciated.

Optane 4K performance halved within Hyper-V clients


I got a question about SSD performance of Hyper-V clients.

I used an Intel 900P 280 GB drive as the storage for two Hyper-V clients, both running Windows 10 Pro 1809. If I power only one of them on and use CrystalDiskMark 6 to test disk performance, the Q1T1 4K result (queue depth 1, 1 thread, 4K; the test where Optane makes all the difference) is only Read: 110 MB/s, Write: 105 MB/s, but if I test the same drive in the host system, the result is Read: 235 MB/s, Write: 225 MB/s.

I tried a different platform and the result is identical; the 4K disk performance of the clients seems to be capped at around 110 MB/s.

I even set the entire 280 GB drive up as a direct pass-through SCSI drive for one client, and the disk performance is still as above, half of the result measured in the host system.

The sequential read and write performance is the same for the clients and the host, but sequential throughput is not what makes the biggest impact on the smoothness of the system. I wonder if there are some tweaks to change this.
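For anyone wanting to reproduce the low-queue-depth test with DiskSpd instead of CrystalDiskMark, I believe the equivalent run (inside the guest, against a test file) is roughly:

# 4K random reads, queue depth 1, single thread, caching disabled, 30 seconds,
# against a 1 GiB test file (the path is an example only)
diskspd.exe -b4K -o1 -t1 -r -w0 -Sh -d30 -c1G C:\temp\iotest.dat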

My host system configuration: AMD Ryzen 2600, 8 cores, 64 GB DDR4-2666, Intel 900P 280 GB NVMe

Hyper-V 2016 failed to perform the 'Cleaning up stale reference point(s)' operation.


Hello, dear Hyper-V elites. I'm in need of help. I've Googled without success; I can't find anything matching this issue.

We have a customer with two Hyper-V 2016 machines, HOST-A and HOST-B. In this case the issue seems to originate from HOST-B. Five of the servers that currently run on HOST-B and are set to replicate to HOST-A are giving us headaches.

The five servers all run Linux (Ubuntu), and we have no issues with the other VMs, which run Windows. What we receive are alerts with the following description:

EventID: 19060

'Linux-Server-1' failed to perform the 'Cleaning up stale reference point(s)' operation. The virtual machine is currently performing the following operation: 'Backing up'. (Virtual machine ID XXXXXXX-XXXXXXX-XXXXXXX-XXXXXX1)

Directly afterwards, the following event-log entries come up:

EventID: 19070 and after that 19080

'Linux-Server-1' background disk merge has been started. (Virtual machine ID XXXXXXX-XXXXXXX-XXXXXXX-XXXXXX1)

'Linux-Server-1' background disk merge has been finished successfully. (Virtual machine ID XXXXXXX-XXXXXXX-XXXXXXX-XXXXXX1)

However, we never get event 29174, which we receive from all the other, "working" Windows VMs.

Example below:

Replica statistics for 'Windows-server-1' (Virtual machine ID XXXXXXX-XXXXXXX-XXXXXXX-XXXXXX2)
StartTime: 132184512000256627
EndTime: 132184943997080231
ReplicationSuccessCount: 144
ReplicationSize (bytes): 1194310144
ReplicationLatency (seconds): 3
ReplicationMissCount: 0
ReplicationFailureCount: 0
NetworkFailureCount: 0
MaxReplicationLatency: 1
Application-consistentCheckpointErrorCount: 0
PendingReplicationSize (bytes): 0
MaxReplicationSize (bytes): 79151616

They all run on the same backup solution (SolarWinds MSP Backup & Replication) and on the same schedule. We receive no errors in the backup console.

All the servers are also hosted/located on the same disk, so although I've seen other topics mentioning that this could be a disk issue, we don't think that is the case here.

Help me, Obi-Wan Kenobi, you're my only hope.


//Regards, Andreas

Windows iscsi boot issue on second interface

I am able to boot Windows over iSCSI on the first interface. If I try to boot from the second interface using iSCSI, the boot proceeds fine but then gets stuck with an "INACCESSIBLE_BOOT_DEVICE" message. I have disabled the NDIS LWF driver on both interfaces, but the second interface never gets past that error message.

I have followed all the steps as per:

support.microsoft.com/en-in/help/976042/windows-may-fail-to-boot-from-an-iscsi-drive-if-networking-hardware-is
mistyrebootfiles.altervista.org/documents/TinyPXEServer/files/wfplwf.htm

This relates to Windows settings for iSCSI boot. Has anyone else been able to get Windows to iSCSI boot over iPXE on a second interface? If yes, any pointers would be much appreciated.

The question is not really specific to Hyper-V; it's mostly about iSCSI boot in general.

Quick VM move/migration in SAN environments


We have several Windows Server 2012 R2 Hyper-V hosts with identical hardware configurations; all storage is locally attached, virtualized on storage controllers over Fibre Channel. In addition, all hosts have dual 10 GbE converged network adapters. The hosts are domain members and no failover clustering is enabled.

What is the most effective way to move VMs between hosts in a SAN environment?
The "old" way of export/import requires significant downtime on large volumes.
The same seems to go for live migration without SMB storage?
Replication is much the same, i.e. it moves the data over the fabric network?

I've always used the features of the storage controller when moving VMs between hosts: shut the VM down, change the zoning and remap the LUNs to the new host, create the VMs manually and attach the existing disks. This gives almost no downtime and no physical duplication on storage. Backup is done on the storage controller with flash or volume copy.

Does this manual approach represent any risk on identical host hardware?
The VM BIOS GUID is of course changed, but I've never had any problems with reactivating licenses, etc.
By the way, I've never used ROK/OEM software in the VMs, and both SQL Server and ADS/RDS worked flawlessly after moving.

But now I have to move an Exchange 2013 DAG member to a new host, and I have some doubts after reading several articles on this topic... The plan is as follows:

- Enable maintenance mode, stop all Exchange services and set their startup type to Disabled, then shut down.
- Remap the storage to the new host and create the VM in Hyper-V Manager with identical settings (rough PowerShell equivalent below).
- Start the VM and check network connectivity and the event log.
- If all is OK, reset the startup type of the Exchange services and reboot.
- Check events and disable maintenance mode.
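The "create VM and attach existing disks" step would look roughly like this in PowerShell (VM name, sizes, switch, and paths are examples; the settings would be copied by hand from the old VM):

# On the new host, after the LUN holding the existing VHDX files has been remapped and mounted
New-VM -Name 'EXCH-DAG1' -MemoryStartupBytes 32GB -Generation 2 -NoVHD -SwitchName 'Converged'
Set-VM -Name 'EXCH-DAG1' -ProcessorCount 8 -StaticMemory
Add-VMHardDiskDrive -VMName 'EXCH-DAG1' -Path 'V:\VMs\EXCH-DAG1\disk0.vhdx'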

Is this a viable plan? Any comments will be highly appreciated!

Cheers,

Terje

Problem while Live Migration of Virtual Machines


Hi,

I have set up a two-node Hyper-V cluster on Windows Server 2012 R2 and configured three virtual machines. I can do a Quick Migration of all virtual machines to the other node, and I can live migrate a single virtual machine, but when I try to live migrate all the VMs at once I get an error message saying "The operation did not complete on the resource."


Live migration of 'Virtual Machine XXXXXXXXXXXX' failed.

Virtual machine migration operation for 'XXXXXXXXXXXX' failed at migration source 'XXXXXXXXXXXX'. (Virtual machine ID 1A7F4F24-018E-4563-B60B-0199908855E6)

Failed to initiate the migration of a clustered VM: Only one instance of this resource type is allowed per resource group. (0x80071735)

Please help.


Hyper-V 2019 VM Cluster Replication

Ever since upgrading our two Hyper-V clusters from 2012 R2 to 2019, I get errors when trying to set up replication for a new VM. Replication keeps working for existing VMs that were created before the upgrade, but any new VM fails. Enabling replication seems to create the VM fine on the replica and creates the drives at 4,096 KB ready for replication, but trying to start the initial replication fails with the error:

Start-VMInitialReplication : Hyper-V failed to start replication for virtual machine 'VM': Operation aborted
(0x80004004). (Virtual machine ID E3E5EDD5-66CA-48AB-8F52-CA63A3A6AEC5)
Hyper-V could not find the virtual machine 'VM' on the Replica server and will connect to the Hyper-V Replica
Broker in the next retry interval. (Virtual machine ID E3E5EDD5-66CA-48AB-8F52-CA63A3A6AEC5)
Replication operation for virtual machine 'VM' failed. (Virtual machine ID
E3E5EDD5-66CA-48AB-8F52-CA63A3A6AEC5) (Primary server: 'Host1', Replica server:
'Broker')
At line:1 char:1
+ Start-VMInitialReplication -VMName 'VM'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [Start-VMInitialReplication],
   VirtualizationOperationFailedException
    + FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.StartVMInitialReplicationCommand


Doing the same process through the Failover Cluster Manager GUI I get a similar error: it says the replication was enabled successfully, but the initial replication could not be started, and it goes on to say Hyper-V could not find the virtual machine on the Replica server.

Has this been recognized yet as a bug in Windows Server 2019?

iWARP RDMA not working on vNICs


In preparation for setting up S2D, I want to test RDMA (iWARP) communication.
We use QLogic QL41164HMRJ NICs (Dell R640, Server 2019).
RDMA communication works great (tested with diskspd.exe) if I use the pNICs directly for it.
The counters in Performance Monitor give me positive results.

When using a SET switch with vNICs, it only works in the vNIC-to-pNIC direction (SET and vNICs on the sending server, no SET on the receiving server), not in the opposite direction (pNIC to vNIC). So the vNIC does send RDMA traffic but can never receive it (the firewall is disabled on both servers).

In the end, both servers must of course have a SET switch with vNICs.
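For completeness, the SET switch and vNICs were created roughly like this (switch and adapter names are examples):

# Create a SET switch over the two QLogic ports and add a host vNIC for SMB/RDMA traffic
New-VMSwitch -Name 'SETswitch' -NetAdapterName 'NIC1','NIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name 'SMB1' -SwitchName 'SETswitch'

# Enable RDMA on the host vNIC and check its status
Enable-NetAdapterRdma -Name 'vEthernet (SMB1)'
Get-NetAdapterRdma
Get-SmbClientNetworkInterface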

Any ideas?



Virtual Machine failed to initialize on all VMs


I have a client that failed to tell me their physical domain controller had two drives fail. The two domain controllers on their Hyper-V Server (Core) system continued to run, so they thought it wasn't important. Anyway, the VMs decided to install updates and both rebooted last night; possibly the Hyper-V server rebooted as well. When they came in this morning, no one could authenticate, get DHCP leases, or access either the Internet or their intranet. It doesn't look like I'll be able to get their physical DC up any time soon, but the Hyper-V Core host is running fine. None of the four VMs on that server will start, though; they all respond to Start-VM with a "failed to initialize" error. When I pull the recent events I see this, duplicated for each VM:

Get-WinEvent -FilterHashTable @{ LogName = "Microsoft-Windows-Hyper-V*"; StartTime = (Get-Date).AddMinutes(-10) }


   ProviderName: Microsoft-Windows-Hyper-V-Hypervisor

TimeCreated                     Id LevelDisplayName Message
-----------                     -- ---------------- -------
11/19/2019 4:17:01 PM        16642 Information      Hyper-V successfully deleted a partition (partition 6).
11/19/2019 4:17:01 PM        16641 Information      Hyper-V successfully created a new partition (partition 6).


   ProviderName: Microsoft-Windows-Hyper-V-VMMS

TimeCreated                     Id LevelDisplayName Message
-----------                     -- ---------------- -------
11/19/2019 4:16:55 PM        15120 Error            'SRV-SHU' failed to initialize. (Virtual machine ID 723248F1-6C3...
11/19/2019 4:16:55 PM        14070 Error            Virtual machine 'SRV-SHU' (ID=723248F1-6C33-4BA8-A960-83CE0F157D...

I'm guessing the issue is that the VMs can't be authenticated by a DC, since no DCs are running when they try to start. Am I even close? They were down all day today and I'll be back on it in the morning, but I'd really like to get them back up and running tomorrow.


Migrate WS2008 R2 VMs to WS2016


I have 19 differencing-disk VMs running on my Windows Server 2008 R2 host, all of which connect to a single parent VM.

I want to migrate the 20 VMs to a Windows Server 2016 host.

I have read that I can migrate the 19 differencing VMs and the parent VM, but this may not be straightforward; a possibly easier option would be to merge the 19 differencing disks with the parent to leave 19 normal (standalone) VMs.
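If merging is the better option, I assume the per-VM step would be something like the following (paths are examples; as I understand it, Convert-VHD writes a new standalone disk from the differencing disk plus its parent chain and leaves the shared parent untouched):

# Run on the 2016 host with the VM shut down (the 2008 R2 host has no Hyper-V PowerShell module)
# and with the parent VHD reachable at the path recorded in the differencing disk.
Convert-VHD -Path 'D:\VMs\VM01\diff01.vhd' `
            -DestinationPath 'E:\VMs\VM01\VM01.vhdx' `
            -VHDType Dynamic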

The question is which option is best, and what are the steps involved to perform the chosen option?
