Channel: Hyper-V forum

Live Migration fails with error "Synthetic FibreChannel Port: Failed to finish reserving resources" on a VM using Windows Server 2012 R2 Hyper-V


Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster that uses Fibre Channel adapters, with a Virtual SAN configured on the Hyper-V hosts.

I have read several articles about this issue, such as these:

https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv

http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx

But I haven't been able to fix my issue.

The Virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has two Fibre Channel adapters configured.

All the World Wide Names are configured on both the FC switch and the FC SAN.

All the drivers for the FC adapters in the Hyper-V hosts have been updated to their latest versions.

The strange thing is that the issue does not affect all of the VMs: some of the VMs with FC adapters configured live migrate just fine, while others get this error.

Quick migration works without problems.

We even tried removing the FC adapters and creating new ones on a problem VM - we had to configure the switch and SAN with the new WWNs and all - but we ended up with the same problem.

At first we thought it was related to the hosts, but since some VMs with FC adapters do live migrate successfully, we tried migrating those onto every host, and everything worked well.

My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
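
In case it helps to compare, here is a minimal sketch (using the Hyper-V PowerShell module; "GoodVM" and "BadVM" are placeholder names) of how we dump the vHBA configuration of a working VM and a failing VM side by side:

# Dump the virtual FC HBA settings of a working and a failing VM so the
# WWNs and Virtual SAN names can be compared. VM names are placeholders.
foreach ($vm in "GoodVM", "BadVM") {
    Get-VMFibreChannelHba -VMName $vm |
        Select-Object VMName, SanName,
            WorldWideNodeNameSetA, WorldWidePortNameSetA,
            WorldWideNodeNameSetB, WorldWidePortNameSetB |
        Format-List
}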

Any ideas on how to solve this are deeply appreciated.

Thank you!


Eduardo Rojas


Hyper-V Replication Certificate Question


Hi,

Possibly a common question so apologies if it's simple.

My main server is HV1.company.local

My second server will be cloud based and standalone in a workgroup; let's call it hv2, and for argument's sake I'll add DNS entries for hv2.mydomain.co.uk and a certificate for this address.

Certificates will obviously need to be internet facing, so I will go for HV1.companydomain.com for the first server.
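
For context, this is roughly the setup I have in mind, sketched with the documented Hyper-V cmdlets (the thumbprints and VM name are placeholders):

# On the replica server (hv2), allow certificate-based (HTTPS) replication.
# The thumbprint below stands in for the cert issued to hv2.mydomain.co.uk.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Certificate `
    -CertificateThumbprint "<hv2-cert-thumbprint>" `
    -CertificateAuthenticationPort 443

# On the primary server (HV1), enable replication for a VM using the cert
# issued to HV1.companydomain.com (again, the thumbprint is a placeholder).
Enable-VMReplication -VMName "TestVM" `
    -ReplicaServerName "hv2.mydomain.co.uk" -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint "<hv1-cert-thumbprint>"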

Do I just use internal DNS to sort out the fact that the server name is .local?

Anyone with experience of this setup, with best practices and any gotchas, would be much appreciated.

Cheers,

JJ


Hyper-V suddenly hangs every working day around 12:04 PM


Hi,

My office Exchange server runs on Hyper-V, but the Hyper-V host suddenly hangs every Monday through Friday around 12:04 PM.

I must restart the Hyper-V service or reboot the physical server.

Can anyone help me with this issue? Thanks very much.

willise

Hyper-V Server 2012 R2 Network Issues


Hi all, I am trying to set up 65 VMs running on Server 2012 R2 Hyper-V, but I am having issues doing this. I have 2 Dell R720s set up in a failover cluster and on my domain, and I have several VMs already built. My issue is that I would like my VMs to be on a private network instead of on the domain. I have implemented a Cisco 3550 switch to assist in routing traffic, and I need other servers (that will be connected to my physical switch) to have the ability to remote in to the VMs. Below is my current configuration:

Cisco 3550 switch configured to use private addresses

2 Dell PowerEdge R720s with Server 2012 R2 running Hyper-V in a failover cluster, connected to my domain as well as my physical (private) switch

1 Dell PowerEdge R620 with Server 2008 R2, connected to my domain as well as my private switch

Each R720 also has a virtual switch, configured the same across both hosts, using my physical switch as a gateway to the other servers

Is it possible to have my VMs running on my R720s communicate, using private IPs, with my domain network with public IPs (the R620), and vice versa?

Any assistance that can be provided will be greatly appreciated.


VM won't connect to the internet


Okay..

First things first.

I have a virtual machine set up on Windows Hyper-V Server 2012 R2 that I moved from a Windows Server 2008 R2 Enterprise host.

It worked fine on Server 2008 R2, but after I moved it over to my Hyper-V server and made the same configuration there, it doesn't seem to get online beyond the local network.

I have checked the router, and it can ping 8.8.8.8 with success.

The server itself is also online.

Does anybody have an idea for the solution?

Correct VMQ configuration


Hi gentlemen

I need someone to confirm whether my statement is correct:

Host:  2 sockets, 16 cores each, HT enabled.

The host has a team of two 10Gbps NICs for the VM network; the teaming mode is switch independent with the Hyper-V port algorithm. So sum-of-queues should be configured.

What is the correct VMQ configuration? Based on Microsoft TechNet, in sum-of-queues mode the processor sets should be non-overlapping.

So if I configure:

The first NIC should be: base processor = 2, max processor = 12

The second NIC should be: base processor = 10, max processor = 16

Is this recommended? If not, please suggest an alternative.
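
In PowerShell terms, what I mean is something like this sketch (adapter names are placeholders; the processor numbers are only illustrative, on the assumption that with HT enabled VMQ uses even-numbered logical processors only, and that the two sets must not overlap in sum-of-queues mode):

# Sketch: non-overlapping VMQ processor sets for a sum-of-queues team.
# With HT enabled, only even-numbered logical processors are used.
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2  -MaxProcessors 8
Set-NetAdapterVmq -Name "NIC2" -BaseProcessorNumber 18 -MaxProcessors 8
# NIC1 then spans LPs 2,4,...,16 and NIC2 spans LPs 18,20,...,32.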

Why does moving a VM while off via "Move" require syncing?


We've got a 2012 R2 server running as a Hyper-V host for about a dozen machines. Over time the layout of the VMs' files has changed. Currently they all reside on the host hardware on a separate partition (D:). I'm trying to reorganize the files so that we use the same convention for everything, namely:
D:\Hyper-V\Virtual Machines\MACHINE_NAME1\Virtual Hard Disks\MACHINE_NAME1.vhdx

D:\Hyper-V\Virtual Machines\MACHINE_NAME1\Virtual Machines\GUID_FOLDER

D:\Hyper-V\Virtual Machines\MACHINE_NAME1\Virtual Machines\GUID.xml

D:\Hyper-V\Virtual Machines\MACHINE_NAME1\Snapshots

etc...

Each machine has its own "base" directory.

I've tried shutting down machines and then using the "Move" command from the Hyper-V Management console, but even when they are shut down, it appears that they first have to be "synced" and then copied. Additionally, the copies are very slow even though everything happens on the same disk.

Is there a better way?
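
For what it's worth, I've been wondering whether scripting the move would behave any better than the console - a sketch, assuming the layout above (the machine name and path are placeholders):

# Sketch: move a shut-down VM's files into its per-machine base directory
# in one step; Move-VMStorage creates the Virtual Hard Disks / Virtual
# Machines / Snapshots subfolders under the destination path.
Move-VMStorage -VMName "MACHINE_NAME1" `
    -DestinationStoragePath "D:\Hyper-V\Virtual Machines\MACHINE_NAME1"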

Also, what's the ideal layout for VMs? I've seen some say that you should put all the files associated with a single VM into one folder as described above, though the default for the wizard has one Virtual Hard Disks folder and one Virtual Machines folder.

Hyperthreaded CPU Scheduling via Hyper-V


I searched forums and documents (and books) all over for this answer pertaining to Hyper-V.

When a guest VM is assigned 2 virtual CPUs with a 100% reservation on a Hyper-V host whose processor features hyper-threading, does the guest VM receive guaranteed access to two physical cores or two logical cores?  As a thought experiment, take a single-socket 2-core processor with hyper-threading (4 logical cores); from my understanding, Hyper-V takes physical core 0 by default, leaving a single physical core for VMs.  What would be the result of configuring a VM with 2 virtual CPUs and a 100% reservation on this host?  Would you be allowed to?  Would the vCPU scheduling be across two logical processors, or would the VM fail to start because it requires two physical cores?
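
Concretely, the configuration I'm asking about is this (a sketch; the VM name is a placeholder):

# Sketch: assign 2 vCPUs with a 100% processor reservation to a VM.
Set-VMProcessor -VMName "TestVM" -Count 2 -Reserve 100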


IP Subnet Allocation As Good as Physical VLAN?


Per a thread I started a few days ago, I have moved our environment to virtual adapter based using VMQ and a pair of teamed 10Gbps Ethernet adapters per physical host. (See: https://social.technet.microsoft.com/Forums/en-US/4a164894-dcf6-484e-bd32-15179c67f3cd/separation-of-traffic-with-2-x-10gbps-nics?forum=winserverhyperv)

I was planning to roll out VLANs at the physical layer and then assign each vAdapter to one based on traffic type - however, each adapter has its own unique subnet (10.0.1.0/24, 10.0.2.0/24, etc.), so I'm wondering if there is any benefit to be had by adding pVLANs to my environment. Aside from a few stragglers, our entire server environment is Hyper-V/VM based. We utilize both iSCSI and SMB 3.0, but again, traffic is isolated to its own IP subnet, and I've constrained SMB to a specific pair of vAdapters and iSCSI to another vAdapter pair.

I'd rather not add the complexity of physical layer VLANs unless there is a compelling reason to do so.

VMQ issues with NIC Teaming


Hi All

Apologies if this is a long one but I thought the more information I can provide the better.

We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2. However, since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as either very high latency inside virtual machines (we're talking 200 – 400 ms round-trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, but the problem does manifest itself on all hosts. I am aware of these issues having cropped up in the past with Broadcom NICs.

I'll give you a little bit of background into the problem...

First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012, and a decision was taken to bring it up to speed with R2. This was due to a number of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.

Prior to the redesign, each VM host had 12 NICs installed:

  • Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.

  • Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining 4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.

The initial driver for upgrading was that we were once again seeing issues with VMQ in the old converged fabric design. The two vNICs in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible to the same designated NICs in each of the VM hosts).

In this setup, an issue similar to our present one was being experienced. Once again, the Converged Fabric vNICs in the host OS would, on occasion, either lose connectivity or exhibit very high round-trip times and packet loss. This seemed to correlate with a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration, and would then affect both vNICs' connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs, which in turn would trigger all sorts of horrid goings-on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously, disabling VMQ is something that we really don't want to resort to.

So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.

In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration, utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)

However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.

The Production VM Switch is configured as follows:

  • Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).

  • In each host, the above 4 VM Switch NICs are formed into a switch independent, Dynamic team (Sum of Queues mode); each physical NIC has RSS disabled and VMQ enabled, and the team multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap, and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:

Name                           InterfaceDescription              Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
                                                                                                        Queues
----                           --------------------              ------- ---------------- ------------- ---------------
VM_SWITCH_ETH01                Intel(R) Gigabit 4P I350-t A...#8 True    0:10             1             7
VM_SWITCH_ETH03                Intel(R) Gigabit 4P I350-t A...#7 True    0:14             1             7
VM_SWITCH_ETH02                Intel(R) Gigabit 4P I350-t Ada... True    0:12             1             7
VM_SWITCH_ETH04                Intel(R) Gigabit 4P I350-t A...#2 True    0:16             1             7
Production VM Switch           Microsoft Network Adapter Mult... True    0:0                            28
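
(For reference, the per-NIC settings shown above were applied with commands along these lines - a sketch using the adapter names from the output:)

# Sketch: single-core VMQ assignment per NIC, RSS disabled, cores
# non-overlapping across the sum-of-queues team members.
Disable-NetAdapterRss -Name "VM_SWITCH_ETH01"   # repeated for ETH02-ETH04
Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1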

Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.

The loss of connectivity / high latency (200 – 400 ms as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a queue. I can then move the VM back and forth between hosts and the problem may or may not occur again; it is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however - longer than I have seen in the past (usually only a ping or two are lost, but we are now seeing 5 or more before VM network connectivity is restored on the destination host, which is enough to cause a disruption to the workload).

If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.

VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.

It goes without saying that we really don't want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can't find a resolution here, I will be left with no choice, as ironically we see fewer issues with VMQ disabled than with it enabled.

I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.

I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.

Many thanks

Matt


Hyper-V with Seagate BlackArmor NAS 400 unit


I am looking to set up a test environment using some equipment I have pieced together over the years. I have a couple of Dell PowerEdge 1950 servers and 2 Seagate BlackArmor NAS 400 units. The Dell servers are running Server 2012 R2, and the NAS units have RAID 5 volumes set up.

My goal is to set up a Hyper-V virtual machine environment where the machines' hard drives run off the Seagate NAS units.

I wanted to get some input about best practices and how this should be set up. From my research, Hyper-V over SMB requires SMB 3.0, and I cannot find documentation that states whether the Seagate NAS units are capable of this or not.
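
One check I thought of (a sketch, run from one of the 2012 R2 servers; the NAS name and share are placeholders): map a share on the NAS and look at the SMB dialect that gets negotiated, since Hyper-V over SMB needs dialect 3.0 or higher.

# Sketch: connect to a share on the NAS, then inspect the negotiated
# SMB dialect. \\nas01\vms is a placeholder path.
New-SmbMapping -RemotePath "\\nas01\vms"
Get-SmbConnection -ServerName "nas01" |
    Select-Object ServerName, ShareName, Dialect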

I am very familiar with Hyper-V but have never run a machine from a network share before. What is the best way to set this up?

Hyper-V 2012 Replica on a large cluster - many VMs to a single target CSV


Hi all,

We have a Hyper-V 2012 cluster with 5 hosts and ~200 VMs (a few of them actually).

We have tried to start using Hyper-V Replica (HVR) a couple of times now, but every time, at some point, we encounter issues on the destination HVR Broker storage side. Probably too much bandwidth is required from the storage, and we start having issues with merging snapshots in time. Size is also a limiting factor, with a ~16 TB CSV containing all the VMs; the CSV is a NetApp LUN, and we cannot scale the LUN beyond 16 TB.

While this could potentially be addressed by using faster storage, I am more worried about the side effects of starting and running 200 VMs from a single CSV and LUN. I know that currently there is no official way to split the replica VMs across different CSVs - except for a workaround mentioned here. But this workaround is not suitable for large clusters, as it is not manageable at this scale.

Any thoughts, comments or suggestions are welcome.


Regards, Leonid


Hyper-V Host, VM and Application Upgrade Query


Hi friends,

My organization has the following environment:

A 2-node Windows 2008 R2 Hyper-V cluster; 15 VMs (all Windows Server 2008 R2) are running on the cluster, and all the VMs are running SC components (2007 versions).

Planned activity: I am supposed to upgrade all the hosts and VMs to Windows 2012 R2, and finally upgrade the SC components to the 2012 R2 UR2 versions.

Query: Could someone kindly share your best suggestions/recommendations/high-level steps/possible ways to do this? Do let me know if you have any questions.


Hyper-V Manager: "Virtual Machine Connection has stopped working" when connecting to a VM


Hello,

I cannot find any info on the web, and "View problem details" in the Virtual Machine Connection error box only shows:

"virtual machine connection has stopped working".

I get this error when I click Connect in Hyper-V Manager. The machines are on and are accessible through RDP.

This is a lab host; after a restart it randomly works. I have had this problem for the last week.

I'm just interested to know what could cause the problem, and whether others have seen something like this...

Also, I can connect to the machines from SCVMM.

Thx.


"When you hit a wrong note it's the next note that makes it good or bad". Miles Davis

Hyper-V Problem with Internet Network

Hi! I am having issues with my internet connection in my Hyper-V VM.

The machine has this configuration and software installed:

  • DNS
  • AD-DS
  • SQL SERVER
  • SHAREPOINT
  • DEV TOOL

The VM has an external virtual switch.
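
(For reference, the switch was created roughly like this - a sketch; the switch and adapter names are placeholders:)

# Sketch: an external virtual switch bound to the host's physical NIC,
# shared with the management OS.
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true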

This is the VM's ipconfig

Windows IP Configuration

   Host Name . . . . . . . . . . . . : DEV
   Primary Dns Suffix  . . . . . . . : dominio.local
   Node Type . . . . . . . . . . . . : Mixed
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : dominio.local
                                       lan

Ethernet adapter Ethernet 2:

   Connection-specific DNS Suffix  . : lan
   Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter #2
   Physical Address. . . . . . . . . : 00-15-5D-21-98-07
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::a9f0:a026:95f2:6bf%16(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.1.77(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : Monday, November 24, 2014 10:35:13 PM
   Lease Expires . . . . . . . . . . : Monday, November 24, 2014 10:45:14 PM
   Default Gateway . . . . . . . . . : 192.168.1.254
   DHCP Server . . . . . . . . . . . : 192.168.1.254
   DHCPv6 IAID . . . . . . . . . . . : 369104221
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1B-97-C9-62-00-15-5D-21-98-01
   DNS Servers . . . . . . . . . . . : 127.0.0.1
   NetBIOS over Tcpip. . . . . . . . : Enabled

This is my client's ipconfig

Windows IP Configuration

   Host Name . . . . . . . . . . . . : Avancle
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Mixed
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : lan


Ethernet adapter vEthernet (Realtek RTL8723AE Wireless LAN 802.11n PCI-E NIC Virtual Switch):

   Connection-specific DNS Suffix  . : lan
   Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
   Physical Address. . . . . . . . . : 24-0A-64-B6-5E-9F
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::5950:6ca9:4cc1:69ff%15(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.1.70(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : lunedì 24 novembre 2014 20:35:32
   Lease Expires . . . . . . . . . . : lunedì 24 novembre 2014 22:50:32
   Default Gateway . . . . . . . . . : 192.168.1.254
   DHCP Server . . . . . . . . . . . : 192.168.1.254
   DHCPv6 IAID . . . . . . . . . . . : 606341732
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-A9-E2-B1-8C-89-A5-0D-AF-00
   DNS Servers . . . . . . . . . . . : 192.168.1.254
                                       62.101.93.101
                                       83.103.25.250
   NetBIOS over Tcpip. . . . . . . . : Enabled

I find a strange behavior when trying to navigate to sites over the HTTPS protocol. For example, if I browse to http://www.codeplex.com, the browser shows the page immediately, but if I browse to https://www.codeplex.com, the browser tries to load the page without success. I must be able to connect to Team Foundation online (HTTPS protocol), so I have a big problem ;)

Did I make a mistake when configuring the virtual switch?
P.S. Sorry for my English.


VM disappeared


I have a Server 2008 R2 server running Hyper-V, and after a reboot the virtual machine running on Hyper-V disappeared. I have had this happen before on other 2008 servers; I would stop the Hyper-V service and replace the XML file associated with the virtual machine with an XML file from backup. This usually works. I tried it with this server and it did not work. Here are the original errors I was receiving:

'Wonderdesk1' cannot access the data folder of the virtual machine. The worker process (Process ID The requested operation cannot be performed on a file with a user-mapped section open.) may not be functional anymore. (Virtual machine ID 2F544E08-05F3-48D3-A5D9-D8BF764C2C8C)


Cannot load a virtual machine configuration because it is corrupt. (Virtual machine ID 2F544E08-05F3-48D3-A5D9-D8BF764C2C8C) Delete the virtual machine configuration file (.XML file) and recreate the virtual machine.

After replacing the XML file, I get the following error:

Cannot load a virtual machine configuration: general access denied error (0x80070005). I am receiving this error with event ID 16300 for the virtual machine and 16320 for the snapshots. This is a critical virtual machine, so any help would be greatly appreciated.
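
The only further idea I've had is that the restored XML may have lost the ACL entry for the VM's service SID - the pattern I've seen suggested for the 0x80070005 error looks like this (a sketch; the path is a placeholder, and the GUID is the VM ID from the errors above):

# Sketch: re-grant the VM's service SID full access to the restored
# configuration file (run from an elevated prompt).
icacls "D:\VMs\Wonderdesk1\2F544E08-05F3-48D3-A5D9-D8BF764C2C8C.xml" /grant "NT VIRTUAL MACHINE\2F544E08-05F3-48D3-A5D9-D8BF764C2C8C:(F)" /L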

Help- Hyper-V Snapshot Question


Hello-

I am having an issue with one of my VMs. My setup is like this: I have a physical server running 2008 R2 that is hosting two virtual servers through Hyper-V. One of the VMs is a very important server, as it handles a lot of our budgeting. The admin before me depended primarily on snapshots as a form of backup. I have always known that not to be a smart move, and that you should not run on a snapshot in a production environment for more than 2 days. So we had a lot of snapshots from different times throughout this year and some from last year, and the issue we have been running into is that the VM has run out of space and halted.

The past few weeks I have been removing some obsolete snapshots from the VM to recover space and keep the server going. Currently I am down to the current running instance and two previous snapshots, and as of yesterday I still had 35 GB left. Today, however, when I logged on I noticed that I was down to 29 GB. This was confusing, as I hadn't created any more snapshots and nothing had been added to the system that would take up 6 GB within a 24-hour period. After doing some looking, I noticed that the AVHD files (snapshot files) are growing in size. If they are snapshots, why are they increasing in size? I would understand the .VHD (current running disk) growing as we use it, but I am not sure why the snapshots are growing as well.

So what I would like to do, and get confirmation on, is fix this. The solution I have come up with is to create a brand new VM, then export and copy the VHD and AVHDs to this new "non-snapshotted" system, in the hope that the AVHDs would merge into one VHD. That would give me all my space back and let me work off the VHD without any loss of data or days of work, and then moving forward I would use snapshots only when needed and not as a form of backup. I just need someone to help make sure this is a good option to fix this server, which is running out of space due to these AVHDs growing drastically day by day.

Please someone 

I Thank You

-K

Bug in Hyper-V Integration Services 6.3.9600.16384?


Hi,

I just deployed a VM (Generation 1) hosted on a 2012 R2 Datacenter hypervisor. For deployment I used WDS, and the VM had a "Legacy Network Adapter"; deployment worked fine. After deployment I installed the Hyper-V Integration Services (version 6.3.9600.16384). Then I shut down the VM, removed the legacy network adapter, added a "normal" (synthetic) network adapter, and started the machine again. I noticed that the network connection was down.

When I check the properties it says "This device cannot start. (Code 10)".

I tried uninstalling the adapter, and I tried removing the adapter and adding a new one - no change. I even redeployed the VM completely from scratch... same problem...
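
(For completeness, this is how I checked what the host reports for the VM's Integration Services - a sketch; the VM name is a placeholder:)

# Sketch: show the Integration Services version the host sees for the VM.
Get-VM -Name "TestVM" |
    Select-Object Name, IntegrationServicesVersion, IntegrationServicesState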

Does anyone know this issue?

Regards
Miranda

Hyper-V Replication between servers with different performance

If Hyper-V replication is set up between servers with different performance, what will happen to the performance of the faster servers? Will their performance decrease to match that of the slowest server in the chain?

DPM 2012 R2 SMB VM Backup (ID 2033 Details: A required privilege is not held by the client (0x80070522))


Hi,

I am encountering an error when trying to back up a VM on an SMB share. The error was (ID 2033, Details: A required privilege is not held by the client (0x80070522)).

There is no VSS event logged on the host.

Has anyone faced this error before? I would appreciate it if anyone could help enlighten me.

Thanks.
