Discussion of Hyper-V
    0 0

    Hi All,

    We have three Hyper-V servers, two of which are part of a cluster. Two physical adapters have been dedicated to iSCSI, and LUNs are presented to the hosts for shared storage. Disk failover is handled by iSCSI MPIO.

    We have a requirement for VMs to have direct iSCSI connections to access the data on the disks; I think this is called a raw connection (correct me if I'm wrong). To configure this, my thought was to create two vSwitches and bind each vSwitch to a physical iSCSI interface (all iSCSI vSwitches will have the same name across all hosts).

    With this in mind, some questions arise:

    1. Is this the correct way to do it?
    2. How do the VMs deal with disk failover events if the host is controlling the disk pathing via MPIO? My thought was to enable the 'NIC Teaming' option by ticking 'Enable this network adapter to be part of a team in the guest operating system' under 'Advanced Features'. Would this allow the VMs to continue working gracefully during a failover, given that the hosts use MPIO for redundancy/load balancing?
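    In case it helps, the vSwitch step can be sketched in PowerShell (a minimal sketch; the adapter and switch names are assumptions, and -AllowManagementOS $true keeps a host vNIC on the switch so the host's own iSCSI/MPIO sessions keep working):

    ```powershell
    # Sketch only: bind one external vSwitch to each physical iSCSI NIC.
    # "iSCSI-NIC-1/2" and the switch names are hypothetical; use identical
    # switch names on every host so VMs can live-migrate cleanly.
    New-VMSwitch -Name "iSCSI-vSwitch-1" -NetAdapterName "iSCSI-NIC-1" -AllowManagementOS $true
    New-VMSwitch -Name "iSCSI-vSwitch-2" -NetAdapterName "iSCSI-NIC-2" -AllowManagementOS $true
    ```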

    I look forward to your professional guidance.


    0 0

    Dear all,

    We use one Hyper-V server on Windows Server 2016 to host some virtual machines. We need a virtual machine which requires:

    -16GB RAM

    -2TB of disk

    After doing some research on the Internet, I created a VHDX file and configured its size as 2 TB. I created it through Hyper-V Manager on my client computer, connected to the remote Hyper-V server.

    Creating the VHDX file was taking forever, and without thinking I unplugged my network cable (damn!). Now I can no longer see the virtual hard disk creation progress, and I cannot tell whether it has finished, even though it says the file is currently in use when I try to delete the VHDX.

    Is there any way to see the progress again? If not, is there a way to cancel it and recreate it (during the night this time, because it takes forever and slows down the other virtual machines)? The Hyper-V host is in production.
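    A hedged sketch of one way to check on the operation from the host itself (so a client disconnect can't hide the view again); long-running Hyper-V storage operations generally surface as Msvm_ConcreteJob instances:

    ```powershell
    # Run on the Hyper-V host: list in-flight Hyper-V jobs and their progress.
    Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_ConcreteJob |
        Select-Object Caption, JobState, PercentComplete
    ```

    If the disk ends up having to be recreated, running New-VHD on the host itself (e.g. in an overnight scheduled task) avoids depending on the client connection at all.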

    Regards,

    Lucas



    0 0

    In my Hyper-V server I get an error message in the event log:

    Event ID 55

    The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \\?\Volume{d8cfcca8-c935-11e1-acc4-d94f72d41a21}.

    I think the volume ID refers to a virtual hard disk, but how do I find which one?

    Things I already tried:

    mountvol.exe does not list the volume;

    vssadmin list writers does not show the volume.

    We suspect this happens when Backup Exec makes a shadow copy of a volume, as it occurs while backups are running. But how do we see which volume on which virtual machine is corrupt?

    (We have a 5-node Hyper-V cluster with cluster shared volumes based on iSCSI, running Server 2008 R2 Datacenter edition.)
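    A sketch of one way to chase the GUID (assuming the volume is visible to Windows at all; the GUID below is the one from the event):

    ```powershell
    # Look for the volume GUID among all volumes, including ones with no drive letter.
    Get-WmiObject Win32_Volume |
        Where-Object { $_.DeviceID -like "*d8cfcca8-c935-11e1-acc4-d94f72d41a21*" } |
        Select-Object DeviceID, Label, Capacity
    ```

    If nothing matches on any host, the volume likely lives inside a guest; repeating the same query inside each VM is one way to narrow it down.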



    0 0

    Hi,

    I recently created two VMs. I installed the first one from an ISO image (Server 2016 eval),
    then exported the installed VM and imported it using the copy method (generating a new ID).

    Now I've noticed that when I click the restart button from within the imported VM's OS, the VM starts stopping services and says "Restarting...", but it shuts down when it should be starting back up.
    If I open CMD and enter "shutdown -r -f -t 0", the restart works and nothing seems to go wrong.

    On my Windows 10 host computer there are error events at the time of the attempted reboot.

    • These are translated, so they may not match the exact English versions:
      Source Microsoft-Windows-Hyper-V-Worker, event ID 3502; the description was not found...
      The event included the following information:
      <VM Name>
      <VM ID>
      %%2147942402
      0x80070002

      The requested message language resource was not found.
    • Target <VM Name> could not be reinitialized (virtual machine ID: <VM ID>)

    No other problems have been seen yet, and there are no errors other than these in the host event log.


    0 0

    Hi,

    I have a new Hyper-V environment using Windows 2016 Standard and have run into an issue with restarting VMs from the Start menu inside the VM.

    The VM begins the restart process, but once it has finished stopping services it shuts down and does not restart unless I manually start it from the Hyper-V console.

    Thanks in advance

    Bill


    0 0

    I have an Exchange 2013 VM in a failover cluster with a pass-through disk. I scheduled a daily backup of this virtual server by attaching an additional pass-through disk as a dedicated backup drive for the daily full backup. Before adding this dedicated backup drive to the VM, I presented it to the failover cluster and then added it to the VM as a pass-through disk. However, I am now getting an "unsupported cluster configuration" error in VMM for this Exchange VM. Although the server is working fine, I believe that when the dedicated pass-through disk is in use it is removed from the VM's Windows Explorer view, and hence the cluster settings got confused.

    Conclusion: please suggest the best way to use Windows Server Backup for a VM with a pass-through disk in a failover cluster.


    0 0

    Recently I've been investigating an unstable NLB VIP issue. I also found another issue with ARP; it might be the same underlying issue as the NLB one.

    For environment info, please see this thread: https://social.technet.microsoft.com/Forums/en-US/df4722a9-62fa-4d34-a1d3-9dd2de0fa8cd/hyperv-nlb-vip-unable-to-ping-from-some-clients?forum=winserverClustering

    I found that when the NLB VIP becomes unstable, some VM nodes within the NLB cannot ping the gateway of the NLB dedicated NIC, while pinging the gateway from the LAN NIC works fine.

    After configuring port mirroring on the ToR switch, I can capture the ARP reply packets from the Cisco switch, but on the Hyper-V host I only see ARP packets going out, without any ARP responses inbound.

    Rebooting the Hyper-V host, or just disabling and re-enabling the NIC on it, fixes the issue temporarily, but after a few hours or a day the issue reproduces.

    From a netsh trace log, I found two strange drop reasons, even though MAC address spoofing is disabled on all VMs and NIC bridging is never used:

    [0]0000.0000::‎2017‎-‎01‎-‎18 14:08:42.090 [Microsoft-Windows-Hyper-V-VmSwitch]NBL destined to Nic 5ABC7209-83A0-4034-A30C-1D34B066AC9C--71761E64-BF59-4A0D-93CC-3F7C25021AEF (Friendly Name: Network Adapter) was dropped in switch 037C29CB-B632-498B-B1B3-D676884AF17D (Friendly Name: VLAN_693_10.185.76.96_27), Reason Bridge is not allowed to run inside VM 

    [24]0004.29E8::‎2017‎-‎01‎-‎18 12:47:07.651 [Microsoft-Windows-Hyper-V-VmSwitch]NBL originating from Nic 0C493B45-B40B-46BC-BD2D-835A95D7C19A--B56F8597-A216-406E-B1C3-187246E51EA4 (Friendly Name: Network Adapter) was dropped in switch 766ED074-5690-4A47-9819-8CF160B99D7E (Friendly Name: VLAN_692_10.185.76.64_27), Reason Spoofed MAC address is not allowed 
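    Given the "Spoofed MAC address is not allowed" drops, one hedged check is the per-adapter spoofing setting (NLB in unicast mode sends from a shared cluster MAC, which the vSwitch treats as spoofed):

    ```powershell
    # Confirm the current per-vNIC spoofing setting across all VMs.
    Get-VM | Get-VMNetworkAdapter | Select-Object VMName, MacAddress, MacAddressSpoofing
    # If an NLB node legitimately sends from the shared cluster MAC, allowing spoofing
    # on that adapter is one way to stop the drops ("node1" is a hypothetical VM name):
    # Set-VMNetworkAdapter -VMName "node1" -MacAddressSpoofing On
    ```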


    0 0

    Hello,

    Where can I find the latest information on the Hyper-V 2016 backup and recovery APIs for VSS-less (RCT) backup and recovery? I am using some of the WMI APIs from a C# program, following what Taylor Brown discussed in his TechEd Europe 2014 talk. A few things are working, but a few things are giving errors, e.g.:

    1. I am not able to use the DestroyReferencePoint API via WMI; some parameter seems to be missing, but I can't find detailed information about what needs to be done.

    2. I am not able to find any parameter to request an incremental snapshot in the CreateSnapshot WMI API call.

    Any help and pointers to the latest API documents are highly appreciated.

    Thanks,

    -Upanshu


    0 0

    Hi,

    I have two identical physical servers: 8 core Xeon with HT, 96 GB RAM, 2 x OS SSDs in RAID 1, 4 x 4 TB HDs, 2 x 180 GB SSDs, dual 10GbE NICs, and 6 x 1 Gbps NICs. Each server has Server 2016 Data Center with Desktop Experience installed on the RAID 1 SSDs.

    I want to configure this hardware as a Hyper-V failover cluster that also uses S2D as the data storage mechanism, and I have some questions. The 4 x 4 TB drives will form the storage area for S2D on each server, with the 2 x 180 GB SSDs being the S2D cache. These six drives are configured as pass-through in each server's BIOS.

    1) Do the Server 2016 Datacenter OSes acting as the Hyper-V hosts have to be joined to a domain to do Hyper-V clustering? I really want to set this up in the lab before delivering it to the customer and then join the Hyper-V guests to the domain, but I'd rather leave the Hyper-V hosts off the domain.

    2) Am I right in thinking I can achieve what I want with Hyper-V clustering and S2D? I imagine S2D will give me storage space on each physical server through the Hyper-V host and also replicate itself over to the other server. So when my Hyper-V guests are stored on the S2D space, I will have a resilient infrastructure that can quickly deal with a physical server going down (the dead server's data will be on the other server's S2D space, and the surviving Hyper-V host will easily be able to spin up the VMs that were running on the dead server).

    3) When setting this up, should I be configuring the Hyper-V clustering first, or the S2D first?

    4) I have 2 x 10 GbE NICs on each server and a switch with 4 x 10 GbE ports. Should I use both 10 GbE ports on the Hyper-V hosts for S2D replication, or one for S2D replication and one for something else like RDMA?

    There's not much good documentation out there at the moment. A lot of it refers to technical previews of Server 2016, which have since changed; a lot talks about CSV but not S2D, and the material that covers S2D doesn't cover Hyper-V failover, etc.
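    On question 3, the commonly documented ordering is cluster first, then S2D; a hedged sketch (node and cluster names are hypothetical, and note this assumes domain-joined hosts, which also bears on question 1):

    ```powershell
    # Validate, build the cluster with no shared storage, then enable S2D and carve a CSV.
    Test-Cluster -Node "hv1","hv2" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
    New-Cluster -Name "hvclus" -Node "hv1","hv2" -NoStorage
    Enable-ClusterStorageSpacesDirect
    New-Volume -FriendlyName "VMs" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 4TB
    ```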

    Thank you for reading.


    0 0

    I created a new virtual machine and connected it to one of the existing virtual switches.

    A virtual machine that was previously connected to this virtual switch is still visible in HvConect.

    Is there any way to remove it from the list in HvConect?

    Thanks

    Batocanin


    0 0

    Hi Guys,

    I have had a quick look through the forum but can't seem to find anything on my problem.

    The setup I have is as follows:

    3 x Hyper V 2012R2 (Domain) Servers - IP 192.168.1.x

    1 x Hyper V 2012R2 (Workgroup Test) Server - IP 192.168.9.x

    On my Windows 10 1607 machine (domain-joined) I use the OS's built-in Hyper-V Manager.

    Now, the problem starts here:

    I can contact the other Hyper-V servers on the domain (which I expected), but I'm having problems connecting to the workgroup server.

    I have tried the HVREMOTE tool to set up the trust relationship between my Windows 10 machine and the workgroup server so that the manager can connect.

    The HVREMOTE tool reports NO errors regarding the configuration.

    There is no doubt a problem with the trust relationship between Windows 10 and the Hyper-V server, but where?

    I have trawled through hundreds of internet fixes and broken many a Hyper-V server by doing so, but nothing so far fixes the problem.

    The message I get is below, showing a very generic error.

    

    But if I use different credentials (which I know have admin rights on that server), I get this error:

    So I'm at a loss as to where the problem lies and how to resolve it.

    Has anyone had a similar problem using Windows Hyper-V Manager to connect from a domain machine to a non-domain server?
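    For comparison, the client-side prerequisites usually cited for managing a workgroup host look like this (a sketch, run elevated on the Windows 10 machine; "hv-wg" is a hypothetical name for the workgroup server):

    ```powershell
    # Allow WinRM to talk to the non-domain host, then cache explicit credentials for it.
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "hv-wg" -Concatenate -Force
    cmdkey /add:hv-wg /user:hv-wg\Administrator /pass
    ```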

    Any information would be gratefully received.

    Regards


    0 0

    Hi, I have a question.

    I have a Microsoft Hyper-V Server 2016 host. On this server runs an Oracle Linux VM with an Oracle database on top. The Oracle database is configured to use ASM storage (standalone with local disks, not a RAC cluster).

    Currently the server's physical disks are old 512-byte-sector SAS disks (this is Brazil, guys; I bought a new Dell PowerEdge 630 server and Dell sent me these old-technology disks), not the newer 4K disks.

    We all know that for database servers, larger block/stripe sizes are better for performance, but for now I'm limited by the physical hardware.

    I want to know: if I move the in-use VHDXs to new storage (new local 4K disks or even a better iSCSI storage array), will Linux automatically benefit from the change, or will I have to create new VHDXs (or reformat the old ones) to take advantage of the updated physical disks?

    To illustrate, here are some of the disk configurations on the Hyper-V server and in the Linux VM:

    PS C:\Users\administrador.CN> Get-PhysicalDisk | sort-object SlotNumber | select SlotNumber, FriendlyName, Manufacturer,
     Model, PhysicalSectorSize, LogicalSectorSize | ft -autosize
    
    SlotNumber FriendlyName         Manufacturer Model            PhysicalSectorSize LogicalSectorSize
    ---------- ------------         ------------ -----            ------------------ -----------------
               FreeNAS iSCSI Disk   FreeNAS      iSCSI Disk                   131072               512
               FreeNAS iSCSI Disk   FreeNAS      iSCSI Disk                   131072               512
               DELL PERC H730P Mini DELL         PERC H730P Mini                 512               512
               DELL IDSDM           DELL         IDSDM                           512               512
               WD My Passport 0741  WD           My Passport 0741               4096               512
    
    
    PS C:\Users\administrador.CN> exit
    
    C:\Users\administrador.CN>wmic partition get BlockSize, StartingOffset, Name, Index
    BlockSize  Index  Name                   StartingOffset
    512        0      Disco #4, Partição #0  135266304
    512        0      Disco #3, Partição #0  33685504
    512        1      Disco #3, Partição #1  17408
    512        0      Disco #0, Partição #0  1048576
    512        1      Disco #0, Partição #1  315621376
    512        2      Disco #0, Partição #2  554696704
    512        3      Disco #0, Partição #3  85900394496
    512        0      Disco #2, Partição #0  135266304
    512        1      Disco #2, Partição #1  17408
    512        0      Disco #1, Partição #0  4194304

    Physical Disks on Hyper-V Server

    PS C:\Users\administrador.CN> Get-VHD -Path "g:\vhds\*" | FT Path, LogicalSectorSize, PhysicalSectorSize -Autosize
    
    Path                               LogicalSectorSize PhysicalSectorSize
    ----                               ----------------- ------------------
    g:\vhds\nostromo-oracle-bin.vhdx                 512               4096
    g:\vhds\nostromo-oradata01.vhdx                  512               4096
    g:\vhds\nostromo-oradata02.vhdx                  512               4096
    g:\vhds\nostromo-oradata03.vhdx                  512               4096
    g:\vhds\nostromo-oradata04.vhdx                  512               4096
    g:\vhds\nostromo-redo-ctlvhdx.vhdx               512               4096
    g:\vhds\nostromo-root.vhdx                       512               4096
    g:\vhds\nostromo-swap.vhdx                       512               4096
    

    Virtual Disks on the server

    [root@nostromo ~]# cat /sys/block/sdh/queue/optimal_io_size
    0
    [root@nostromo ~]# cat /sys/block/sdh/queue/minimum_io_size
    4096
    [root@nostromo ~]# cat /sys/block/sdh/queue/physical_block_size
    4096
    [root@nostromo ~]# cat /sys/block/sdh/queue/max_sectors_kb
    512
    [root@nostromo ~]# cat /sys/block/sdh/queue/max_hw_sectors_kb
    512
    [root@nostromo ~]# cat /sys/block/sdh/queue/hw_sector_size
    512
    [root@nostromo ~]# fdisk -l /dev/sdh | grep "Sector size"
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdh'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    Sector size (logical/physical): 512 bytes / 4096 bytes
    [root@nostromo ~]# parted -l
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sda: 85,9GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name  Sinalizador
     1      1049kB  2098MB  2097MB  fat32              boot
     2      2098MB  4195MB  2097MB  ext4
     3      4195MB  85,9GB  81,7GB  ext4
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdb: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdc: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdd: 215GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End    Size   File system  Name     Sinalizador
     1      2097kB  215GB  215GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sde: 215GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End    Size   File system  Name     Sinalizador
     1      2097kB  215GB  215GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdf: 34,4GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system     Name  Sinalizador
     1      1049kB  34,4GB  34,4GB  linux-swap(v1)  SWAP
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdg: 107GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End    Size   File system  Name  Sinalizador
     1      1049kB  107GB  107GB  ext4
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdh: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdi: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdj: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary
    
    
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sdk: 53,7GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name     Sinalizador
     1      2097kB  53,7GB  53,7GB               primary

    Partitions inside Linux VM
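    Since the guest-visible sector sizes are properties of the VHDX itself (the Get-VHD output above already shows 4096-byte physical sectors), a move to 4K media shouldn't by itself change what Linux sees. A hedged sketch of inspecting and, with the VM off, adjusting that property on one of the disks:

    ```powershell
    # Inspect the sector sizes the VHDX reports, and (with the VM off)
    # change the physical sector size it presents to the guest.
    Get-VHD -Path "g:\vhds\nostromo-oradata01.vhdx" |
        Select-Object Path, LogicalSectorSize, PhysicalSectorSize
    Set-VHD -Path "g:\vhds\nostromo-oradata01.vhdx" -PhysicalSectorSizeBytes 4096
    ```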

    P.S.: Sorry about my bad English.



    0 0

    Hi all,

    Windows Server 2003 is not supported as a guest VM on Hyper-V 2016.

    But I have a case where we need to keep an old Windows Server 2003 machine running for approximately another five years.

    Has anybody tried to run Windows Server 2003 as a VM under Hyper-V 2016?

    If yes, are there any problems/fixes/need-to-know tricks I should be aware of?

    Thx.

    PS: I know Windows Server 2003 is EOL and should be long gone, but that's not going to happen in this case!



    0 0

    I followed the recommended guide for creating NIC teaming and VLANs from this post:
    https://blogs.technet.microsoft.com/keithmayer/2012/11/20/vlan-tricks-with-nics-teaming-hyper-v-in-windows-server-2012/

    It is a very simple setup on the switch:

    interface port 1:3
            description "GSPRDHV01"
            switchport mode trunk
            exit

    interface port 2:3
            description "GSPRDHV01"
            switchport mode trunk
            exit

    ----------------------------------

    Every time I create a brand-new VLAN on the switches (which have similar commands to Cisco) and apply the configuration, e.g. for VLAN 192, the Hyper-V nodes have a high rate of timeouts. However, the VMs are not affected, just the Hyper-V nodes

    (which causes a terrible connection to my DR site for Hyper-V Replica).

    Pinging HyperVNode5.lm.local [10.9.224.55] with 32 bytes of data:

    Reply from 10.9.224.55: bytes=32 time=5ms TTL=123
    Reply from 10.9.224.55: bytes=32 time=6ms TTL=123
    Request timed out.
    Reply from 10.9.224.55: bytes=32 time=6ms TTL=123
    Reply from 10.9.224.55: bytes=32 time=5ms TTL=123
    Request timed out.
    Reply from 10.9.224.55: bytes=32 time=5ms TTL=123
    Request timed out.
    Request timed out.
    Reply from 10.9.224.55: bytes=32 time=5ms TTL=123
    Reply from 10.9.224.55: bytes=32 time=5ms TTL=123

    So, in a nutshell:

    1. Pings between the nodes internally are fine.
    2. Pings to a virtual machine hosted on the node are fine.
    3. Pings from outside the WAN into the node fail.
    4. I suspect the issue lies in the LACP between the switch and the gateway, but that is a stab in the dark.

    What am I doing wrong here? :(((
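    A hedged first check is how the host-side team and the management vNIC are tagged, since an untagged management vNIC on a trunk port can break host traffic while leaving tagged VM traffic alone ("Management" below is a hypothetical vNIC name):

    ```powershell
    # Show the team configuration and the VLAN assignment of the management-OS vNIC.
    Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm
    Get-VMNetworkAdapterVlan -ManagementOS
    # e.g. Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 192
    ```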



    0 0

    We have a 2-node Hyper-V cluster.

    The failover VM setting "Cluster-controlled offline action" is Save.

    The Hyper-V VM setting "Automatic Stop Action" is Shut down.

    1) What happens if I shut down a host? Will the VMs move to the other host?

    2) What happens if I move a VM (a domain controller) to the other host? Is live migration supported on a DC without shutting the computer down?


    0 0

    I know this has been asked many times over, but I can tell you every trick mentioned doesn't help me, and I bet there are others out there still struggling like me, to the point where I want to rip out those Broadcoms and get Intels. Well, that's not so easy to do when HP DL380 servers come with Broadcom LOMs, right?

    I have the same issue with all my HP DL380s, whether they're in a cluster or a standalone host. They all have the HP 1Gb 4-port 331FLR or 331T adapters, which are basically Broadcoms, so even the LOMs are. I went back and forth with the MS and HP support teams only to hit a brick wall without a resolution, so the issue still happens... even today, which is why I'm finally writing this.

    I did the usual, such as turning VMQ off, turning power settings off, and updating the driver to "b57nd60a.sys" as noted in MS KB 2986895, with no luck. My VMs themselves only have one NIC connection, though I'm now thinking about trying dual NICs on the affected VMs.
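    For what it's worth, a sketch of double-checking that VMQ is really off at both levels, since driver updates can silently re-enable it ("Ethernet 1" and "backup-vm" are hypothetical names):

    ```powershell
    # Check and disable VMQ at the physical adapter, then at the per-vNIC level.
    Get-NetAdapterVmq | Select-Object Name, Enabled
    Disable-NetAdapterVmq -Name "Ethernet 1"
    Get-VM | Get-VMNetworkAdapter | Select-Object VMName, VmqWeight
    # Set-VMNetworkAdapter -VMName "backup-vm" -VmqWeight 0   # 0 = no VMQ for that vNIC
    ```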

    I know it's not clustering or storage per se, because I have the issue with a couple of my standalone hosts that use local drive storage. What I do know is that the issue seems to occur when the VM is stressing the network for long periods, as it's only happening on my VMs (running 2012 R2 as well) that are the backup servers. The VM servers that drop off the network use Veeam B&R and also stream Azure backups. It seems that when Azure is uploading for a long period, the VMs disconnect from the network and I get the yellow triangle of aggravation. In the cluster I can migrate the VM and be back up and running, though sometimes after migrating it'll drop again later that day, or maybe not. On the standalone hosts I have no choice other than to reboot the VM to get it back up and running.

    Again, I'm just putting this out there because I'm sure I'm not the only one still having this issue even after doing everything people suggest. Please don't just post an answer that's a re-post from another thread or the usual links, as I've been through them all. What I'd like to hear from is others who still have the same issue, and/or anyone who can confirm a fix, such as simply putting in Intel NICs. After all, I have both a cluster with an HP SAN and HP standalone hosts with local storage, and they show the exact same issue. It's funny how every other VM on the host, whether cluster or standalone, is unaffected even though they use the same network paths.

    Thanks, Mark

     

    0 0

    So I am using this command to vacate our nodes into the other forest: Get-VM | % { Move-VM -DestinationHost host.fqdn -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume12\$($_.name)" -verbose }

    I'm not sure; it smells like a bug? Or I could have been messing something up somewhere.

    I tried it this way:

    destHost: vins0010

    srcHost:   vins011

    Move-VM vins003-seo -DestinationHost vins0010.dest.fqdn -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\Hyper-V"

    and got:

    Move-VM : Virtual machine migration operation failed at migration source.
    Failed to create folder.
    Virtual machine migration operation for 'vins003-seo' failed at migration source 'VINS011'. (Virtual machine ID
    8CAC65C6-E1CF-44DD-AC2C-DD7DC1F8D0DF)
    Migration did not succeed. Failed to create folder
    '\\VINS0010\VINS011.801624481$\{63f1501b-791d-4340-ac2d-0fac9d7e7df9}\Hyper-V\Virtual Hard Disks': 'The network path was not
    found.'('0x80070035').

    I'm a little confused. Why did I get such a strange destination folder path?

    '\\VINS0010\VINS011.801624481$\{63f1501b-791d-4340-ac2d-0fac9d7e7df9}\Hyper-V\Virtual Hard Disks'

    What does VINS011.801624481$ mean in the destination path, and where does it come from?


    0 0

    Hello Everyone,

    Yesterday I was doing some storage migration in the cluster. I added some 3PAR storage to my environment and wanted to migrate the VMs from the volumes on our HITACHI disks to the 3PARs. We migrated the VHD files of the VMs to the 3PAR before adding it to the cluster, changed the VHD paths of the VMs to the 3PAR volumes, and then added the 3PAR to the existing HITACHI cluster. Everything went smoothly, and Hyper-V Manager was showing the new volume for the VMs. I powered on all the VMs and they were running smoothly too. The only thing I was concerned about was Failover Cluster Manager, because the VMs under "Services and applications" were not reflecting the new volume I had assigned to them; it was still showing the path of the old HITACHI volume.

    Since it is shown as a service and application, I understand that the VMs are now operating on the 3PAR and it's just not showing the right path, but is there something I have to configure, or anything I should do, to reflect the new volumes in Failover Cluster Manager? I'm not very experienced with VM environments.

    Thank you

    Enion


    0 0
  • 01/25/17--17:38: Hyper-V and virtual switches
  • I know that it is not the same, BUT... we have been running VMs on VMware Server 2.0.2 for a looooong time, and the hardware is getting too old and starting to fail. So we decided to bite the bullet, get some newer hardware, and install Hyper-V 2012 R2 Core on it. Installing the base was no problem, but the virtual switch is getting on my nerves, as we cannot get network connectivity on the VMs that we create. We are running CentOS 6 as the VMs (running web servers and mail servers). With VMware 2.0.2, we could set up the VMs on totally different subnets from the hosts. For example, the host runs on a 184.71.91 network while the VMs run on a 184.71.66 network and a 184.69.43 network.

    I was wondering if it is possible for Hyper-V to do the same, or are we out of luck?
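    In case it helps, a minimal sketch of the external-switch setup in question; an external vSwitch is a layer-2 bridge, so guest subnets are independent of the host's subnet as long as the physical network carries them ("Ethernet0" and the VM name are hypothetical):

    ```powershell
    # Create an external switch bound to the physical NIC and attach a guest to it.
    New-VMSwitch -Name "External" -NetAdapterName "Ethernet0" -AllowManagementOS $true
    Connect-VMNetworkAdapter -VMName "centos-web1" -SwitchName "External"
    ```

    Note that the CentOS 6 guests also need working integration services (built into later CentOS 6 kernels, or installable as Linux Integration Services) for the synthetic network adapter to function.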




    0 0

    I have two host servers (2012 R2), ONYX and SmarterGateway. ONYX hosted "MAIL" as a VM, which replicated to SmarterGateway. During our planned downtime over the weekend I did a planned failover from ONYX to SmarterGateway, and afterwards I reversed the replication. I failed to notice that replication did not work.

    Today, when thinking about the setup, I removed the replication and pointed it towards our offsite replication host across campus (same subnet), and discovered that it would not replicate. It would not checkpoint either. I googled the errors; it turns out permissions are screwed up. I've read all of those threads. HOW DO I ADD the special user group NT VIRTUAL MACHINE\Virtual Machines? Yes, the local group policy is defined to allow it to log on as a service, and there is no domain group policy defined. Nothing seems to let me fix this.

    This is our production email server. I really can't just turn it off, uninstall Hyper-V, reboot, reinstall, and then add the virtual machine back. Is there any way to fix this short of another couple of hours of downtime? Also note that the VM is running and doing its job; I'm scared of what will happen if I have to restart it or shut it down.
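    One commonly suggested fix for this class of permission error is re-granting the VM's own virtual account on its files, without reinstalling anything; a hedged sketch (the storage path is hypothetical, and the VMId-based account is the per-VM variant of NT VIRTUAL MACHINE\Virtual Machines):

    ```powershell
    # Grant the VM's virtual account full control, recursively, on its storage folder.
    $vm = Get-VM -Name "MAIL"
    icacls "D:\Hyper-V\MAIL" /grant ("NT VIRTUAL MACHINE\{0}:(OI)(CI)F" -f $vm.VMId) /T
    ```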