Hi there,
Just a quick question: how can I Live Migrate over my LACP configuration of 2 x 1 Gb? Every time I do a Live Migration of 2 VMs at the same time, the maximum speed I get is only 1 Gb.
Regards
David
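For context on why this happens: LACP balances traffic per flow, not per packet. If the switch hashes on source/destination IP (a common default), every live-migration stream between the same pair of hosts gets pinned to the same 1 Gb member link. A minimal sketch of the idea (the IPs and hash function are illustrative, not taken from this setup):

```python
# Minimal sketch of per-flow hashing as used by LACP link aggregation.
# Assumption: the switch hashes on source/destination IP, a common default.
def lacp_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Return the index of the member link a flow is pinned to."""
    return hash((src_ip, dst_ip)) % num_links

# Two simultaneous live migrations between the same pair of hosts have
# identical endpoints, so both flows land on the same 1 Gb link and
# share its bandwidth instead of using one link each.
assert lacp_link("10.0.0.1", "10.0.0.2") == lacp_link("10.0.0.1", "10.0.0.2")
```

Hashing on TCP ports (where the switch supports it) can spread multiple migrations across links, but a single migration stream still tops out at one member link's speed.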
Hi Folks,
I've been fighting with this one for a while, when I've had a spare hour or two I come back to the test environment:
I have an HP StoreOnce device presenting a VTL on its FC ports using NPIV, HP B-Series (Brocade) FC switches with full NPIV support, and finally an HP DL380 G7 running Server 2012 with Hyper-V, again with NPIV-supported FC cards.
What I am trying to achieve is a test environment where we can use Hyper-V & Virtual SAN to replace our physical test hosts for FC testing. I can create a Virtual SAN, assign a pair of virtual adapters to the VM, zone the devices on the fabric & voilà... the VTL is available.
When I reboot the VM, the VTLs are gone...
If I remove & re-add the VM to the zone, they're back, until it next reboots. I've tried various options including persistent bindings for tape. There is only a single Hyper-V machine, so I should only be using WWPN A. Mapping the VM to a LUN on an MSA P2000 G3 works fine, and persists through reboots.
I'm a little stumped here - does anyone have any ideas?
Kind Regards,
Ashley
Hello all
I am currently finishing setting up a new environment for a customer. Everything is OK except backup; I am currently backing up the host and the 3 VMs on it.
I am receiving the following errors every time the backup starts:
filter manager error id 3
volsnap error id 27
disk I/O error at logical block etc.. id 153
Is anyone else experiencing this?
I have already tried everything related to these known errors, since they are quite famous in Windows history... but no luck at the moment.
They only occur if the VMs are on; if they are powered off, the backup runs fine.
However, even with these errors the backups finish and restores are OK.
The bad part is that they are all related to the shadow copy that the backup mounts every time for the procedure. (Don't tell me to chkdsk my LUN :) )
Thanks
Regards
I am performing a migration of a virtual Windows Server 2002 R2 SP2 server from a Windows 2008 R1 SP1 failover cluster with pass-through disks to a Windows 2012 failover cluster with VHDs.
Topology is this
Storage Server - 172.31.250.2 with multiple application shares provisioned for Hyper-V Storage
Hyper-V Host - 172.31.252.57 & 172.31.250.3
Both Servers running 2012 in a single 2012 domain
On the Hyper-V host within Windows Explorer I have mapped multiple shared drives located on 172.31.250.2
When I run the migrate storage wizard on this host and come to the point where I can browse for a destination to migrate storage to, the drives do not show. If I open a Windows Explorer window at the same time, they are visible there.
If I type the UNC path in as the destination instead of browsing from within the wizard, this works and the storage is migrated. The network drives are a mix of hidden and unhidden shares, and the behaviour is the same with both: none of them are shown when browsing.
Even though the storage migration appears to work, I am concerned that the drives do not appear in the wizard when browsing, and I can see nothing to suggest why they aren't appearing.
Paul
Trying to install the all-in-one Hyper-V core from en_microsoft_hyper-v_server_2012_x64_dvd_915600.iso
The system board is an Intel P2600CP2 with dual Xeon E5-2620 processors.
I have a 120 GB SATA 6 Gb/s SSD attached to a PCIe gen 2 controller card (this is where I will be installing the hypervisor).
I have two 3 TB SATA drives attached to the 6 Gb/s ports on the motherboard (these will be set up in RAID 1 for VM storage).
When installing from the ISO, all disks show "This computer's hardware may not support booting to this disk", with details indicating that the disk's controller may not be enabled in the BIOS.
All disks are enabled in the BIOS.
Looking at this post - http://support.microsoft.com/kb/925481 - led me to believe that the multiple disks were confusing the install, so I disconnected the two 3 TB drives from the motherboard. No luck... same thing.
ESXi v5.1 installed just fine.
Any ideas?
Dear Support,
I am in the process of implementing Exchange Server 2010 SP2 on a Hyper-V 2012 failover cluster.
I heard some rumours on the web saying that an Exchange DAG is not supported on a Hyper-V 2012 failover cluster, because it will create ambiguity in the Exchange DAG cluster when one of the DAG nodes moves from one Hyper-V host to another.
Is that true? If it is, is there any workaround to make it work?
Rgds,
Hardi
Hello
I've encountered a problem in Hyper-V Server 2012 when attaching a VHD using the Hyper-V Manager console where I receive the error message:
"Failed to add device 'Virtual Hard Disk'. Attachment '<path to VHD>' failed to open because of error: 'The process cannot access the file because it is being used by another process.'"
I only receive this message when I attempt to attach the disk while the VM is running. If I turn the VM off, attach the disk, and start the VM, it works fine. The problem seems to occur on some VHDs (and VHDXs) but not others, and I've encountered it on multiple hosts as well. I can attach the disk just fine using the Add-VMHardDiskDrive PowerShell cmdlet. The other workaround I've found is that after I attach the disk using the Hyper-V Manager console, hit Apply, and receive the aforementioned error message, if I inspect the disk and then hit Apply again it successfully attaches. The errors in the event log are as follows:
Hyper-V-SynthStor
<VM Name>: Attachment '<path to VHD>' failed to open because of error: 'The process cannot access the file because it is being used by another process.' (7864368).
Hyper-V-VMMS
'<VM name>' failed to add device 'Virtual Hard Disk'.
I've used procmon and procexp to look for any open handles but there is nothing reported. Restarting the hosts also doesn't fix the problem. I'm completely stuck so any suggestions would be great. Cheers
I was able to replicate 2 out of 5 VMs on our server, but I am not sure what this error refers to. Any ideas? The VMs are up and running.
Thanks
server 2008 r2 X64
The server lost network access. I can ping 127.0.0.1, but for any other IP address on the network I get "PING: transmit failed. General failure." If I uninstall the NIC cards and reinstall them, they are assigned random IP info and I can get to the internet, but not to network resources. The server is multihomed, but I have disabled the 2nd NIC and still have problems. Any ideas?
Hi guys:
I have a VDI project using Windows Server 2012's VDI solution. Everything is fine, but we have differing opinions about the VMs' disk formats:
1st: some advise differencing disks, because they save storage.
2nd: some advise fixed disks, because differencing disks take more IOPS than fixed disks when there are many VMs.
This project needs 200 VMs.
We ran an experiment (40 VMs) comparing both formats and did not find a difference, so we are confused by this situation.
I am looking forward to real data. Thank you very much!
Hi All!
So I'm running into a bit of a problem with a new Server 2012 Core Hyper-V deployment. The VMs' disks are painfully slow. Here's a brief overview of the setup:
+ 2x HP Bl460 Gen 8's in a C3000 enclosure w/ 2x Flex-10 modules
- Each server has 256GB RAM
- Each server has a HP FlexFabric 10Gb 2-port 554FLB Adapter and a HP Flex-10 10Gb 2-port 530M Adapter
- Each server has 2x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
+ A 6 node HP P4500 cluster with each node connected via two 10Gb NICs bonded
- Servers are connected to the SAN via iSCSI using the MS iSCSI Initiator and the HP LeftHand DSM for MPIO for multipathing.
- Jumbo frames are configured for the iSCSI NICs in the server, switch, and P4500 nodes (confirmed by "ping -f -l 8000").
+ The two Server 2012 boxes are clustered w/ MS Failover Clustering.
- A quorum disk is provided by the SAN.
- A 4TB LUN is presented to the cluster nodes as a CSV. (CSV Cache is enabled and set to 512MB)
- Jumbo frames are configured for the physical NICs and logical MS teamed NICs for Migration and Cluster networks (confirmed by "ping -f -l 8000").
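A side note on the jumbo-frame check above: "ping -f -l 8000" only proves the path carries ~8 KB frames, not the full jumbo MTU, because the largest payload that fits in one unfragmented frame is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A quick sketch of the arithmetic:

```python
def max_ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in one unfragmented IPv4 frame:
    the MTU minus the 20-byte IP header and the 8-byte ICMP header."""
    return mtu - 20 - 8

# A 9000-byte jumbo MTU allows "ping -f -l 8972"; -l 8000 would also
# pass on any path MTU of 8028 or more, so it under-tests the path.
assert max_ping_payload(9000) == 8972
assert max_ping_payload(1500) == 1472
```

Testing with -l 8972 (assuming a 9000-byte MTU end to end) verifies that every hop really carries full-size jumbo frames.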
So, I'm experiencing severely bad performance with the VMs that reside on the CSV. The read times for the machines are pretty great, and when running any kind of performance tool (e.g. ATTO, IOMeter, etc.) they return pretty decent marks. I think the values reported by the tools may be a bit misleading, because they use rather small file sizes and so the cache is being used quite a bit. The VMs, though, are suffering pretty badly. Since this is a new deployment of Hyper-V (trying to move over from vSphere), we've only migrated a few machines to test performance and validate the config before everything gets moved over. The boxes that have been moved aren't where they should be, especially with the gear they are running on.
A repeatable test that I use to gauge the performance is just a simple file copy of a 2.5GB file. Inside a VM, it starts off at ~100MB/s for a bit, then quickly drops down to ~20MB/s, and then varies from ~20MB/s to a few KB/s. These results show up on thin- and thick-provisioned VHDXs, copying over the network or locally on the same disk, and across disks. If I'm directly on the Hyper-V node, I can copy the same file to the CSV and hit a steady ~115MB/s for the whole copy. Also, if I deploy a VM to the Hyper-V node's local disks, I see ~115MB/s as well. It's only from within the VMs that reside on the CSV.
Since I can copy the same file to the same CSV that the VMs reside on from the node at full speed, I'd assume the iSCSI side of things is working, but that doesn't appear to be the whole story, since the VMs can't copy that file to their VHDXs at anywhere near that speed.
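The repeatable copy test described above can be scripted so the same numbers are collected inside a VM and on the host. A minimal sketch (the file sizes are illustrative; the original test used a 2.5GB file):

```python
import os
import tempfile
import time

def copy_throughput_mb_s(size_mb: int = 64, chunk_mb: int = 4) -> float:
    """Create a file of size_mb, copy it chunk by chunk, and return the
    copy throughput in MB/s (fsync'd so the writes actually hit disk)."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "src.bin")
        dst = os.path.join(d, "dst.bin")
        with open(src, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
        start = time.perf_counter()
        with open(src, "rb") as s, open(dst, "wb") as t:
            while block := s.read(len(chunk)):
                t.write(block)
            t.flush()
            os.fsync(t.fileno())
        return size_mb / (time.perf_counter() - start)
```

Running the same function against the CSV path from the host and inside a guest should reproduce the ~115MB/s vs ~20MB/s gap if the bottleneck really is in the virtual storage stack rather than iSCSI.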
I'm kind of at a loss here and pulling my hair out (plus a few bruises on my forehead from beating my head against the wall). Any help would be greatly appreciated!
Thanks in advance!
Louis
Hi,
I have taken a backup of my VHD using Volume Shadow Copy (VSS - vshadow.exe). Now I want to restore the VHD to a new or existing machine. What I'm currently doing is attaching the backed-up VHD to a new machine, or swapping in the VHD on an existing machine. Is this the standard procedure for restoring a Hyper-V VM from a VHD, or is there a better or different way of doing it?
System config:
Host (Hyper-V manager): Windows 2008 R2 Standard
Guest (VM): Windows 2008 R2 Standard.
Hi All
I got a message in the Best Practices Analyzer regarding 4K sector size and VHDX files on this disk.
There is a Solution described here: http://social.technet.microsoft.com/wiki/contents/articles/13075.hyper-v-avoid-using-virtual-hard-disks-with-a-sector-size-less-than-the-sector-size-of-the-physical-storage-that-stores-the-virtual-hard-disk-file.aspx
But there is no link or information on how to apply this resolution.
Is there any more information available?
Regards
Gargamelius
Hi
We are planning to change our network IP class from C to B, because all 255 IP addresses have already been used and we need more.
We are using a Hyper-V infrastructure with more than 10 virtual servers across 3 physical servers, plus another 10+ physical servers working independently in other functions.
So I am a little confused about the 3 Hyper-V servers and their IP configurations. Every server has 4 physical NICs:
- LAN NIC (LAN IP) - Class C
- DMZ NIC - Class C
- Live Migration NIC - Class C
- Heartbeat NIC - Class C
All NICs are connected to a Cisco switch for VLAN purposes.
So what are the best recommendations for this case? I will certainly change the LAN NIC's IP address, but DO I HAVE TO change the other NICs' IP classes?
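For what it's worth, widening the LAN subnet doesn't invalidate the existing host addresses, and isolated networks that never route to the LAN can usually keep their own ranges. A sketch with Python's ipaddress module (the subnet values are illustrative, not this network's actual ranges):

```python
import ipaddress

# Sketch of widening a "class C" /24 to a "class B"-sized /16.
# Assumption: example ranges; substitute the real LAN subnets.
old_lan = ipaddress.ip_network("192.168.1.0/24")
new_lan = ipaddress.ip_network("192.168.0.0/16")

# Every existing host address is still valid inside the wider subnet,
# so LAN hosts only need their subnet mask changed, not their IPs.
assert all(ip in new_lan for ip in old_lan.hosts())

# An isolated Live Migration / heartbeat network that never routes to
# the LAN does not overlap the new range, so it can stay as-is.
lm_net = ipaddress.ip_network("10.10.10.0/24")
assert not lm_net.overlaps(new_lan)
print(new_lan.num_addresses)  # 65536 addresses in the /16
```

The general point: only networks that share the routed LAN address space need renumbering; dedicated, non-routed cluster networks can keep their small class C ranges.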
When I'm running a Hyper-V VM (which I run most of the time) that is set to use 8GB of RAM, occasionally my mouse will experience "stuttering" or lag. It can happen multiple times in a row, and sometimes is spaced out over hours.
Machine specifications:
Windows 8 Pro
i7-2500
16GB RAM
nVidia GeForce 470GTX
Hyper-V VM is on a secondary SATA drive that is only used for Hyper-V
Hyper-V VM is using 8GB fixed (another one is using 8GB min, 12GB max -- not running at the same time of course) with 2 vCPUs and two VHDX's, fixed.
SharePoint - Nauplius Applications
Microsoft SharePoint Server MVP - 2012