Is there a way to set up guest clustering within a Hyper-V cluster with pass-through disks? Working with 2012 hosts, 2008 R2 guests, and an EMC SAN.
Thanks,
jay
Not sure if this the right place, but hopefully there's someone here who can point me in the right direction...
We're looking at totally renewing our Hyper-V environment, which is currently living on some aging servers and an EVA SAN, all in need of replacing. Obviously we'll be moving to Server 2012, and SMB 3.0 has caught our eye. It stacks up really well against the iSCSI SANs we've been looking at, is possibly more flexible, and is a lot more cost effective. There are some quite impressive videos out there on what SMB 3.0 can deliver, but I haven't yet found one which addresses our scenario.
We're looking to create a Hyper-V cluster which spans two physical locations, so the storage (file server cluster / iSCSI SANs) will also need to span both locations. This is all in case one physical location goes down, for example in a fire. The file server host/SAN in each location will have its own attached storage, but will be clustered with the other location, and the combined storage will be mirrored.
The only snag is, the only scenarios I've seen for SMB 3.0 is where the underlying storage for a File Server Cluster has to be available to all the nodes of the cluster.
I'm probably approaching this all wrong, so I'm opening it up for any ideas...
Andrew France - http://andrewsprivatecloud.wordpress.com
I'd like some opinions on running multiple virtual machines on a single host and how much performance can be affected.
Our servers run a single Xeon E3-1270 processor with 32GB memory. Four Western Digital RE4 1TB drives are in RAID 10 on an Adaptec 6405 card with a module for cache protection. There are two onboard NICs, one dual-port NIC, and one single-port NIC via PCI Express slots. I plan to team the onboard ports to access the host server. The dual-port NIC I plan to team for the VMs, and the single-port NIC I plan to connect to a different switch which connects our storage devices using iSCSI, to separate that traffic.
The physical server OS will not run anything other than Hyper-V role. I plan on having 3 virtual machines on this server.
1 VM will run Active Directory and DNS (4GB memory allocated)
1 VM will run Sage accounting (16GB memory allocated)
1 VM will run Print and WSUS (4GB memory allocated)
That leaves 8GB memory for the host or if I need to spawn up another virtual machine.
On the other server, same specs.
1 VM will run AD/DNS replicated (4GB memory)
1 VM will run engineering software (database driven) (16GB memory)
1 VM will run Symantec Endpoint Protection and Ghost Server (4GB memory)
We have 38 users total. 12 of which use Sage concurrently throughout the day. We have 8 network printers.
Given the specs on the servers, are there any bottlenecks you can think of that would hinder performance running 3 VMs on each host?
I have two Windows 2008 R2 servers: one is a domain controller and the other is an RDS server. Both are currently guest VMs on VMware 4.1.
I bought a server with Windows 2012 and will be setting it up as Hyper-V, and after that I want to move the two VMs from VMware to Hyper-V.
Is there any utility that I can use to do this, or should I use some other method?
The DC only has file shares.
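One route I've been looking at, in case it helps frame the question: convert the VMDK files to VHD first (Microsoft Virtual Machine Converter or a third-party V2V tool can do this), then register new VMs around the converted disks. A minimal sketch; the VM name, path, and switch name below are just placeholders:

```shell
# Run on the new 2012 Hyper-V host after converting the VMDK to VHD
# (e.g. with Microsoft Virtual Machine Converter). Names and paths
# here are placeholders, not the real ones.
New-VM -Name "DC01" -MemoryStartupBytes 4GB `
       -VHDPath "C:\VMs\DC01.vhd" -SwitchName "External"
Start-VM -Name "DC01"
```

From what I've read, uninstalling VMware Tools inside each guest before conversion is recommended, since it can't be removed cleanly once the VM is running on Hyper-V.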
Hi there,
Currently I have a Windows Server 2012 Hyper-V guest hosted on Windows Server 2008 R2. Whenever a shadow copy is run by the host, the guest's network connection disconnects. This happens every day, as the host is set up to back up the guest daily.
Here's some event log details:
Hopefully this is enough information. Really hope someone can help.
Thanks!
Host server: Windows Server 2008 R2 with Hyper-V installed (already joined the domain)
DCs: Windows Server 2008 R2 (VMs of the Hyper-V host server)
Remote management computer: Windows 7 32bit with RSAT installed (already joined the domain)
Remote management user: A member of Domain Admins, logged on to the remote management computer
I added my Hyper-V host server in the Hyper-V Manager console in RSAT on my remote management computer. I could manage the Hyper-V host server settings, such as Hyper-V Settings and Virtual Network Manager, but I could not see or manage the VMs. I believe there are several VMs on my Hyper-V host server. The Windows Firewall on both the Hyper-V host server and the remote management computer was disabled.
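In case it's relevant: from what I've read, even with the firewall disabled, listing VMs remotely on 2008 R2 needs WMI/DCOM permissions granted to the management user, which John Howard's HVRemote script is supposed to automate. The exact switches below are from memory, so treat them as a sketch; DOMAIN\user and the host name are placeholders:

```shell
:: On the Hyper-V host (elevated prompt)
cscript hvremote.wsf /add:DOMAIN\user
:: On the Windows 7 management computer
cscript hvremote.wsf /anondcom:grant
:: Sanity-check the configuration from the client
cscript hvremote.wsf /show /target:HyperVHostName
```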
We recently installed Server 2012 Datacenter with Hyper-V on two servers at two different sites: Main Site and DR Site.
Hyper-V is enabled and working great at both sites on these servers and now trying to configure Hyper-V replica gets us this result:
Some of the items we made sure to verify:
So I am not sure what we are missing. Is there another port that needs to be open if we are going through HTTP? After starting to configure the replica, another message shows: "Could not get the configuration details of the specified server."
Searching around on TechNet didn't show any resolution for this, but if anyone has some additional tips to try to troubleshoot, that would be greatly appreciated!
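One thing we plan to double-check, in case anyone can confirm: the inbox firewall rules for the replica listener are disabled by default on 2012 and have to be enabled on the replica server (and on every node, if it's clustered). Assuming the default rule names:

```shell
# Enable the inbox rule for replica traffic over HTTP (TCP 80)
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
# Or, for certificate-based replication over HTTPS (TCP 443)
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"
```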
Thank you!
-Jeff
Karthik R
Hi,
I am using Windows Server 2012 (Standard, English) as a Hyper-V host. I have a single vSwitch configured as an "internal" network.
If I configure RRAS as a NAT router (on the host), I sporadically lose my network connection to the host. It works for a few minutes, sometimes for an hour, and then without any obvious cause I lose my network connection/RDP session.
I used this exact configuration for years on WS 2008 R2. I have no idea what the root of the problem is.
Thanks for your help in advance,
Thorsten
I am having a lot of trouble getting Hyper-V Server 2012 with Storage Spaces and ReFS working reliably (or even finding anything with regard to best practices).
I have tried making one giant 2TB ReFS partition (two 2TB enterprise SATA drives in a Storage Spaces mirror, as a single volume) with the intention of just using VHDXs on it, but in this case the pool starts offline after every reboot.
I have also tried creating a volume for each VM on the storage space, then off-lining them and adding them to Hyper-V. This seems to work pretty well too, but again has to be manually redone after each reboot. I suspect there is some parameter I don't know about that I can add from PowerShell to make this possible. This breaks CentOS 6 when used with a sparse volume.
I could potentially use standard Windows software RAID, but it has been implied to me that the self-healing part of ReFS only works with Storage Spaces.
Single box: 32GB ECC RAM / Xeon E3-1230v2 / booting from USB / 2 enterprise SATA 2TB disks / 2 Intel NICs, one for management and the other for the VMs (the latter has the VM offload feature).
Dunno about sparse vs fixed VHDXs either; they seem fairly similar. (Tested under CentOS because I know how to generate high disk load there fairly easily; if it is going to be OK up to a certain amount of disk capacity used, then I will probably use it.)
I really don't like the idea of running a VM with the disks passed through and then iSCSI. (I know what can happen with ZFS even on Solaris when you need to use zdb, and I expect it to be even more difficult on Linux. Solaris isn't suitable because if you use a partitioned disk, i.e. to boot from, you lose the disk cache.)
I have fairly simple needs; I'm just trying to learn Hyper-V. (I know VMware and Xen reasonably well.)
Right now I have a single fixed ReFS volume taking up 75% of the pool, and it seems to survive a reboot and stay mounted, but I am convinced there must be something I am missing with regard to using a volume on the storage space as storage for a VM. (For some things like builds, using a volume with RAID 0-like characteristics would be desirable for me.)
Anything at all would be helpful; there are lots of slides saying that what I am doing is a good idea, but nothing that provides any detail about it.
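For reference, the PowerShell I've been experimenting with to stop the pool starting offline; this is a sketch based on my reading of the storage cmdlets, so corrections welcome:

```shell
# Clear the read-only flag on every non-primordial pool
Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false
# Stop the spaces from requiring a manual attach after each reboot
Get-VirtualDisk | Set-VirtualDisk -IsManualAttach $false
# Bring any disks that still came up offline back online
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false
```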
Hi,
I have Windows Server 2003 R2 on my virtual machine. It's registered in the ktu.local domain.
When I try to log on using my domain account it takes forever to load the system up. Also I'm unable to install any software that requires ktu.local domain account. I get the following error message:
Setup could not find the domain 'KTU' for the business Connector Proxy account.
It seems as though logging on and checking the account password exceed some time limit.
How could I solve this?
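The checks I know how to run from inside the guest, in case the output helps (nltest ships in-box on newer Windows and is part of the Support Tools on 2003):

```shell
:: Can the guest locate a domain controller for the domain?
nltest /dsgetdc:ktu.local
:: Is the secure channel to the domain healthy?
nltest /sc_query:ktu.local
:: Which DNS servers is the guest actually using? Slow logons are
:: often down to the vNIC pointing at the wrong DNS server.
ipconfig /all
```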
Hi
I have a lab environment with Windows Server 2012 installed on a physical computer. On top of that I have a Windows Server 2012 virtual Terminal Server running in Hyper-V. Now, what I really want to know is whether I already have RemoteFX running on them, because from what I have understood, Windows Server 2012 automatically virtualizes a GPU if you don't have a physical GPU, which I don't in my case.
And how can i check if I do have RemoteFX running on the Virtual Terminal Server?
I'm a rookie when it comes to virtualization, so feel free to explain it to me like I'm 5 years old :)
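If it helps anyone answer: the only way I've found so far to check is through the Hyper-V PowerShell module on the host; "TS01" below is a placeholder for my Terminal Server VM's name:

```shell
# Empty output means no RemoteFX 3D adapter is attached to the VM
Get-VMRemoteFx3dVideoAdapter -VMName "TS01"
# Shows whether the host has a GPU eligible for RemoteFX at all
Get-VMRemoteFXPhysicalVideoAdapter
```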
I know from reading Brian Madden's post that this can be done with Windows 7 SP1 and Windows Server 2008 R2 SP1, but can it also be done with Windows Server 2012, or is it too early to be testing these things? Is it possible to use live migration to fail over to another host without using clustered servers? If so, can the failover be automatic?
Thanks!
Phil
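For context, these are the cmdlets I understand are involved on 2012 for "shared-nothing" live migration between standalone hosts; the VM name, host name, and path are placeholders. From what I've read this is always a manually triggered move, with automatic failover requiring a cluster:

```shell
# On each standalone host: allow live migrations in and out
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
# Move a running VM, including its storage, to the other host
Move-VM -Name "Web01" -DestinationHost "HV02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\Web01"
```

Note that Kerberos authentication apparently needs constrained delegation configured in AD; CredSSP is the default but then the move must be started from a session on the source host.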
We will be preparing a plan to transition from a 2008 R2 Cluster to a 2012 new Cluster. One area that is still not totally clear is the best method for Nic Teaming.
In the existing 2008 R2 cluster we have 4 Dell R610 servers using multiple NIC teams. These are done at the driver level using the Broadcom driver suite and also the Intel drivers. That said, is it recommended that in the 2012 environment we do them with the built-in Windows teaming? Should we no longer perform the teaming within the drivers? Is performance using the Windows teaming comparable?
I like the concept of moving this from the drivers to the built-in teaming; it seems like a better approach. However, I would like some input on this. Thanks
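For what it's worth, this is the shape of built-in teaming configuration I'd expect to replace the driver-level teams with; the adapter, team, and switch names are placeholders:

```shell
# Switch-independent team with Hyper-V port load balancing,
# a common choice for the VM-facing team on 2012
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm HyperVPort
# Bind the external virtual switch to the team interface
New-VMSwitch -Name "External" -NetAdapterName "VMTeam" `
             -AllowManagementOS $false
```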
Hi,
What is the average time you would expect a VM to take to move from one host to another using live migration? We are seeing times of between 20-40 seconds, so if we have about 60 VMs it takes a long time to put a host into maintenance mode, as Hyper-V does one at a time.
What speeds do you see?
We are using Cisco UCS Blades in a Flexpod so would expect it to be fairly quick!
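Two knobs we're aware of but haven't confirmed help; the subnet below is a placeholder for a dedicated migration network:

```shell
# Allow more simultaneous live migrations (the 2012 default is 2)
Set-VMHost -MaximumVirtualMachineMigrations 4
# Pin migration traffic to a dedicated subnet
Add-VMMigrationNetwork 10.0.1.0/24
```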
I successfully set up Hyper-V Server on a test machine and installed some guest OSes on it. All of it worked well without any problem, but suddenly I lost connectivity to the outside world from the guest OSes. Now I can ping the host OS (Hyper-V Server) from the guests, the host can reach all guests, and the guests can ping each other, but none of them have access to the outside world.
I was using an external virtual network connected to the physical NIC. All of the guest OSes are using this external virtual network.
I've searched a lot on the web, but there is no useful info. Please help me clear up the problem.
Hi all
I have Windows Server 2008 and am using Hyper-V; for example, there are 10 VMs running on Hyper-V:
VM1, VM2, VM3, etc.
I want a certain user to be able to manage a certain VM (reboot, shut down, turn off, re-install Windows, etc.)
for example
user A manage VM1
USer B manage VM2
User C manage VM3
is there any way to do something like this?
I am running W2k8R2 and have a few VMs on it. We had to power down for the weekend and I did so gracefully. On Monday when I powered everything back up, all but one virtual machine came back. One remained in a saved state, and when I tried to start it I just got this error message:
Microsoft synthetic SCSI Controller (instance id) Failed to Power on with error ' the system cannot find the specified file' (0x80070002) virtual machine id.
I was also unable to turn it off as the option was grayed out.
I found instructions that said to delete the saved state and then try to restart, but no luck. The instructions then said to remove the virtual network connection, add the connection back, and restart; no luck.
I am not sure why this happened. Does anyone have any suggestions on how I can get this virtual machine back up and running again?
I don't want to recreate a new VM every time this happens; there has to be a fix out there.
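The only manual cleanup I've seen suggested, which I haven't dared try yet: stop the management service and delete the VM's saved-state files by hand (the 0x80070002 supposedly means a file the configuration references has gone missing). The path and GUID below are placeholders for the VM's actual data folder:

```shell
:: Stop the Hyper-V Virtual Machine Management service first
net stop vmms
:: Delete the saved-state pair from the VM's data folder
del "D:\VMs\<VM GUID>\*.vsv"
del "D:\VMs\<VM GUID>\*.bin"
net start vmms
```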
any help is greatly appreciated.
marc