The top four Hyper-V virtualisation problems that plague admins


From the early Hyper-V betas to the release to manufacturing (RTM) versions, I have used Microsoft's server virtualisation technology for some time. As Hyper-V has matured, it has proved stable and has fulfilled most of my needs. But while Hyper-V has made great strides with its R2 release, it still suffers from some inefficiencies and missing functionality. In this article, I discuss my top four problems with Hyper-V and Hyper-V R2 and some possible workarounds.

Backup stability and support
From day one, Hyper-V backups have been a priority to ensure that data is safe and off-site. As a result, I have used custom scripts that call the Hyper-V Volume Shadow Copy Service (VSS) writer to perform online, host-based backups. This approach worked well, but it required various workarounds to correct VSS instabilities, and making it work in a Hyper-V clustered environment proved tricky.

Early on, and still today, third-party support for host-based backups of Hyper-V virtual machines (VMs) has been limited, often arriving months or even years after the initial release of Hyper-V. Microsoft System Center Data Protection Manager (SCDPM) 2007 SP1 suffered from the same lag, shipping months after Hyper-V reached RTM. SCDPM has nevertheless become the preferred backup method for most of our Hyper-V hosts, but the product still has stability and efficiency issues, due in part to its large disk space requirements.

Unfortunately, this backup issue is not unique to Hyper-V. In the "Virtualisation Decisions 2009 Purchasing Intentions Survey," respondents for both VMware and Hyper-V expressed frustration over their inability to back up virtual workloads effectively.

My advice: See whether your current backup product supports your hypervisor of choice. If not, adopt another product or see whether you can script a backup that moves data to tape.
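Where no supported product exists, even a simple staging script can bridge the gap until a proper host-based backup arrives. The sketch below is illustrative only: it assumes the VMs' VHD files have already been quiesced (for example, via a VSS snapshot) and copied to a staging directory, and it simply bundles them into a single archive that a conventional tape job can sweep up. The directory and file names are hypothetical.

```python
import tarfile
from pathlib import Path

def archive_vhds(vhd_dir, archive_path):
    """Bundle every VHD under vhd_dir into one tar archive so a
    conventional tape-backup job can pick it up from a staging area.
    Assumes the VHDs are already quiesced copies, not live disks."""
    vhds = sorted(Path(vhd_dir).glob("*.vhd"))
    with tarfile.open(archive_path, "w") as tar:
        for vhd in vhds:
            # Store by file name only so the restore path is predictable.
            tar.add(vhd, arcname=vhd.name)
    return [v.name for v in vhds]
```

A real script would also verify the archive and rotate old staging copies, but the core move-to-tape step is no more complicated than this.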

Disk storage: The performance-vs.-disk-space dilemma
Another shortcoming of Hyper-V is the amount of disk space necessary to deploy a production-level VM. Again, this is not limited to Hyper-V, but in my experience it is an area that needs serious work.

In an earlier article on fixed vs. dynamic disks, I explain why you need fixed disks for production workloads to maximise performance. In short, using fixed disks avoids the performance-expansion tax of dynamically expanding disks, but at the cost of ballooned physical disk space. This translates not only into more host-based disk space utilisation but also into excessive tape space, not to mention the hours required to back up VMs from the host level. It's comparable to provisioning a 72 GB virtual hard disk (VHD) for a VM and having to back up the entire 72 GB, instead of just the files on the VHD.

Currently, I back up 16 TB of virtual hard disks, yet only about two-thirds of that space actually holds data; the rest is overhead from the free space inside each VM's VHD. On top of this, because I use Hyper-V clustering with one VM per logical unit number (LUN), I have to overprovision the size of each LUN to account for the VM's memory footprint and snapshots. The point is that, at this juncture in the technology, there is significant disk storage bloat in the virtual server world.
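To see how quickly this overhead compounds, consider a rough sizing calculation for a one-VM-per-LUN cluster. The figures and the 20% snapshot reserve below are illustrative assumptions, not numbers from any particular deployment.

```python
def lun_size_gb(vhd_gb, vm_ram_gb, snapshot_factor=0.2):
    """Rough LUN size for one-VM-per-LUN Hyper-V clustering:
    the fixed VHD itself, the VM's memory footprint (saved-state
    file), plus a snapshot reserve as a fraction of the VHD.
    The 20% snapshot reserve is an assumed rule of thumb."""
    return vhd_gb + vm_ram_gb + vhd_gb * snapshot_factor

# A 72 GB fixed VHD with 4 GB of RAM needs roughly a 90 GB LUN.
print(round(lun_size_gb(72, 4), 1))  # 90.4
```

Multiply that extra ~25% across dozens of LUNs and the bloat described above becomes terabytes of provisioned-but-empty storage.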

My advice: Some vendors (e.g., Virsto) are working on fixing this problem, but for the time being, closely scrutinise the amount of disk space requested for your virtual servers.

VM mobility licensing restrictions
If you have a volume licensing agreement and run Windows Server 2008 Datacenter Edition, this shortcoming isn't an issue. For everyone else, under the current licensing rules and no matter which hypervisor you use, a Windows Server operating system license can be reassigned to another host only once every 90 days. If you move VMs more frequently, you have to buy additional Windows Server operating system licenses. This licensing structure conflicts with the great features of Hyper-V, especially Hyper-V Server R2, which is free and includes live migration. Having a free product that can move VMs seamlessly from one host to another is great, but having to double, triple or quadruple your Windows Server OS licenses to achieve this nirvana is not a good deal.
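A back-of-the-envelope comparison shows why Datacenter's unlimited virtualisation rights matter here. The sketch below assumes, as a worst case, that every VM may live-migrate to every host in the cluster and therefore needs a license bound to each of those hosts; the prices are hypothetical placeholders, not Microsoft list prices.

```python
def cheaper_option(hosts, sockets_per_host, vms_per_host,
                   std_price, dc_price_per_socket):
    """Compare per-VM licensing (worst case: each VM licensed on
    every host it can migrate to) against Datacenter's unlimited
    virtualisation rights, priced per physical socket.
    Assumption: all VMs are free to migrate to all hosts."""
    total_vms = vms_per_host * hosts
    # Per-VM edition: each VM needs a license on every possible host.
    std_total = std_price * total_vms * hosts
    # Datacenter: license every socket once; VM count is then unlimited.
    dc_total = dc_price_per_socket * sockets_per_host * hosts
    if dc_total <= std_total:
        return ("Datacenter", dc_total)
    return ("Standard", std_total)

# Hypothetical prices: $700 per VM license, $2,400 per Datacenter socket.
print(cheaper_option(4, 2, 10, 700, 2400))  # ('Datacenter', 19200)
```

Even with generous placeholder pricing, a mobile cluster of any size tips quickly toward Datacenter, which is exactly the situation described below.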

In my environment, I move VMs often. Luckily, I have Windows Server 2008 Datacenter Edition on a volume license agreement, which carries unlimited virtualisation rights. If I did not, however, I would be forced to choose between the stability of my environment and compliance with that agreement.

From a technical perspective, keeping my environment stable is the top priority. While I'm in the clear, Microsoft's licensing rules have forced other organisations either to purchase unnecessary licenses or to breach their existing license agreements.

My advice: Keep the pressure on your Microsoft contacts. Let's hope for a more reasonable solution from Microsoft!

Memory utilisation and memory overcommit
The Live Migration feature may have helped Hyper-V make up ground on its competitors, but one remaining hole in the product is the inability to overcommit the physical memory in the virtual host server. Memory overcommit is the ability to assign more RAM to your VMs collectively than is physically installed on the host server. VMware offers this attractive technology; Hyper-V lacks it.

In my environment, the amount of physical host memory is the main obstacle that prevents higher consolidation ratios per host. The ability to dynamically "pool" memory for all VMs on a host would help drive up these ratios (along with ROI figures that CIOs love). Because of this shortcoming, however, I am forced to be conservative with RAM allocation across workloads to achieve the highest consolidation ratios possible.

Without the ability for memory overcommit, consistent monitoring and manual memory resource adjustments are necessary to ensure the best performance out of my VMs.
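Without overcommit, the planning arithmetic is simple but unforgiving: the sum of every VM's full RAM allocation, plus the parent partition's reserve, must fit inside physical memory. A minimal sketch of that check follows; the 2 GB host reserve is an assumed figure, not a Hyper-V-documented value.

```python
def fits_on_host(vm_ram_gb, physical_gb, host_reserve_gb=2):
    """Without memory overcommit, every VM's full allocation must
    fit in physical RAM alongside the parent partition's reserve.
    Returns (fits, total_gb_allocated_to_vms)."""
    allocated = sum(vm_ram_gb)
    return allocated + host_reserve_gb <= physical_gb, allocated

# Ten VMs at 4 GB each on a 32 GB host: 40 GB allocated, does not fit.
ok, total = fits_on_host([4] * 10, 32)
print(ok, total)  # False 40
```

With overcommit, the same ten VMs could share the host as long as their combined *active* memory stayed under 32 GB, which is precisely the consolidation headroom Hyper-V gives up.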

My advice: As with disk storage, scrutinise the amount of RAM allocated to your VMs (and hope that Hyper-V R3 will have a memory overcommit feature). If this is your most critical need, look to other hypervisors.

In my opinion, there are other, less significant quirks in Hyper-V, but these four have been my biggest pain points. To be fair, Hyper-V and other hypervisor vendors have made amazing strides in a relatively short period of time. Regardless, hopefully these issues will be addressed soon.

What are some server virtualisation shortcomings that you see? Add to the conversation by emailing them to me, and I will post them.

About the expert: Rob McShinsky is a senior systems engineer at Dartmouth Hitchcock Medical Center in Lebanon, N.H., and has more than 12 years of experience in the industry, including a focus on server virtualisation since 2004. He has been closely involved with Microsoft as an early adopter of Hyper-V and System Center Virtual Machine Manager 2008, as well as a customer reference. In addition, he blogs, writing tips and documenting his experiences with various virtualisation products.

This was first published in November 2009
