Tip

How to select Hyper-V high-availability hardware

Creating a Hyper-V high-availability environment involves a lot of effort, and choosing the right high-availability hardware isn't always obvious.

Before delving into more complex tasks, you need to lay the groundwork for the most basic form of Hyper-V high availability: the Windows Failover Cluster. Here's a look at the hardware you'll need.

High-availability hardware for a Windows Failover Cluster
The Windows Failover Clustering service has been around for more than a decade, and today's incarnation uses a suite of wizards and pre-deployment verification tests that eliminate many of its previous problems.

Before creating a Windows Failover Cluster, your environment must pass more than 30 individual validation tests. These tests run automatically against each cluster candidate and its storage devices to verify that the storage, networking, hardware and software configurations are correct.

Passing these high-availability hardware tests requires the integration of server, networking and storage elements. To fulfil the server hardware criteria, you need a minimum of two Hyper-V hosts to create a cluster that supports failover. Adding more hosts, though, increases your capacity for running simultaneous virtual workloads.
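
If you would rather script that validation run than click through the wizard, the same pre-deployment tests can be kicked off with the Test-Cluster cmdlet from the FailoverClusters PowerShell module. The sketch below drives it from Python purely for illustration; the host names are placeholders, and it assumes the module is available on the node where it runs.

# Illustrative only: run Windows Failover Cluster validation from a script.
# Assumes the FailoverClusters PowerShell module is installed on this node;
# "hv-node1" and "hv-node2" are placeholder Hyper-V host names.
import subprocess

nodes = ["hv-node1", "hv-node2"]

# Test-Cluster runs the same pre-deployment validation tests the wizard uses
# and writes an HTML report covering storage, network, hardware and software.
command = "Import-Module FailoverClusters; Test-Cluster -Node " + ",".join(nodes)
result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", command],
    capture_output=True,
    text=True,
)

print(result.stdout)
# A non-zero exit code means the run itself failed; read the generated
# validation report either way before building the cluster.
if result.returncode != 0:
    print("Validation run failed; review the report before continuing.")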

When budgeting for Hyper-V high-availability hardware, note the following critical items:

Compatible high-availability hardware
If you've spent time researching Hyper-V, you probably know that its services require processors with onboard virtualisation extensions, hardware-enforced Data Execution Prevention and support for 64-bit OSes. If you overlook any of these requirements, your virtual machines (VMs) won't power on.

Second-Level Address Translation support
Installing compatible hardware is only the start. Second-Level Address Translation (SLAT) hardware support is often overlooked but is equally important for scaling cluster usage. Even though SLAT isn't required to power on VMs, its extensions are a kind of "second generation" virtualisation extension present in most modern server hardware. (Look for AMD's Rapid Virtualisation Indexing extensions on AMD hardware or Intel's Extended Page Tables extensions on Intel hardware.)

SLAT's added processor-level instruction sets not only improve the performance of all workloads but also provide low-level system optimisations that dramatically assist virtual workloads with a large number of context switches.
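
One quick way to confirm what a candidate host reports, at least on newer Windows builds, is the Hyper-V Requirements section of the systeminfo command, which lists virtualisation extensions, Data Execution Prevention and SLAT support. The Python sketch below simply scans that output; the exact label text is an assumption and may vary by OS version, and on older hosts a tool such as Sysinternals Coreinfo reports the same capabilities.

# Illustrative only: flag Hyper-V hardware requirements reported as "No".
# Assumes the "Hyper-V Requirements" section appears in systeminfo output
# (newer Windows builds); the label text may vary by version and locale.
import subprocess

output = subprocess.run(["systeminfo"], capture_output=True, text=True).stdout

in_requirements = False
for line in output.splitlines():
    if "Hyper-V Requirements" in line:
        in_requirements = True
    if in_requirements and ":" in line:
        name, _, value = line.rpartition(":")
        if value.strip().lower() == "no":
            # Missing virtualisation extensions or DEP means VMs won't power
            # on; missing SLAT means the host will scale poorly under load.
            print("Check this requirement:", name.strip())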

Lots of fast shared storage
Highly available VMs in a Hyper-V environment do not reside on the host's local storage. For those VMs to use Live Migration, their Virtual Hard Disk files must exist on an independent, shared storage area network (SAN).

When choosing a SAN, it's important to account for Hyper-V's heavy reliance on raw disk speed for overall VM performance. If you don't spend good money on high-speed SAN disks, it won't matter how fast your Hyper-V hosts operate, because they will be bottlenecked by storage speed. My advice is to get the fastest, and largest, SAN you can afford.
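
To put some rough numbers behind that advice, it helps to estimate the aggregate I/O your VMs will generate before pricing SAN disks. Every figure in the sketch below is an illustrative assumption, not a measurement; substitute values from your own monitoring.

# Back-of-the-envelope SAN sizing. All figures are illustrative assumptions.
vm_count = 40
iops_per_vm = 75          # assumed average I/O operations per second per VM
peak_multiplier = 2.0     # headroom for boot storms and backup windows
write_fraction = 0.3      # assumed share of I/O that is writes
raid_write_penalty = 4    # e.g. RAID 5 turns one write into four disk I/Os
disk_iops = 180           # assumed per-spindle rating for 15K drives

frontend_iops = vm_count * iops_per_vm * peak_multiplier
backend_iops = frontend_iops * (
    (1 - write_fraction) + write_fraction * raid_write_penalty
)
spindles = -(-backend_iops // disk_iops)  # ceiling division

print(f"Front-end IOPS required: {frontend_iops:.0f}")
print(f"Back-end IOPS after the RAID write penalty: {backend_iops:.0f}")
print(f"Minimum spindles at {disk_iops} IOPS each: {spindles:.0f}")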

Plenty of networking
In nonvirtualised environments, each server often has only a single network connection. Smart organisations team these connections to ensure redundancy in the event of a connection failure.

In virtual environments, however, it is not unusual to see at least four network connections exiting a server. In environments built for scalability, Hyper-V hosts with six or eight or more network cards are common.
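
As a rough illustration of where those connection counts come from, tally the traffic roles a clustered Hyper-V host typically carries and double anything you team for redundancy. The role list in the sketch below is an assumption for the sake of the example, not a prescription.

# Illustrative NIC budget for a clustered Hyper-V host. The roles and the
# teaming decision are assumptions; adjust them to match your own design.
roles = {
    "host management": 1,
    "virtual machine traffic": 1,
    "live migration": 1,
    "cluster heartbeat / CSV": 1,
    "iSCSI storage": 1,
}
team_every_role = True  # team each role's connection for redundancy

connections = sum(roles.values()) * (2 if team_every_role else 1)
print(f"Network connections required: {connections}")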

Chapter two of my free e-book The Shortcut Guide to Architecting iSCSI Storage for Microsoft Hyper-V talks about this larger number of Hyper-V network interfaces, and it also explains why these additional network cards are necessary. If you're considering a fully redundant Hyper-V infrastructure, check out the chapter (and the rest of the e-book) for more details.

Many powerful hosts
Finally, you need to analyse how to distribute your available budget between buying powerful hosts and buying many hosts. Here's why.

Hyper-V in Windows Server 2008 R2 does not currently support the kind of memory sharing that enables more VMs to operate than the available physical RAM allows (although that may change with Dynamic Memory in Service Pack 1). This means that once you've consumed the available RAM, additional VMs will not power on.

While this limitation doesn't usually pose a problem in single-server environments, it presents a conundrum for multihost, clustered environments. For a VM to live-migrate to another host, the destination host must have enough available RAM to fulfil that VM's assigned RAM. So if you fill every host's RAM with running VMs, you'll never be able to fail over VMs in the event of a problem.

Until Microsoft fixes this problem, your clustered environment will always need to reserve a certain amount of unused RAM. That reserve should equal at least the amount of RAM required to fail over one host's VMs.

This reserve ensures that when a single host fails, its VMs can successfully fail over to a surviving Hyper-V host. This surplus RAM doesn't need to reside on a single host. In fact, you should spread the load around the cluster hosts to ensure the optimum use of resources.
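
A quick way to sanity-check that reserve is to total the RAM assigned to each host's VMs and confirm that the surviving hosts could absorb a failed host's load between them. The host sizes, VM assignments and parent-partition overhead in the sketch below are illustrative assumptions.

# Illustrative N+1 RAM check for a three-node Hyper-V cluster. All figures
# are assumptions; substitute your own inventory. Simplification: it assumes
# a failed host's VMs can be divided among the survivors, although each VM
# still needs a single host with enough free RAM for it.
host_ram_gb = {"hv1": 96, "hv2": 96, "hv3": 96}
assigned_vm_ram_gb = {"hv1": 50, "hv2": 55, "hv3": 60}
parent_overhead_gb = 4  # assumed RAM kept back for the parent partition

def can_absorb(failed_host):
    """RAM needed by the failed host's VMs vs. spare RAM on the survivors."""
    needed = assigned_vm_ram_gb[failed_host]
    spare = sum(
        host_ram_gb[h] - parent_overhead_gb - assigned_vm_ram_gb[h]
        for h in host_ram_gb
        if h != failed_host
    )
    return needed, spare

for host in host_ram_gb:
    needed, spare = can_absorb(host)
    verdict = "OK" if spare >= needed else "NOT ENOUGH RESERVE"
    print(f"If {host} fails: need {needed} GB, spare {spare} GB -> {verdict}")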

Now that you know what Hyper-V high-availability hardware you need, the next step is to put these pieces together. I'll talk about those steps in the second part of this three-part series on Hyper-V high availability.

Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specialises in Microsoft administration, systems management and monitoring, and virtualisation. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.


This was first published in April 2010
