Tip

Selecting CPU, processors and memory for virtualised environments

In the first part of this series on hardware selection for virtualised environments, we discussed the choice between blade and rackmount servers.

Now, this tip delves further into the hardware selection process and covers hardware purchasing considerations for CPUs, processors and memory.

Selecting a CPU for a virtualisation deployment
When you purchase a CPU, the first decision is which brand: AMD or Intel? Over the years, many performance studies have compared the two, and with constant changes in processor architecture, AMD is sometimes ahead of Intel and sometimes the reverse. Both Intel and AMD have integrated virtualisation extensions, Intel Virtualization Technology (Intel VT) and AMD Virtualization (AMD-V) respectively, into their latest processors in an attempt to speed up instruction execution in virtual servers.

The major difference between Intel and AMD processors is physical architecture. Intel uses a front-side bus model to connect processors to the memory controller, while AMD gives each processor an integrated memory controller, with the processors interconnected through HyperTransport links. Power consumption also varies between processor families.

In terms of performance, when you compare processors from each vendor with similar speed, features and number of cores, Intel and AMD tend to be equal. Some performance studies show that Intel processors have an edge in performance, and others show the inverse. Both Intel and AMD processors work well in VMware ESX hosts, so it's a matter of brand preference when it comes to choosing one. Because Intel and AMD continually release new processor families, you should check which currently has the latest technology as you make a decision between the two.

So which CPU should you choose? In general, it's a good idea to choose a brand and stick with it, especially if the majority of your current servers already use a particular brand. The reason is that you can't move running virtual machines (VMs) between hosts that use different processor brands (though see "AMD demos live migration between Intel and AMD processors"). For example, a VM started on a host with an Intel processor typically crashes if it's moved while running to a host with an AMD processor. If you do decide to use both brands, isolate hosts with the same brand of processor into separate clusters for compatibility purposes.
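As a simple illustration of this isolation principle, the sketch below groups a hypothetical host inventory by CPU vendor string so that Intel and AMD hosts end up in separate clusters. The host names and the dictionary-based inventory are assumptions made purely for the example:

```python
from collections import defaultdict

# Hypothetical inventory: host name -> CPU vendor string.
# "GenuineIntel" and "AuthenticAMD" are the standard CPUID vendor strings;
# the host names themselves are made up for this example.
hosts = {
    "esx01": "GenuineIntel",
    "esx02": "GenuineIntel",
    "esx03": "AuthenticAMD",
    "esx04": "AuthenticAMD",
}

# Group hosts by vendor so each cluster contains only one processor brand.
clusters = defaultdict(list)
for host, vendor in hosts.items():
    clusters[vendor].append(host)

for vendor, members in sorted(clusters.items()):
    print(f"{vendor} cluster: {', '.join(sorted(members))}")
```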

Selecting a processor: Virtualisation extensions
When purchasing a processor, choose a model that's optimised for virtualisation, such as those with AMD-V or Intel VT extensions. To grasp why these extensions are important, you need to understand how rings work on a CPU.

X86 operating systems use protection rings that provide levels of privilege at which code can execute. These rings are arranged in a hierarchy, from the most privileged (Ring 0) to the least privileged (Ring 3), and are enforced by the CPU, which places restrictions on processes. On nonvirtualised servers, the OS resides in Ring 0 and owns the server hardware, while applications run in Ring 3. On virtualised systems, the hypervisor or virtual machine monitor (VMM) needs to run in Ring 0, so a guest OS is forced into Ring 1 instead. Because most OSes are designed to run in Ring 0, the VMM fools a guest OS into thinking that it's running there by trapping privileged instructions and emulating Ring 0 to the guest VM.

Unfortunately, this trapping and emulation reduces performance, which prompted Intel and AMD to develop Intel VT and AMD-V to solve the problem. Both sets of extensions are integrated into the CPU so that the VMM can instead run in a new ring called Ring -1 (minus 1), which allows guest OSes to run natively in Ring 0. The result is better performance: the VMM no longer needs to fool a guest OS into thinking that it's running in Ring 0, because the guest can operate there without conflicting with the VMM, which has moved to the new Ring -1 level. To get the best performance from your virtual hosts, choose a CPU that includes these virtualisation extensions.
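On a Linux system you can check whether a CPU reports these extensions by looking for the "vmx" (Intel VT) or "svm" (AMD-V) flags in /proc/cpuinfo. The sketch below is a minimal example that assumes a Linux host; note that a CPU can report the flag while the extension is still disabled in the BIOS:

```python
# Minimal check for hardware virtualisation extensions on a Linux host.
# "vmx" indicates Intel VT; "svm" indicates AMD-V.
def virtualisation_extensions(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    found = []
    if "vmx" in flags:
        found.append("Intel VT (vmx)")
    if "svm" in flags:
        found.append("AMD-V (svm)")
    return found

if __name__ == "__main__":
    extensions = virtualisation_extensions()
    print(", ".join(extensions) if extensions else "No virtualisation extensions reported")
```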

Also, stay tuned for processor releases from AMD and Intel that support nested page tables, which AMD calls Rapid Virtualization Indexing (RVI) and Intel calls Extended Page Tables (EPT). This CPU technology helps reduce the performance overhead of virtualising large applications such as databases.

Selecting multicore CPUs
Another critical choice is the number of physical CPUs (sockets) and the number of cores per CPU. A multicore CPU combines multiple independent cores on a single physical CPU, essentially turning one physical CPU into several. For example, a server with two quad-core CPUs has eight processor cores available for use. Depending on the CPU brand and model, these cores sometimes share a single cache or have separate Level 2 caches for each core. Most virtualisation software vendors sell licenses by the socket rather than by the number of cores a socket has, so multicore processors are fantastic for virtualisation. For new servers, multicore CPUs are now virtually standard.
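To see why per-socket licensing makes multicore CPUs attractive, a quick back-of-the-envelope calculation like the one below helps. The license price is a made-up figure used only for illustration:

```python
# Rough per-socket licensing arithmetic for a two-socket, quad-core server.
# The license cost is a hypothetical figure, not a real vendor price.
sockets = 2
cores_per_socket = 4            # quad-core CPUs
license_cost_per_socket = 1000  # hypothetical per-socket license price

total_cores = sockets * cores_per_socket
total_cost = sockets * license_cost_per_socket

print(f"{total_cores} cores covered by {sockets} socket licenses")
print(f"Effective cost per core: {total_cost / total_cores:.0f}")
```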

You also have to decide between dual- and quad-core CPUs. You may be tempted to choose quad-core over dual-core on the assumption that more cores are always better, but the two feature crucial differences. Adding cores does not scale performance the way increasing clock speed does: a 3.2 GHz CPU is roughly twice as fast as a 1.6 GHz CPU, but a quad-core CPU is not four times faster than a single core. As a rough rule of thumb, a dual-core CPU is about 50% faster than a single-core CPU (not 100%, as you might expect), and a quad-core CPU is only about 25% faster than a dual-core CPU. In addition, dual-core CPUs typically have higher clock speeds than quad-core CPUs, because quad-core CPUs generate more heat and, as a result, cannot be clocked as high as single- and dual-core CPUs.
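Putting those rule-of-thumb figures into numbers shows why core count alone is misleading. These are the rough estimates quoted above, not benchmark results:

```python
# Worked numbers for the rough scaling estimates quoted above.
single = 1.0
dual = single * 1.5   # dual-core: roughly 50% faster than a single core
quad = dual * 1.25    # quad-core: roughly 25% faster than a dual-core

print(f"Relative throughput -- single: {single:.2f}, dual: {dual:.2f}, quad: {quad:.2f}")
# By these estimates a quad-core delivers under 2x a single core,
# not the 4x that the core count alone might suggest.
```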

In general, quad-core CPUs are recommended for virtual hosts for two reasons. The first is that most virtualisation software is licensed by the number of sockets in a server, not the number of cores, so each licensed socket gives you more processor cores to work with. The second is that having more cores in a host gives the hypervisor's CPU scheduler more flexibility when scheduling the CPU requests made by VMs. More available cores make the scheduler's job easier and improve the performance of the VMs on a host.

In some situations, however, dual-core CPUs are preferable to quad-core, for example if you don't plan on running more than six to eight VMs per host. The higher clock speeds of dual-core CPUs benefit the VMs running on the host. In addition, if you plan on assigning VMs a single virtual processor, dual-core processors can be a better option, because single-vCPU VMs are easier for the hypervisor to schedule than multiple-vCPU VMs.
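One rough way to weigh the choice is to compare the vCPU-to-core ratio each option gives you for the number of VMs you plan to run. The sketch below assumes a two-socket server and eight single-vCPU VMs, figures chosen purely for illustration:

```python
# Compare vCPU-to-core ratios for dual-core and quad-core two-socket hosts.
# The planned VM count and vCPU sizing are assumptions for illustration.
sockets = 2
planned_vms = 8
vcpus_per_vm = 1

for label, cores_per_socket in (("dual-core", 2), ("quad-core", 4)):
    cores = sockets * cores_per_socket
    ratio = (planned_vms * vcpus_per_vm) / cores
    print(f"{label}: {cores} cores, {ratio:.1f} vCPUs per core")
```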

Choosing memory for virtualisation
Don't skimp on memory, because it's often the first hardware resource on a host to be used up. Running out of memory while plenty of other resources (CPU, disk, network) remain available will limit the number of VMs you can put on a host. While memory overcommitment is available in some virtualisation software, exhausting all of a host's physical memory isn't recommended, because VM performance will suffer.
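A simple sizing sketch like the one below can help avoid skimping. The VM sizes, hypervisor overhead and headroom figures are assumptions for illustration, not vendor recommendations:

```python
# Estimate the physical memory a host needs without relying on overcommitment.
vm_memory_gb = [4, 4, 2, 2, 8, 4]  # planned allocation per VM (example values)
hypervisor_overhead_gb = 2         # rough allowance for the hypervisor itself
headroom_fraction = 0.10           # spare capacity for spikes and future growth

required_gb = (sum(vm_memory_gb) + hypervisor_overhead_gb) * (1 + headroom_fraction)
print(f"Plan for at least {required_gb:.0f} GB of physical memory")
```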

The memory type you buy is dictated by what the server supports, so check the server specifications or the vendor's online purchasing guides to see what's available. Also check how many memory slots the server has and whether memory must be installed in pairs.

Because servers accept dual in-line memory modules (DIMMs) of different sizes (512 MB, 1 GB, 2 GB, etc.), choose a DIMM size that matches the amount of memory the server needs. Larger DIMMs (4 GB or 8 GB) are more expensive than smaller ones, but they use fewer memory slots and leave more room for future expansion. Once you choose a DIMM size, stick with it: mixing DIMM sizes in a server can reduce performance. For best results, use the largest DIMM size the server supports.
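The trade-off between DIMM size, cost and free slots is easy to sketch out. The target capacity, slot count and DIMM sizes below are assumptions chosen only to illustrate the comparison:

```python
import math

# Compare how many slots each DIMM size consumes for a target memory capacity.
target_gb = 32
total_slots = 8

for dimm_gb in (2, 4, 8):
    dimms_needed = math.ceil(target_gb / dimm_gb)
    if dimms_needed <= total_slots:
        free_slots = total_slots - dimms_needed
        print(f"{dimm_gb} GB DIMMs: {dimms_needed} modules, {free_slots} slots free for expansion")
    else:
        print(f"{dimm_gb} GB DIMMs: will not fit in {total_slots} slots")
```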

In addition to sizes, there are also many different memory types (e.g., PC2100, PC5300), which are classified by the module's peak data transfer rate. Originally, the number after "PC" referred to the memory bus clock rate (e.g., PC133 ran at 133 MHz). The label was later changed to the peak data transfer rate in MB/s, so memory classified as PC5300 has a peak transfer rate of about 5,300 MB/s. Most servers can use several different memory types, so choose the fastest memory available if you can afford it.
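The relationship between a module's data rate and its "PC" rating is simple arithmetic: multiply the transfer rate by the 8-byte (64-bit) width of the memory bus. The sketch below uses DDR2-667 (sold as PC2-5300) as an example:

```python
# Peak transfer rate = transfers per second x bus width in bytes.
transfers_per_second_millions = 667  # DDR2-667 performs roughly 667 million transfers/s
bus_width_bytes = 8                  # standard 64-bit memory bus

peak_mb_per_s = transfers_per_second_millions * bus_width_bytes
print(f"Peak transfer rate: about {peak_mb_per_s} MB/s, marketed as PC2-5300")
```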

The final memory-related decision that you'll have to make is between single, dual or quad-rank DIMMs. A memory rank is defined as a block of 64 bits, or 72 bits for error-correcting code (ECC) memory, created by using the DRAM chips on a DIMM. For example, single-rank DIMMs combine all chips into a single block, while dual-rank DIMMs split the chips into two blocks. Dual-rank DIMMs improve memory density by placing the components of two single-rank DIMMs in the space of one module, typically making them cheaper than single-rank DIMMs.

Unfortunately, in some instances a server chipset can support only a certain number of ranks. If a memory bus on a server has four DIMM slots, for example, the chipset may be capable of supporting only two dual-rank DIMMs or four single-rank DIMMs. If two dual-rank DIMMs are installed, then the last two slots should not be populated. If the total number of ranks in the populated DIMM slots exceeds the maximum number of loads the chipset can support, the server may not operate reliably.
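Before ordering, it's worth tallying the ranks in your planned configuration against the chipset's limit. The limit and DIMM list below are assumptions for illustration; the real values come from your server's documentation:

```python
# Check a planned DIMM population against the chipset's per-bus rank limit.
max_ranks_per_bus = 4        # assumed chipset limit (check the server manual)
planned_dimm_ranks = [2, 2]  # example: two dual-rank DIMMs

total_ranks = sum(planned_dimm_ranks)
if total_ranks <= max_ranks_per_bus:
    print(f"OK: {total_ranks} ranks, within the {max_ranks_per_bus}-rank limit")
else:
    print(f"Too many ranks ({total_ranks}); the server may not operate reliably")
```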

So which DIMM type should you choose? Single-rank DIMMs allow a server to reach its maximum memory capacity and highest performance levels, but they cost more because of higher density. Dual-rank DIMMs are cheaper but can limit overall system capacity and restrict future upgrade options. If you can afford the more expensive single-rank DIMMs, opt for them. If you can't, dual-rank DIMMs work equally well. On some servers, you can mix single and dual-rank DIMMs if they're not in the same bank (though this method isn't recommended). And for best results, try to stick with the same rank type in all slots.

Finally, the market features several memory manufacturers, and it's best not to mix brands within a server. Buy OEM memory if that's what is present in a server or replace it all with memory from another vendor. Memory configurations and selections can be complicated, so always consult with the server hardware vendor to make sure you've made the right choices for your server.

ABOUT THE AUTHOR: Eric Siebert is a 25-year IT veteran who specialises in Windows and VMware system administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site. Siebert is also a regular on VMware's weekly VMTN Roundtable podcast.

This was first published in July 2009
