
Handling Virtualization's New Challenges

Part two of a two-part series addressing some of the challenges and benefits of switching to a virtualized computing platform.

In my previous column, I began a two-part series describing some special challenges that arise from the adoption of virtualization. This series follows earlier columns that described in detail the important energy-saving benefits of virtualization. The savings, which come primarily from consolidating multiple dedicated servers onto a single, more capable platform, are compelling: so compelling that sites seeking to cut their energy usage, or simply the costs of their data centers, should be evaluating virtualization carefully. I hasten to add that, in addition to reductions in cost and energy consumption, virtualization offers several other important advantages, which I have also discussed in previous columns.

Like all solutions, however, virtualization presents issues that require careful handling. In my last column, I discussed the difficulty of updating virtual machines (VMs) as well as the challenge of managing the proliferation of VMs. Today, I look at the problems that arise once you get past the earliest stages of evaluation.

Deployment
Deploying VMs requires thoughtful planning. Running several VMs on the same host creates ample opportunity for conflict among them, in the form of competition for resources on the host hardware. The contended resources are typically processor capacity, RAM, disk I/O, and network bandwidth.

Processor capacity needs can be difficult to project ahead of time. An application's needs are hard to estimate on a given CPU unless you've actually run the software on that machine. (How many cores does it need? What is the workload on each core? Is the workload constant, or does it occur in bursts?) As a result, determining sufficient processor capacity for a set of applications is necessarily done by trial and error. If the applications must keep running at a specific QoS, the right way to start is to run a pilot on the intended host and determine exact usage needs. Once those needs have been established, place only a minimum number of high-CPU-usage apps on any single platform, ideally no more than one or two key apps. The remaining apps on the platform should be low-usage, or only intermittently spiky in their CPU needs. If a limited number of apps run on a virtualization host, all of them will get some share of the CPU resources and none will be starved.
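The column doesn't prescribe a specific procedure for this, but the pilot-then-place approach can be sketched in a few lines of Python. The figures, thresholds, and application names below are invented for illustration: the idea is simply to classify apps as heavy or light CPU consumers from pilot measurements, then keep the heavy ones to one or two per host.

# Hypothetical pilot data: per-app CPU utilization samples (percent of one core),
# gathered while running each application on the intended host hardware.
pilot_samples = {
    "web_frontend":   [35, 40, 38, 90, 42],   # bursty
    "batch_reports":  [85, 88, 90, 87, 86],   # sustained heavy load
    "license_server": [2, 3, 2, 4, 3],        # light
    "db_replica":     [70, 75, 72, 78, 74],   # sustained heavy load
}

HEAVY_THRESHOLD = 60    # average utilization (%) above which an app counts as "heavy"
MAX_HEAVY_PER_HOST = 2  # no more than one or two key apps per host, per the guidance above

def average(samples):
    return sum(samples) / len(samples)

# Split the apps into heavy and light CPU consumers based on the pilot averages.
heavy = [app for app, s in pilot_samples.items() if average(s) >= HEAVY_THRESHOLD]
light = [app for app, s in pilot_samples.items() if average(s) < HEAVY_THRESHOLD]

# Greedy placement: each host takes at most MAX_HEAVY_PER_HOST heavy apps,
# and the light or bursty apps are spread across the resulting hosts.
hosts = [heavy[i:i + MAX_HEAVY_PER_HOST] for i in range(0, len(heavy), MAX_HEAVY_PER_HOST)] or [[]]
for i, app in enumerate(light):
    hosts[i % len(hosts)].append(app)

for n, apps in enumerate(hosts, start=1):
    print(f"host {n}: {', '.join(apps)}")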

Note that this non-starvation property does not hold for systems loaded with numerous small applications. Much of today's virtualization infrastructure is not tuned (or even designed) with the expectation that 30 or more apps will run on a single host. As a result, situations do occur in which some of the many live VMs are gradually starved by the virtualization layer. This factor is an important consideration in capacity planning for desktop virtualization, in which client PCs are placed in VMs while the user's peripherals (monitor, keyboard, mouse, etc.) remain on the desk.

Memory is another limitation that I have discussed in the past. Most applications' stated memory requirements greatly overestimate their actual needs, and the apps can run in far less RAM than their specifications indicate. Trial and error will tell you whether you have gotten the allocation right. All of a VM's allowed RAM is allocated at startup, and there is no concept of RAM sharing among VMs, so RAM is definitely a hard limit. As a result, you want the most conservative RAM complement on your VM that still enables full-speed operation.
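Because a VM's full allocation is claimed at startup, the fit check is plain arithmetic. The following sketch uses invented sizes and an assumed hypervisor reserve (a figure you would confirm for your own platform); it is the back-of-the-envelope test that the trial-and-error process converges on.

# Hypothetical per-VM RAM allocations (GB), trimmed from the vendors' stated
# requirements after pilot runs showed the apps still run at full speed.
vm_allocations_gb = {
    "web_frontend":   2,
    "batch_reports":  4,
    "license_server": 1,
    "db_replica":     8,
}

HOST_RAM_GB = 16            # physical RAM on the host
HYPERVISOR_RESERVE_GB = 2   # assumed overhead for the hypervisor itself; verify for your platform

committed = sum(vm_allocations_gb.values())
available = HOST_RAM_GB - HYPERVISOR_RESERVE_GB

print(f"committed: {committed} GB, available: {available} GB")
if committed > available:
    print("Over-committed: RAM is a hard limit, so trim allocations or move a VM elsewhere.")
else:
    print(f"headroom: {available - committed} GB")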

Disk I/O is a limitation only if disk storage is local. If it is, RAID is mandatory so that the I/O load is distributed across several spindles. Today, the common way of handling disk I/O efficiently is to use a disk farm and read in the data over massive network pipes. Network contention between apps should not be allowed to occur on a virtualization host: network adapters today deliver very high capacity at comparatively favorable prices, and likewise for Fibre Channel. If you need more capacity, add it promptly.
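The same quick check applies to the network side. In the hypothetical sketch below (invented figures and app names), per-VM peak demand observed in the pilot is totaled against the adapters already installed, which tells you when it is time to add another one.

import math

# Hypothetical peak network demand per VM, observed during the pilot (Mbit/s).
vm_network_mbps = {
    "web_frontend":   400,
    "batch_reports":  150,
    "license_server": 10,
    "db_replica":     900,
}

NIC_CAPACITY_MBPS = 1000   # one gigabit adapter
NICS_INSTALLED = 1

demand = sum(vm_network_mbps.values())
capacity = NIC_CAPACITY_MBPS * NICS_INSTALLED

print(f"peak demand: {demand} Mbit/s, installed capacity: {capacity} Mbit/s")
if demand > capacity:
    # Adapters are cheap relative to the cost of contention, so add capacity promptly.
    needed = math.ceil(demand / NIC_CAPACITY_MBPS) - NICS_INSTALLED
    print(f"add {needed} more adapter(s)")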

Products to help with workload estimation and planning are now coming to market, and they should be considered, especially as you move past the pilot stage and into serious implementation. Recon from PlateSpin and Capacity Planner from VMware are both good options, although I would start with the PlateSpin offering, as it supports all the major hypervisors and an evaluation copy can be downloaded at no cost.

Administering the Hosts
To get a handle on usage levels and to locate trouble spots, it pays to install administrative software tailored to virtualization. It lets you see at a glance what is happening and which host systems need help. The market for such admin packages is well served by numerous vendors, including PlateSpin, VMware, and (shortly) Hyper9. These packages not only track usage of key resources on VM hosts, but also let you perform basic administration of VMs (inventory, loading, starting, stopping, and so on). While some of these activities can be performed manually, the more you can automate them, the better your experience is likely to be.
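As a free, minimal illustration of the basic VM administration these packages automate (inventory, starting, stopping), here is a sketch using the open-source libvirt Python bindings rather than any of the commercial products named above. It assumes a local KVM/QEMU host reachable at qemu:///system and a hypothetical VM named "web_frontend".

import libvirt  # open-source virtualization API; not one of the commercial packages above

# Connect to the local hypervisor (KVM/QEMU in this example).
conn = libvirt.open("qemu:///system")

# Inventory: list every defined domain and whether it is running.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name():20s} {state}")

# Basic lifecycle operations on the hypothetical "web_frontend" VM.
dom = conn.lookupByName("web_frontend")
if not dom.isActive():
    dom.create()      # start the VM
else:
    dom.shutdown()    # ask the guest OS to shut down cleanly

conn.close()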

Between careful planning and attentive monitoring, you can ensure a successful adoption of virtualization in your data center.

Next Month
For the last several columns, I've discussed what you need to know to set out on the virtualization path. You now have what you need to get up and running. So, next month, I'll return to examining power usage in key IT components.
