Virtualization Servers: The New Green Platform for IT

In June and August of last year, I wrote a pair of columns in which I extolled the value of virtualization as a solution to excessive energy consumption. The primary benefit, as I described it, is that virtualization makes it possible to consolidate multiple applications onto a single server. That is, apps that currently run on dedicated systems can be moved en masse to a single server that consumes less power -- generally, far less power -- than that required by the dedicated servers.

This economy derives from two principal factors:

1) Modern servers are much more energy-efficient than their forebears. This is true in both absolute and relative terms; in relative terms, such as watts per MIPS, today's systems are orders of magnitude more efficient.

2) With the exception of database-intensive apps, software running on dedicated hardware typically uses only a small fraction of the available system resources. This is often true in three critical areas: processor, RAM, and network bandwidth. Hence, these applications make ideal candidates for consolidation, as they are unlikely to hog resources heavily enough to hurt the performance of other apps.
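Before consolidating, it's worth verifying that a candidate server really is that lightly loaded. Here is a minimal sketch of such a check, assuming Python and the psutil package are available on the candidate box; the sampling window, link capacity, and thresholds are illustrative assumptions, not prescriptions.

```python
# Quick-and-dirty utilization sampler for spotting consolidation candidates.
# Assumes Python with the psutil package installed; thresholds are illustrative.
import time
import psutil

SAMPLE_SECONDS = 60          # how long to observe the box
NIC_CAPACITY_MBPS = 1000.0   # assume a single GbE link

cpu_samples = []
net_start = psutil.net_io_counters()
start = time.time()

while time.time() - start < SAMPLE_SECONDS:
    cpu_samples.append(psutil.cpu_percent(interval=1))  # percent busy over 1s

net_end = psutil.net_io_counters()
elapsed = time.time() - start

avg_cpu = sum(cpu_samples) / len(cpu_samples)
ram_pct = psutil.virtual_memory().percent
net_mbps = (net_end.bytes_sent + net_end.bytes_recv
            - net_start.bytes_sent - net_start.bytes_recv) * 8 / 1e6 / elapsed

print(f"CPU {avg_cpu:.1f}%  RAM {ram_pct:.1f}%  NET {net_mbps:.1f} Mbps "
      f"({net_mbps / NIC_CAPACITY_MBPS:.1%} of link)")

# Rule of thumb (not a hard rule): a server idling below roughly 20% CPU,
# 50% RAM, and 10% of its link is usually a safe consolidation candidate.
```

Run over a representative period rather than one quiet minute, numbers like these give you a defensible basis for deciding which boxes to fold onto a shared host.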

Until recently, the servers on which this consolidation would be done were generic servers (generally Intel-based boxes with two or four multicore Intel Xeon processors). These servers have the advantages of being inexpensive, plug and play, and available from multiple vendors, and they offer favorable performance/power ratios. However, as I pointed out, these are generic servers -- the basic building blocks of enterprise data centers. They are not tailor-made for virtualization. And in that gap, a new market segment has emerged.

The Virtualization Server

Hardware vendors, notably Dell and Hewlett-Packard, have begun developing servers that are purpose-built for server consolidation via virtualization. These servers are distinguished by many-core processors and by optimized RAM and I/O expandability. In addition, they sport several features, which I'll discuss shortly, that aid in their specific mission.

I recently began analyzing Dell's R805 and R905 virtualization servers, which are very much representative of the category. These systems focus on the three areas most likely to affect consolidation performance: they have lots of processor cores, lots of RAM, and lots of network bandwidth.

For example, the R905 can hold up to four quad-core processors (in this case, AMD Barcelona chips). These processors provide 16 quasi-independent execution pathways, which in theory should be enough to run 16 tasks simultaneously. RAM can scale past 64GB. Given that most apps eligible for consolidation are 32-bit applications that can address a maximum of 4GB, 64GB is the theoretical maximum you'd need (and, in fact, substantial overkill) for 16 concurrent applications. But if you were to run more than 16 of them, the RAM headroom would be welcome. Finally, bandwidth needs are handled by a minimum of four Gigabit Ethernet (GbE) network adapters, with room for an additional four network connections. That's a lot of bandwidth. The R905 has little local disk storage; storage is intended to be provided via Fibre Channel cards or the GbE adapters, which talk to a spindle farm of some kind.
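To see how those three dimensions interact, here is a back-of-the-envelope sizing sketch. The host figures come from the paragraph above; the per-VM figures (one vCPU, 4GB of RAM, 100Mbps of steady traffic per app) are assumptions chosen for illustration, not Dell specifications or VMware guidance.

```python
# Back-of-envelope sizing for an R905-class host, using the figures cited above.
# The per-VM numbers are illustrative assumptions, not vendor specifications.
HOST_CORES = 16            # four quad-core sockets
HOST_RAM_GB = 64
HOST_NET_MBPS = 4 * 1000   # four GbE ports, before adding the optional four more

VM_VCPUS = 1               # assume one vCPU per consolidated 32-bit app
VM_RAM_GB = 4              # worst case: the full 32-bit address space
VM_NET_MBPS = 100          # assumed steady-state traffic per app

fit_by_cpu = HOST_CORES // VM_VCPUS        # 16
fit_by_ram = HOST_RAM_GB // VM_RAM_GB      # 16
fit_by_net = HOST_NET_MBPS // VM_NET_MBPS  # 40

print("Limits:", {"cpu": fit_by_cpu, "ram": fit_by_ram, "net": fit_by_net})
print("Fits:", min(fit_by_cpu, fit_by_ram, fit_by_net), "VMs")
```

Under these assumptions, cores and RAM both top out at 16 VMs while the network has room to spare -- which is exactly the balance the R905's configuration implies.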

The system is driven by dual power supplies that reach 90% efficiency -- a high conversion rate. Only a year ago, vendors were touting 80% efficiency in power supplies, so 90% is a distinct improvement.
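To put that gain in concrete terms, here is a quick bit of arithmetic. The 500-watt DC load is an assumed figure for illustration; it is not a measured draw for these systems.

```python
# What the jump from 80% to 90% PSU efficiency is worth at the wall.
# The 500W DC load is an assumed figure for illustration only.
DC_LOAD_WATTS = 500.0

wall_80 = DC_LOAD_WATTS / 0.80   # 625 W drawn at 80% efficiency
wall_90 = DC_LOAD_WATTS / 0.90   # ~556 W drawn at 90% efficiency

saved_watts = wall_80 - wall_90
kwh_per_year = saved_watts * 24 * 365 / 1000

print(f"Savings per server: {saved_watts:.0f} W, "
      f"about {kwh_per_year:.0f} kWh per year")  # ~69 W, ~608 kWh/yr
```

And because every watt not lost in conversion is a watt of heat the cooling plant never has to remove, the real savings run somewhat higher.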

Finally, these systems can come with an embedded copy of VMware ESXi 3.x. This option bundles preconfigured VMware software with the system as it ships, along with some additional virtualization management software from Dell. The hardware/software combination means that sites don't have to buy and install the VMware software separately.

How Well Do They Do Virtualization?

Determining the quality of support for virtualization depends in good part on having usable benchmarks to run. Unfortunately, the industry is only now gaining the maturity to devise virtualization benchmarks that accurately reflect performance, so that buyers can make informed choices.

The first attempt at a benchmark, IBM's GrandSlam, has been retired. A second effort from Intel, called vConsolidate, runs database, Java, mail, and Web servers and derives a performance rating by combining their results. Industry insiders say that while the approach is valid, vConsolidate has one weakness: it tends to disfavor AMD processors.

The VMmark benchmark from VMware (downloadable at no cost from VMware) is probably the most widely quoted benchmark at present. However, it's difficult to run (making it hard for in-house analysts to duplicate test results) and it tends to consume too little RAM. Hence, it's viewed as not truly representative of the performance profile of a typical IT workload. The industry is currently looking to the Standard Performance Evaluation Corporation (SPEC), a vendor-based consortium that specializes in benchmarks, to develop a better virtualization measure. Per SPEC's website, it should release one during the second half of 2008.

Until then, you can compare systems based on their published VMmark results. But in terms of absolute performance, for the time being, you'll have to do things the old-fashioned way: examine feature sets and test the systems in-house, to the extent possible.

One way or the other, though, you should begin thinking about this new category of servers, which will soon be making its debut in the data center.
