
Consolidating Servers to Reduce Power Consumption

One of the most effective ways to lower energy use, consolidation has quickly moved out of the realm of pipe-dream into a real-world, practical solution for businesses of all sizes. In his inaugural feature, GreenerComputing's Technology Editor Andrew Binstock explains the ins and outs of server consolidation.

One of the most effective ways to lower energy consumption is to consolidate servers, especially little-used servers. Every major IT site has several -- often many -- small servers that each support a single legacy application that cannot be removed but which few processes use. These servers are prime candidates for consolidation onto a single hardware platform.

By moving these applications onto a single platform, IT sites save almost the entire energy consumption of the original server, the cooling costs of that server, and also gain greater manageability by having applications running on fewer physical machines. Depending on configuration, projections of savings from consolidating vary from several hundred to several thousand dollars per consolidated application per year.
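To get a feel for that arithmetic, here is a back-of-the-envelope sketch of the annual electricity cost of one always-on, lightly used server. The wattage, cooling overhead, and electricity rate below are illustrative assumptions, not measured figures; substitute your own numbers.

```python
# Back-of-the-envelope estimate of what one always-on, lightly used
# server costs in electricity per year. All inputs are assumptions.

server_watts = 300          # assumed average draw of a small x86 server
cooling_overhead = 0.5      # assumed: 0.5 W of cooling per 1 W of IT load
rate_per_kwh = 0.10         # assumed electricity price, $/kWh
hours_per_year = 24 * 365

total_watts = server_watts * (1 + cooling_overhead)
kwh_per_year = total_watts * hours_per_year / 1000
annual_cost = kwh_per_year * rate_per_kwh

print(f"Energy: {kwh_per_year:,.0f} kWh/year")
print(f"Cost:   ${annual_cost:,.0f}/year per server eliminated")
```

With these assumed figures the result is roughly $400 per year per server before hardware, space, and management savings are counted, which is consistent with the lower end of the projections above.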

Five years ago, the idea of consolidation was more of a pipe dream than a reality. Unless you were running on mainframes or high-end servers, the best you could do was port applications to a new platform and run them all together. So, you could migrate several Windows 2000 server applications, for example, to one Xeon server and run them all there. This solution had numerous impractical aspects, not least of which was the risk that by running them all together, any single application failure could bring down the entire server.

Fortunately, virtualization has dramatically changed this scenario. For readers not familiar with virtualization, it is a technology gaining considerable attention in IT. This software solution creates an instance of a PC on an existing hardware platform. You can then load that virtualized PC with whatever operating system and applications you want, and run it side by side with other such PCs. In this arrangement, each application is insulated from the other: if any one application crashes, the most it can do is bring down its own virtualized PC. The other applications keep right on running.

Each of these virtualized PCs is generally referred to as a virtual machine, or VM. (This nomenclature is unfortunate, as it implies some connection with the virtual machines that run Java and Microsoft .NET applications. To disambiguate the two meanings, I use the term VM by itself when speaking of virtualization; when referring to the runtime environments for Java or .NET, I qualify the term to make that clear.)

Shortly, I will describe how virtualization operates. However, it's important to note that virtualization today requires that the virtualized PC use the same processor architecture as the underlying physical machine. As all widely used VM packages today run on x86 chips, such as those from Intel and AMD, the virtual PCs they support must run an operating system that works on that platform: Windows, Linux, MS-DOS, Solaris x86, and so forth.

Performance

The ability to run Linux in a VM on a Windows PC was the original selling point of modern virtualization. It was used by software developers to test their code for portability. Programmers could verify that their software ran correctly on Windows and Linux without having to change machines.

In those early days, virtualization was fairly slow: there was a distinct performance premium paid for running in a VM. Today, however, much of that has changed. Intel and AMD have both added extensive technology to their processors that enable virtualization to run much more quickly. The penalty for virtualized solutions now is typically only a few percent.

The improvements are so great that, if you use a virtualized PC interactively, it is fairly easy to forget that you are running on anything but native hardware. This performance and the many hardware features supported by virtualization software (64-bit operating systems, hyperthreading, multiple processors, etc.) make it an ideal solution for server consolidation.

What Is Involved

There are three main providers of virtualization software today: VMware, which leads the market by a wide margin; Microsoft; and Xen. The products of the first two companies run principally on Windows systems, while Xen runs on Linux.

Because of its market dominance, I will focus on VMware, and point out differences with other solutions as they come up.

To try your hand at virtualization, go to VMware's website (VMware.com) and download a copy of VMware Workstation. It's free for a 30-day trial. Once you install it, start it up and create a VM. Because this is a virtualized PC you're creating, you have lots of options for what the virtualized hardware configuration should look like. (See Figure 1.)
Figure 1. VMware Workstation's VM hardware settings dialog. (Courtesy VMware)


Notice that you can specify the amount of RAM, the size of the disk drive, whether to emulate a CD-ROM drive and USB ports, and so forth. Once you have chosen your preferred configuration, you load your operating system. Let's say it's Windows 98, because your predecessors committed to a package, no longer sold, that runs only on Windows 98. You now create a vanilla Windows 98 VM. You want to save a copy of this VM so that, in the future, if you need another instance of Windows 98, you can make a copy from this saved instance. (Licensing terms for the operating system still apply, so you don't want to make lots of copies for no reason.)
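One simple way to keep such a "golden" copy is to duplicate the powered-off VM's directory on disk, as in the minimal sketch below. The paths and folder layout are hypothetical, and VMware Workstation also offers its own facilities for copying VMs; this is just one low-tech approach.

```python
# Minimal sketch: keep a pristine "golden" Windows 98 VM and stamp out
# copies from it when a new instance is needed. Paths are hypothetical;
# the source VM must be powered off before its directory is copied.
import shutil
from pathlib import Path

GOLDEN_VM = Path("/vms/golden/win98-vanilla")        # assumed location of the saved VM
NEW_VM = Path("/vms/instances/win98-legacy-app")     # assumed destination

def clone_vm(source: Path, dest: Path) -> None:
    if dest.exists():
        raise FileExistsError(f"{dest} already exists")
    # A VMware VM is essentially a directory of files (.vmx configuration,
    # .vmdk virtual disks, and so on), so copying the whole directory
    # yields an independent copy of the machine.
    shutil.copytree(source, dest)

clone_vm(GOLDEN_VM, NEW_VM)
print(f"Created new VM at {NEW_VM}")
```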

Then, load/install the application into the Windows 98 VM. You can now run the VM on your current system and it should work just as it did on its previous dedicated hardware platform. Figure 2 shows such a scenario. (Actually, it shows Windows 3.1 running on Vista.)
Figure 2. Windows 3.1 running in a VM hosted on Microsoft Vista. (Courtesy: VMware).
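If your VMware installation includes the vmrun command-line utility, you can also start the VM without its console window, which suits a server-style workload better than the GUI. The sketch below assumes vmrun is on the PATH and uses a hypothetical .vmx path.

```python
# Sketch: launch the consolidated VM headless with VMware's vmrun tool.
# The .vmx path is hypothetical; adjust it to your own VM's location.
import subprocess

VMX = "/vms/instances/win98-legacy-app/win98-legacy-app.vmx"

# "-T ws" targets VMware Workstation; "nogui" starts the VM without
# opening a console window.
subprocess.run(["vmrun", "-T", "ws", "start", VMX, "nogui"], check=True)
```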


This set-up has several advantages beyond letting you run multiple server apps on a single platform. By changing the options of the VM (as in Figure 1), you can scale the resources of the application, increasing and decreasing RAM and disk space as your needs and preferences dictate. You can also run several instances of the VM in parallel. In addition, because you can make a copy of the VM at any time, you're able to take snapshots of the entire running environment and run those at a future time. This is useful for testing changes you might want to make to the application without disturbing the production version. It also serves as a backup of sorts, so that you can always go back to a specific point in time and restart the application from that point. This usage is not a substitute for proper backup, but it is helpful with fragile or cranky applications: you can always return to a known functioning point.
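For that snapshot-and-roll-back workflow, here is a hedged sketch using vmrun's snapshot and revertToSnapshot commands. It assumes vmrun is available on the PATH; the .vmx path and snapshot name are illustrative.

```python
# Sketch: record a known-good state of the VM before making a risky change,
# then roll back to it if the change misbehaves. Assumes VMware Workstation's
# vmrun utility; paths and snapshot name are hypothetical.
import subprocess

VMX = "/vms/instances/win98-legacy-app/win98-legacy-app.vmx"
SNAPSHOT = "before-config-change"

def run(*args: str) -> None:
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

# Take the snapshot before experimenting with the application.
run("snapshot", VMX, SNAPSHOT)

# ... make and test your changes inside the VM ...

# If something breaks, return the VM to the saved point and restart it.
run("revertToSnapshot", VMX, SNAPSHOT)
run("start", VMX, "nogui")
```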

What's Ahead

We've examined the basics of virtualization and how you can test the technology with a workstation product. But IT departments that install virtualization generally run virtualization server software -- that is, they convert the entire server into a virtualization host. This approach will be the subject of my next article.

Andrew Binstock is the principal analyst at Pacific Data Works LLC, where he performs market analysis and writes white papers for private clients. He is also the technology editor of GreenerComputing.com. He can be reached at [email protected].

Server photo, licensed under Creative Commons, courtesy of Flickr user JohnSeb.
