Google's 3 Secrets to Data Center Success

[Photo: a data center. CC-licensed image by Flickr user gruntzooki]

Google is as secretive as any large tech company about the nature of its data centers -- how many there are, where they are, and what they're made of -- but that doesn't stop the search giant from dishing out advice on how everyone else can cut their data centers' energy use.

In spite of Google's size -- or, perhaps, because of it -- the company has managed to lead the pack in reducing its data centers' massive energy use. This week, at the Green:Net conference in San Francisco, Bill Weihl, Google's green energy czar (actual title), shared three secrets to success others can use to emulate his company's data center efficiency.

All three keys to success, says Weihl, have little to do with the computing technology itself. The reason: in a typical data center, for every watt the IT equipment uses, another watt goes to "overhead" -- things like lighting, backup power, and cooling. Finding ways to reduce these energy guzzlers is a pathway data center engineers often leave unexplored.

1. Keep hot and cold separate. A typical data center has rows and rows of servers, Weihl explained, each taking in chilled air at the front and blowing hot air out the back. Simply aligning the servers so that fronts face fronts and backs face backs creates alternating cold and hot aisles. The aisles are then sealed off with a plastic roof over the server rows and heavy plastic curtains, like those used in meat lockers, at each end to allow access. This keeps the cold air from mixing with the hot air, lowering cooling costs.

2. Turn up the thermostat. Because typical data centers don't have good control over airflow, they keep thermostat settings at 70 degrees Fahrenheit or lower, said Weihl. Google runs its centers at 80 degrees and suggests they can go higher. "Look at the rated inlet temperature for your hardware. If the server can handle 90 degrees, then turn the heat up to 85, even 88 degrees," he counseled. (A rough sketch of this arithmetic, together with the free-cooling check from the next tip, follows the list.)

3. Give your chillers a rest. This means using fresh air to cool servers as much as possible and relying on evaporative cooling towers, which lower temperatures by using water evaporation to remove heat, much the way perspiration removes heat from the human body.
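
For illustration, here is a minimal sketch of the arithmetic behind tips 2 and 3. The function names, the two-degree safety margin, and the bare comparison of outside air against the setpoint are illustrative assumptions, not anything from Weihl's talk; real economizer controls also weigh humidity and air quality.

```python
def suggested_setpoint_f(rated_inlet_f, margin_f=2.0):
    """Tip 2 as a rule of thumb: run a few degrees below the hardware's
    rated inlet temperature instead of defaulting to 70 F. The two-degree
    margin is an illustrative assumption, not a figure from the talk."""
    return rated_inlet_f - margin_f

def free_cooling_ok(outside_air_f, setpoint_f):
    """Tip 3 stripped to its core: if outside air is already cooler than
    the target supply temperature, fresh air can do the cooling and the
    chillers can rest. Real controls also account for humidity."""
    return outside_air_f < setpoint_f

setpoint = suggested_setpoint_f(90)   # 88.0, per Weihl's 90-degree example
print(setpoint)
print(free_cooling_ok(65, setpoint))  # True: give the chillers a rest
print(free_cooling_ok(95, setpoint))  # False: mechanical cooling needed
```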

There"s more. Weihl counseled to "know your PUE," or power usage effectiveness, a metric used to determine the energy efficiency of a data center. (PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it.) While typical data center PUEs range from 2.0 to 3.0, Google"s run around 1.2. Said Weihl: "A PUE of 1.5 or less should be achievable in most facilities."

The energy savings extend beyond the data center to the PCs it serves inside a company. Most computers these days ship with built-in power-management software, but the technology is turned off by default. Weihl compared it to the hybrid technology in a Prius: "When you get to a stop sign, the engine shuts off. Similarly, in a computer, a processor can slow down or completely go to sleep when idle. In most systems today, that power management is turned off. That's kind of like disconnecting the battery and electric motor on a Prius. You simply wouldn't do that."
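
To put a rough number on that point, here is a back-of-the-envelope sketch of what letting a single idle PC sleep can save. The wattages and hours below are hypothetical assumptions chosen for illustration.

```python
def annual_kwh_saved(idle_watts=80.0, sleep_watts=4.0, idle_hours_per_day=16):
    """Hypothetical desktop: 80 W while sitting idle versus 4 W asleep,
    idle two-thirds of each day. All figures are illustrative."""
    return (idle_watts - sleep_watts) * idle_hours_per_day * 365 / 1000

print(round(annual_kwh_saved()))  # ~444 kWh saved per machine per year
```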
