How to curb runaway power in the data center

Data centers account for a rapidly growing share of any enterprise's energy budget, and therefore of its operational costs. Energy management best practices can help contain these costs while putting IT and facilities teams on an environmentally responsible path that aligns corporate data centers with EPA energy standards.

The Scope of the Problem -- and the Opportunity

Surveys of data centers of all sizes and types identify many categories of wasted energy. For example, roughly 10 to 15 percent of all data center servers sit idle (i.e., powered on but not processing useful work). An average server draws about 400 watts, for an annual cost of $800 or more. Across the U.S. alone, that adds up to billions of dollars in wasted energy, cooling, and management costs every year.
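For a sense of how the annual cost figure arises, here is a minimal sketch of the arithmetic. The electricity rate and PUE (facility overhead) values are illustrative assumptions, not figures from the article:

```python
# Illustrative idle-server cost arithmetic. The utility rate and PUE
# are assumptions chosen for the example, not from the article.
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
COST_PER_KWH = 0.11              # assumed utility rate, USD
PUE = 2.0                        # assumed power usage effectiveness (facility overhead)

def annual_energy_cost(server_watts):
    """Yearly cost to power (and cool) one server at a constant draw."""
    kwh = server_watts / 1000 * HOURS_PER_YEAR * PUE
    return kwh * COST_PER_KWH

# A 400 W server works out to roughly $770/year under these assumptions,
# in the neighborhood of the $800 figure cited above.
cost = annual_energy_cost(400)
```

Under different utility rates or overhead factors, the per-server figure shifts accordingly, which is why real measurement matters.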

Traditional power management approaches have failed to curb this or other instances of wasted energy. As a result, data center managers have routinely over-budgeted power and cooling to accommodate spikes in demand and high-priority needs, and to avoid "hot spots" that would otherwise negatively impact server performance.

Finding Where the Energy is Going

The potential for savings and the high cost of energy have driven demand for new energy management tools. Most of the resulting tools let IT managers examine the returned-air temperature at the air-conditioning units, and perhaps the power consumption of each rack in the data center. However, most lack visibility at the individual-server level, and base their calculations on modeled or estimated data that can deviate from actual consumption by as much as 40 percent.

In contrast, a new class of holistic energy and cooling management solutions has emerged that offers fine-grained monitoring. The latest innovations in this area focus on server inlet temperatures and aggregate them across a row or room to create real-time thermal maps of server assets (see Figure 1).
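The aggregation step behind such a thermal map might look something like the following sketch; the rack names and temperature readings are purely illustrative:

```python
from statistics import mean

# Per-server inlet temperatures (degrees C), keyed by rack; values illustrative.
inlet_temps = {
    "rack-01": [22.1, 23.4, 24.0],
    "rack-02": [25.2, 27.8, 26.5],
}

# Aggregate to a per-rack view: average and worst-case inlet temperature.
thermal_map = {
    rack: {"avg": mean(temps), "max": max(temps)}
    for rack, temps in inlet_temps.items()
}

# Flag racks whose worst inlet temperature exceeds an assumed threshold.
hot_racks = sorted(r for r, t in thermal_map.items() if t["max"] > 27.0)
```

A real tool would pull readings continuously from server sensors rather than a static dictionary, but the roll-up from server to rack to room follows the same shape.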

Similarly, real-time power consumption by servers and storage devices can also be monitored and logged, enabling highly optimized rack provisioning and capacity planning within the data center. For example, to provision a rack of ten servers, each with a 650-watt power supply rating, a data center manager might test a fully loaded server and arrive at a requirement of 400 watts per server, or 4 kW per rack of ten servers.
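The nameplate-versus-measured arithmetic above can be sketched as follows, using the numbers from the article's example:

```python
# Rack provisioning from the example above: nameplate power supply
# rating versus draw measured on a fully loaded server.
NAMEPLATE_W = 650        # per-server power supply rating
MEASURED_W = 400         # per-server draw measured under full load
SERVERS_PER_RACK = 10

nameplate_budget_w = NAMEPLATE_W * SERVERS_PER_RACK   # 6,500 W if sized to nameplate
measured_budget_w = MEASURED_W * SERVERS_PER_RACK     # 4,000 W (4 kW), as in the text
```

Sizing to the nameplate rating strands more than a third of the provisioned power in this example, which is the headroom that measurement-based provisioning recovers.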

Alternatively, with a real-time monitoring tool, the data center manager can accurately determine the typical maximum power draw in a production environment. Field studies have shown that this approach can boost rack densities by as much as 60 percent (up to 16 servers per rack in this particular example), and can support accurate per-rack power capping to protect equipment in the unlikely event that demand spikes above the defined level.
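Continuing the example, the density gain falls out directly once monitoring establishes a typical maximum draw; the 250 W per-server figure below is an assumption chosen to reproduce the article's 60 percent result, not a measured value:

```python
# Density gain when a rack budget is filled based on measured typical
# maximums rather than bench-test figures. TYPICAL_MAX_W is assumed.
RACK_BUDGET_W = 4000       # the 4 kW rack budget from the example
TYPICAL_MAX_W = 250        # assumed real-world peak draw per server
ORIGINAL_SERVERS = 10

servers_per_rack = RACK_BUDGET_W // TYPICAL_MAX_W               # 16 servers
density_gain = (servers_per_rack - ORIGINAL_SERVERS) / ORIGINAL_SERVERS  # 0.6, i.e. 60%
```

A power cap at the 4 kW budget is what makes this safe: if aggregate demand ever approaches the limit, the cap throttles servers rather than tripping the circuit.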

Perhaps even more important, advanced energy management helps data center architects allocate power intelligently during emergencies. Equipped with accurate power characteristics, uninterruptible power supplies (UPSes) can be configured to give high-priority servers longer runtimes during power outages.
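One way such prioritization could be reasoned about is sketched below. The function name, wattages, and priority scheme are hypothetical illustrations, not any vendor's UPS API:

```python
def ups_runtime_minutes(ups_capacity_wh, servers):
    """Estimate UPS runtime (minutes) with all servers up versus
    shedding everything except the highest-priority tier.

    servers: list of (name, watts, priority); a lower priority number
    means more critical. Figures here are illustrative.
    """
    total_w = sum(w for _, w, _ in servers)
    top_tier = min(p for _, _, p in servers)
    critical_w = sum(w for _, w, p in servers if p == top_tier)
    return {
        "all_servers": ups_capacity_wh / total_w * 60,
        "critical_only": ups_capacity_wh / critical_w * 60,
    }

# Example: a 2 kWh UPS, one critical server and two lower-priority ones.
runtimes = ups_runtime_minutes(2000, [
    ("db-primary", 400, 1),
    ("batch-01", 400, 2),
    ("batch-02", 400, 2),
])
```

In this toy case, shedding the two lower-priority servers triples the runtime available to the critical one, which is the trade-off accurate per-server power data makes visible.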

Thermal and hardware power consumption data can also be logged and used for trend analysis. Temperature data can feed in-depth airflow studies that improve cooling efficiency and lead to more energy-efficient designs for integrated facilities systems.
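Trend analysis over logged data can start as simply as a week-over-week delta; the sketch below uses made-up sample values purely for illustration:

```python
def average_weekly_change(weekly_kwh):
    """Average week-over-week change in a series of weekly kWh totals."""
    deltas = [b - a for a, b in zip(weekly_kwh, weekly_kwh[1:])]
    return sum(deltas) / len(deltas)

# Three weeks of hypothetical logged consumption, trending upward.
trend = average_weekly_change([1000, 1040, 1090])
```

A steadily positive trend like this one is an early signal to revisit capacity plans before a rack budget or cooling limit is actually reached.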
