Ten Guidelines for Energy Efficient Data Center Design


Data centers are prime targets for energy efficient design measures: a typical data center can consume 25 to 50 times as much electricity as a standard office space. But the mission-critical nature of data centers has historically put other concerns - mainly reliability and high power density capacity - ahead of energy efficiency in the minds of owners and designers. Also, data centers usually have short design cycles that leave little time to fully assess efficient design opportunities or consider first cost versus life cycle cost issues. This can lead to designs that are simply scaled up versions of standard office space approaches or that re-use standard inefficient strategies and specifications without regard for energy performance.

This article discusses alternatives to inefficient design practices within ten technology areas. There is no single correct way to design a data center, but the following guidelines offer design suggestions that provide efficiency benefits in a wide variety of data center design situations.

Figure 1: Hot Aisle-Cold Aisle configuration

Air Management
Modern data center equipment racks can produce very concentrated heat loads. Precisely controlling the airflow that collects and removes this waste heat has a significant impact on energy efficiency and equipment reliability in facilities of all sizes.

Optimal data center air management minimizes or eliminates mixing between cooling air supplied to equipment and hot air rejected from it. A correctly designed air management system can reduce operating costs and first cost equipment investment, increase the data center's density (W/sf) capacity, and reduce heat related processing interruptions or failures.

A few key design issues are: location of supply and returns; configuration of equipment air intake and heat exhaust ports; and large scale airflow patterns in the room.
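One way to quantify how well these design issues have been addressed is the Return Temperature Index (RTI), which compares the air handler's supply-to-return temperature rise against the equipment's intake-to-exhaust rise. A rough sketch (temperatures in °F here, though the metric is unit-agnostic):

```python
def return_temperature_index(t_return, t_supply, t_equip_in, t_equip_out):
    """Return Temperature Index (RTI), in percent.

    RTI near 100% indicates well-separated airflow. Values above 100%
    suggest hot exhaust air recirculating into equipment intakes;
    values below 100% suggest cold supply air bypassing the equipment
    and short-circuiting to the return.
    """
    air_side_delta = t_return - t_supply        # what the air handler sees
    equipment_delta = t_equip_out - t_equip_in  # what the racks produce
    return 100.0 * air_side_delta / equipment_delta
```

For example, a 60°F supply returning at 75°F against racks heating air from 62°F to 82°F yields an RTI of 75%, pointing to significant bypass air that never passes through the equipment.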

Airside Economizer
Data centers operate 24 hours a day with a large, constant cooling load that is independent of outdoor air temperature. An air-side economizer is the lowest cost option for cooling data centers during mild winter conditions and on most nights.

Simply using a standard office system economizer offered on California Title 24 compliant units is not advised without doing a proper engineering evaluation of local climate conditions and space requirements. Due to the humidity and contamination concerns associated with data centers, careful control and design work may be required to ensure that cooling savings are not lost to excessive humidification requirements.

Providing adequate access to outdoor air for economization can be a significant architectural design challenge. Central air handling units with roof intakes or sidewall louvers are the most commonly used, although some internally located Computer Room Air Conditioning (CRAC) units offer economizer capabilities when installed with appropriate intake ducting.

Depending on local climate conditions, outdoor air humidity sensors may be required to permit lockout of economization during very dry conditions. In most areas, use of outside air is beneficial, but in critical applications, local risk factors should be known and addressed. For control of particulates and contamination, appropriate low-pressure-drop filtration should be provided to maintain a clean data center environment without imposing excessive fan energy costs. Other contamination concerns, such as salt or corrosive gases, should also be evaluated.

Figure 2: Electricity use in two data centers.
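The enable/lockout decision described above can be sketched as a simple control check. The dewpoint band and the comparison against return air temperature below are illustrative placeholders, not recommendations; a proper engineering evaluation of the local climate is still required:

```python
def economizer_enabled(outdoor_drybulb_f, outdoor_dewpoint_f, return_air_f,
                       min_dewpoint_f=20.0, max_dewpoint_f=60.0):
    """Decide whether outside air is currently suitable for free cooling.

    Enables the airside economizer only when outdoor air is cooler than
    the return air AND its moisture content falls inside a band that
    avoids excessive humidification load (too dry) or dehumidification
    load (too humid). All thresholds are assumed example values.
    """
    cool_enough = outdoor_drybulb_f < return_air_f
    humidity_ok = min_dewpoint_f <= outdoor_dewpoint_f <= max_dewpoint_f
    return cool_enough and humidity_ok
```

Note the dry-air lockout: a mild but very dry night can cost more in humidification than the economizer saves in cooling.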

Centralized Air Handling
Better performance has been observed in data centers that use purpose-designed centralized air handling systems. These offer many advantages over the traditional approach of multiple distributed units: their larger motors and fans can be more efficient, and they are well suited to variable volume operation through the use of Variable Speed Drives (VSDs), also referred to as Variable Frequency Drives (VFDs).

Most data center loads do not vary appreciably over the course of the day, and cooling systems are typically oversized with significant reserve capacity. A centralized air handling system can take advantage of this surplus and redundant capacity to improve efficiency. The maintenance benefits of central systems are well known; reduced footprint and less maintenance traffic in the data center are additional benefits.

Cooling Plant Optimization
This strategy offers many efficiency opportunities for data centers, both in design and operation. A medium-temperature chilled water loop design using 55°F chilled water provides improved chiller efficiency and eliminates uncontrolled phantom dehumidification loads (see the paragraph on humidification below). The condenser loop should also be optimized; a 5-7°F approach cooling tower plant with a condenser water temperature reset pairs nicely with a variable speed (VFD) chiller to offer large energy savings.
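The condenser water reset mentioned above can be sketched as a simple setpoint calculation that tracks outdoor wetbulb plus the tower approach. The clamp limits below are illustrative assumptions, not equipment data; the chiller manufacturer's minimum entering condenser water temperature governs in practice:

```python
def condenser_water_setpoint(outdoor_wetbulb_f, tower_approach_f=6.0,
                             chiller_min_f=60.0, design_max_f=85.0):
    """Condenser water supply temperature reset.

    Tracks outdoor wetbulb plus the cooling tower approach (a 5-7 deg F
    approach tower is assumed, per the text), clamped between the
    chiller's minimum allowable entering condenser water temperature
    and the design maximum. Lower condenser water temperature reduces
    compressor lift and therefore chiller energy. Limits are assumed
    example values.
    """
    setpoint = outdoor_wetbulb_f + tower_approach_f
    return max(chiller_min_f, min(design_max_f, setpoint))
```

A variable speed chiller is the natural partner for this reset, since it can exploit the reduced lift at part load rather than merely tolerating it.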

A primary-only variable volume pumping system is well matched to modern chiller equipment and offers fewer points of failure, lower first cost, and energy savings. Thermal energy storage can be a good option, and is particularly suited for critical facilities, where a ready store of cooling can have reliability benefits as well as peak demand savings. Finally, monitoring the efficiency of the chilled water plant is a requirement for optimization: basic, reliable energy and load monitoring sensors can quickly pay for themselves in energy savings.

Figure 3: Server rack provided with an integrated chilled water coil.
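As a sketch of the monitoring point above: plant efficiency is commonly tracked as kW per ton of cooling, with the load derived from loop flow and temperature difference using the standard water-side relation tons = gpm × ΔT / 24. The sensor values below are hypothetical:

```python
def plant_kw_per_ton(chiller_kw, pump_kw, tower_kw, loop_gpm, delta_t_f):
    """Whole-plant chilled water efficiency in kW per ton.

    Cooling delivered (tons) is computed from chilled water flow (gpm)
    and loop temperature difference (deg F) via tons = gpm * dT / 24,
    which folds in water's density and specific heat. Inputs would come
    from the plant's energy and load monitoring sensors.
    """
    tons = loop_gpm * delta_t_f / 24.0
    return (chiller_kw + pump_kw + tower_kw) / tons
```

Trending this single number over time quickly exposes drift such as fouled tower fill, a failed reset, or a chiller running off its efficient operating point.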

Direct Liquid Cooling
Direct liquid cooling refers to a number of different cooling approaches that all share the same strategy of transferring waste heat to a fluid at or very near the point of heat generation, rather than transferring it to room air and then conditioning the room air.

One option, currently available from many rack manufacturers, installs cooling coils directly onto racks in order to capture and remove waste heat. The underfloor plenum is often used to route coolant lines, which connect to the rack coils with flexible hoses. Many other approaches are available or being pursued, ranging from water cooling of component heatsinks to bathing components in dielectric fluid cooled by a heat exchanger.

Liquid cooling can serve much higher heat densities (W/sf) than traditional air cooling and can be far more efficient, which is why it is often adopted even where efficiency is not the primary driver. In the future, products may become available that allow still more direct liquid cooling of equipment, by methods ranging from fluid passages in chip heatsinks to submersion in a dielectric fluid. While not yet widely available, such approaches hold promise and should be evaluated as they mature.

Free Cooling via Waterside Economizer
Free cooling can be provided with a waterside economizer, which uses the evaporative cooling capacity of a cooling tower to indirectly produce chilled water to cool the data center during mild outdoor conditions (particularly at night in hot climates).

Free cooling is usually best suited to climates with wetbulb temperatures below 55°F for 3,000 or more hours per year. It most effectively serves chilled water loops designed for 50°F and warmer chilled water, or lower-temperature chilled water loops with significant surplus air handler capacity in normal operation. Often, existing data centers can capitalize on redundant air handler capacity with chilled water temperature reset controls to retrofit free cooling.
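The 3,000-hour rule of thumb can be screened directly against hourly weather data (e.g., a typical meteorological year file with 8,760 wetbulb values). A minimal sketch:

```python
def free_cooling_hours(hourly_wetbulb_f, threshold_f=55.0):
    """Count the hours in a weather dataset with outdoor wetbulb
    below the free-cooling threshold. Any iterable of hourly
    wetbulb temperatures (deg F) works; a full year has 8,760."""
    return sum(1 for wb in hourly_wetbulb_f if wb < threshold_f)

def waterside_economizer_candidate(hourly_wetbulb_f,
                                   threshold_f=55.0, min_hours=3000):
    """Apply the screening criterion from the text: 3,000+ hours
    per year below 55 deg F wetbulb."""
    return free_cooling_hours(hourly_wetbulb_f, threshold_f) >= min_hours
```

This is only a first screen; the achievable chilled water temperature, tower approach, and heat exchanger sizing determine how many of those hours actually displace chiller operation.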

Humidification Controls
Data centers often over-control humidity, which consumes large amounts of energy while providing no real operational benefit. Humidity controls are frequently not centralized, which can result in adjacent units fighting one another, one humidifying while the other dehumidifies. Humidity sensor drift can also contribute to control problems if sensors are not regularly recalibrated.

Low-energy humidification techniques can replace electricity-consuming steam generators with an adiabatic approach that uses heat already present in the air, or recovered from the computer heat load, for humidification. Ultrasonic humidifiers, wetted media ("swamp coolers"), and microdroplet spray are some examples of adiabatic humidifiers.

Figure 4: Sample per-rack electrical cost savings from more efficient power supplies.
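The energy argument for adiabatic humidification can be sketched from the physics: an electric steam generator must supply water's full heat of vaporization (roughly 0.63 kWh per kg) electrically, while an adiabatic humidifier draws that heat from the airstream, that is, from the computer waste heat, and uses electricity only for atomization or pumping. The 90% generator efficiency and 0.04 kWh/kg ultrasonic figure below are assumptions for illustration:

```python
# Heat of vaporization of water, ~2257 kJ/kg, converted to kWh/kg.
LATENT_HEAT_KWH_PER_KG = 2257.0 / 3600.0

def steam_humidifier_kwh(kg_water, generator_efficiency=0.9):
    """Electric steam humidifier: the full heat of vaporization comes
    from electricity, plus generator losses (efficiency assumed)."""
    return kg_water * LATENT_HEAT_KWH_PER_KG / generator_efficiency

def adiabatic_humidifier_kwh(kg_water, kwh_per_kg=0.04):
    """Adiabatic humidifier (ultrasonic, wetted media, microdroplet):
    vaporization heat is taken from the air, so electricity is needed
    only for atomization. 0.04 kWh/kg is an assumed ultrasonic
    figure, not a measured one."""
    return kg_water * kwh_per_kg
```

On these assumptions, each kilogram of moisture added adiabatically uses well under a tenth of the electricity of steam generation, before counting the cooling system's work to remove the steam generator's heat.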

Power Supplies
Most data center equipment uses internal or rack mounted AC-DC power supplies. Higher efficiency power supplies will directly lower a data center's power bills, and indirectly reduce cooling system cost and rack overheating issues. Annual savings of $2,700 to $6,500 per rack are possible just from using more efficient power supplies.
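The per-rack savings figures above can be reproduced with a simple model. The electricity tariff, cooling overhead, and efficiency values below are illustrative assumptions, not measured data:

```python
def annual_psu_savings_usd(rack_load_kw, eff_base, eff_better,
                           electricity_usd_per_kwh=0.12,
                           cooling_overhead=0.5, hours=8760):
    """Annual electricity cost savings per rack from more efficient
    power supplies.

    rack_load_kw is the DC load delivered to the equipment; a supply
    at efficiency e draws load/e at the wall. cooling_overhead adds
    the energy to remove supply losses as heat (0.5 kWh of cooling
    energy per kWh of load is an assumption). Tariff and overhead are
    example values only.
    """
    input_base = rack_load_kw / eff_base
    input_better = rack_load_kw / eff_better
    saved_kwh = (input_base - input_better) * hours * (1 + cooling_overhead)
    return saved_kwh * electricity_usd_per_kwh
```

With a 10 kW rack and supplies improving from 70% to 85% efficiency, this model lands around $4,000 per rack per year, inside the $2,700 to $6,500 range cited above.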

Efficient power supplies usually carry a minimal incremental cost at the server level; however, management intervention may be required to encourage equipment purchasers to select efficient models. To make a rational selection, purchasers need to be given a stake in reducing both the operating costs and the first costs of the electrical and conditioning infrastructure, or at least be made aware of these costs.

Power supplies that meet the recommended efficiency guidelines of the Server System Infrastructure (SSI) Initiative should be selected. The impact of real operating loads should also be considered: select power supplies that offer the best efficiency at their most frequent operating load level.

Self Generation
The combination of a nearly constant electrical load and the need for a high degree of reliability make large data centers well suited for self generation. To reduce first costs, self generation equipment should replace the backup generator system.

Self generation provides both an alternative to grid power and waste heat that can meet nearby heating needs or be harvested to cool the data center through absorption or adsorption chiller technologies. In some situations, the surplus and redundant capacity of the self generation plant can be operated to sell power back to the grid, offsetting the plant's capital cost.

Figure 5: An efficiency gain is realized when a given partial load is served by a smaller UPS system, at a higher load factor, rather than by a larger system at partial load.

Uninterruptible Power Supplies
These systems provide backup power to data centers, and can be based on battery banks, rotary machines, fuel cells, or other technologies. A portion of all power supplied by the UPS to operate data center equipment is lost to inefficiencies in the system, which can total hundreds of thousands of wasted kilowatt hours per year.

UPSs differ in their efficiency levels, and this should be taken into account when selecting a UPS system. The design of the data center's electrical system can also affect efficiency by determining the typical load factor at which the UPS operates. Battery-based UPS systems are most efficient at a high load factor, at least 40% of rated capacity or higher. The type of UPS configuration (line-interactive versus double conversion) also impacts efficiency: more power conditioning capability often means more wasted electricity and additional heat loads that must be removed by the mechanical cooling system.
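The load factor effect can be illustrated with a simple two-term loss model: a fixed no-load loss proportional to the UPS rating, plus a loss proportional to the load carried. The coefficients below are assumptions for illustration; real loss curves come from manufacturer efficiency data:

```python
def ups_annual_loss_kwh(it_load_kw, ups_rating_kw,
                        no_load_loss_frac=0.04,
                        proportional_loss_frac=0.05, hours=8760):
    """Annual UPS energy losses (kWh) under a two-term loss model.

    The fixed term (fraction of rating) penalizes oversizing: a large
    UPS at a low load factor pays its full no-load loss around the
    clock. The proportional term scales with the IT load carried.
    Loss coefficients are assumed example values.
    """
    loss_kw = (no_load_loss_frac * ups_rating_kw
               + proportional_loss_frac * it_load_kw)
    return loss_kw * hours

# Same 200 kW IT load, two sizings:
oversized = ups_annual_loss_kwh(200, 800)    # 25% load factor, ~368,000 kWh/yr
right_sized = ups_annual_loss_kwh(200, 400)  # 50% load factor, ~228,000 kWh/yr
```

Under these assumed coefficients, halving the oversizing saves on the order of 140,000 kWh per year for the same IT load, consistent with the hundreds of thousands of wasted kilowatt hours noted above.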

Data center design is a relatively new field that addresses a dynamic and rapidly evolving technology sector -- in a real sense, information technology and energy efficiency technology are beginning to merge. The most efficient and effective data center designs use relatively new design paradigms to create the required high energy density, high reliability environment.

These guidelines have been developed based upon benchmark measurements of operating data centers, input from practicing designers and operators, and many years of experience designing energy efficient cooling systems for data centers. They illustrate many of the new 'standard' approaches that are increasingly being used as a starting point by successful and efficient data centers.

As the current boom in data center construction continues, energy use by data centers is becoming an increasingly significant percentage of overall energy use, and new highly efficient designs are required in order to help manage the energy footprint of our information technology infrastructure.