Greening the Physical Data Center

According to a congressionally mandated study by the U.S. Environmental Protection Agency, data centers consume 1.5 percent of our nation's electrical energy. That may not seem like a lot on the surface, but it amounts to 61 billion kilowatt hours per year -- two times the usage level of five years ago. The number is predicted to nearly double again by 2011 to more than 100 billion kilowatt hours.

Unless something is done, this will require the equivalent of 10 to 15 new "base-load" power plants, which can't be built that quickly anyway. It would also spew as much as 47 million metric tons of additional CO2 per year. The alternative is brownouts and rolling blackouts during peak usage hours.
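
The figures above can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the 0.9 kg CO2 per kWh grid emission factor is an assumption, not a figure from the EPA report, so the result lands somewhat below the report's "as much as 47 million metric tons" upper bound.

```python
# Back-of-envelope check of the consumption and CO2 figures quoted above.
# The emission factor is an assumed marginal grid intensity, for illustration.

CURRENT_USE_KWH = 61e9      # 2006-era annual data center consumption (EPA)
PROJECTED_USE_KWH = 100e9   # EPA projection for 2011 ("more than 100 billion")
EMISSION_FACTOR_KG_PER_KWH = 0.9  # assumed, not from the report

additional_kwh = PROJECTED_USE_KWH - CURRENT_USE_KWH
additional_co2_tonnes = additional_kwh * EMISSION_FACTOR_KG_PER_KWH / 1000.0

print(f"Additional consumption: {additional_kwh / 1e9:.0f} billion kWh/yr")
print(f"Implied additional CO2: {additional_co2_tonnes / 1e6:.0f} million metric tons/yr")
```

With a dirtier assumed marginal mix (closer to 1.2 kg/kWh, all coal), the same arithmetic approaches the report's 47-million-ton figure.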

The EPA report, issued in August 2007, gave a number of ways in which data centers could be made more energy efficient. While these steps are important, enormous energy savings can be achieved by properly designing the physical space in which the data center resides and, in the process, reducing cooling loads. A big part of this is making power distribution more energy-efficient.

Produced in cooperation with Lawrence Berkeley National Laboratory, the report recommends a "holistic approach" to saving energy, including the development of Energy Star specifications for servers, dynamic power management, increased reliance on energy-saving technology, and major changes in approach for large data centers, such as self-generation of power and heat reclaim to run absorption chillers for cooling.

The report sets forth mandatory design requirements for data centers in government buildings and recommends design approaches for commercial data centers.

There is no formula for good design; solutions are based on a thorough understanding of data center operations, modern computing technology and its support requirements, the latest infrastructure options and high-density power and cooling solutions, and growth projections.

Most data centers are part of larger office buildings. In this context, the physical design of a new data center often falls under the purview of the architect, working in concert with the engineer, technology consultant, and client.

Architects and engineers rely on a building rating system called Leadership in Energy and Environmental Design (LEED) to guide them in creating a green building. LEED points are awarded for various energy conservation measures and usage of recycled materials. The number of points obtained qualifies a building for LEED Certified, Silver, Gold or Platinum.

LEED standards address the building as a whole, but do not address the data center specifically. As of this writing, only two stand-alone data centers are LEED certified, and both only at the basic Certified level. It is generally difficult to qualify any building with a significant data center above LEED Silver. Given the power and cooling demands of a data center, it's understandable: few CIOs will trade reliability for efficiency.

While many of the steps that can be taken to reduce energy consumption do not earn LEED points, they can deliver enormous energy cost savings while being socially and environmentally responsible. A number of these are so inexpensive to implement that there is no reason not to take advantage of the cost savings and operational efficiencies that accrue.

In an existing data center, much can be achieved by taking some basic steps. For a new building or a renovated facility, a specialty consultant will be needed who is thoroughly familiar with information technology systems and the newest infrastructure tools and techniques available that address today's and tomorrow's challenges.

Steps that can be taken without help in an existing data center are:
  • Rearrange cabinets into the industry-standard hot aisle/cold aisle configuration, with cabinets facing front-to-front and back-to-back. None of the following suggestions will have much effect unless this is done.
  • Check air conditioner set points. Best cooling is usually achieved at about 75°F and 45 percent relative humidity. Lower temperature settings overwork air conditioners but don't improve airflow, and poor airflow is the real basis of most data center cooling problems.
  • Block holes and seal cutouts in raised floors to stop air leakage. Several commercial solutions are available to seal cutouts around cable.
  • Install blanking panels in all unused spaces in equipment racks and cabinets. A variety of snap-in panels are available to make this easy.
  • Get rid of Plexiglas doors. Even those designed for optimal airflow are insufficient for modern hardware. Either remove the doors, or install "high-flow" (66 percent open area) doors front and back.
  • Clean up cabling behind equipment, giving particular attention to folding cable managers that block heat escape.
  • Clean out old under-floor cable and any old, unused piping from legacy water-cooled mainframes. Minimize air blockages and airflow impediments to the greatest extent possible, particularly in front of air conditioners.
  • Do not leave spaces or open cabinets in equipment rows. Install old cabinets with solid doors or filler panels, or block with heavy plastic or fireproof plywood sheets.
  • Adjust raised-floor airflow tiles to deliver air where it is needed. Use solid tiles in front of patch panels, which don't need cooling. Close down dampers where minimal equipment is installed to move more air to the locations that need it. Consider grate tiles (67 percent open) in front of high-heat equipment, but do not overuse them, as they can rob other areas of sufficient air.
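
The airflow adjustments above are easier to make with a rough number for how much air each cabinet actually needs. Here is a minimal sketch using the standard HVAC sensible-heat relation (BTU/hr = 1.08 × CFM × ΔT in °F); the 5 kW load and 20°F temperature rise are illustrative assumptions, not figures from this article.

```python
def required_cfm(load_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow in CFM needed to remove a sensible heat load.

    Standard HVAC relation: BTU/hr = 1.08 * CFM * delta_T (deg F),
    with 3.412 BTU/hr per watt.
    """
    btu_per_hr = load_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# Illustrative: a 5 kW cabinet with a 20 deg F rise across the equipment
# needs roughly 790 CFM, more than a typical perforated tile delivers.
print(f"{required_cfm(5000):.0f} CFM")
```

Numbers like this explain why grate tiles, and eventually supplemental cooling, become necessary as per-cabinet loads climb.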

Use a knowledgeable consultant when designing a new data center, or significantly renovating an existing one (which includes adding air conditioners). They will ask questions such as: What is the client's level of technological sophistication? What is the normal lifecycle of technology for that client? What is the duration of the building lease? Is the business growing steadily or cyclically? How will this affect the computer services?

A consultant can also advise on vapor barriers in walls; raised-floor and ceiling heights; efficient base cooling design (including Computational Fluid Dynamics modeling of under-floor and overhead air flows); supplemental cooling systems, including "Source of Heat" and direct liquid cooled approaches; scalable power and cooling designs; green fire protection systems; and energy-efficient lighting designs.

Users also have operational responsibilities. The EPA report concludes that only 10 percent of today's servers equipped with energy-saving features actually have those features enabled. Further, the EPA says that, despite the difficulty of doing so, it and the DOE will have an Energy Star rating system in place within a year.

Energy-efficient servers are already on the market. It is up to IT professionals to decide whether to buy them before the government starts mandating them.

Robert McFarlane, a principal at Shen Milsom & Wilke in New York City, specializes in the physical design of data centers. He's responsible for data center designs for the Abu Dhabi Investment Authority, Knight Equity Markets, L.P., and MapInfo Corp. He was instrumental in the design of the New York City Office of Emergency Management data center.