Keeping things cool has long been a mantra for data center operators, but new research suggests it may not be essential for everyone.
Data centers have historically been kept at temperatures between 64° and 68° Fahrenheit (about 18° to 20° Celsius), with operators spending approximately 44 percent of their total power budgets on cooling.
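The setpoints cited in this article mix Fahrenheit and Celsius; a quick sketch (plain Python, standard conversion formula, no other assumptions) shows how the bands line up:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# Legacy setpoint band and the HTA figure discussed below:
for f in (64, 68, 80):
    print(f"{f}°F = {f_to_c(f):.1f}°C")
# 64°F = 17.8°C, 68°F = 20.0°C, 80°F = 26.7°C
```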
Originally, the varied mix of equipment and associated warranties dictated these relatively cool temperatures, and service level agreements (SLAs) often included explicit language about how much deviation was acceptable.
But while temperature control does affect equipment reliability, and appropriate management and monitoring remain essential for business continuity, new research supports the idea that higher operating temperatures benefit most data centers.
So, how do you know when to raise the temperature, and by how much? And what changes are recommended to reduce business risk?
Should every data center cut back on cooling?
When we ask, many data center managers can’t tell us why they set the thermostat at a particular temperature. It’s just the way it has been done for years.
But when well-known companies -- including Facebook, Google, Yahoo!, Korea Telecom and others -- publicize their high temperature ambient (HTA) successes at 80°F and above, we all pay attention. And when research and on-the-ground examples support the efficacy of HTA data centers, suddenly we are all tempted to reduce our cooling costs by just pushing up the thermostat.
Before arbitrarily cutting back on cooling and letting the ambient temperature rise, however, review your equipment warranties, SLAs and compliance requirements. If your data center supports legacy systems that require lower operating temperatures, or your organization is subject to extremely stringent compliance requirements, continue to take a very conservative approach.