
3 common habits of data center water stewards

Data centers are surprisingly thirsty. The good news is that water efficiency and energy efficiency go hand-in-hand.

Plenty of energy has been expended to make power-hungry data center operations more efficient consumers of electricity. Far less time has gone into thinking about their water consumption, but drought conditions in California—home to an estimated 800 such facilities—have forced the issue even among companies that don’t actually maintain operations there.

The concern: facilities packed with computer servers, networking gear and other information technology equipment require a substantial amount of water to keep them cool. 

How much? A typical cooling tower could require up to 8 million gallons of water per year per megawatt of electricity, according to figures cited by the Uptime Institute, an industry organization dedicated to best practices for efficiency.

If you're not familiar with that statistic, you're not alone. Uptime estimates that just one-third of companies managing substantial data centers formally track their water consumption through measures such as the Water Usage Effectiveness (WUE) metric advocated by another industry group, The Green Grid. WUE compares the water a facility consumes against its power consumption; the closer the ratio is to zero, the better. Facebook has been especially transparent about its results.
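To make the arithmetic concrete, here is a minimal sketch of a WUE-style calculation built on the figures cited above. The inputs, a 1-megawatt load running around the clock at the worst-case water figure cited by Uptime, are illustrative assumptions, not measurements reported by any of the companies mentioned.

```python
# Back-of-the-envelope WUE-style calculation (liters of water per kWh of energy).
# All figures are assumptions drawn from the statistics quoted in this article.

GALLONS_TO_LITERS = 3.785

annual_water_gallons = 8_000_000   # Uptime's "up to" figure per megawatt, per year
it_load_mw = 1.0                   # assumed 1 MW load, running year-round
hours_per_year = 8_760

annual_water_liters = annual_water_gallons * GALLONS_TO_LITERS
annual_energy_kwh = it_load_mw * 1_000 * hours_per_year

wue = annual_water_liters / annual_energy_kwh
print(f"Approximate WUE: {wue:.2f} liters per kWh")  # roughly 3.5 L/kWh in this worst case
```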

“If you don’t know how much you’re using, then you can’t see where the efficiencies are or how design changes are affecting a resource that’s scarce,” Rachel Peterson, director of data center strategy and development at Facebook, told attendees during a panel discussion at a tech industry conference in October.

The good news is that awareness is reaching a tipping point in more regions across the United States, despite the relatively low cost of water versus electricity. “There’s an awareness that we should actively be managing this, not just assuming [water] is a given, an unlimited resource,” said Aaron Binkley, director of sustainability for Digital Realty, a San Francisco-based data center management company. His remarks came during a separate moderated discussion on the topic at VERGE 2015.

“Water usage concerns are very new to our world, but it’s an important question,” echoed Joe Parrino, senior vice president of data center operations for T5 Data Centers, who also took part in the VERGE panel.

The experts speaking during these two separate events offered many insights into how their teams are handling pressure to decrease water consumption—everything from simple practices such as turning up the thermostat to more innovative experiments, such as using waterless cooling technologies. Here are three basic best practices that your sustainability and operations teams might consider emulating.

1. Question your electricity sources and reduce your dependence on them

The biggest water consumer associated with an individual data center isn’t necessarily one that’s under your organization’s direct control. It's linked to the power source used to run it.

It’s well-accepted that power generation is a thirsty activity, and generally speaking, power from nuclear and natural gas facilities gulps more water than solar and wind farms do.

“Look at your provider and where their electricity comes from; that will tell you if your provider is managing water responsibly,” Parrino said.

Regardless of the source, reducing power consumption will automatically decrease a facility’s water footprint, according to the experts speaking at VERGE.

“If I had the time to have someone at my facility do one project and spend the next 40 hours of their time doing one project that reduces energy consumption by 10% or water consumption by 10%, I’m going to have them do electricity, energy consumption, all the time,” Binkley said. “Frankly, it probably delivers those water savings anyway.”

2. Consider using “gray” or recycled water

Although on-site water recycling technologies are still relatively expensive, a growing number of operators are switching to “gray” or reclaimed water for data center cooling applications. “Your WUE might not necessarily get better, but this seems a more sustainable practice for the long term,” said Keith Klesner, vice president of strategic accounts for Uptime.

Digital Realty uses this strategy wherever it has been able to negotiate contracts with local water utilities, and this supply can be 10% to 20% less expensive than the cost of potable water, according to Binkley.

The downside is that your company might have to invest in the pipes or infrastructure necessary to connect to the reclaimed supply. At the very least, this requires a conversation with the local water utility, and it could take months if not years to make it happen. The upside is that existing connections to potable resources can serve as a backup option, which is a plus from a reliability standpoint, Binkley said.

3. Optimize server equipment and settings

Although this usually gets less attention than a data center’s layout or the method used to keep it cool, the thermal efficiency of the server hardware, communications routers and data storage devices housed in it has a direct impact on how much water is needed for cooling.

This was a huge motivation behind Facebook’s decision to specify the designs for all of this equipment through an initiative called the Open Compute Project, which is dedicated to ongoing refinements in this area.

“We rethought every piece of our stack, starting from the data center itself to the servers and storage, to the network, even to the software, to see how they were integrating and where we had efficiency issues,” Facebook’s Peterson said. 

Another way to run servers more efficiently—while using less energy and water—is to identify and eliminate sloppy software programming techniques that increase the processing time necessary for applications.

“As a company, we’re taking a look at the average utilization across the company and now at individual services,” said Jennifer Fraser, data center engineering manager for Twitter, who discussed the issue alongside Peterson.

That might involve, for example, measuring how new application features affect the amount of work that a server needs to perform. “Engineers are hypersensitive to their impact,” she said.
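Fraser did not describe Twitter's internal tooling, but the underlying idea, measuring how much extra compute a change adds before it ships, can be sketched in a few lines. The snippet below is a hypothetical illustration; the request handlers are stand-ins, and none of the names reflect Twitter's actual services.

```python
# Hypothetical sketch: estimate how much extra CPU work a new code path adds
# by timing the CPU seconds each version consumes over the same number of
# requests. The workload functions below are placeholders for illustration.
import time

def cpu_seconds(workload, iterations=1_000):
    """CPU time (seconds) consumed by running `workload` `iterations` times."""
    start = time.process_time()
    for _ in range(iterations):
        workload()
    return time.process_time() - start

def handle_request_baseline():
    # Stand-in for the existing request path.
    return sum(i * i for i in range(2_000))

def handle_request_with_feature():
    # Stand-in for the same path with a hypothetical new feature enabled.
    return sum(i * i for i in range(3_000))

if __name__ == "__main__":
    before = cpu_seconds(handle_request_baseline)
    after = cpu_seconds(handle_request_with_feature)
    print(f"Baseline: {before:.3f} CPU-s  With feature: {after:.3f} CPU-s")
    print(f"Relative increase: {(after / before - 1) * 100:.0f}%")
```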
