Building a better data center, piece by piece

Image courtesy of IO.

While modular data center designs aren't yet commonplace, they've captured the attention of facilities and IT managers seeking to squeeze even more energy efficiency out of mission-critical IT infrastructure.

They aspire to results such as those of eBay's Project Mercury, an installation that boasts an average power usage effectiveness (PUE) of 1.2. PUE measures the ratio of total power drawn by a facility, including the overhead for cooling and power distribution, to the amount of electricity that actually runs servers, storage hardware and network gear. The "ideal" ratio would be 1.
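As a rough illustration of that arithmetic (the numbers below are hypothetical, not eBay's actual meter readings), a facility drawing 1,200 kilowatts in total to deliver 1,000 kilowatts to its IT gear would score a PUE of 1.2:

```python
# Illustrative PUE calculation (hypothetical numbers, not eBay's actual readings).
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# 1,200 kW drawn overall to run 1,000 kW of IT equipment yields a PUE of 1.2.
print(pue(total_facility_kw=1200, it_load_kw=1000))  # 1.2
```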

Now, one of the better-known players in modular design, Phoenix-based IO, thinks it has the statistical proof to suggest that such results aren't atypical.

That data, analyzed by utility Arizona Public Service (APS), suggests modular installations help reduce the energy overhead needed for cooling infrastructure by up to 44 percent.

"Our calculations did show that the IO.Anywhere modular data center uses less energy than a traditional data center build-out, at least in the case of this IO data center," said Wayne Dobberpuhl, APS energy efficiency program manager. "Moving forward, we are working with IO to establish the right baseline for assessing the appropriate rebate for this efficiency work under our Solutions for Business program."

Data center modules, sometimes referred to as pods, offer a more standardized, integrated approach to assembling the hardware, power distribution equipment and software needed for a company's core back-end IT infrastructure. Some can be deployed and configured using software in as little as 120 days, compared with the many months a typical construction project requires.

An inside look at IO's modular data center

The efficiency improvement calculated by APS translates into average annual savings of $200,000 per megawatt of average IT load, along with 1 million gallons of water saved and 620 metric tons of carbon dioxide eliminated, said Patrick Flynn, lead sustainability strategist for IO.

The data covers PUE measurements gathered over a 12-month period from both modular installations and traditional raised-floor architectures used within IO's 540,000-square-foot Phoenix co-location facility. These are real-world production environments, running a mixture of enterprise equipment and applications. The infrastructure shared the same chiller plant, building and operations staff, meaning the environments were subject to the same heat, humidity and climate factors.

When all was said and done, the PUE readings for the traditionally designed environments were an average of 1.73, compared with 1.41 for the modular ones. For perspective, the global average for traditional installations is 1.8 to 1.9, according to the Uptime Institute.
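Those averages line up with APS's figure: the overhead above a PUE of 1 drops from 0.73 to 0.41, roughly a 44 percent reduction. The back-of-the-envelope sketch below walks through that arithmetic and a rough check of the per-megawatt savings; the electricity rate in it is an assumption for illustration only, not a number supplied by IO or APS.

```python
# Back-of-the-envelope check of the reported figures.
TRADITIONAL_PUE = 1.73
MODULAR_PUE = 1.41
ASSUMED_RATE_PER_KWH = 0.07  # hypothetical commercial rate, for illustration only
IT_LOAD_KW = 1000            # one megawatt of average IT load
HOURS_PER_YEAR = 8760

# Overhead is the support energy (cooling, power distribution) per unit of IT energy.
traditional_overhead = TRADITIONAL_PUE - 1   # 0.73
modular_overhead = MODULAR_PUE - 1           # 0.41

reduction = (traditional_overhead - modular_overhead) / traditional_overhead
print(f"Overhead reduction: {reduction:.0%}")          # ~44%

# Annual overhead energy saved per megawatt of IT load, and its rough dollar value.
kwh_saved = (traditional_overhead - modular_overhead) * IT_LOAD_KW * HOURS_PER_YEAR
print(f"Energy saved: {kwh_saved:,.0f} kWh/year")      # ~2.8 million kWh
print(f"Approximate value: ${kwh_saved * ASSUMED_RATE_PER_KWH:,.0f}/year")  # ~$200,000
```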

"We recognize that PUE has an important place in customer assessments of a data center's cost effectiveness and environmental sustainability," Flynn said. "Part of our job at IO, therefore, is to validate PUE in actual deployments today, and to continually improve data center performance. Another part of our job, one we are working on, is to evolve the calculation of the PUE metric itself, so that it becomes a more meaningful tool for business."

IO has been around since 2007; its founders all have extensive experience in data center colocation operations and IT infrastructure. It claims 600 customers from all sorts of industries, including big names such as financial services company Goldman Sachs and the image and video hosting site Photobucket.

The IO infographic below compares the environmental footprint of a conventional data center with that of one built on a modular design.

A comparison of environmental performance for modular versus construction-based data centers.

All images courtesy of IO.