Tackling Old Problems with New Solutions at Uptime's Symposium

For all the forward-looking solutions on offer, or just on the horizon, at the kickoff of the Uptime Institute's annual Symposium this week, a number of very old problems remain unsolved.

I've been attending data center conferences for the last four years, and while it has been heartening to witness the steady expansion of focus on green IT and energy efficiency, at the same time it's frustrating to see the slow pace of change on some of the most intractable problems.

Over the course of my admittedly short history covering green IT, the tools of the trade have evolved rapidly. Whether you're talking about virtualization, data center infrastructure management tools, modular designs, or even the metrics to measure efficiency, a sea change has occurred in how companies address IT's impact.

What hasn't changed, to judge by the litany I heard at this week's Uptime Institute Symposium, are the organizational obstacles that companies simply aren't addressing, and that keep those tools from being put to work.

First and foremost, the barriers between the IT and facilities departments continue to be a problem. Even as building management systems merge with data center management systems, and as IT starts to become the driver of all business operations, the story over and over at Uptime was that these two teams are at best uncommunicative, and at worst working at cross purposes.

"We need to fix this problem [of holistic data center management]," explained Steve Hassell, the president of Avocent, a division of Emerson Network Power. "We need people with one foot in the IT side of the business to combine with the facilities side of the business. You just can't do it with software, you need hardware and software."

Also yesterday, the Symposium featured a presentation from Akhil Docca of Future Facilities about how to use computational fluid dynamics to more accurately model airflow, and thus HVAC, in a data center. But at its core, his presentation was an attempt to bridge the gap between IT and facilities when it comes to quickly and efficiently commissioning and modifying data center deployments.

For more on this lingering issue, you can also see yesterday's article by Chip Pieper, "How IT Tools are Reshaping the Future of Facilities Management."

In addition to the ongoing division between IT and facilities, the age-old problem of "who pays the energy bill" still looms large. Although the problem partly lies between IT and facilities, the perception I got from yesterday's event is that -- even though the problem has been phrased much the same way for years on end -- the CIO still doesn't see the energy bill.

"When does a CIO care about energy?" Joe Polastre, CTO of Sentilla asked me. "When he or she can't deploy a new application." The implication being that energy use is still of secondary priority to getting to "five-nines" levels of uptime and availability.

That shouldn't be surprising, given that I was at the Uptime Institute's annual gathering. And at least one presenter I saw yesterday made the case that uptime is still the fundamental challenge, even for green-minded data center owners. Once an enterprise has all the uptime and availability it needs to meet its core business objectives, then and only then can the IT team work on getting more efficiency out of its data centers.

"Today you hear a lot of people talking about efficiency," Avocent's Steve Hassell said. "But that is not the top concern of a CIO. It comes down to availability. [Poor energy] efficiency will get you yelled at, but [a lack of] availability will get you fired."

The problem with the never-ending chase for more uptime, more availability and more compute power is that it's exactly that -- never-ending. And it leads to gross inefficiencies in the IT industry.

Sentilla's Polastre, after talking about his company's roots in measuring the efficiencies of aluminum smelters, contrasted what is taken as acceptable in IT with what would fly in other industries.

"Why can a data center run at 8 percent of its capacity?" Polastre asked. In any other industry you'd lose your job in a heartbeat. Instead, IT is benchmarked with the wrong rulers: Polastre said that such low efficiencies are possibly only because the data center is measured only by tech metrics. A shift to using business metrics would put those efficiencies under the scrutiny that will drive change much more quickly.

In yesterday's closing keynote, George Slessman, the CEO of i/o, talked primarily about how the "snowflake" data center -- where each facility is custom-built from the ground up, and is more or less unique -- is at an end, largely because of the growing need to address both the inefficiency of today's data centers and the high costs of maintaining them over their 12- to 15-year lifespan.

But one telling tidbit in Slessman's presentation laid out how the focus on uptime hinders green IT. "Every 18 months the amount of computations a computer can do per kilowatt of power is doubling," he said. But the power consumption of technology doesn't increase evenly with speed increases. "If you want your computer to go faster, you want the cycle times to increase, it requires more energy." In fact, over the course of several generations of CPU, an 8x increase in performance resulted in 36x more energy use, Slessman said.
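Taken at face value, those figures mean the energy cost of each unit of performance got worse, not better, across those generations. A quick check of the arithmetic (my own, using Slessman's numbers):

```python
# If performance rose 8x while total energy use rose 36x, the energy
# spent per unit of performance climbed rather than fell.

performance_gain = 8.0   # 8x more compute across several CPU generations
energy_gain = 36.0       # 36x more energy consumed over the same span

print(f"Energy per unit of performance: {energy_gain / performance_gain:.1f}x worse")  # 4.5x
```

In other words, chasing raw speed for uptime's sake pays a steep and compounding energy penalty.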

I don't mean to imply that either the universe of green IT or the Uptime Symposium was a bleak and hopeless affair -- far from it.

I'll have more to say about the many positive innovations that I saw at yesterday's event later today. But as the pace of progress continues or even picks up speed, the fact that so many organizations apparently still have not addressed the fundamental roadblocks to energy-efficient computing suggests that another approach -- one that puts efficiency ahead of productivity -- is more urgently needed.