Coronavirus, the stay-at-home workstyle, and cloud energy consumption
Between video calls, collaboration applications and streaming services, data centers are working overtime. Here’s some perspective on how much power that requires and why efficiency matters more than ever.
The electricity consumption profile of companies around the world has been altered profoundly over the past month. As countries have adopted "shelter in place" measures to slow or at least control the spread of the coronavirus, corporate offices have shuttered and industrial production has been sharply curtailed.
One obvious exception: the world’s data centers and internet infrastructure, which are experiencing an unprecedented spike in usage.
Just one example: As homes have become places of both business and education, the use of videoconferencing and collaboration services has skyrocketed.
Research suggests that the average daily usage of the Zoom application is up more than 300 percent since December (although that was before the backlash about its security in early April). Meanwhile, Microsoft is reporting a massive uptick in adoption of its Teams application — with 44 million people using the application as of March 28, up 12 million from the week before.
That’s just two applications, and it doesn’t cover the impact of all those exercise videos or movies or multiplayer games that people of all ages are streaming to keep themselves occupied in the evenings or on weekends. I haven't seen any specific data measuring the change in consumption over the past month, but check your own utility bill for evidence.
The good news is many of the biggest cloud computing and data center providers have continued to focus on improving energy efficiency alongside their investments in sourcing renewable energy. This is a story I’ve been reporting on since February, but it has become even more relevant in the past few weeks.
How much more efficient? A study released in late February by researchers at Northwestern University, Lawrence Berkeley National Laboratory and Koomey Analytics found that total global data center energy consumption grew just 6 percent between 2010 and 2018, even though the number of "compute instances" grew 6.5-fold over the same period.
Put another way: That’s a modest increase, considering there was a 26-fold increase in data storage capacity, an 11-fold rise in data center IP traffic — and the number of physical computer servers was up 30 percent. Specifically, the sector consumed about 205 terawatt-hours in 2018, which represents about 1 percent of global electricity usage — about the same as back in 2010.
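To see just how dramatic that decoupling is, here is a back-of-the-envelope calculation using only the figures cited above (the per-instance number is an illustrative derivation, not a figure from the study itself):

```python
# Rough arithmetic on the study's headline figures: total energy up
# 6 percent, compute instances up 6.5x between 2010 and 2018.
energy_2018_twh = 205    # total global data center consumption, 2018
energy_growth = 1.06     # +6 percent versus 2010
compute_growth = 6.5     # compute instances, 2018 versus 2010

# Implied 2010 consumption
energy_2010_twh = energy_2018_twh / energy_growth

# Energy used per unit of compute in 2018, relative to 2010
relative_energy_per_instance = energy_growth / compute_growth

print(f"Estimated 2010 consumption: {energy_2010_twh:.0f} TWh")
print(f"Energy per compute instance, 2018 vs. 2010: "
      f"{relative_energy_per_instance:.2f}")
print(f"Implied efficiency gain: "
      f"{1 - relative_energy_per_instance:.0%} less energy per instance")
```

In other words, each unit of computing in 2018 used roughly a sixth of the energy it did in 2010, which is why total consumption stayed nearly flat while demand exploded.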
"Considering that data centers are energy-intensive enterprises in a rapidly evolving industry, we do need to analyze them rigorously," said Arman Shehabi, a research scientist with Lawrence Berkeley National Laboratory, who co-authored the study. "Less detailed analyses have predicted rapid growth in data center energy use, but without fully considering the historical efficiency progress made by the industry. When we include that missing piece, a different picture of our digital lifestyles emerges."
The big cloud computing providers, of course, have expended much energy on plans to consume less electricity.
Google, for example, can deliver seven times as much computing power today as it did five years ago using the same amount of electrical power, according to a blog post published in late February by Urs Hölzle, senior vice president of technical infrastructure. As I reported about 18 months ago, the company automates many tasks such as cooling and load balancing using artificial intelligence. That makes the typical Google data center roughly twice as energy-efficient as a conventional enterprise data center.
Zoom has trumpeted the sustainability benefits of virtual meetings for years, but has said very little about the energy used to support its services. Its strategic data center partner is the world’s biggest provider, Equinix, which supports a 100 percent clean energy goal (as of its latest sustainability report) and is a big customer of fuel cells from Bloom Energy.
A long legacy of energy efficiency
Remember that other company benefiting hugely from the spike of interest in team collaboration services? Well, Microsoft is testing all manner of approaches — in many places, it has figured out ways to minimize or eliminate mechanical cooling entirely, Brian Janous, general manager of energy and sustainability, told me when we caught up last month as the COVID-19 crisis began to deepen.
One of the more novel approaches, for example, is its test of submerged data centers — using "free water" rather than "free air" to keep servers, drives, networking gear and other gadgets cool. While it’s unreasonable to expect this design to dominate in the future, it could be particularly valuable for building new "edge" processing facilities near urban centers with access to water and with limited real estate. "That is where that type of data center becomes interesting," he said.
While business continuity requires that data centers maintain roughly a one-to-one ratio of backup resources to production capacity, Microsoft is studying ways those idle resources could help stabilize the local grid during times of peak demand or instability. It’s moving to lithium-ion battery technologies and deploying artificial intelligence to help with that mission, he said.
A model for the future?
One of the more intriguing approaches I’ve heard for rewriting the rules of cloud computing energy consumption is being developed by a technology startup called Lancium, which hails from Houston. Technically speaking, Lancium is not focused on data center energy efficiency in the traditional sense — but it is working on various technologies that help servers adjust their electricity consumption in unique ways.
Lancium’s high-level vision is to create "Pausable Data Centers" built specifically for high-performance cloud computing. These facilities — one is close to completion in Texas — are architected to operate alongside wind farms, particularly in regions where turbine output must sometimes be curtailed because more clean energy is being generated than the grid can absorb, occasionally pushing wholesale prices negative for operators.
These Pausable Data Centers are designed to balance the electricity being sent to the grid by absorbing that excess energy and using it for "interruptible" applications such as machine learning, industrial or scientific calculations, and modeling simulations. The workloads can be spun up or shut down very quickly.
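A minimal sketch of the "pausable" idea can make the concept concrete. This is not Lancium’s actual implementation; the price thresholds and the price feed are hypothetical, chosen purely for illustration:

```python
# Illustrative control loop for a "pausable" data center: run
# interruptible workloads when wholesale power is cheap or negative,
# pause them when prices rise. Thresholds here are hypothetical.

PAUSE_ABOVE_USD_MWH = 25.0   # stop computing when grid power gets expensive
RESUME_BELOW_USD_MWH = 5.0   # restart when wind is abundant or prices go negative

def decide(running: bool, price_usd_mwh: float) -> bool:
    """Return whether interruptible workloads should be running,
    given the current wholesale electricity price."""
    if running and price_usd_mwh > PAUSE_ABOVE_USD_MWH:
        return False         # curtail the data center instead of the wind farm
    if not running and price_usd_mwh < RESUME_BELOW_USD_MWH:
        return True          # soak up cheap or negative-priced energy
    return running           # hysteresis: otherwise hold the current state

# Walk through a run of hypothetical prices ($/MWh); negative values
# mean the grid is effectively paying consumers to absorb excess wind.
state = True
for price in [12.0, 30.0, 18.0, -4.0, 2.0, 40.0]:
    state = decide(state, price)
    print(f"price {price:7.2f} -> {'RUN' if state else 'PAUSE'}")
```

The gap between the two thresholds (hysteresis) keeps the facility from flapping on and off as prices hover near a single cutoff — the same reason thermostats use separate on and off temperatures.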
"The key to a 100 percent renewable grid is a really responsive load," Michael McNamara, co-founder and CEO of Lancium, told me when we chatted earlier this year.
According to the presentation it uses to pitch the concept, Lancium can offer cost savings of 50 to 90 percent over traditional cloud data centers by using ultra-low-cost wind power (helping wind farm operators), siting facilities on lower-cost real estate outside expensive urban areas and extending the life of older servers.
Lancium is discussing the concept both with wind farm operators in congested wind regions that could benefit from the offtake agreements and with organizations that require high-performance computing, such as scientific research agencies, universities or pharmaceutical companies, McNamara said.
Prior to the pandemic, Lancium was hoping to announce its first facility this spring. For now, it’s not making any official statements. However, the company just secured its fifth patent for its Smart Response power management software. The feature lets data centers adjust server electricity consumption based on factors such as price and other power grid conditions.
"As we are now commercializing Lancium Smart Response, we look forward to working with major data center operators to enable large cost savings and help them achieve their environmental, social and governance objectives," McNamara said in a statement about the new patent.
This article was updated April 8 to clarify the source of the new data center energy efficiency research.