Why sustainability execs must learn to love artificial intelligence

The Photo Group, 2015
Steve Jurvetson, the venture capitalist who has starred in tech success stories from Hotmail to Tesla, at VERGE 2015.

Just about the only emerging information technology guaranteed to generate more fear, uncertainty and doubt among humans than job-stealing robots is smarter-than-us artificial intelligence.

Yet sensors, systems and software that augment and automate decision-making, then take action based on the answers, will be vitally important for scaling many so-called smart solutions beyond early pilot tests. That goes for everything from urban parking guidance apps to energy-sipping lighting installations to autonomous vehicle steering controls.

The reason is pretty simple: The amount of information behind any one of these applications is staggering — some suggest as many as 150 billion "things" may be connected to the Internet within the next decade, creating myriad sources for interpretation. We’re talking everything from weather forecasts to location-specific traffic updates, building occupancy statistics and billions of images cataloging the world around us.

No single person can interpret all those data points quickly, if at all. However, by programming machines to adapt behavior when certain conditions are met, society and business can move another step forward to making sustainable operations more systemic. That theme was sounded multiple times last week during VERGE 2015.

"Big Data is the headache; deep learning is the solution," said well-respected venture capitalist Steve Jurvetson, partner at Draper Fisher Jurvetson and an early investor in multiple billion-dollar companies including SolarCity, Tesla Motors and Twitter.

During an onstage interview at VERGE, Jurvetson said it is no longer enough simply to find patterns in data — something that many software applications already do pretty well. The next imperative is teaching the software to make connections that are too complex for humans to perceive, a field that often goes by the name "machine learning."

"You generate a computer program that in and of itself is capable of learning something," he said. "It’s about adaption." 

To illustrate, Jurvetson pointed to the millions of sensors already dedicated to collecting information about which lights are on or off, or when temperatures rise or fall dramatically. Using that data, for example, a building’s elevators might be "trained" to prioritize certain floors under certain conditions, such as when a given share of offices goes dark during a set timeframe.
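The elevator example can be sketched in a few lines of Python. This is a purely illustrative toy, not any real building-automation system: the function name, the occupancy feed and the idea of ranking floors by how many offices have gone dark are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: rank floors for elevator service using
# hypothetical light-sensor data. Floors where most offices have gone
# dark are pushed to the back of the queue.

def prioritize_floors(lights_on, offices_per_floor):
    """Return floors ordered from most- to least-occupied.

    lights_on: {floor: number of offices with lights on}
    offices_per_floor: {floor: total offices on that floor}
    """
    ranked = []
    for floor, total in offices_per_floor.items():
        lit = lights_on.get(floor, 0)
        dark_fraction = 1 - lit / total
        ranked.append((dark_fraction, floor))
    # Mostly-lit floors (low dark_fraction) sort first.
    return [floor for _, floor in sorted(ranked)]

# Floor 3 is almost entirely dark, so it drops to last place.
offices = {1: 10, 2: 10, 3: 10}
lit = {1: 9, 2: 6, 3: 1}
print(prioritize_floors(lit, offices))  # [1, 2, 3]
```

A learned system would replace the fixed ranking rule with behavior adapted from historical patterns, but the input — streams of simple on/off sensor readings — is the same.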

Machine learning also could help prioritize what data is collected in the first place. "Over time, your algorithms would guide what to turn off. … There is an insane amount of images and data collected from all these systems," Jurvetson said.

What if the world's smartest people can't solve the world's biggest problems? Perhaps feeding data to a "brain in a box" could help, Jurvetson suggested.

Several projects highlighted during VERGE last week offer a glimpse of the possibilities.

One example is the interactive "Lightswarm" created by San Francisco design firm Future Cities Lab. The installation — on display at the city’s Yerba Buena Center for the Arts — responds to auditory cues, turning lights on and off as a person walks by. It senses both footsteps and spoken words, concentrating illumination only where it’s needed at a given moment.

The Photo Group, 2015
The Lightswarm installation features urban sensors in a collection of 3D-printed modules with LEDs that respond to nearby noise by changing color.

Another proof point is a project that uses publicly available data about traffic incidents to flag places that are particularly accident-prone.
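The idea behind that project — mining open incident reports for accident-prone spots — can be sketched with a simple grid-bucketing approach. The coordinates, grid size and threshold below are all hypothetical; this is a minimal sketch of the general technique, not the project's actual method.

```python
# Illustrative sketch: flag accident-prone locations by bucketing
# publicly reported incident coordinates into a coarse grid and
# keeping cells that exceed a report threshold. All values are
# hypothetical examples.
from collections import Counter

def accident_hotspots(incidents, cell_size=0.01, min_count=3):
    """Return grid cells with at least min_count reported incidents.

    incidents: iterable of (latitude, longitude) pairs
    cell_size: grid resolution in degrees (~1 km at this latitude)
    """
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return {cell for cell, n in cells.items() if n >= min_count}

# Three reports cluster in one cell; the lone fourth report does not.
reports = [(37.781, -122.411), (37.782, -122.412),
           (37.780, -122.410), (37.700, -122.500)]
print(accident_hotspots(reports))  # {(3778, -12241)}
```

A production system would weight reports by severity and recency, but the core step — aggregating open data into locations worth flagging — is the same.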

"We’re interested in how the AI of a building actually starts to harness things like social media and the predictive data that’s already kind of out there," architect Jason Kelly Johnson recently told GreenBiz. "It’s not just about a building being responsive and reacting instantaneously. It’s about a building becoming predictive and evolutionary."

Perhaps the most important consideration when designing machine-learning systems is identifying ideal behaviors or environmental conditions worthy of emulation. That’s one goal of smart car company Nauto, which is using in-vehicle cameras to record drivers’ facial expressions and to correlate those images with what’s going on around the vehicle.

Nauto CEO Stefan Heck uses the phrase "augmented intelligence" to describe his company’s system, already tested in 23 cities. Aside from fine-tuning the technology with fleet management organizations, Nauto will work with insurance companies to better understand conditions that can cause driver distraction. The data eventually could be used to infuse vehicles — autonomous and otherwise — with better safety and efficiency metrics.

"This is a way to upgrade the human driver, by working with excellent human drivers," Heck said.