Get smarter on artificial intelligence
Artificial intelligence is one of the most divisive issues in high-tech. Just ask Tesla CEO Elon Musk, who is so concerned about the ethical implications of the technology that he recently co-founded a $1 billion-backed, non-profit organization called OpenAI dedicated to responsible AI research and development.
Musk and his colleagues aren’t trying to hinder progress. After all, AI is at the center of many "smart" technologies showing up in autonomous vehicles (such as the ones Tesla is working on), as well as in urban cityscapes, commercial buildings, supply chains and agricultural operations around the world. They just want AI researchers and entrepreneurs to make sure the potential human impact is a foremost consideration, rather than an afterthought.
"AI systems today have impressive but narrow capabilities," OpenAI declared upon its launch in mid-December. "It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly."
The list of tech bigwigs collectively committing more than $1 billion to OpenAI over time includes Amazon Web Services, venture capitalist Peter Thiel (co-founder of PayPal and big data analytics powerhouse Palantir) and computer scientist Alan Kay (a world-renowned software programmer).
We’ve come a long way, baby
The organization’s very creation marks a tipping point for AI, the topic of frequent headlines throughout 2015.
Most people of a certain age associate the phrase with HAL, the nefarious computer in the sci-fi classic "2001: A Space Odyssey" that seizes control of the spaceship. Generally speaking, however, AI refers to the process of using algorithms to control how computers or other machines respond to input. An alternate phrase is machine learning. Either way, the systems are trained by humans, and the idea is to teach a device to mimic human behavior. One of the most famous real-world examples is IBM’s Deep Blue supercomputer, which learned to play chess so well that it eventually bested then-world champion Garry Kasparov.
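The idea of "training" a system on examples can be sketched in a few lines. This is a deliberately minimal illustration, not how Deep Blue or any commercial system works; the data and labels are invented for the example.

```python
# A minimal sketch of "training" in machine learning: the system is shown
# labeled examples and derives a rule it can apply to new input.
# All names and data here are illustrative, not from any real system.

def train(examples):
    """Compute the average feature value seen for each label."""
    sums, counts = {}, {}
    for feature, label in examples:
        sums[label] = sums.get(label, 0.0) + feature
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, feature):
    """Pick the label whose learned average is closest to the input."""
    return min(model, key=lambda label: abs(model[label] - feature))

# Train on one numeric feature (say, the brightness of a pixel region).
model = train([(0.1, "dark"), (0.2, "dark"), (0.8, "light"), (0.9, "light")])
print(predict(model, 0.15))  # → dark
print(predict(model, 0.95))  # → light
```

The human's role is supplying the labeled examples; the machine's role is extracting a pattern from them — the essence of "training" as described above.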
Right now, the AI market is tiny: about $202.5 million in revenue last year for commercial applications and installations, according to a market forecast from research firm Tractica. Things get really interesting around the 2019 timeframe, the firm predicts, with revenue related to AI applications poised to reach $11.1 billion by 2024.
"While artificial intelligence has been just beyond the horizon for decades, a new era is dawning," said Tractica analyst Bruce Daley. "Systems modeled on the human brain such as deep learning are being applied to tasks as varied as medical diagnostic systems, credit scoring, program trading, fraud detection, product recommendations, image classification, speech recognition, language translation and self-driving vehicles. The results are starting to speak for themselves."
Going deeper with neural networks
One of the most active AI categories is "deep learning" — more venture money has been dedicated to startups focused on this discipline than to any other AI category, according to data from VentureScan. Intel Capital is one of the most prominent and generous investors.
Deep learning is closely related to neural network software, which uses the power of many connected computers to mimic the behavior of biological nervous systems and the brain to interpret and "learn" from information. The idea has been around for decades, but advances in processing speeds — at far more reasonable prices — have inspired a frenzy of experiments. One of the most prominent and largest projects is Google Brain, a network of more than 1,000 computers that the Internet company uses for facial recognition applications.
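At the heart of any neural network is an artificial "neuron" that weighs its inputs and adjusts those weights when it makes mistakes. The classic perceptron learning rule, shown below purely for illustration, captures the idea at its smallest scale (real networks chain thousands of such units across many layers):

```python
# A minimal sketch of an artificial "neuron": weighted inputs, a threshold,
# and a learning rule that nudges the weights after each mistake.
# This is the classic perceptron idea, shown only for illustration.

def neuron(weights, bias, inputs):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def learn(examples, epochs=20, rate=0.1):
    """Adjust weights toward correct answers, one example at a time."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - neuron(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Teach the neuron the logical AND function from labeled examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = learn(examples)
print([neuron(weights, bias, x) for x, _ in examples])  # → [0, 0, 0, 1]
```

Nothing here is hard-coded: the correct behavior emerges from repeated exposure to examples, which is what "learning from information" means in this context.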
Deep learning works in stages, which means systems become smarter and more sophisticated over time as they ingest more information. In the case of Google Brain, for example, the system started out by detecting the stark contrast between light and dark pixels that made up an image. Over time, however, it became capable of "seeing" the difference between objects and even human faces.
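The staged idea can be illustrated with a toy pipeline: a first stage extracts low-level features (sharp light/dark transitions), and a second stage interprets those features at a higher level. The thresholds and labels below are invented for the example and bear no relation to Google Brain's actual architecture:

```python
# A toy illustration of processing in stages: stage 1 finds light/dark
# transitions in raw pixel values; stage 2 uses those low-level features
# to make a higher-level judgment. All values are invented assumptions.

def find_edges(pixels, threshold=0.5):
    """Stage 1: mark positions where brightness jumps sharply."""
    return [i for i in range(1, len(pixels))
            if abs(pixels[i] - pixels[i - 1]) > threshold]

def describe(pixels):
    """Stage 2: interpret the low-level edge features."""
    edges = find_edges(pixels)
    return "object against background" if len(edges) >= 2 else "uniform scene"

# A bright object (0.9) on a dark background (0.1), then a flat scene.
print(describe([0.1, 0.1, 0.9, 0.9, 0.1]))  # → object against background
print(describe([0.5, 0.5, 0.5, 0.5, 0.5]))  # → uniform scene
```

In a real deep network, each stage's features are learned rather than hand-written, but the layered progression from pixels to edges to objects follows the same shape.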
A more familiar example of deep learning in action is Siri, the personal assistant found on Apple iPhones, and close relatives such as Microsoft Cortana and Google Now. All three of these voice-activated applications use deep learning to become better at recognizing speech over time, much like a child becomes better at recognizing nuances that distinguish spoken words.
Legendary venture capitalist Steve Jurvetson believes deep learning will be critical for pushing applications such as parking guidance systems, smart lighting and building controls, and autonomous vehicles out of the pilot phase. One big reason: the amount of sensor input and other data required to make them truly useful is staggering. By programming these applications to adapt their behavior when certain conditions are met, however, they become much more practical.
"You have complex data sets, such as from measuring remote-sensing data from satellites and ground sensors, or from the Internet of Things, and the sensors that are all over the planet in various devices," Jurvetson told GreenBiz last fall. "Think of all the mobile phones that have temperature sensors, and cars that have sensors. We’re not really using that data for anything, but we could. Putting all the data on a screen wouldn’t necessarily give you insights. But you could imagine some kind of a learning harness being applied to try to figure out patterns in those data sets."
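One simple version of the "learning harness" Jurvetson describes is learning the normal range of a sensor stream from its history, then flagging readings that fall outside it. The sketch below is purely illustrative — the data, tolerance and units are invented assumptions, not a real deployment:

```python
# A hedged sketch of a "learning harness" over sensor data: learn the
# normal range from historical readings, then flag outliers instead of
# just putting raw numbers on a screen. All values here are invented.

def learn_normal_range(history, tolerance=3.0):
    """Derive an expected range from the mean and spread of past readings."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    spread = variance ** 0.5
    return mean - tolerance * spread, mean + tolerance * spread

def is_anomaly(reading, bounds):
    """True if a new reading falls outside the learned range."""
    low, high = bounds
    return not (low <= reading <= high)

# Hypothetical temperatures (°C) reported by a fleet of phone sensors.
history = [21.0, 21.5, 20.8, 21.2, 21.1, 20.9, 21.3]
bounds = learn_normal_range(history)
print(is_anomaly(21.4, bounds))  # → False
print(is_anomaly(35.0, bounds))  # → True
```

The point is the one Jurvetson makes: the insight comes not from displaying the data but from a model of what the data normally looks like.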
One relatively simple example of this idea in action is "Lightswarm," a new form of artificially intelligent façade designed by San Francisco design firm Future Cities Lab. The system responds to auditory cues, rather than motion, turning lights on and off as people walk by on a city sidewalk. It can tell in real time where a person is by gauging the closeness of his or her footsteps or voice. Each module is made of 3D-printed components and programmed with algorithms that guide its behavior based on external auditory and lighting conditions.
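A toy model of one such module might look like the following. The loudness scale and threshold are invented for illustration and are not drawn from Future Cities Lab's actual design:

```python
# A toy model of a sound-responsive façade module: the light switches on
# when nearby sound (footsteps, a voice) is loud enough to suggest
# someone is close. The scale and threshold are invented assumptions.

def module_state(sound_level, threshold=0.6):
    """One module: 'on' when the sensed sound level implies proximity."""
    return "on" if sound_level >= threshold else "off"

# Sound levels sensed along the sidewalk (0 = silence, 1 = very close).
print([module_state(s) for s in [0.1, 0.3, 0.7, 0.9, 0.4]])
# → ['off', 'off', 'on', 'on', 'off']
```

Chained across many modules, even a rule this simple produces the swarm-like ripple of light that follows a passerby down the block.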
When it comes to sustainable business applications, AI will be central to next-generation building automation systems that gather data from myriad sources — from weather forecasts to local traffic conditions to corporate inventory systems — and then make decisions to optimize their operations. Scenarios might include choosing the best time for manufacturing equipment to run, switching lights off and on, selecting the best energy source based on solar or wind conditions, or even regulating conference room temperatures proactively when lots of meetings are scheduled.
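The decision rules in such a system could be as straightforward as the sketch below. The inputs, names and policy are invented assumptions meant only to show the shape of the logic, not a real building-automation product:

```python
# A sketch of decision rules a next-generation building system might
# apply, combining data feeds into operating decisions. The policy and
# parameter names are invented assumptions, not a real product's API.

def choose_energy_source(solar_output_kw, wind_output_kw, demand_kw):
    """Prefer whichever renewable source can cover current demand."""
    if solar_output_kw >= demand_kw:
        return "solar"
    if wind_output_kw >= demand_kw:
        return "wind"
    return "grid"

def precool_meeting_rooms(scheduled_meetings, threshold=5):
    """Proactively cool shared spaces when the calendar is busy."""
    return scheduled_meetings >= threshold

print(choose_energy_source(solar_output_kw=40, wind_output_kw=10, demand_kw=30))  # → solar
print(choose_energy_source(solar_output_kw=5, wind_output_kw=8, demand_kw=30))    # → grid
print(precool_meeting_rooms(scheduled_meetings=7))  # → True
```

In practice the thresholds themselves would be learned from the weather, traffic and calendar data the article describes, rather than fixed by hand as they are here.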
AI could also play a role in supply chains. For example, image sensors might be "trained" to identify lumber that could be from endangered forests by examining the unique traits of the wood. The possibilities are virtually endless.
Why now? Round up the usual suspects
The primary reason we’re hearing so much about AI and deep learning is because several of the most influential tech companies in the world — including Apple, Facebook, Google, IBM and Microsoft — are racing to establish leadership positions.
Many have snapped up startups to improve their AI expertise more quickly. Just as significant, some (Facebook, Google and Microsoft) are sharing the code behind some of their most successful projects, with the hope of accelerating market acceptance. Google, for example, has open-sourced its TensorFlow technology, which it uses for speech recognition, photo searches and the new automated reply feature in Gmail.
"We hope this will let the machine learning community — everyone from academic researchers to engineers to hobbyists — exchange ideas much more quickly through working code rather than just research papers," wrote Google’s research team.
That's an idea that's far from controversial — one that's bound to accelerate AI adoption in the months to come.