Computing is found everywhere across the globe, and it is cheap enough to be in the hands of a subsistence farmer. Let's review four emerging trends that harness the power, popularity, and economics of IT to boost sustainability programs.
The foundation making the trends possible
Consider the following examples as a refresher on how technology continues to advance:
• A smartphone is more powerful than all the NASA computers used in 1969 to send two astronauts to the moon, reports Michio Kaku in his book "Physics of the Future." A Sony PlayStation eclipses the power of a military supercomputer used in 1997.
• While power increased, computing costs plummeted. The computing power that cost the military millions of dollars in 1997 will set you back only $300 in today's PlayStation.
• With declining costs, powerful technology is now widespread. There are 4.6 billion cell phones in use; more than half the world's population owns one. Twenty-seven percent of those phones are "smart," with the greatest penetration in developed markets. There are also 1 billion broadband subscribers, allowing for the quick transfer of data needed to make applications useful.
The four IT trends
1. Shifting to cloud computing to provide new apps quickly
Just like the Zipcar, cloud computing is based on sharing. An organization uses computing power owned by a third party when needed, rather than acquiring its own data center and equipment with all the capital costs and ongoing overhead.
This allows an organization to focus on its core competency, the activities that differentiate it from its competitors. Given that most data centers continue to be alarmingly underutilized at 10 to 20 percent on average, shifting management of computing equipment to a cloud provider will likely provide big benefits for your organization and the environment.
But more exciting than optimization is the opportunity to provide new applications quickly, even if the organization is small. All of the apps reviewed in the next sections are available in the cloud.
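The economics behind this shift come down to utilization. A rough sketch, using purely illustrative figures (only the 10 to 20 percent utilization rate comes from the discussion above; the capital cost, lifetime, and cloud rate are assumptions), shows why owned hardware that sits mostly idle costs far more per hour of useful work:

```python
# Hypothetical cost sketch: why low utilization favors the cloud.
# Dollar figures and lifetimes below are illustrative assumptions.

def owned_cost_per_useful_hour(capital, lifetime_hours, utilization):
    """Cost per hour of *useful* work when you own underutilized hardware."""
    return capital / (lifetime_hours * utilization)

def cloud_cost_per_useful_hour(hourly_rate):
    """In the cloud you pay only for the hours you actually use."""
    return hourly_rate

# Assume $100,000 of equipment amortized over 3 years (~26,280 hours),
# running at the 15 percent average utilization cited for data centers.
owned = owned_cost_per_useful_hour(100_000, 26_280, 0.15)
cloud = cloud_cost_per_useful_hour(10.0)  # assumed on-demand rate

print(f"owned: ${owned:.2f} per useful hour, cloud: ${cloud:.2f}")
```

At full utilization the owned hardware would be cheaper; it is the chronic idleness that tips the balance toward sharing.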
2. Deploying sensors to enable new types of automation
Poor driving contributes to 93 percent of all car accidents, 4.9 million of them last year alone. We spend an extra 11 minutes per day on average stuck in traffic, and much more in highly congested areas. Poor driving practices and congestion waste 3.9 billion gallons of gas, about 4 percent of total U.S. oil production. Google hopes to resolve these issues with its self-driving car.
In 2005, the best self-driving car completed a closed course in seven hours at an average speed of 19 mph. In just five years, Google developed a fleet of self-driving prototypes that have since driven 190,000 miles in street traffic, busy freeways and the open road.
How did Google achieve the breakthrough? Taking a whole systems approach, Google developed the brain, memory and eyes for its self-driving system.
The car uses an impressive algorithm and enormous processing power as its brain. Feeding the brain are the "eyes": multiple sensors including a laser guidance system on the roof, radar mounted on the bumpers, a GPS system and various cameras. The sensors capture data in real time, but just as importantly, they capture road data prior to any trip. That road data is stored as memory.
Live streaming data from the sensors is combined with the historical road data to distinguish static objects, such as traffic lights, from dynamic objects, such as pedestrians. The Google car crunches more data, and does so faster and more reliably, than a human driver.
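The static-versus-dynamic distinction can be sketched in miniature: if a live detection lines up with an object already recorded in the prior map, it is static; if nothing in the map explains it, it is probably moving. The map format, coordinates, and tolerance below are hypothetical assumptions for illustration, not Google's actual data structures.

```python
# Minimal sketch: classify live detections against a prior road map.
# All coordinates and thresholds are made-up illustrative values.
from math import hypot

# "Memory": static objects recorded on a prior mapping run, as (x, y) points
# in meters (e.g. traffic lights, signs).
prior_map = [(10.0, 5.0), (42.0, 7.5)]

def classify(detection, prior_map, tolerance=1.0):
    """Label a live detection 'static' if it matches a mapped object
    within `tolerance` meters; otherwise treat it as 'dynamic'."""
    x, y = detection
    for mx, my in prior_map:
        if hypot(x - mx, y - my) <= tolerance:
            return "static"
    return "dynamic"

print(classify((10.2, 5.1), prior_map))  # near a mapped traffic light
print(classify((25.0, 3.0), prior_map))  # unmapped: likely a pedestrian
```

The real system fuses lidar, radar, and camera returns probabilistically, but the core idea is the same: the prior map lets the car spend its attention on what has changed.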