3 measurement pitfalls the sustainability world should avoid

Editor's note: This is the second part in a multi-part series examining pitfalls in sustainability measurement, drawing on examples from outside the business world.

The World Economic Forum’s recent Global Agenda Survey 2012, compiled before Superstorm Sandy slammed the U.S. coastline, shows that thought leaders worldwide ranked resource scarcity and climate change/natural disasters among the Top 10 global trends. Would these issues have ranked even higher if the survey had been done after Sandy hit?

We believe the answer is yes, based on the increased focus on climate challenges in politics, the press and corporate boardrooms. If Sandy’s clouds have any silver lining, it may be this opportunity to capitalize on rising awareness of sustainability as a business imperative. Given that, it’s all the more essential that our measurement systems be accurate and reliable, and that they account for human and organizational foibles. In this series, we offer a look at measurement pitfalls that have caused problems mostly outside the business world, in the hope of improving sustainability measurement within our field.

In Part I of this series, which over the course of several articles will outline a dozen sustainability measurement pitfalls, we looked at the unintended consequences that have hampered the decade-long, multibillion-dollar effort to measure U.S. student learning and achievement. This example from education of Pitfall 1: Counting what's easy to count rather than what's important holds practical lessons for business practitioners measuring the performance and value of their company's sustainability activities.

Here in Part II, we offer three more examples from criminal justice and global finance to highlight how human subjectivity influences and complicates measurement:

Pitfall 2: Same data, but seen from different worlds

We like to think that the numbers we cite are objective measurements. Numbers are numbers, data is data, facts are facts. That might be true, until a subjective, biased (perhaps unconsciously so), unavoidably emotional (however much we’d like to deny it) human interprets them. The following case from criminal justice illustrates what we’re talking about:

New York City’s “stop and frisk” policy empowers officers to stop individuals on reasonable suspicion of criminal activity. In 2003, officers confiscated 604 guns through 160,851 such stops, finding one gun for every 266 stops. In 2011, officers found 780 guns through 685,724 encounters, or one gun for every 879 stops.
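The ratios themselves are simple arithmetic. For readers who want to reproduce them, here is a minimal Python sketch; the figures are those cited above, while the variable names and structure are our own illustration:

```python
# Stops-per-gun ratio for NYC stop-and-frisk, using the figures cited above.
# (Variable names and layout are ours, for illustration only.)

stop_data = {
    2003: {"stops": 160_851, "guns": 604},
    2011: {"stops": 685_724, "guns": 780},
}

for year, d in stop_data.items():
    stops_per_gun = d["stops"] / d["guns"]
    hit_rate = d["guns"] / d["stops"] * 100
    print(f"{year}: one gun per {stops_per_gun:,.0f} stops "
          f"(hit rate {hit_rate:.2f}%)")

# Output:
# 2003: one gun per 266 stops (hit rate 0.38%)
# 2011: one gun per 879 stops (hit rate 0.11%)
```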

So what do these numbers mean? For stop-and-frisk opponents, the declining numbers were a sign that the department was stopping too many innocent people, evidence of racial profiling and violation of civil rights. The NYPD, by contrast, saw the same statistics as proof that stop-and-frisk has effectively deterred criminals from carrying guns, since potential violators know they might be stopped by the police.

Whose view is correct? It depends on whom you ask.

Which is exactly the point. If it were true that, as we often hear, numbers speak for themselves, their meaning wouldn’t be so ambiguous, so dependent on interpretation, or so predictable from the interpreter’s pre-measurement position.

Yet we often see the tendency at work in the “frisk” example in other fields that use metrics. Despite our best intentions, we cherry-pick, and we end up with an incomplete understanding of the full story, born of our unconscious confirmation biases. This trap is difficult to avoid without conscious effort.

Next page: Mis-categorizing what you think you are measuring