
The Complexities and Realities of Climate Modeling

All good behavioral and predictive models undermine themselves: climate models, by existing, change behavior and institutions in ways that invalidate their initial assumptions. This does not make them any less useful; in fact, that very reflexivity may help us address the climate crisis.

A recent book, "Useless Arithmetic: Why Environmental Scientists Can't Predict the Future," argues that quantitative modeling, the basis of much environmental science, cannot predict outcomes of complex natural processes. While this is somewhat contested terrain, it points to a profoundly challenging underlying issue.

Put simply, how do humans design and manage in a world where they increasingly dominate the dynamics of most major earth systems, when the complexity of those systems assures that any coherent framework with which they approach the issue will, by virtue of being coherent, necessarily be incomplete and inaccurate? Unpacking this dilemma may help us use models -- which, after all, are the only real tool we have to try to understand and address highly complex systems -- rationally and ethically.

Begin with a simple observation. The anthropogenic Earth is characterized by ever-changing networks of complex, integrated human/natural/built systems. These systems are too complex to perceive in their entirety. Rather, the perspective one takes toward the underlying complexity identifies the subset of information that is relevant. The structure that is perceived is not the system as a whole, nor some solipsistic invention of the observer, but rather arises as a function of the interaction between the two.

Consider an urban system, for example. If one seeks information regarding crime, punishment, and justice in Phoenix, Arizona, the query implicitly establishes the existing municipal political boundaries as appropriate. If one seeks information regarding Phoenix's long-term water supply, the implied boundaries involve at least seven states (because Phoenix relies in part on Colorado River water); atmospheric physics and chemistry; complex political and legal relations with a number of Native American Nations; and cultural systems (e.g., the "yuck" factor inhibiting recycling sewage water into drinking water). If one seeks information regarding financial flows, Phoenix is best understood as a relatively small node in a global network that exists primarily in cyberspace.

In each case, the relevant subsystem is defined by the purpose for which we are assessing it. But the underlying complex system itself cannot be captured by any single viewpoint. The anthropogenic Earth requires embracing mutually exclusive but equally valid ontologies - not out of any perverted sense of political correctness, but because only by doing so can one begin to perceive and understand the system in itself, rather than the subsystem called forth by each query.

The same is true with models. The essence of modeling is intelligent simplification of a more complex reality: capturing the information that is "important" and "relevant," and ignoring the rest. This is a perfectly legitimate way to do science: Newton, after all, developed his laws of motion by ignoring minor forces (such as friction) which would have made his simple, and therefore powerful, conclusions far more complicated and confusing.
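To make the point concrete, here is a minimal sketch (not from the original argument) in Python: a projectile's range computed twice, once with Newton's frictionless simplification and once with a small, assumed linear drag term. The drag coefficient and launch parameters are illustrative assumptions.

```python
import math

def range_no_drag(v0, angle_deg, g=9.81):
    """Closed-form range when friction is ignored -- Newton's simplification."""
    angle = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * angle) / g

def range_with_drag(v0, angle_deg, k=0.05, g=9.81, dt=0.001):
    """Numerical range with an assumed linear drag term (deceleration = k * velocity)."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        vx -= k * vx * dt          # drag slows horizontal motion
        vy -= (g + k * vy) * dt    # gravity plus drag on vertical motion
        x += vx * dt
        y += vy * dt
    return x

print(f"Range ignoring drag:  {range_no_drag(30, 45):.1f} m")
print(f"Range with drag term: {range_with_drag(30, 45):.1f} m")
```

The simpler model is slightly wrong but far easier to reason with; whether that error matters depends entirely on the question being asked.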

How does one know what information to drop when constructing a model? As with any observation of a complex system, that is a function of the reason for which the model is being created, and the structure of the underlying system. It is neither pure creation of the observer, nor is it a product of the underlying system; it partakes of both.

A properly created model is therefore one that calls forth the information necessary to address the query posed by the modeler. Each model will have boundaries within which it is useful - that is, within which the information and model structure are appropriate to the queries addressed to it. But the boundaries are frequently not obvious, and especially in highly politicized debates there will be powerful tendencies to extend the model beyond them. Thus, for example, models of global climate change generate probabilistic future scenarios that can inform public debate - but to present those results as inevitable, or as mandating particular social responses, is to push the models beyond their boundaries.
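A toy illustration, in the same spirit, of what "probabilistic future scenarios" means: sample uncertain inputs many times and report a spread of outcomes rather than a single prediction. This is not a climate model; the parameter ranges and the simple multiplicative relation are assumptions made up for the sketch.

```python
import random

random.seed(42)

def toy_scenario():
    """One scenario: a made-up 'warming index' = sensitivity * emissions growth."""
    sensitivity = random.uniform(1.5, 4.5)       # assumed range of climate sensitivity
    emissions_growth = random.uniform(0.5, 1.5)  # assumed relative emissions growth
    return sensitivity * emissions_growth

runs = sorted(toy_scenario() for _ in range(10_000))
print(f"5th percentile:  {runs[500]:.1f}")
print(f"Median scenario: {runs[5_000]:.1f}")
print(f"95th percentile: {runs[9_499]:.1f}")
```

The output is a range of plausible futures conditioned on the stated assumptions - the kind of result that can inform debate without dictating a single response.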

Among other things, any model projected into the future makes assumptions about technology states that are almost inevitably wrong: we cannot know what technologies may exist 50 years from now, but we should at least be honest enough to admit that, and treat models that rest on such assumptions as useful thought experiments, but no more.
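As a sketch of how sensitive such projections are, the toy calculation below runs the same 50-year emissions projection under different assumed rates of technological improvement. The starting value, the 2% demand growth, and the improvement rates are all illustrative assumptions, not data.

```python
def project_emissions(start=40.0, years=50, demand_growth=0.02, tech_improvement=0.01):
    """Compound an assumed demand growth rate against an assumed yearly
    efficiency gain from technology, over the projection horizon."""
    emissions = start
    for _ in range(years):
        emissions *= (1.0 + demand_growth) * (1.0 - tech_improvement)
    return emissions

for rate in (0.00, 0.01, 0.02, 0.04):
    print(f"Assumed tech improvement {rate:.0%}/yr -> "
          f"projected emissions after 50 years: {project_emissions(tech_improvement=rate):.1f}")
```

A few percentage points of assumed improvement, compounded over 50 years, swings the projection from steep growth to steep decline; the assumption does most of the work.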

More subtly, models are inevitably part of highly reflexive systems. Climate models, by existing, change behavior and institutions in ways that invalidate their initial assumptions. All good models undermine themselves. Yet it is clear that we need models to explore the complexity of the world we are creating. What is to be done? The next column will make some suggestions.
