Arbitrary Realities: Models and Complexity

The complexity of the anthropogenic world is particularly challenging for environmental science, because much of the behavior of the integrated human/natural/built systems that characterize this era emerges at systems levels that are not approachable with the traditional tools of reductionist science -- direct observation or replicable experiments. Understanding such emergent characteristics of complex systems requires increasingly complicated models which, if they are to be useful, must simplify the underlying complexity. They do so by adopting worldviews and assumptions by which relevant information is identified and irrelevant information discarded.

However, this creates two further dilemmas. First, any model, by virtue of being a reasonable expression of a particular worldview, is necessarily incomplete and contingent. Second, because models immediately become part of the dialog from which they are drawn, they reflexively affect that dialog in ways that are essentially impossible to predict, and that invalidate the assumptions underlying the model. Thus, modern science requires models, but all models are necessarily normative, partial, and contingent, and most models reflexively falsify their assumptions, and thus invalidate their predictions. Needless to say, this conundrum poses significant problems for the scientific discourse going forward.

Note, however, that the problem of models arises primarily from their misuse in discourse, rather than from the models themselves. That models are contingent and normative does not mean they are wrong in themselves. But the ease with which quantitative model results can be mischaracterized, and the strong incentives for those involved in contentious discourses to do so, mean that models in our society are highly susceptible to abuse. Moreover, the tendency of both scientists and activists to overstate and misuse the predictive and analytical capabilities of models has serious implications, for it undermines not just the particular models involved, but the patina of objectivity that is itself the strongest validation of the scientific and technological discourses.

Consider, for example, climate change modeling. No one has ever observed climate change directly -- indeed, no one ever could. Rather, the existence of the phenomenon is derived from first principles (physics and atmospheric chemistry) and from a series of observations that, taken together and integrated in models and discourse, lead to the assertion that anthropogenic global climate change exists and will have certain effects.

The constellation of data, models, and projections constituting the global climate change discourse is instantiated in public dialog, to the point where some governments, such as the U.K., are considering issuing each citizen individual emissions permits, and many scientists and activists are demanding immediate cuts in consumption and fossil fuel use in developed and developing countries alike. The sense of reality, indeed of catastrophe, derived from model predictions is powerful. As part of this exercise, projections into the far future (100 to 200 years), assuming incremental technological evolution, are common. We know that assumption is wrong, for the dynamics of future technologies are virtually impossible to predict -- but we have to assume something.
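To see how heavily such century-scale projections lean on a single assumption, a toy compounding calculation is enough. The sketch below (in Python, with made-up numbers; it is not drawn from any actual climate model) simply compounds one assumed annual growth rate over 100 years:

    # Toy sketch: a century-scale projection driven by one assumed parameter.
    # All numbers are hypothetical illustrations, not real climate data.

    def project(current: float = 40.0, annual_growth: float = 0.01, years: int = 100) -> float:
        """Compound a single assumed annual growth rate over `years` years."""
        return current * (1.0 + annual_growth) ** years

    # Vary only the assumed rate of "incremental technological evolution".
    for rate in (-0.02, 0.00, 0.01, 0.02):
        print(f"assumed growth {rate:+.0%}/yr -> "
              f"{project(annual_growth=rate):7.1f} units after 100 years")

    # The projected value ranges from roughly 5 to roughly 290 units -- a spread
    # of more than a factor of fifty, produced entirely by the choice of one
    # assumption that, by the author's own argument, we cannot actually predict.

The point is not the numbers themselves, but that at these time scales the model output is largely a restatement of its assumptions.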

There is at this point little question that poorly understood anthropogenic changes in many natural systems, including climate, are occurring. Nor is there any question of the need for models as tools to explore potential systems behaviors, especially undesirable ones. The problems arise because many in the community, either inadvertently or deliberately, speak of model projections and scenarios in language that reifies them: predicted patterns "will" happen, or certain phenomena "are the result of climate change."

This is partly because mere scenarios lack the authority to drive the social engineering that is perceived as necessary and desirable; both to be heard and to force people to change require a certainty that models cannot provide. The risk, of course, is that at some point the public will become aware that they have been stampeded by deliberate manipulation of information, and will turn against both the environmental community and the scientific discourse more generally, thus undercutting efforts to address real and serious problems.

The undeniable value of models, combined with their demonstrated ease of misuse, creates a significant duty for the scientific and technological community. Institutionally -- in funding programs, for example -- we must get much better at supporting divergent research programs representing different viewpoints, particularly in highly complex and contentious areas (not by funding bad science, but by recognizing that complex realities cannot be understood from a single viewpoint).

We must also, as practitioners, become much more careful in how we think of, and present, model results. Otherwise we abuse reality, and because the scientific and technological discourses rest on a privileged posture in investigating reality, we undermine them as well.