
Toward a Real-Time Macroethical Assessment for Sustainability

Several years ago, Dan Sarewitz and Dave Guston wrote an influential article about "real time technology assessment," or RTTA. They understood technology as a complex phenomenon combining elements of engineering, economics, and social science, and therefore demanding an appropriately flexible approach to assessment.

It is increasingly apparent that a similar approach is needed if we are to work rationally, responsibly, and ethically with the integrated human/built/natural systems that increasingly characterize the anthropogenic world -- in short, that we also need a "real time macroethical assessment" capability, or RTMA.

While there are many subtleties to RTMA, its essential importance is evident. We have developed ethical systems appropriate for conditions where either a utilitarian approach -- "the greatest good for the greatest number" -- or a rules-based approach (think of the Ten Commandments) is adequate. Such ethical frameworks are routinely applied to the bounded systems with which we're most familiar, such as the acts of violence or white-collar crimes dealt with by our legal systems.

For environmental engineers or scientists, individual ethical systems are augmented by professional ethical systems, usually codified by the appropriate professional groups and implicit in the training that students receive as they advance to professional status. Traditional environmental regulations reflect such ethical frameworks, which work well given the simple systems at issue.

In the case of an end-of-pipe treatment facility, for example, the costs and benefits tend to be fairly well defined, and by and large we're familiar with them. Thus, releasing a carbonaceous material to surface waters creates biochemical oxygen demand, which reduces water quality; how much is released reflects cost, regulation, choice of treatment systems, prevailing national norms, political negotiation, and the like -- and it's a decision with a high ethical content.
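The "well defined" character of such problems can be made concrete with the standard first-order model of BOD exertion, BOD(t) = L0(1 - e^(-kt)), which engineers routinely use to estimate how much oxygen demand a discharge will impose over time. A minimal sketch (the numerical values below are illustrative, not drawn from any particular facility):

```python
import math

def bod_exerted(L0: float, k: float, t: float) -> float:
    """Oxygen demand exerted by time t (days), first-order decay model.

    L0 -- ultimate carbonaceous BOD of the discharge (mg/L)
    k  -- deoxygenation rate constant (1/day)
    """
    return L0 * (1.0 - math.exp(-k * t))

# Hypothetical discharge: ultimate BOD of 200 mg/L, k = 0.23/day.
# The familiar 5-day BOD test corresponds to t = 5.
bod5 = bod_exerted(200.0, 0.23, 5.0)
```

Because the model, its parameters, and the resulting water-quality impact are all reasonably well characterized, the ethical trade-off -- how much to spend on treatment versus how much demand to impose on the receiving water -- can be argued over explicitly.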

These are understood problems, even if various stakeholders might balance the costs and benefits differently. We can estimate outcomes, and apply rules, with a fairly good probability of achieving the system behavior -- ethical considerations included -- that we anticipate.

But with a complex system -- any suitably complex technology, regional resource regimes such as the Baltic Sea or the Everglades, or urban systems -- it is impossible ab initio to predict system behavior. What are the ethical implications of virtual realities? Of large-scale shifts to biofuels? Of integrating human and robotic systems via brain-computer interfaces? Of changing large drainage patterns in the Everglades? Of ambient-air carbon capture technologies? We simply don't know; moreover, we know of no way to know except as the system unfolds in real time.
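Why prediction fails ab initio can be illustrated with even a toy nonlinear system. The logistic map below is a stand-in, not a model of any system named above: two trajectories that start a ten-billionth apart end up wholly different, so no finite measurement of initial conditions pins down long-run behavior.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int, r: float = 4.0) -> list:
    """Iterate the map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two initial conditions differing by 1e-10 -- far below any
# realistic measurement precision for a social or natural system.
a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

For systems like this, the only way to learn what the system does is to watch it unfold -- which is precisely the predicament RTMA is meant to address for ethical assessment.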

Deontological and rule-based systems fail because they presuppose a particular stance in an environment where multiple mutually exclusive ontologies are implicated; utilitarian systems fail because the impossibility of knowing outcomes until they occur means that any attempt to weigh future utilities is at best partial and at worst duplicitous. Moreover, because of their social dimensions, such systems are reflexive: as information is developed, it feeds back into the system itself, changing it unpredictably.

Thus, while traditional rule-based ethical systems can be treated as fixed, macroethical frameworks applied to complex adaptive earth systems tend to evolve as the system evolves. In other words, in simple systems we treat ethics as external to the system; with complex systems ethics are recursive, internal and themselves part of the evolving system.

Hence the need to develop macroethical approaches that enable "real time macroethical assessment." RTMA is the development of assumptions, axioms, and processes that enable operational ethics at the level of emergent behavior of complex systems to be developed, tested, and applied in real time. Without this capacity, we end up applying ethical systems appropriate only in simple systems to complex systems where they are obviously inappropriate, even dysfunctional. Think of trying to apply Newtonian physics to quantum phenomena. It is not that Newtonian physics is wrong, only that, in our ignorance, we have failed to understand where it is valid and where, because of increased complexity, it fails.

Analogously, sustainability has to date often been oversimplified (generally for ideological reasons), but is increasingly understood as a complex system phenomenon. In addressing it, we thus increasingly recognize the need to rely on new social and physical science, and new computational and modeling tools.

What we have so far failed to recognize, however, is that we also need a correspondingly evolved way of conceptualizing ethics. In short, if we are serious about moving towards real sustainability, we must also become serious about developing a robust RTMA capability.
