This article is sponsored by Booz Allen Hamilton.
It should surprise nobody that artificial intelligence (AI) and machine learning capabilities play a role in addressing the impacts of climate change. Scientists rely heavily on AI to analyze disparate data sets, build predictive models and estimate the relative impacts of various courses of action.
Energy companies use it to improve their grids and maximize yields from renewable energy sources. Vehicle fleet owners use AI to reduce fossil fuel emissions through predictive maintenance and more efficient vehicle scheduling. Furthermore, AI helps make everything from agriculture and food distribution to office buildings and industrial facilities more efficient and sustainable.
AI, after all, is an extraordinarily powerful tool. But we must continually remind ourselves that AI is simply that: a tool. And like any tool, it has its capabilities, limitations and potential pitfalls when misapplied. Our success in using AI to tackle climate-related challenges will depend on keeping that in mind as we map out our use cases and approaches for applying it.
AI will continue playing expanded roles across many climate-related use cases. Here are just a few examples:
Remote sensing. The European Union, U.S. and other countries are deploying a wide array of highly advanced satellites. These satellites provide unprecedented insight into the causes and impacts of climate change, enabling near real-time monitoring of the planet. AI capabilities will be increasingly critical in translating that data into a real-time understanding of the dynamics creating current and future climate conditions.
Regulatory enforcement. Many agencies tasked with enforcing climate-related regulations often find themselves overstretched and under-resourced. AI can be an important tool for them. For example, agencies can employ AI to help spot early-warning signs of potential environmental hazards caused by companies or utilities. Companies, as well, are increasingly turning to AI tools to drive their regulatory compliance.
Citizen science. More and more, we see scientific projects engaging volunteers to conduct research and monitoring activities. These climate-related projects range from tracking earlier spring bloom times for plants to monitoring the altered arrival times and locations of migratory birds and the shifting habitats of frogs and pollinators. In many cases, AI tools can drive collaboration and translate citizen-collected data into helpful insights.
When using data and AI, proceed with caution
But as AI takes on larger roles in our climate-related activities, we must also become smarter about its limitations and potential pitfalls so we can avoid undesirable outcomes, namely:
Unintended outcomes or consequences due to data gaps and privacy issues. We know that data can be skewed by explicit or implicit biases. It can contain personally identifiable information or be used to infer it. Alternatively, data may be of poor quality or simply irrelevant to the intended outcomes. An AI model trained on bad data learns those flaws, biasing its outputs. Take, for example, satellite imagery, which could create inadvertent privacy concerns for people who appear unwittingly in it. Problems that are not examined through multiple lenses can lead to outcomes that miss the mark. Teaming with experts from fields such as anthropology, law and sociology can ensure the right questions are asked to avoid data gaps and pre-empt privacy concerns.
When it comes to data, more is not always better. Many data conversations revolve around how big a data set is rather than what it contains and, just as importantly, what it does not. Building an effective AI model requires the right diversity of information, so AI developers must be intentional about the data they decide to use. Appropriate use of data, and a well-informed understanding of how and why it was gathered, can ensure it is appropriate for the use case.
Data and AI as a sole decision-maker can lead to undesirable outcomes. Data sets are rarely perfect. Even slight gaps or errors in the data, or undetected biases or blind spots in the algorithms, can lead to undesirable outcomes over time. We’ve seen this occur when AI is applied to bail hearings. Therefore, AI should support decision-making as part of a human-machine team, but decisions should never be left solely to AI. AI can learn patterns from what has happened previously, but human judgment is needed to validate that those patterns apply to current scenarios. Stay vigilant over what your AI is telling you over time, and cross-check its results with subject matter experts and your stakeholders.
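The human-machine teaming described above can be sketched as a simple confidence gate: the model auto-accepts only predictions it is sure about and routes everything else to a human reviewer. The threshold, field names and example predictions below are illustrative assumptions, not a specific production system.

```python
# Hypothetical sketch: routing low-confidence AI predictions to human review.
# Threshold, field names and sample data are illustrative assumptions.

def triage_predictions(predictions, confidence_threshold=0.9):
    """Split model outputs into an auto-accepted list and a human-review queue.

    `predictions` is a list of dicts like
    {"id": ..., "label": ..., "confidence": 0.0-1.0}.
    """
    auto_accepted, needs_review = [], []
    for p in predictions:
        if p["confidence"] >= confidence_threshold:
            auto_accepted.append(p)
        else:
            # Anything the model is unsure about goes to a human expert.
            needs_review.append(p)
    return auto_accepted, needs_review


preds = [
    {"id": 1, "label": "hazard", "confidence": 0.97},
    {"id": 2, "label": "no_hazard", "confidence": 0.62},
    {"id": 3, "label": "hazard", "confidence": 0.91},
]
accepted, review_queue = triage_predictions(preds)
```

In practice, the review queue is where the "stay vigilant" advice lands: human experts periodically audit both queues, not just the uncertain one, to catch confidently wrong outputs.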
Ensuring best outcomes
How can you avoid some of these pitfalls? Start by simply educating yourself and asking key questions along the way:
Learn more about the many common types of bias that can skew data-derived outcomes — selection bias, historical bias, confirmation bias — so you know what to look for and how to spot them.
Consider including sociologists, historians and community stakeholders on your project teams. Often, climate-focused AI projects consist of data scientists, environmental scientists, designers and technologists. This approach may produce a functional AI tool, but it may not be right for the job. This is especially critical when applying AI to climate, as many problems are linked to societal issues. Bringing in subject matter experts, such as sociologists, can help ensure the right questions are asked, the right data sets are included, the right problem statement is driving the effort, and, ultimately, that the AI tool delivers effective and equitable results.
Include qualitative research methods — not just quantitative data or models applied to text — in your application. AI applications rely on data that has already been collected, and as a result, AI-centric approaches can undervalue the data collection process itself, including qualitative research and human-centered design. That’s a mistake. Qualitative insights — gathered by asking questions of affected stakeholders or simply observing them, for example — can in many ways be even more critical. They can guide how the problem statement underlying an AI project is framed and how the tool that addresses it is built.
Research the data sets you plan to use to understand how they were created, where they may be vulnerable, and how they can either advance or undermine your intended outcomes.
Fuse multiple datasets to improve understanding. This is particularly important in the context of Earth observation datasets. Many geophysical processes are complex and require information from a wide array of sources to accurately capture the impacts of climate change. As an example, Booz Allen is developing analytic models and methods to improve groundwater characterization in regions that suffer from water scarcity. This requires data from several sources, including climate model forecasts, hydrologic observations and context about the geophysical factors in a particular region. Standardizing, correlating and fusing these disparate data sources is of paramount importance for all applied analytic methods.
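As a minimal sketch of that standardize-and-fuse step, the snippet below aligns hypothetical daily rainfall observations to a climate model's monthly resolution before joining the two sources into one record per month. The data values, variable names and monthly grid are all illustrative assumptions, not Booz Allen's actual models.

```python
# Illustrative sketch of fusing two climate-related data sources onto a common
# monthly grid. Values and variable names are hypothetical; a real pipeline
# would also handle units, data gaps and quality control.
from collections import defaultdict
from statistics import mean

# Daily hydrologic observations: (date_string, rainfall_mm)
daily_obs = [
    ("2023-01-02", 4.0), ("2023-01-15", 6.0),
    ("2023-02-01", 1.0), ("2023-02-20", 3.0),
]

# Monthly climate-model forecasts: "YYYY-MM" -> predicted rainfall_mm
model_forecast = {"2023-01": 5.5, "2023-02": 2.5}

# Step 1: standardize the daily data to the model's monthly resolution.
by_month = defaultdict(list)
for date, value in daily_obs:
    by_month[date[:7]].append(value)  # "YYYY-MM" key
monthly_obs = {month: mean(values) for month, values in by_month.items()}

# Step 2: fuse the two sources into one record per month.
fused = {
    month: {"observed": monthly_obs.get(month),
            "forecast": model_forecast.get(month)}
    for month in sorted(set(monthly_obs) | set(model_forecast))
}
```

The key design choice is picking one common grid (here, monthly) and resampling everything onto it before correlating sources; fusing first and resampling later tends to smear errors across datasets.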
Going forward, AI will certainly play a larger role in helping us confront climate change in the 21st century, but as we build those AI solutions, we must take the time and effort to ensure they are effective, fair and appropriate. If we don’t, we will struggle to solve the problems of today and lose valuable time trying to solve the problems of tomorrow. When it comes to the climate crisis, we have no time to waste.