When Uncertainty and Risk Meet Artificial Intelligence
How AI can support communicating risk and uncertainty during emergencies

Dear reader,
It’s midnight, and your phone buzzes with the kind of call no one wants to take. A chemical plant has had an accident. Your job? To explain the risks to the public, corral multiple agencies into a coordinated response, and manage a whirlwind of shifting facts without sowing panic or losing control.
Communicating in a crisis is never simple, but when uncertainty looms (How severe is the danger? Who's at risk? What happens next?), the challenge becomes even sharper.
In this week’s edition of the Wag The Dog newsletter, I look at where artificial intelligence is beginning to prove its worth, offering communicators new tools to clarify the unknown, navigate the chaos, and keep people informed and prepared.
Watch out, you might go down the rabbit hole: I am listing all the references from my “research notebook” on this topic in the article as well. 😅
Enjoy!
Introduction
As crisis communicators wrestle with the art of explaining risk and uncertainty, AI is emerging as a powerful ally. The technology can help turn abstract probabilities and opaque data into actionable, understandable insights for the people who need them most. But as with any technological revolution, the devil is in the details.
The challenge: making uncertainty understandable
Uncertainty is the shadow that hovers over every crisis. Whether it’s the path of a hurricane, the spread of an infectious disease, or the risks of a chemical leak, the core question is the same: how do you communicate something you don’t know exactly?
For years, crisis communicators have struggled with the delicate balancing act of being transparent without alarming the public, or worse, inadvertently downplaying the risk.
AI systems are now being developed to help master this balancing act. Recent research shows how these systems can not only process huge amounts of data but also translate that data into formats that are easier for humans to understand.
“AI systems can be extremely helpful when it comes to quantifying and visualising uncertainties,” according to Collins et al. (2023)1 , whose study shows applications ranging from catastrophe risk modelling to medical decision support.
But there’s a catch: AI can’t just spit out raw probabilities and call it a day. As Kirwin et al. (2023)2 point out, most people don’t think in percentages. Telling someone there is a “35% chance of severe consequences” means little without clear context and an intuitive explanation.
Instead, the best AI systems communicate uncertainty through means that appeal to human perception — such as natural frequencies (“1 in 10 people”), engaging imagery and simple, understandable language.
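To illustrate the natural-frequency idea, here is a minimal sketch of my own (not taken from any of the systems cited) that turns a raw probability into a “1 in N” phrasing, assuming we are happy rounding to the nearest whole denominator:

```python
def natural_frequency(probability: float) -> str:
    """Express a probability as an approximate '1 in N' statement.

    A '35% chance' becomes 'about 1 in 3', which most readers grasp
    faster than a bare percentage.
    """
    if not 0 < probability <= 1:
        raise ValueError("probability must be in (0, 1]")
    # Round the reciprocal to get the nearest '1 in N' denominator
    n = round(1 / probability)
    return f"about 1 in {n}"

print(natural_frequency(0.35))  # about 1 in 3
print(natural_frequency(0.10))  # about 1 in 10
```

The rounding is deliberately coarse: “about 1 in 3” trades precision for comprehension, which is exactly the point the research above makes.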
AI in action: turning data into decisions
There are already real-life examples in which AI has proven its value in highly sensitive scenarios.
During emergencies in the Ukrainian chemical industry, for example, AI-supported systems helped the authorities to quickly assess risks and communicate them effectively to the affected communities.
Gan et al. (2023)3 document how the technology transformed dense technical assessments into guidelines that the public could act on immediately.
Similarly, in recent health crises, AI tools were used to monitor social media, detect misinformation and recognise signs of public concern early. Kaufhold et al. (2018)4 found that these tools allowed communicators to intervene before misinformation got out of hand and to meet public uncertainty with credible, accurate information.
What makes these examples so successful is not just the sophistication of the AI, but also the way the technology has been integrated into human decision making. AI can process data and present it in a user-friendly way, but it takes skilled communicators to interpret that data, tailor it to the local context and convey it in a way that builds trust.
The fine print: why AI can’t do it alone
Despite its promising capabilities, AI is not a panacea for the problem of uncertainty. Without careful control, it can even make the situation worse. As Mittermaier et al. (2023)5 warn, biases in AI models — or even a lack of transparency around how these models work — can undermine public trust.
When people feel that decisions are being made by black-box algorithms, or when AI tools give advice that feels tone-deaf or unapproachable, they switch off.
This is why experts emphasise the importance of combining AI with human judgement. Machines are brilliant at analysing patterns and probabilities, but they can’t read the room. They don’t know the cultural nuances of a particular audience or how a poorly worded message can cause panic in one group while calming another.
For AI to work in crisis communications, it needs human communicators who understand the emotional, social and political contexts at stake.
Practical steps for communicators
So how can crisis communicators use AI without losing the human touch? The study by Savoia et al. (2023)6 offers some starting points:
Translate uncertainty into clarity: Use AI to process and visualise complex risk data, but let humans shape the final message to make it understandable and meaningful.
Monitor public sentiment: Use AI tools to track social media and identify emerging concerns so you can respond to fears before they spread.
Scenario planning: Use AI-powered decision support systems to model different communication strategies and predict how they might play out.
Be transparent: Inform the public about how AI is being used. Explain its possibilities, but also its limitations. Transparency creates trust, especially when uncertainty is high.
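To make the sentiment-monitoring step above concrete, here is a deliberately simple toy sketch of my own. A real system would use a trained classifier rather than a hand-picked keyword list; the lexicon, threshold and function name are all my assumptions for illustration:

```python
from collections import Counter

# Hypothetical concern lexicon; a production system would learn this.
CONCERN_TERMS = {"leak", "toxic", "evacuate", "fumes", "sick", "danger"}

def flag_emerging_concerns(posts, threshold=3):
    """Count concern-related terms across posts and flag the ones that spike.

    Returns terms mentioned at least `threshold` times, so a communicator
    can address them before rumours spread.
    """
    counts = Counter()
    for post in posts:
        for word in post.lower().split():
            word = word.strip(".,!?")  # strip trailing punctuation
            if word in CONCERN_TERMS:
                counts[word] += 1
    return {term: n for term, n in counts.items() if n >= threshold}

posts = [
    "Is the plant leak toxic?",
    "Smelling fumes near the river, anyone else?",
    "They say the leak is under control",
    "Should we evacuate? The leak looks bad",
]
print(flag_emerging_concerns(posts, threshold=2))  # {'leak': 3}
```

Even this crude version shows the workflow: the machine surfaces the spike (“leak”), and the human decides what the message about it should say.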
Ethical questions that cannot be ignored
The increasing use of AI in crisis communication also raises sensitive ethical questions. Who is responsible if an AI-generated recommendation goes wrong? How do we ensure that AI systems do not reinforce existing inequalities or prejudices? And how do we protect the privacy of the people whose data flows into these tools?
These are not just theoretical concerns. As Hunkenschroer and Luetge (2022)7 argue, they are fundamental to the responsible use of AI in situations where trust is everything. For most researchers and practitioners, the goal is not to replace human communicators with autonomous systems.
Rather, as Savoia et al. (2023) put it, AI should act as a powerful assistant: one that helps untangle complexity, but always under the guidance of experienced professionals who understand the human side of the equation.
A new chapter in crisis communication
The advancement of AI is opening up new ways of dealing with the uncertainty that characterises so many crises.
Tools are being developed that personalise risk messages for different audiences, translate technical data into clear guidance for the public and even predict future communication challenges through predictive analytics.
The Florida Division of Emergency Management, in collaboration with the University of Florida, has introduced the Broadcast Emergency Alerts and Communications Operations Network (BEACON). This AI-driven system delivers real-time emergency messages via AM radio, FM HD channels, and a mobile app, significantly reducing dissemination time from hours to minutes. Tested during Hurricanes Helene and Milton, BEACON successfully issued over 4,000 messages and aims to expand statewide by the 2025 hurricane season, offering alerts in multiple languages, including English and Spanish.
But for all its potential, the role of AI in crisis communication must remain just that: a role. The basic principles of communication in uncertain times have not changed. People still need clarity. They still need honesty. And they still need to feel that someone, a real person, understands their fears and is working to keep them safe.
When uncertainties arise, AI can help communicators overcome the challenge. But it’s the partnership between machines and humans that will truly define this new chapter in crisis communications.
And, as always, it will be the trust of the audience that will determine whether the message gets through.
References and further reading
1 Collins, K. M., Barker, M., Zarlenga, M. E., Raman, N., Bhatt, U., Jamnik, M., Sucholutsky, I., Weller, A., & Dvijotham, K. (2023). Human Uncertainty in Concept-Based AI Systems. ArXiv.org. https://arxiv.org/abs/2303.12872
2 Kirwin, E., McCabe, C., & Round, J. (2023). OP46 The Decision Uncertainty Toolkit: Risk Measures And Visual Outputs To Support Health Technology Decision-making During Public Health Crises. International Journal of Technology Assessment in Health Care, 39(S1), S12–S13. https://doi.org/10.1017/s0266462323000806
3 Gan, R. K., Delgado, R. C., Bruni, E., González, P. A., & Alsua, C. (2023). Chemical Industry Disaster Risk Assessment During Complex Emergencies in Ukraine. Prehospital and Disaster Medicine, 38(S1), s70–s70. https://doi.org/10.1017/s1049023x2300208x
4 Kaufhold, M., Gizikis, A., Reuter, C., Habdank, M., & Grinko, M. (2018). Avoiding chaotic use of social media before, during, and after emergencies: Design and evaluation of citizens’ guidelines. Journal of Contingencies and Crisis Management, 27(3), 198–213. https://doi.org/10.1111/1468-5973.12249
5 Mittermaier, M., Raza, M. M., & Kvedar, J. C. (2023). Bias in AI-based models for medical applications: challenges and mitigation strategies. Npj Digital Medicine, 6(1). https://doi.org/10.1038/s41746-023-00858-z
6 Savoia, E., Piltch-Loeb, R., Stanton, E. H., & Koh, H. K. (2023). Learning from COVID-19: government leaders’ perspectives to improve emergency risk communication. Globalization and Health, 19(1). https://doi.org/10.1186/s12992-023-00993-y
7 Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda. Journal of Business Ethics, 178(4), 977–1007. https://doi.org/10.1007/s10551-022-05049-6
Sponsor
Stay up-to-date with AI
The Rundown is the most trusted AI newsletter in the world, with 800,000+ readers and exclusive interviews with AI leaders like Mark Zuckerberg.
Their expert research team spends all day learning what’s new in AI and talking with industry experts, then distills the most important developments into one free email every morning.
Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.
What I am reading/testing/checking out:
Tool: Cerebras Coder, turn your ideas into fully functional apps in less than a second
Article: Key Guidelines for Writing Instructions for Custom GPTs
Tool: Rabbitholes.ai - Long conversations with A.I. without repeating yourself
Call for speakers: The Environment, Science & Risk Communication (ESR) Section of the International Association for Media and Communication Research (IAMCR) invites the submission of abstracts for IAMCR 2025
Let’s meet!
Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up for a coffee or a Negroni.
🇦🇪 AI for Crisis Communications Workshop, 29-30 January 2025, Dubai, United Arab Emirates
🇧🇪 AI in PR Boot Camp II, 20-21 February 2025, Brussels, Belgium
🇦🇪 New Horizons in Digital Content Creation and Data Analysis Conference, 23-24 April 2025, Abu Dhabi, United Arab Emirates
🇲🇽 Crisis Communications Boot Camp, 29-30 May 2025, Mexico City, Mexico
🇸🇦 Crisis Communications Boot Camp, 4-5 June 2025, Riyadh, Saudi Arabia
How satisfied were you with the content in this edition? 📚
PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!
Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI law for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.