AI Agents in Crisis Communication
A Practical Guide to What They Can and Can't Do

Dear reader,
In this week's edition of the Wag The Dog Newsletter, I am taking a clear-eyed look at AI agents - those increasingly hyped tools that promise to monitor, automate, and optimise your crisis and emergency communications.
While the tech world loves to talk about their potential to revolutionise everything (the trend for 2025), the reality is more practical and far less sci-fi. Understanding what these agents actually do - and where they fall short - can help you use them effectively when the pressure is on.
From rapid monitoring to routine automation, we’ll break down the different types of AI agents, their strengths and limitations, and why human judgement remains irreplaceable when the stakes couldn’t be higher.
Let me know if you find this helpful!
PS: this article was inspired by a blog post by Tobias Zwingmann entitled Understanding AI Agents (Without The Hype).
Introduction
Here's the truth: AI agents can help, and in some cases they'll even make a measurable difference. However, they're no substitute for human judgement, emotional intelligence, or the ability to adapt to the unexpected.
In the world of crisis and emergency communications, where seconds matter and trust can evaporate in an instant, it's important to know what these agents actually do, and what they don't.
Below are the basic types of AI agents, what they can do, and where humans are still in charge.
1. Simple Reflex Agents
These agents are the simplest. They react to immediate conditions with predefined "if-this-then-that" rules. Think of these agents like automatic light switches; they respond to a trigger but have no memory or anticipation.
Example in crisis communication: A social media bot automatically hides posts containing swear words or flagged keywords during a crisis to prevent the spread of misinformation or abuse.
Strength: Quick, predictable responses.
Weakness: No understanding of context. A keyword filter may block legitimate concerns or allow inappropriate content through because it doesn't match a rule.
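To make the "if-this-then-that" idea concrete, here is a toy sketch of such a filter in Python. The keyword list and posts are invented for illustration; a real moderation system would be far more elaborate.

```python
# Toy sketch of a simple reflex agent: a keyword filter for social posts.
# The rule set below is purely illustrative.
FLAGGED_KEYWORDS = {"scam", "hoax", "fake cure"}

def reflex_agent(post: str) -> str:
    """If-this-then-that: hide a post when it matches a rule, else allow it."""
    text = post.lower()
    if any(keyword in text for keyword in FLAGGED_KEYWORDS):
        return "hide"
    return "allow"

print(reflex_agent("This evacuation order is a hoax!"))  # -> hide
print(reflex_agent("Where is the nearest shelter?"))     # -> allow

# The weakness in action: a legitimate post *debunking* a rumour
# matches the same rule and gets hidden too.
print(reflex_agent("Officials confirm the 'hoax' rumour is false"))  # -> hide
```

The last line shows exactly the context problem described above: the rule fires on the word, not the meaning.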
2. Model-Based Reflex Agents
These agents are a little smarter. They create an internal "model" of the environment, i.e., they use past information to predict what might happen next.
Example in crisis communication: An AI agent analyses interaction data from a previous product recall to predict which channels (e.g., email, SMS, social media) are best for emergency messaging this time.
Strength: Better decision-making through historical context.
Weakness: Still working within rigid boundaries. If the situation suddenly changes - e.g., a channel goes offline - they can't adapt.
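A minimal sketch of the idea, assuming invented open-rate figures from a past recall as the agent's internal "model":

```python
# Toy sketch of a model-based reflex agent. It keeps a small internal
# "model" of the world - here, past open rates per channel from a
# previous recall - and uses it to pick a channel. Numbers are invented.
past_open_rates = {"email": 0.22, "sms": 0.61, "social": 0.35}

class ModelBasedAgent:
    def __init__(self, model: dict):
        self.model = dict(model)  # internal state built from past interactions

    def choose_channel(self) -> str:
        # Rigid rule: always recommend the historically best channel.
        return max(self.model, key=self.model.get)

agent = ModelBasedAgent(past_open_rates)
print(agent.choose_channel())  # -> sms

# Weakness: if the SMS network goes down mid-crisis, the model doesn't
# know - the agent keeps recommending it until a human updates the state.
```

The internal model is what separates this from a simple reflex agent, but note that it only looks backwards: it cannot notice that the world has changed.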
3. Goal-Based Agents
These agents work towards a specific goal. They evaluate possible actions and plan the steps needed to reach it, like a chess player working towards checkmate.
Example in crisis communication: An AI agent is tasked with increasing the reach of news during a disaster. It identifies key stakeholders, designs notifications, and plans distribution across different platforms to increase visibility.
Strength: Structured, targeted planning.
Weakness: If the "goal" isn't clearly defined or the wrong priorities are set, the agent will blindly aim for the wrong outcome. A human must define and monitor the goal.
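A toy sketch of goal-directed planning, using made-up platform reach figures. The agent greedily assembles a distribution plan until the reach target is met:

```python
# Toy sketch of a goal-based agent: given a reach target, it plans a
# sequence of actions (platforms to post on) until the goal is met.
# Platform names and reach figures are invented for illustration.
platform_reach = {"press release": 5000, "X/Twitter": 20000,
                  "Facebook": 15000, "local radio": 8000}

def plan_for_goal(target_reach: int) -> list:
    """Greedily pick platforms (largest reach first) until the goal is hit."""
    plan, total = [], 0
    for platform, reach in sorted(platform_reach.items(),
                                  key=lambda kv: kv[1], reverse=True):
        if total >= target_reach:
            break
        plan.append(platform)
        total += reach
    return plan

print(plan_for_goal(30000))  # -> ['X/Twitter', 'Facebook']

# Weakness: the agent pursues whatever goal it is handed. If "raw reach"
# is the wrong target (say, accuracy matters more), it optimises blindly.
```

The comment at the end is the key point: the planning is mechanical, so a human must define and monitor the goal.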
4. Utility-Based Agents
These agents don't just strive for a goal - they want to achieve it as well as possible. By making trade-offs and weighing factors such as efficiency, cost, or urgency, they choose the "best" action based on a utility function.
Example in crisis communication: An agent manages evacuation warnings during a flood. It determines the best communication method for each region (SMS, sirens, or radio) based on population density, network stability, and speed.
Strength: Makes optimised decisions when resources are limited.
Weakness: Its "utility" priorities require human input. For example, if costs are weighted too heavily, important messages may not reach certain groups.
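The trade-off idea can be sketched with a simple weighted utility function. All factor scores and weights below are invented; in practice, choosing them is exactly the human input the weakness above refers to.

```python
# Toy sketch of a utility-based agent for evacuation warnings.
# Per-channel factor scores (0-1, higher is better) - invented values.
channels = {
    "sms":   {"speed": 0.9, "coverage": 0.7, "cost": 0.8},
    "siren": {"speed": 1.0, "coverage": 0.5, "cost": 0.6},
    "radio": {"speed": 0.6, "coverage": 0.9, "cost": 0.9},
}

def utility(scores: dict, weights: dict) -> float:
    """Weighted sum of the channel's factor scores."""
    return sum(weights[f] * scores[f] for f in weights)

# Speed-first priorities: SMS wins.
weights = {"speed": 0.5, "coverage": 0.4, "cost": 0.1}
print(max(channels, key=lambda c: utility(channels[c], weights)))  # -> sms

# The weakness in action: weight cost too heavily and the ranking flips,
# even though nothing about the emergency itself has changed.
cost_heavy = {"speed": 0.1, "coverage": 0.2, "cost": 0.7}
print(max(channels, key=lambda c: utility(channels[c], cost_heavy)))  # -> radio
```

Same channels, same emergency, different weights, different answer: the "best" action is only as good as the priorities a human encodes.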
5. Learning Agents
Learning agents get smarter over time. They adapt based on feedback and improve their performance as they gather more data and experience.
Example in crisis communication: A media monitoring AI that tracks how well your statements are received. Over time, it learns to better recognise emerging issues, sentiment trends, and key opinion leaders to plan future crisis responses.
Strength: Continuous improvement makes them more valuable over time.
Weakness: Learning requires training and data, which takes time - something you may not have in the midst of a crisis.
6. Multi-Agent Systems (MAS)1
These are teams of agents that work together - either collaboratively or competitively - to solve problems in shared environments.
Example in crisis communication: Multiple AI agents coordinate to deal with a cyberattack. One agent monitors new threats, another mitigates the damage by redirecting traffic, and a third takes care of public updates.
Strength: MAS can efficiently divide and manage large, complex problems.
Weakness: Communication breakdowns between agents - or conflicting goals - can derail efforts without human oversight.
Advanced Architectural Patterns
In addition to these basic types, there are various ways of organising and structuring AI agents. A common pattern is the hierarchical approach, where agents are organised in layers, with higher-level agents coordinating lower-level agents. You can think of it like a management structure, only for AI.
In crisis communications, for example, a higher-level agent could oversee the overall strategy, while lower-level agents take on specific tasks such as writing messages or media monitoring. However, this is more about how we organise agents than about a specific type of agent itself.
Where AI agents shine in a crisis or an emergency

Fast monitoring and analysis: Agents can process massive amounts of data at lightning speed - news, social media, and stakeholder feedback - to identify emerging risks and summarise key trends.
Routine automation: Chatbots and automated systems can handle frequently asked questions and routine communications so people can focus on strategic decisions.
Optimised content delivery: Utility-based agents help you deliver the right message to the right people at the right time through the most effective channels.
Scenario planning: Learning agents simulate possible crisis outcomes so you can better plan your response.
Where Humans Still Have the Edge
Even as AI agents become faster and smarter, there are some things they simply can't replicate - at least, not yet.
Context and Cultural Nuance:
AI models are undeniably good at identifying patterns and approximating tone. They can distinguish reassurance from an ill-timed joke, but their understanding is based on statistical patterns, not lived experience. This leaves them prone to tone-deaf errors, particularly in culturally specific or unprecedented situations - mistakes a human communicator would instinctively avoid.
Ethical Judgement:
AI can be programmed to follow ethical frameworks, but let’s not pretend it understands morality. It can optimise for preset rules, sure, but when faced with complex ethical trade-offs or novel dilemmas, AI has no moral agency - it doesn’t “know” what’s right. It executes, but humans are still the ones who take responsibility.
Trust and Accountability:
Research shows AI-generated apologies can be effective in low-stakes settings, but in a serious crisis, the public expects accountability - and that means hearing from a human. AI might deliver the words, but it can’t convey the sincerity or authenticity needed to repair broken trust.
Adaptability to the Unexpected:
AI systems are improving when it comes to handling surprise scenarios within their training limits2, but they still falter when the situation is genuinely new. Humans excel at connecting dots across unrelated experiences, improvising on the fly, and applying creative judgement in ways machines simply cannot.
AI agents are powerful tools, but in high-stakes moments, when context, ethics, and trust matter most, human judgement is still essential.
The bottom line
AI agents are tools, not saviours. Use them for what they do best: automate tasks, analyse data and optimise decisions. But keep humans close by - because when it comes to trust, judgement and empathy, there is no substitute for the human touch.
In a crisis, AI agents can support the response, but they aren't the hero. That role is still yours.
References and further reading
1 Thai, T., Shen, M., Varshney, N., Gopalakrishnan, S., & Scheutz, M. (2022). An Architecture for Novelty Handling in a Multi-Agent Stochastic Environment: Case Study in Open-World Monopoly. https://www.semanticscholar.org/paper/An-Architecture-for-Novelty-Handling-in-a-Case-in-Thai-Shen/aca475e95cb1b90273285469e01229b05a095b34
2 Comes, T. (2024). AI for crisis decisions. Ethics and Information Technology, 26(1). https://doi.org/10.1007/s10676-024-09750-0
Sponsor
Start learning AI in 2025
Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.
It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.
Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.
What I am reading/testing/checking out
Together with Justin Snair I joined the LeaderReady podcast by Eric McNulty from the faculty of the National Preparedness Leadership Initiative (NPLI), a joint program of the Harvard T.H. Chan School of Public Health and the Center for Public Leadership at Harvard's John F. Kennedy School of Government.
Article: More Humanitarian Organizations Will Harness AI's Potential
Tool: the PESO model as a GPT to interact with and question
Article: Unpacking Catastrophe: The Deep Roots of Global Humanitarian Crises
Video tutorial: how to leverage o1, OpenAI's new series of reasoning models
Article: Costs Pile Up As Climate Change Adds $600 Billion In Insurance Losses
Let’s meet!
Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up for a coffee or a Negroni.
🇦🇪 AI for Crisis Communications Workshop, 29-30 January 2025, Dubai, United Arab Emirates
🇧🇪 AI in PR Boot Camp II, 20-21 February 2025, Brussels, Belgium
🇦🇪 New Horizons in Digital Content Creation and Data Analysis Conference, 23-24 April 2025, Abu Dhabi, United Arab Emirates
🇲🇽 Crisis Communications Boot Camp, 29-30 May 2025, Mexico City, Mexico
🇸🇦 Crisis Communications Boot Camp, 4-5 June 2025, Riyadh, Saudi Arabia
PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!
Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI law for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.