Key Takeaways

  • AI handles text-heavy, pattern-based crisis prep tasks well — drafting, scenario-building, micro-learning exercises.

  • Crisis communication is adversarial. Current AI models behave like fluent opponents, not adaptive ones.

  • Stakeholders in real crises probe for weaknesses, model your responses, and change tactics. AI simulations rarely do this.

  • Hidden constraints — legal limits, political sensitivities, internal power dynamics — drive real decisions. Most AI simulations ignore them.

  • A convincing but unrealistic simulation is worse than a poor one. It creates false confidence.

  • Better AI crisis simulation requires multi-agent design, adaptive stakeholder behaviour, and consequence-based evaluation.

What Does AI Actually Do Well in Crisis Communication Today?

AI delivers genuine operational value in the early, text-heavy stages of crisis preparation — but its usefulness has a ceiling most practitioners haven't yet identified.

Large language models work best when the task is pattern recognition, content generation, or structured scenario building. In crisis communication, this translates into three practical areas.

First, crisis anticipation. A model configured with game theory frameworks can surface tensions where stakeholder goals clash — places where a decision creates unexpected vulnerability. It doesn't predict the future. But it forces better questions before response planning begins. A minimal sketch of this kind of configuration appears after the third area below.

Second, micro-learning. Short, 20-minute role-play exercises facilitated by AI make decision-making practice more frequent and more accessible. Traditional simulations are expensive. AI alternatives lower the barrier significantly. Practitioners can deploy these through existing tools without specialist setup.

Third, scenario drafting. AI cuts the preparation time for complex simulation timelines, stakeholder role descriptions, and injects — allowing trainers to focus on calibrating pressure rather than producing logistics.
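
As an illustration of the first area, here is a minimal sketch of a stakeholder-tension prompt. It assumes the OpenAI Python SDK purely for convenience; the model name, stakeholder list, decision text, and prompt wording are placeholder assumptions, not the configured systems Borremans describes. Any general-purpose LLM API would serve the same role.

```python
# Illustrative sketch only: surfacing stakeholder tensions with a general-purpose LLM.
# Model name, stakeholders, decision, and prompt wording are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

stakeholders = {
    "regulator": "wants demonstrable compliance and a documented timeline",
    "journalist": "wants a verifiable angle and named accountability",
    "employees": "want job security and early, honest internal communication",
}

decision = "Delay public disclosure of a data incident by 48 hours pending legal review."

prompt = (
    "You are a crisis-communication planning assistant.\n"
    f"Proposed decision: {decision}\n"
    "Stakeholder goals:\n"
    + "\n".join(f"- {name}: {goal}" for name, goal in stakeholders.items())
    + "\nList where these goals clash with the decision or with each other, "
    "and the vulnerability each clash creates."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The value is not the output itself but the forced enumeration of clashes before response planning begins, which is the point made above.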

"I use AI daily in my practice," says Philippe Borremans, founder of RiskComms and a crisis communication specialist with 25 years of field experience. "But I don't use it as a generic assistant. I use systems configured with my own professional knowledge and methodologies. The difference matters enormously."

The pattern is consistent: AI accelerates prep and lowers the cost of practice. The problem starts when organisations mistake prep-phase utility for simulation realism.

Why Can't AI Replicate the Pressure of a Real Crisis?

Real crises are adversarial, multi-actor environments defined by incomplete information — and current AI models are not trained for that kind of competition.

Ankit Maloo's 2025 analysis in Latent Space draws a useful distinction: systems trained on text behave differently from humans trained in high-pressure, competitive environments.

Crisis communication falls firmly in the second category. Journalists probe for narrative gaps. Regulators ask "clarifying" questions that are actually traps. Activists look for moral leverage. These aren't just communication challenges — they're strategic contests.

Current language models are trained on the outputs of decisions, not the messy trade-offs that produced them. They generate plausible, well-structured responses. What they don't do is adapt to you.

What Is the Adversarial Gap in Crisis Simulations?

The adversarial gap describes the difference between an AI that produces realistic-sounding pressure and one that actually applies it — and most current simulations sit firmly in the first category.

In a real crisis, your opponents are watching how you respond and adjusting. A journalist who notices you're avoiding a particular phrase will return to it. An activist group that spots inconsistency between your public statement and a leaked internal document will amplify it. The interaction is dynamic. Each move changes the next one.

AI simulations stay in character, but the character never adapts. Once you identify the pattern, it rarely shifts. Participants can learn to manage the simulation without learning to manage the crisis.

Why Does Politeness Hide Strategy in Crisis Interactions?

Many of the most dangerous crisis interactions look administrative on the surface — and experienced communicators know how to read what's actually happening beneath the language.

A regulator who asks for "clarification" may be building a paper trail. A journalist requesting information "for balance" may have already written the lead. The question isn't what they're asking. It's what they're doing. Experienced practitioners read the timing, the channel, the sequence of contact.

Large language models take language at face value. They answer the question asked. They miss the fact that the question is a probe.

"In simulations, this leads to stakeholder roles that ask plausible questions but fail to apply authentic pressure," Borremans notes. "Participants feel they're being challenged. They're not."

How Does AI Fail When Opponents Start Adapting?

Real crises involve mutual modelling — each party watching the other and updating their strategy — and AI simulations currently cannot replicate this dynamic.

When you spot a pattern in how an AI plays a journalist or an activist, the pattern stays fixed. In reality, the moment you become predictable, your opponent changes. This is what makes senior leaders describe simulation exercises as "thin." They've seen the script before. The AI hasn't learned to improvise.

What Hidden Constraints Do AI Crisis Simulations Miss?

The factors that actually determine crisis outcomes often cannot be discussed openly in a simulation — and AI training data captures none of them.

Legal limits define what can be said and when. Political sensitivities inside an organisation determine who makes the call. Internal power struggles shape whether a response is fast or slow, bold or defensive. These constraints don't appear in press releases or post-crisis reports. They appear in the room where the decision gets made.

AI training data contains the results of those decisions. Not the trade-offs. Not the constraints. This is why simulations that feel realistic on messaging terms can still feel shallow to anyone who has managed a real crisis at a senior level. The hard parts — the things you couldn't say, the choices you couldn't make — simply don't show up.

Is There a Difference Between Writing Quality and Outcome Quality in Crisis Communication?

Absolutely — and confusing the two is one of the most significant risks in AI-assisted crisis training.

A message can be well-constructed, appropriately toned, and legally sound — and still trigger a disaster. Did the situation stabilise? Did trust erode further? Did a secondary audience interpret the message differently than intended? These are the questions that matter. They're questions about consequences, not about craft.

Large language models don't learn from what happens after a message goes out. They generate text that would be plausible in the training data. They have no way to know whether the response stabilised the situation or accelerated it.

This creates a specific and serious risk: a simulation that flatters participants. Messages sound professional. Stakeholders respond reasonably. The exercise concludes without exposing anyone's real weaknesses. Participants leave feeling more confident than they should. That is not training. It's reassurance.

"A simulation that doesn't expose your weaknesses fails," Borremans says. "The goal isn't to practise performing well. It's to practise performing under conditions that are trying to break you."

How Do You Build AI Crisis Simulations That Actually Work?

Closing the adversarial gap requires a deliberate change in simulation design, moving from single-agent, text-quality assessment toward multi-agent, consequence-based environments.

The key design shifts are straightforward, even if the implementation is not:

  • Multi-agent systems with conflicting goals. Stakeholders should not be playing the same crisis from the same angle. A regulator's goal and a journalist's goal and an activist group's goal should sometimes align — and sometimes pull in opposite directions simultaneously.

  • Adaptive stakeholder behaviour. Roles should change based on what participants do. A stakeholder who gets a credible answer early should behave differently than one who spots evasion.

  • Consequence-based evaluation. The measure of success should not be how clearly a statement is written. It should be whether the situation stabilised, whether secondary audiences responded as intended, whether cascading effects were anticipated.

  • Environments where choices persist. Decisions made in the first hour should shape the options available in the third. A commitment made publicly should constrain what can be said privately. The exercise should have memory.

None of this is impossible. Multi-agent AI frameworks exist. The gap isn't technical — it's in how crisis communication professionals are currently specifying what they need from the tools.
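
To make those design shifts concrete, the sketch below uses plain Python with no AI backend. It shows adaptive stakeholder state, persistent public commitments, and a consequence-oriented stability score in place of a text-quality score. The class names, thresholds, and scoring rule are illustrative assumptions, not a prescribed implementation; in a real system each stakeholder agent would be driven by a language model pursuing its own conflicting goal.

```python
# Minimal sketch of the design shifts described above. All rules and weights are
# illustrative assumptions; a production system would drive each agent with an LLM.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    name: str
    goal: str
    suspicion: float = 0.2                      # adaptive state: rises when evasion is spotted
    noted_evasions: list[str] = field(default_factory=list)

    def react(self, statement: str, public_commitments: list[str]) -> str:
        # Adaptive behaviour: an evasive answer, or one that drops a prior public
        # commitment, hardens this stakeholder's stance on later turns.
        evasive = len(statement.split()) < 8 or "no comment" in statement.lower()
        dropped = any(c.lower() not in statement.lower() for c in public_commitments)
        if evasive or dropped:
            self.suspicion = min(1.0, self.suspicion + 0.3)
            self.noted_evasions.append(statement)
            return f"{self.name} escalates and returns to the avoided point: {self.goal}."
        self.suspicion = max(0.0, self.suspicion - 0.1)
        return f"{self.name} accepts the answer for now, still pursuing: {self.goal}."


@dataclass
class Exercise:
    stakeholders: list[Stakeholder]
    public_commitments: list[str] = field(default_factory=list)  # persistent memory

    def turn(self, statement: str, commits: list[str] | None = None) -> dict:
        # Choices persist: anything committed publicly constrains every later turn.
        self.public_commitments.extend(commits or [])
        reactions = [s.react(statement, self.public_commitments) for s in self.stakeholders]
        # Consequence-based evaluation: score stabilisation, not prose quality.
        stability = 1.0 - sum(s.suspicion for s in self.stakeholders) / len(self.stakeholders)
        return {"reactions": reactions, "stability": round(stability, 2)}


if __name__ == "__main__":
    ex = Exercise([
        Stakeholder("Regulator", "a documented compliance timeline"),
        Stakeholder("Journalist", "named accountability for the failure"),
        Stakeholder("Activist group", "an admission of harm to affected users"),
    ])
    print(ex.turn("We are reviewing the matter."))
    print(ex.turn("We will publish a full timeline by Friday and name the accountable team.",
                  commits=["publish a full timeline by Friday"]))
```

The crude evasion check stands in for what an LLM-driven agent would do with far more nuance: notice what was avoided, remember it, and return to it with greater pressure on the next turn.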

Frequently Asked Questions

Is AI useful for crisis communication at all? Yes — significantly so, in the preparation and planning stages. AI is particularly effective for drafting holding statements, building Q&A grids, creating simulation scenarios, and enabling frequent, low-cost micro-learning exercises. The limitation begins when AI is used to simulate the adversarial dynamics of an actual crisis.

What is the adversarial gap in crisis simulation? The adversarial gap is the difference between an AI that produces realistic-sounding crisis pressure and one that actually applies adaptive, strategic pressure. In real crises, opponents — journalists, regulators, activists — observe how you respond and adjust their tactics. Current AI simulations maintain fixed behaviour once you identify their patterns.

Why do AI crisis simulations feel realistic but teach the wrong lessons? Because they evaluate quality by the standard of the text produced, not the consequences that follow. A well-written response that sounds professional can still fail in a real crisis. Simulations that praise your outputs without testing your decisions create false confidence.

What does better AI crisis simulation design look like? It requires multi-agent systems with conflicting stakeholder goals working simultaneously, adaptive behaviour that changes based on participant responses, consequence-based evaluation rather than message-quality scoring, and persistent decision environments where early choices shape later options.

Can AI replicate hidden constraints like legal limits or internal political pressures? Not currently. AI training data captures the outputs of decisions, not the trade-offs behind them. Legal limits, political sensitivities, and internal power struggles shape real crisis decisions in ways that rarely appear in public-facing documentation — and therefore don't appear in AI training data.

How should crisis communication professionals use AI right now? As a preparation accelerator, not a simulation substitute. Use AI for scenario drafting, stakeholder mapping, holding statement development, and micro-learning exercises. Use human-facilitated simulations — with AI support — for testing decision-making under authentic adversarial pressure.

What is the risk of over-relying on AI crisis simulations? The danger is not that AI produces bad simulations. It's that it produces convincing ones. Participants may leave exercises feeling prepared for challenges they haven't actually faced. That gap between perceived and actual readiness is where real crises become catastrophic.

Why do senior leaders describe crisis simulations as "thin"? Because most simulations focus on visible messaging and miss the hidden constraints that make real decisions hard. Senior leaders who have managed actual crises know what it feels like when the communications strategy is blocked by legal risk or internal disagreement — and they don't find that pressure replicated in most exercises.

Is this problem specific to AI or a broader issue with crisis simulation design? Both. Traditional simulations have always struggled to replicate adversarial pressure authentically. AI has made it easier to produce simulations quickly — which risks scaling the same structural weakness faster, at lower cost, and with a veneer of technical sophistication.

Where is AI crisis simulation headed? Multi-agent frameworks, adaptive role behaviour, and consequence-modelling are all technically feasible now. The next development phase is less about building the technology and more about crisis communication professionals specifying clearly what realism actually requires — and resisting the temptation to accept fluency as a substitute for it.

References

  1. Maloo, A. (2025, February 7). Experts Have World Models. LLMs Have Word Models. Latent Space. https://www.latent.space/p/adversarial-reasoning

Philippe Borremans is the founder of RiskComms and a crisis, risk, and emergency communication specialist with 25 years of international experience. He works with organisations navigating complex crisis scenarios and develops frameworks including the Universal Adaptive Crisis Communication (UACC) Framework and the AI-Augmented Crisis Decision Matrix (ACDM).
