
Dear {{ first_name | reader }},

There was no warning. No slow build-up. One moment, everything was fine; the next, a major global tech company was in full-blown crisis mode.

An AI-generated deepfake video of the company’s CEO hit the internet, showing them making inflammatory remarks about data privacy and user manipulation. It spread like wildfire.

Social media erupted. News outlets ran the clip on repeat. Some experts questioned its authenticity, but for the vast majority, the damage was already done. Regulators quickly took notice, floating the idea of formal investigations into the company’s data practices.

The stock price wobbled as investors worried about the financial fallout. Competitors stayed quiet—publicly, at least—but behind the scenes, they were watching closely. This was either a disaster that would haunt the company for years or an opportunity for them to take a leadership position in AI misinformation regulation.

It was, in short, the kind of nightmare scenario that keeps crisis communication professionals awake at night.

Except, in this case, it wasn’t real 😅

I created this fictional but highly realistic scenario (just ask $RACE or $KNBE) to test my self-developed Game Theory1 for Crisis Communication AI sidekick.

The goal? To map out the potential responses the company could take, predict how key stakeholders would react, and determine which crisis strategy would lead to the best possible outcome.

And here’s what the analysis revealed…


The Crisis Playbook: What The Company Could Do, And What It Should Do

When a company faces a crisis of this scale, it typically has three broad strategic choices:

  • Proactive: Respond immediately, deny the deepfake, provide AI forensic proof, demand social media platforms take it down, and advocate for tighter regulation of AI-generated misinformation2.

  • Defensive: Delay the response, launch an internal investigation before making a statement, and cautiously question the authenticity of the video.

  • Avoidance: Say nothing, hope the controversy fades on its own, and avoid giving it more oxygen.

Each approach comes with consequences.

If the company responded proactively, it could contain the damage quickly. The faster it provided indisputable proof that the video was fake, the better chance it had of stabilising public perception and keeping regulators at bay.

However, acting too quickly—without tight forensic analysis—could lead to even more credibility issues if the evidence wasn’t compelling enough.

A defensive response would buy time to confirm the facts, but at a steep cost. In the absence of a strong rebuttal, the media and public would fill the silence with speculation.

In a world where social media outrage moves faster than the truth, this option risked letting the narrative spiral out of control before the company could regain it.

The avoidance strategy, which some companies might be tempted to try, was the riskiest of all. In some cases, letting a controversy die on its own can work—but not when government agencies, the media, and a sceptical public are actively demanding answers.

The AI model predicted that doing nothing would almost certainly lead to regulatory investigations, a prolonged media firestorm, and significant financial damage.
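For readers who like to see the mechanics, the comparison of the three strategies can be sketched as a simple expected-payoff calculation, the building block of this kind of game theory model. Every number below is an illustrative assumption of mine, not an output of the actual AI analysis:

```python
# Toy sketch: the crisis as a one-shot decision under uncertainty.
# Payoffs and probabilities are illustrative assumptions, not real model outputs.

# Reputation payoff for each strategy under three possible stakeholder reactions.
payoffs = {
    "proactive": {"believes_rebuttal": 8, "stays_sceptical": 2, "escalates": -1},
    "defensive": {"believes_rebuttal": 4, "stays_sceptical": -2, "escalates": -6},
    "avoidance": {"believes_rebuttal": 1, "stays_sceptical": -5, "escalates": -9},
}

# Assumed likelihood of each public/regulator reaction.
reaction_probs = {"believes_rebuttal": 0.5, "stays_sceptical": 0.3, "escalates": 0.2}

def expected_payoff(strategy: str) -> float:
    """Probability-weighted payoff of a strategy across all reactions."""
    return sum(p * payoffs[strategy][r] for r, p in reaction_probs.items())

best = max(payoffs, key=expected_payoff)
print(best)  # with these assumed numbers, "proactive" comes out on top
```

The real analysis is richer (multiple rounds, reacting stakeholders), but the intuition is the same: avoidance only pays off if the controversy fades, and the assumed probabilities say it won't.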

The Game Theory Outcome: The Best Response

The AI-powered game theory analysis confirmed what experience already suggested: the best approach was a proactive response.

By immediately denying the deepfake and backing up the claim with forensic AI analysis, the company could limit the spread of misinformation before it became entrenched in public belief. However, the AI model also suggested that just proving the video was fake wasn’t enough.

The company needed to do more than play defence. It needed to reframe the crisis as an opportunity to lead on AI misinformation policy.

This meant shifting the conversation from “Is this video real?” to “How dangerous are AI deepfakes, and what should companies do about them?”

By bringing in independent cybersecurity experts, collaborating with policymakers, and pushing for better AI-generated content detection tools, the company could position itself as part of the solution rather than the problem.

The media, facing a choice between sensationalising the scandal or covering the broader issue of AI misinformation, would likely begin shifting their focus to the risks of deepfake technology itself.

The game theory analysis also predicted that competitors would hesitate to exploit the crisis outright. Many would fear that a deepfake scandal could just as easily happen to them next.

If the company played its cards right, it could even build industry-wide support for stronger AI content regulations, ensuring that no competitor could use the crisis against them without risking their own credibility.

What Happens Next? The Game Theory Twist That Changes Everything

So far, this is the expected crisis communication management playbook: respond fast, prove the video is fake, shift the conversation, and neutralise the damage.

But that’s not where the game theory analysis stopped.

The AI also revealed that there is a path where the company doesn’t just survive the crisis—it uses it to its advantage.

  • What if, instead of merely containing the damage, the company leveraged the controversy to become a global leader in AI misinformation prevention?

  • What if it developed an AI deepfake detection tool and partnered with governments and news organisations to set the standard for AI-generated content verification?

  • What if, by pushing for new industry regulations, the company locked competitors into playing by rules it helped define?

This alternative outcome is where the real power of game theory shines: not just predicting how to handle a crisis, but showing how to turn a crisis into a strategic opportunity.

Want to explore this analysis further?

For those interested in a deeper understanding of this crisis simulation, I've prepared an extended report that includes the game theory modelling, stakeholder response analysis, and alternative scenarios that could reshape how organisations approach AI misinformation challenges.

This extended analysis is available to supporting subscribers in your inbox. Not yet a supporting member? Join the conversation here or log in and upgrade.

References and further reading

1 Game Theory Applications in Crisis Management | Restackio. (2025). Restack.io. https://www.restack.io/p/ai-for-crisis-management-answer-game-theory-cat-ai

2 Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408

Stay up-to-date with AI

The Rundown is the most trusted AI newsletter in the world, with 1,000,000+ readers and exclusive interviews with AI leaders like Mark Zuckerberg, Demis Hassabis, Mustafa Suleyman, and more.

Their expert research team spends all day learning what’s new in AI and talking with industry experts, then distills the most important developments into one free email every morning.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

What I am reading/testing/checking out:

  • Article/Analysis: A critical view on the Maui Emergency Management Agency Wildfire After-Action Report

  • Tool Preview: Ari - a deep research agent that thinks, reads, and analyzes up to 400 sources, and publishes polished reports in under 5 minutes.

  • Research: We Feel, We Understand: Examining the Moderating Effects of Publics' Empathy on Crisis Outcomes Across Crisis Types and Response Strategies

  • Tool Test: A conversation on all things crisis with a virtual avatar called Maya

Let’s meet!

Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up.

How satisfied were you with the content in this edition? 📚


By the way: I'm a huge fan of Beehiiv

For those managing communications at any scale - from individual thought leadership to comprehensive corporate messaging strategies - Beehiiv merits serious consideration.

In the interest of full disclosure, I do receive a commission from qualified referrals. However, my recommendation comes from direct experience implementing Beehiiv across multiple communication initiatives with consistently positive outcomes. 😊 ✍️

PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!

Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI law for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.
