Tech's Deepfake Crisis: The Game Theory Approach
The Deepfake Crisis That Shook Tech—And What Game Theory Revealed About It

Dear reader,
There was no warning. No slow build-up. One moment, everything was fine; the next, a major global tech company was in full-blown crisis mode.
An AI-generated deepfake video of the company’s CEO hit the internet, showing them making inflammatory remarks about data privacy and user manipulation. It spread like wildfire.
Social media erupted. News outlets ran the clip on repeat. Some experts questioned its authenticity, but for the vast majority, the damage was already done. Regulators quickly took notice, floating the idea of formal investigations into the company’s data practices.
The stock price wobbled as investors worried about the financial fallout. Competitors stayed quiet—publicly, at least—but behind the scenes, they were watching closely. This was either a disaster that would haunt the company for years or an opportunity for them to take a leadership position in AI misinformation regulation.
It was, in short, the kind of nightmare scenario that keeps crisis communication professionals awake at night.
Except, in this case, it wasn’t real 😅
I created this fictional but highly realistic scenario (just ask $RACE or $KNBE) to test my self-developed Game Theory for Crisis Communication AI sidekick.
The goal? To map out the potential responses the company could take, predict how key stakeholders would react, and determine which crisis strategy would lead to the best possible outcome.
And here’s what the analysis revealed…
The Crisis Playbook: What The Company Could Do, And What It Should Do
When a company faces a crisis of this scale, it typically has three broad strategic choices:
Proactive: Respond immediately, deny the deepfake, provide AI forensic proof, demand social media platforms take it down, and advocate for tighter regulation of AI-generated misinformation.
Defensive: Delay the response, launch an internal investigation before making a statement, and cautiously question the authenticity of the video.
Avoidance: Say nothing, hope the controversy fades on its own, and avoid giving it more oxygen.

Each approach comes with consequences.
If the company responded proactively, it could contain the damage quickly. The faster it provided indisputable proof that the video was fake, the better chance it had of stabilising public perception and keeping regulators at bay.
However, acting too quickly, before the forensic analysis was watertight, could create even more credibility issues if the evidence wasn't compelling enough.
A defensive response would buy time to confirm the facts, but at a steep cost. In the absence of a strong rebuttal, the media and public would fill the silence with speculation.
In a world where social media outrage moves faster than the truth, this option risked letting the narrative spiral out of control before the company could reclaim it.
The avoidance strategy, which some companies might be tempted to try, was the riskiest of all. In some cases, letting a controversy die on its own can work—but not when government agencies, the media, and a sceptical public are actively demanding answers.
The AI model predicted that doing nothing would almost certainly lead to regulatory investigations, a prolonged media firestorm, and significant financial damage.
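To show the shape of that calculation, here is a minimal Python sketch of a strategy-versus-scenario payoff table. Everything in it is an illustrative assumption on my part: the scenario probabilities, the payoff numbers, and the scoring rule are stand-ins, not output from the actual AI sidekick.

```python
# A minimal sketch (not the actual AI sidekick) of scoring the three
# crisis strategies. All probabilities and payoffs are illustrative
# assumptions, chosen only to show the structure of the analysis.

# Stakeholder reaction scenarios and their assumed likelihoods.
SCENARIOS = {
    "public_believes_fake": 0.5,   # forensic proof lands, perception stabilises
    "narrative_spirals": 0.3,      # speculation outruns the rebuttal
    "regulators_escalate": 0.2,    # formal investigations open
}

# Assumed payoffs (arbitrary "reputation points") for each strategy
# under each scenario. Higher is better.
PAYOFFS = {
    "proactive": {"public_believes_fake": 8, "narrative_spirals": 2, "regulators_escalate": 0},
    "defensive": {"public_believes_fake": 4, "narrative_spirals": -3, "regulators_escalate": -5},
    "avoidance": {"public_believes_fake": 1, "narrative_spirals": -8, "regulators_escalate": -10},
}

def expected_payoff(strategy: str) -> float:
    """Probability-weighted payoff of a strategy across all scenarios."""
    return sum(p * PAYOFFS[strategy][s] for s, p in SCENARIOS.items())

for strategy in PAYOFFS:
    print(f"{strategy:>9}: {expected_payoff(strategy):+.1f}")
print(f"Best response under these assumptions: {max(PAYOFFS, key=expected_payoff)}")
```

Under these assumed numbers, avoidance scores worst by a wide margin, which mirrors the model's prediction above.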
The Game Theory Outcome: The Best Response
The AI-powered game theory analysis confirmed what experience already suggests: the best approach was a proactive response.
By immediately denying the deepfake and backing up the claim with forensic AI analysis, the company could limit the spread of misinformation before it became entrenched in public belief. However, the AI model also suggested that just proving the video was fake wasn’t enough.
The company needed to do more than play defence. It needed to reframe the crisis as an opportunity to lead on AI misinformation policy.
This meant shifting the conversation from “Is this video real?” to “How dangerous are AI deepfakes, and what should companies do about them?”
By bringing in independent cybersecurity experts, collaborating with policymakers, and pushing for better AI-generated content detection tools, the company could position itself as part of the solution rather than the problem.
The media, facing a choice between sensationalising the scandal and covering the broader issue of AI misinformation, would likely begin shifting their focus to the risks of deepfake technology itself.
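Extending the earlier sketch with more assumed numbers, the second move can be scored the same way: "deny and reframe" trades a small extra effort for a lower chance of hostile coverage plus the upside of leading the policy conversation.

```python
# Hypothetical second-stage scoring, building on the earlier sketch.
# All numbers remain illustrative assumptions, not model output.

BASE = 4.6  # expected payoff of the proactive denial from the sketch above

FOLLOW_UPS = {
    # move: (assumed probability coverage stays hostile, assumed bonus
    #        from leading the AI-misinformation conversation)
    "deny_only":        (0.60, 0.0),
    "deny_and_reframe": (0.25, 3.0),
}

HOSTILE_COVERAGE_PENALTY = 5.0  # assumed reputational cost of hostile coverage

def follow_up_value(move: str) -> float:
    """Base payoff, plus any leadership bonus, minus expected media damage."""
    p_hostile, bonus = FOLLOW_UPS[move]
    return BASE + bonus - p_hostile * HOSTILE_COVERAGE_PENALTY

for move in FOLLOW_UPS:
    print(f"{move:>16}: {follow_up_value(move):+.2f}")
```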
The game theory analysis also predicted that competitors would hesitate to exploit the crisis outright. Many would fear that a deepfake scandal could just as easily happen to them next.
If the company played its cards right, it could even build industry-wide support for stronger AI content regulations, ensuring that no competitor could use the crisis against them without risking their own credibility.
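The competitors' hesitation can be sketched the same way. Under the assumed payoffs below, exploiting the scandal only pays if a competitor is very unlikely to be targeted next; once that chance exceeds roughly 7 per cent, supporting industry-wide regulation wins.

```python
# Hypothetical expected-value check for a competitor's two options.
# q and all payoffs are assumptions for illustration only.

q = 0.3  # assumed probability the competitor is the next deepfake target

def competitor_payoff(move: str) -> float:
    """Expected value of a competitor's move, given the risk of being next."""
    if move == "exploit":
        # Short-term gain now, but exploiting a deepfake scandal and then
        # suffering one yourself is a credibility disaster.
        return 2.0 + q * -10.0
    # Supporting regulation: a smaller immediate gain, plus protection
    # when the competitor's own turn comes.
    return 1.0 + q * 4.0

for move in ("exploit", "support_regulation"):
    print(f"{move:>18}: {competitor_payoff(move):+.1f}")
```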