Deepfakes: When seeing is no longer believing

How to prepare for a deepfake-related crisis.


Dear reader,

Not so long ago, a video of a head of state or the CEO of a company making an announcement was proof enough. Thanks to deepfakes, that certainty has now disappeared. These AI-generated fakes can be so convincing that even seasoned professionals struggle to recognise them.

And here’s the kicker: most people can’t even tell the difference.

A new study by iProov, a leading biometric identity verification company, tested 2,000 UK and US consumers and exposed them to both real and fake content.1 

The results are stark: only 0.1% of participants could correctly distinguish real from fake across all the images and videos shown, meaning that 99.9% of people - including employees, customers, journalists, and authorities - are at risk of being fooled.

For crisis communicators, this changes everything.

A fake emergency alert, a fabricated scandal, or a forged executive statement can force a company into extensive damage control before it even realises it has been manipulated. If your crisis plan isn’t already prepared for deepfakes, it’s inadequate.

Kind regards,

Philippe


The Deepfake Threat to Crisis and Emergency Communications

Deepfakes are not only a cybersecurity problem; they're a direct attack on trust. And in a crisis, trust is everything. Here’s how deepfakes can throw organisations into turmoil:

1. Undermining public trust in official statements

Imagine an AI-generated deepfake from a government official announcing a city-wide evacuation. Within minutes, social media is ablaze, emergency lines are overloaded, and real emergency responders are overwhelmed.

Even after a hoax is exposed, the damage lingers. How many people will hesitate the next time a real emergency arises, unsure whether the alert is genuine? Deepfakes not only create immediate chaos but also erode long-term credibility.

2. Misleading the emergency services

Faked emergency calls, spoofed 911 reports, and fabricated accounts of ongoing attacks can pull police, firefighters, and medical personnel away from real crises. When seconds count, a diversion caused by a fake can mean the difference between life and death.

3. Financial and reputational sabotage

Deepfake fraud is already happening.2 Criminals have successfully used AI-generated voice clones to trick employees into transferring millions of dollars.

In one high-profile case, a UK engineering firm lost $25 million after an employee was tricked into transferring the money during a video call in which fraudsters impersonated the company’s executives.3

On a larger scale, a convincing deepfake of a CEO announcing financial distress can send share prices into freefall before the company even has a chance to respond.

4. Supercharged social engineering

Phishing emails used to be the hackers’ favourite trick. Now they can send fake voice messages from the "CEO" instructing employees to disclose sensitive data. And it gets worse: thanks to advances in real-time deception technology, even live video calls can be manipulated.

Since 99.9% of people can't tell the difference, companies can no longer rely on their "gut feeling" to spot a scam.

How to prepare for the deepfake era

Deepfake attacks are inevitable. The only question is whether your organisation is prepared. Here’s how communicators can limit the damage.

1. Teach your employees to recognise the signs (as best they can)

Although deepfakes are almost unrecognisable to the human eye, they aren't flawless.4 Telltale signs include:

  • Unnatural blink patterns (or none at all)

  • Inconsistent lighting (shadows or reflections that don't match the scene)

  • Unusual audio disturbances (slight robotic distortions in the voice)

  • Unusual facial expressions (especially around the mouth when speaking)

However, expecting employees to reliably recognise deepfakes is unrealistic, as iProov’s research shows. Training should focus less on "recognising fakes" and more on verification through secondary channels.

2. Build multi-layered verification systems

Crisis communication needs built-in authentication measures. When a company or government official issues a statement, it should include the following (a minimal sketch of the idea follows the list):

  • A secondary method of confirmation (e.g. digital watermarks, pre-agreed security phrases, official cross-posting on social media)

  • Multi-factor authentication for sensitive internal requests

  • Clear communication about who the public should contact for genuine updates

  • For high-risk actions, no request - no matter how urgent - should be processed based on video or voice alone.
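
To make the "never on video or voice alone" rule concrete, here is a minimal Python sketch of how such a policy could be encoded. The channel names and the two-confirmation threshold are illustrative assumptions, not a reference to any real product or standard:

    # Minimal sketch only. Channel names and the two-confirmation
    # threshold are illustrative assumptions, not a real product's API.
    from dataclasses import dataclass, field

    # Pre-agreed, independent confirmation channels; the call that
    # delivered the request is deliberately not on this list.
    APPROVED_CHANNELS = {"callback_known_number", "signed_email", "in_person"}

    @dataclass
    class HighRiskRequest:
        description: str
        confirmations: set = field(default_factory=set)

        def confirm(self, channel: str) -> None:
            # Only count confirmations from pre-agreed channels.
            if channel in APPROVED_CHANNELS:
                self.confirmations.add(channel)

        def may_proceed(self) -> bool:
            # Never act on video or voice alone: require at least
            # two independent channels before processing.
            return len(self.confirmations) >= 2

    request = HighRiskRequest("Wire transfer requested on a video call")
    request.confirm("callback_known_number")
    print(request.may_proceed())  # False - one channel is not enough
    request.confirm("signed_email")
    print(request.may_proceed())  # True - two independent channels agree

The point of the sketch is the design choice, not the code: verification is a property of the request, and urgency never lowers the threshold.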

3. Use AI-supported deepfake detection

AI-generated deceptions require an AI-driven defence. Organisations should invest in deepfake detection software that can scan video and audio files for signs of tampering. While these tools are not foolproof, they are getting better, and using multiple layers of detection can provide an additional line of defence.
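
As an illustration of what "multiple layers of detection" can mean in practice, here is a hedged Python sketch in which several independent detectors score the same file and any strong signal escalates it to human review. The detector functions are hypothetical placeholders, not a real vendor API; a real deployment would call actual detection models:

    # Illustrative pattern only: the detector functions are hypothetical
    # placeholders for real vendor or open-source models.
    from typing import Callable

    Detector = Callable[[str], float]  # returns tamper likelihood in [0, 1]

    def visual_artifact_score(path: str) -> float:
        # Placeholder for a frame-level visual-artefact model.
        return 0.2

    def audio_artifact_score(path: str) -> float:
        # Placeholder for a voice-clone detector.
        return 0.7

    def needs_human_review(path: str, detectors: list[Detector],
                           threshold: float = 0.6) -> bool:
        # Escalate if ANY layer crosses the threshold: layering trades
        # more false positives for fewer misses on high-stakes content.
        return any(detect(path) >= threshold for detect in detectors)

    print(needs_human_review("ceo_statement.mp4",
                             [visual_artifact_score, audio_artifact_score]))

The "any layer triggers review" rule reflects the stakes: for crisis content, a false alarm costs minutes, while a miss can cost the organisation its credibility.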

4. Pre-bunking: educate stakeholders before the attack takes place

By the time a deepfake hoax about your organisation gets into circulation, the damage is already done. That’s why pre-bunking - actively educating stakeholders about deepfake risks - is so important.

  • Regularly communicate how official updates are published

  • Warn employees, customers and partners about deepfake threats

  • Create rapid response protocols to quickly combat misinformation

5. Develop a deepfake crisis manual

A deepfake-specific crisis plan should include the following:

  • Quick detection: Assign teams to monitor fake content in real time

  • Pre-approved messages: Have press release templates ready to respond immediately

  • Partnerships with platforms: Build relationships with social media companies to enable rapid takedown of fake content

  • Legal response: Work with legal teams to assess the available remedies and take action against bad actors

6. Strengthen official communication channels

During a crisis, the first message that is seen is often the one that sticks. Organisations should ensure that they have:

  • Verified communication channels (official websites, apps, SMS emergency alerts)

  • A strategy for rapid correction when misinformation spreads

  • A policy of transparency — silence only fuels mistrust

Deepfake attacks thrive in uncertainty. If you don't control the story, someone else will.

What’s next? The fight for trust

Deepfakes are on the rise. If anything, they’re getting better - and the public’s ability to recognise them is getting worse.

The challenge for crisis communicators is not just to respond to deepfakes but to stay one step ahead of them. The next deepfake crisis won’t just be a test of your organisation’s response time. It will be a test of your credibility, your resilience and your ability to maintain public trust.

The reality is this: it’s no longer about if deepfake attacks will happen. It’s about when.

And the real question? Will you be prepared when they happen?

References and further reading

1  iProov. (2025, February 12). iProov study reveals deepfake blindspot: Only 0.1% of people can accurately detect AI-generated deepfakes. https://www.iproov.com/press/study-reveals-deepfake-blindspot-detect-ai-generated-content

2  Brodsky, S. (2024, September 10). Deepfake detection. IBM. https://www.ibm.com/new/announcements/deepfake-detection

3  Adaptive Security. (2024). U.K. engineering group loses $25M in Hong Kong deepfake video scam. https://www.adaptivesecurity.com/resources/u-k-engineering-group-loses-25m-in-hong-kong-deepfake-video-scam

4  CSIRO. (2024). Keeping it real: How to spot a deepfake. https://www.csiro.au/en/news/all/articles/2024/february/detect-deepfakes

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

What I am reading/testing/checking out:

  • Article: 2025 Homeland Security Threat Forecast: Erosion of Trust

  • Tool: a comprehensive framework for the collective management of community feedback in humanitarian settings

  • Report: Adversarial Misuse of Generative AI - by Google (opens PDF)

  • Article: Combating Misinformation: Limited State Resources During Crises

Let’s meet!

Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up for a coffee or a Negroni.


PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!

Parts of this newsletter were drafted using AI technology. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. This disclosure is made in line with the transparency requirements of the EU AI Act for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.
