Deepfake Detection: Safeguarding Trust in Official Communications

Exploring how advanced AI can combat the rising threat of deepfakes.

Dear reader,

Today, I am taking a closer look at a topic that's becoming increasingly relevant in our professional lives: the rise of AI in identifying and countering deepfakes.

A recent study found that 87% of communications professionals consider misinformation the biggest threat to their brands, while another found that only 21.6% of participants were able to accurately identify deepfakes.

With about 40 elections taking place worldwide, it's more important than ever for government, crisis, and emergency communicators to be able to distinguish what's real from what's not.

It's not easy, but the same technology that enables the creation of deepfakes can also help counter them, or at least identify them.


The growing threat of synthetic media, also known as deepfakes

With the proliferation of this technology on social platforms, public trust in crisis communications faces serious risks. Innovative detection tools, on the other hand, offer a strong and effective defence mechanism.

By quickly analysing media for manipulated elements, advanced AI can detect synthetic content and prevent the spread of false stories. As deepfake technology continues to advance, this capability is becoming increasingly important.
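To make this idea a little more concrete, here is a minimal sketch of how per-frame analysis can feed an overall verdict. The `score_frame` function is a hypothetical placeholder; a real system would run a trained neural network on each frame rather than read a precomputed score.

```python
# Minimal sketch: aggregate per-frame manipulation scores into a verdict.
# `score_frame` is a hypothetical stand-in for a trained detector that
# returns a probability (0..1) that a frame has been manipulated.

def score_frame(frame):
    # Placeholder: a real detector would run a neural network here.
    return frame.get("manipulation_score", 0.0)

def classify_video(frames, threshold=0.5, min_flagged_ratio=0.3):
    """Flag a video as likely synthetic if enough frames look manipulated."""
    if not frames:
        return False
    flagged = sum(1 for f in frames if score_frame(f) > threshold)
    return flagged / len(frames) >= min_flagged_ratio

real = [{"manipulation_score": 0.1}] * 10
fake = [{"manipulation_score": 0.9}] * 6 + [{"manipulation_score": 0.2}] * 4

print(classify_video(real))  # → False
print(classify_video(fake))  # → True
```

Aggregating over many frames, rather than trusting any single one, is what makes this kind of analysis robust against the occasional clean-looking frame in an otherwise manipulated clip.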

Challenges in recognising high-quality fakes

With recent improvements in AI techniques, deepfakes have reached unprecedented levels of realism. In some cases, even digital forensics experts struggle to distinguish fakes without robust tools. This poses a danger in times of crisis when emergency response relies on the trust of the community.

AI and political campaigns

The role of AI in political campaigns is a controversial topic. The use of AI to create fake news and manipulate images can have a significant impact on public opinion. This is why there are increasing calls to restrict or ban the use of AI in political campaigns.

AI and micro-targeting

AI has also revolutionised the practice of micro-targeting: sending tailored messages based on digital trace data. AI makes it easier and cheaper to create many different versions of a message and find out which one is most effective for a particular group of people. This has effectively democratised the creation of disinformation.

Detecting deepfakes

There is no clear sign by which you can recognise a deepfake. However, you can look out for certain inconsistencies, such as unnaturally smooth skin, misplaced shadows, and unnatural lip movements, to name but a few.

  • Pay attention to the face. Facial changes are the primary focus of high-quality deepfake manipulations. Look for irregularities in skin quality and age.

  • Look at the eyes and eyebrows. Shadows may not appear where they should naturally.

  • Check the glasses; look for unnatural glare and changing angles when the person moves.

  • Examine facial hair: deepfakes often struggle to convincingly add or remove facial hair.

  • Look for unnatural blinking: does the person blink too much or too little?

These tips can help you distinguish between real and fake content. But don't forget that it can be difficult to recognise high-quality fakes.
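The blinking cue above can even be checked programmatically. Below is a minimal sketch of the common eye-aspect-ratio (EAR) heuristic, assuming the six eye landmark points have already been extracted for each frame (in practice by a face-landmark library); the coordinates and threshold here are illustrative.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six landmark points (p1..p6).

    p1 and p4 are the eye corners, p2/p3 the upper lid, p5/p6 the
    lower lid. EAR drops sharply when the eye closes, which makes
    blinks easy to spot in a sequence of frames.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    """Count transitions from open (EAR above threshold) to closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

# Illustrative landmarks for an open eye (image coordinates, y grows downward)
open_eye = [(0, 0), (3, -2), (7, -2), (10, 0), (7, 2), (3, 2)]
print(round(eye_aspect_ratio(open_eye), 2))  # → 0.4

# Simulated EAR values across frames: two dips below the threshold = two blinks
print(count_blinks([0.4, 0.4, 0.1, 0.1, 0.4, 0.1, 0.4]))  # → 2
```

A suspiciously low (or implausibly high) blink count over a stretch of video is exactly the kind of statistical oddity that automated detectors look for alongside human inspection.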

Quick deployment in public emergencies

Viral misinformation spreads quickly via social platforms. Detection tools need to be deployed on a large scale to tackle fake content in real time during public crises.

By using cloud computing, deep learning, and AI hardware, deepfake scanning can authenticate large volumes of video and images within minutes. With these mass recognition capabilities, authorities can mitigate threats from synthetic media when social coordination is at stake.
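As a rough sketch of what such mass scanning could look like, the snippet below fans a batch of media items out across worker threads and collects the flagged ones. The `scan_item` function is a hypothetical stand-in for a call to a real detection model or service.

```python
from concurrent.futures import ThreadPoolExecutor

def scan_item(item):
    # Placeholder: a real pipeline would call a detection model or API here.
    return {"id": item["id"], "suspect": item["score"] > 0.5}

def scan_batch(items, workers=8):
    """Scan many media items concurrently and collect flagged IDs."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(scan_item, items))
    return [r["id"] for r in results if r["suspect"]]

batch = [{"id": i, "score": 0.9 if i % 3 == 0 else 0.1} for i in range(9)]
print(scan_batch(batch))  # → [0, 3, 6]
```

In a production setting the same fan-out pattern would run on cloud infrastructure with GPU-backed workers, which is what makes authenticating large volumes of video and images within minutes feasible.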

Maintain an open dialogue about evolving protective measures

As deepfake technology becomes more advanced, detection methods must also improve to stay ahead of the curve. Transparency regarding capabilities manages public expectations and curbs speculation.

Officials should provide up-to-date information on breakthroughs in AI-based forensic tools. Open dialogue about developing safeguards against misinformation can increase trust in responsible sources of information.

In short…

Advanced deepfake detection is an important defence against viral misinformation in public emergencies. As AI-generated synthetic media becomes more deceptive, neural network analysis will become increasingly important to ensure the integrity of communications.

To ensure that people can trust official sources, communicators must focus on improving safeguards and rapid-response capabilities, and keep these priorities in view at times when clarity and social coordination are crucial.

As technology advances rapidly, overcoming this threat requires a coordinated technical and social effort.

Did you check out my crisis and emergency communication resources yet? You can download templates, checklists, and practical guidance on this page. You will find tools such as a Crisis Communication Plan Template, the Audience Canvas for Emergency Communication, and much more.


Leadership news that's smart & fun!

Get the latest leadership news and trends with a free newsletter every Monday & Friday! Designed for time-strapped managers, it keeps things informative, enjoyable, and brief.

What I am reading/testing/checking out:

  • Report: Misinformation in the Pharma Sector

  • Tool: The all-in-one AI workspace for writers

  • Newsletter: Tim Ferriss's 5-Bullet Friday

  • Article: Unresolved crises, like zombies, return with a vengeance by Patrick Trancu

How satisfied were you with the content in this edition? πŸ“š


A Quick Note on How I Create Content for Wag The Dog

As you know, I'm passionate about AI and its applications in the fields of PR and crisis communication. So, it shouldn't come as a surprise that I use AI to help draft my articles.

Why? Well, for starters, English isn't my first language. While I'm comfortable with it, AI gives me that extra edge to ensure clarity and coherence. Secondly, I write about AI, so what better way to understand its capabilities than to use it in my own work?

I value transparency, so it's crucial for you to know that although AI assists me in drafting, I personally review and edit each article to guarantee its authenticity.

PS: I hope you've enjoyed this newsletter! Creating it each day is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Please click the share links below to spread the word with your friends and colleagues; it would mean so much to me. Thank you for reading!
