Battling the Conspiracy Beast: How AI Might Be Our Unexpected Ally
A new study shows how AI can play a positive role in combating conspiracy theories
Dear reader,
This week’s article was triggered by a scientific study I discovered through an update from the always excellent Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence.
In emergencies, conspiracy theories pose a significant threat to public safety and social cohesion. From pandemics to natural disasters, these unfounded narratives often spread like wildfire, fuelled by fear, mistrust and the rapid spread of misinformation via digital channels.
But a new study has shown that AI, often depicted as a tool for creating misinformation and related conspiracy theories, can actually help with this important problem.
Let’s dive in, and let me know what you think: AI, friend or foe?
The danger of conspiracy theories in emergency situations
Conspiracy theories in emergencies can have serious real-world consequences.
They can undermine public health efforts, as seen during the COVID-19 pandemic, when false narratives about the origin of the virus or the safety of vaccines hampered vaccination campaigns.
In extreme cases, as with QAnon, these beliefs can even lead to violence and societal disruption.
The origin of these theories often lies in a perfect storm of factors: widespread mistrust of authorities, fear of the unknown and the human tendency to look for patterns and explanations in chaotic situations [1].
Digital and social media have exacerbated this problem, leading to what some researchers call an "infodemic" - a tsunami of misinformation that can spread faster than factual information [2].
Current strategies to combat conspiracy narratives
Combating conspiracy theories and misinformation is a complex challenge that requires a multi-faceted approach. Current strategies include:
Promoting media literacy: educating the public on critical thinking and fact-checking techniques.
Technological solutions: developing AI-driven fact-checking systems and content moderation algorithms.
Legal and policy approaches: exploring new legal frameworks for digital platforms while taking freedom of expression concerns into account.
Customised communication strategies: using credible alternative explanations and addressing broader issues without directly confronting misinformation.
Collaborative efforts: fostering partnerships between media, technology companies and researchers to create a more resilient information ecosystem.
Despite these efforts, the fight against misinformation often seems to be an uphill battle. However, a recent study suggests that artificial intelligence could be a powerful new tool in this fight.
AI: An unexpected ally against conspiracy theories
In a groundbreaking study entitled "Durably reducing conspiracy beliefs through dialogues with AI", researchers Costello, Pennycook and Rand [3] have demonstrated a novel and surprisingly effective approach to combating conspiracy beliefs using AI technology.
In the study, participants had a brief conversation with an AI chatbot (GPT-4 Turbo to be precise) about conspiracy theories they believed in. The results were astounding:
The AI conversations reduced belief in each conspiracy theory by about 20% on average.
This reduction in belief lasted for at least two months after the intervention.
The effect was consistent across a wide range of conspiracy types, from classic theories about the JFK assassination to more recent narratives about COVID-19.
Even participants with deeply held beliefs showed a significant reduction in their conspiracy beliefs.
The intervention had spillover effects: it reduced belief in unrelated conspiracies and influenced behavioural intentions linked to conspiracy beliefs.
These results are particularly noteworthy because they refute the prevailing view that conspiracy believers are resistant to evidence and counterarguments. Instead, the study suggests that many people can actually update their beliefs when presented with sufficiently convincing evidence.
The key to AI's success appears to lie in its ability to offer personalised, evidence-based dialogues tailored to each individual's specific beliefs and evidence.
This approach overcomes the limitations of previous interventions, which often relied on generalised debunking attempts [4] that did not address the specific evidence each believer found convincing.
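To make that idea concrete, here is a minimal sketch of what such a personalised, evidence-based dialogue could look like in code. This is my own illustration, not the researchers' implementation: it assumes the OpenAI Python client and the GPT-4 Turbo model mentioned in the study, and the system prompt and conversation flow are simplified assumptions.

```python
# Illustrative sketch only: a minimal personalised debunking dialogue,
# loosely modelled on the study's setup. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment; the
# prompts and flow are my own assumptions, not the study's actual code.
from openai import OpenAI

client = OpenAI()

# The participant states the conspiracy belief and the evidence they find convincing.
belief = input("Which conspiracy theory do you believe, and why do you find it convincing? ")

messages = [
    {
        "role": "system",
        "content": (
            "You are a respectful, factual interlocutor. Address the person's "
            "specific claims and the evidence they cite, using accurate, "
            "verifiable information. Do not mock or lecture."
        ),
    },
    {"role": "user", "content": belief},
]

# A short back-and-forth: the study used a brief, multi-turn conversation.
for _ in range(3):
    response = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    reply = response.choices[0].message.content
    print(f"\nAI: {reply}\n")
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("You: ")})
```

In practice the study's conversations were more carefully designed than this, but the core idea is the same: let the model respond to the specific claims and evidence the believer actually holds, rather than delivering a generic rebuttal.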
While the authors point out that their findings also highlight the persuasive power of AI (which could potentially be misused), they emphasise the positive potential of this technology when used responsibly.
For example, AI-powered chatbots could be used to provide accurate information for conspiracy-related search terms or to engage with users who share misleading content on social media platforms.
This study offers a glimmer of hope.
It shows that even with deeply entrenched conspiracy beliefs, reasonable dialogue - albeit with an artificial intelligence - can make a real difference.
We may find that AI is an invaluable ally in our efforts to spread the truth and combat misinformation, especially in times of crisis when clear, factual communication is particularly important.
In the fight against the conspiracy beast, we seem to have found an unexpected but powerful new ally. The challenge now is to use it wisely and responsibly.
What do you think?
🎧 Do you listen to podcasts? This newsletter is now available in audio format on Google Podcasts, Spotify, Stitcher, Deezer, Listennotes and many more.
References and further reading
[1] Krekó, P. (2020). Why conspiracy theories soar in times of crises. Eurozine. https://www.eurozine.com/why-conspiracy-theories-soar-in-times-of-crises/?pdf
[2] A Dangerous Infodemic: An Examination of the Impact Social Media Misinformation has on COVID-19 Vaccination Status. (2022). In Proceedings of the 23rd Annual Conference on Information Technology Education. ACM. https://dl.acm.org/doi/10.1145/3537674.3554754
[3] Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science. https://www.science.org/doi/10.1126/science.adq1814
[4] Helfers, A., & Ebersbach, M. (2022). The differential effects of a governmental debunking campaign concerning COVID-19 vaccination misinformation. Journal of Communications in Healthcare, 16(1), 113–121. https://doi.org/10.1080/17538068.2022.2047497
Sponsor
Save 13 Hours Weekly of Podcast Pitching with PodPitch.com
The best way to advertise isn't Meta or Google – it's appearing on dozens of podcasts that your customers already love.
You could write a few emails yourself to podcast hosts...
Or you could automate thousands of emails going out weekly, pitching your people as the PERFECT next podcast guest.
With PodPitch.com...
Log in with your email
Load your brand info
Click "automate"
Emails pitching your team as the perfect next guest will start sending out automatically to podcast hosts.
Big brands like Feastables are already using it instead of expensive PR Agencies.
What I am reading/testing/checking out:
Webinar: Emerging Technologies & AI: What Emergency Management Leaders Need to Know
Tool: FiveThirtyNine - an AI-driven forecasting machine
Article: A local newspaper in Hawaii has turned to AI-generated presenters to draw in new audiences.
Paper: An examination of the relationship between risk perceptions, cultural-religious beliefs and coping during COVID-19 pandemic control in South Asian countries: a systematic review
Tool/Article: Inside the Pod: The AI Research Assistant You’ve Been Dreaming Of
Let’s meet!
Here are the events and conferences I'll be speaking at. If you're around, feel free to message me and we can meet up for a coffee or a Negroni.
🇺🇸 AI in PR Conference + Bootcamp, 17-18 October 2024, Chicago, USA
🇬🇧 Crisis Communications Boot Camp, 4-5 November, London, United Kingdom
🇺🇸 International Association of Emergency Managers (IAEM) Annual Conference, 7 November, Colorado Springs, USA (remote/virtual).
🇳🇿 Emergency Media and Public Affairs (EMPA) conference, 7 November, Wellington, New Zealand (remote/virtual)
🇧🇪 AI in PR Boot Camp II, 20-21 February 2025, Brussels, Belgium
How satisfied were you with the content in this edition? 📚
PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Please click the share links below to spread the word with your friends and colleagues; it would mean so much to me. Thank you for reading!
Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI law for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.