The Persuasion Machines: New Crisis Playbooks for AI-Driven Reputation Threats
University of Zurich study reveals AI systems three to six times more persuasive than humans, forcing a rethink of communication defence strategies

Dear reader,
This week, let’s talk about a controversial University of Zurich research project.
Zurich researchers ran AI bots on Reddit that were up to SIX TIMES better at changing people's minds than humans – and no one could tell they weren't real people. Not in some controlled experiment, but in the wild – arguing about politics, education, gender issues, you name it.
For those of us in reputation management, this should be on the agenda. If an AI can out-argue the best human communicators, what happens when it targets your organisation or client?
In this week’s edition of the Wag The Dog newsletter, I break down practical defences communication teams should be implementing right now, from blockchain verification to new rapid response protocols.
But I also ask an important question: does the end justify the means in research?
Give it a read below – your crisis playbook might need an update!
Before we go to the main story…
🛡️ Want to strengthen your crisis comms playbook against rage farming?
On May 15, I’ll be hosting a live training webinar in collaboration with P World:
“Defending Your Organization Against Rage Farming”
🕐 1–4 PM ET / 19:00–22:00 CET
This is a hands-on session for teams who want to get into real scenarios, ask questions, and walk away with actionable tools.
👉 Register for the Live Webinar
But if you’re looking for something more in-depth — or can’t make the live event — I’m also launching a self-paced online course with step-by-step frameworks, templates, and the full 24-hour response system.
🔗 Join the waitlist for the Rage Farming Defense Course
Two formats, same goal: helping you prepare before the outrage storm hits.
The Research Project
Researchers from the University of Zurich have shown that AI systems are now beating humans at persuasion in real-world settings—not by small margins, but by being three to six times more effective.
The findings, detailed in a paper titled "Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment,"1 show how AI-powered bots infiltrated Reddit's r/ChangeMyView forum and consistently outperformed human debaters at changing opinions on divisive topics.
Understanding the Threat
For months, AI accounts went undetected on this popular debate forum, with success rates far higher than those of typical human commenters.
The most effective approach, using personalised messaging based on each user's demographic and political profile, ranked in the 99th percentile compared to regular users and the 98th percentile against expert debaters, according to the research.
These AI systems didn't just write fluent comments; they deployed a layered set of persuasion techniques, such as:
Scanning users' posting histories to figure out their gender, age, political views, and location
Creating sixteen different potential responses for each interaction
Using another AI to judge which response would be most persuasive
Timing messages to look like they came from a human
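The pipeline those bullets describe is essentially "best-of-N" sampling with a second model acting as judge: generate many candidate replies, score them, keep the winner. Here is a minimal sketch of that pattern in Python, with hypothetical stub functions standing in for the actual language models (this is an illustration of the structure, not the researchers' code):

```python
import random

def generate_candidates(prompt: str, n: int = 16) -> list[str]:
    # Stand-in for an LLM producing n candidate replies (hypothetical stub).
    return [f"{prompt} — candidate argument #{i}" for i in range(n)]

def judge(candidate: str) -> float:
    # Stand-in for a second model scoring persuasiveness (hypothetical stub);
    # seeding on the text just makes the toy score deterministic.
    random.seed(candidate)
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Generate n candidates and keep the one the judge rates highest."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=judge)

print(best_of_n("Why remote work helps productivity"))
```

The unsettling part is how cheap this loop is: the same few lines of orchestration work whether the "judge" is a toy scorer or a frontier model tuned on a target's posting history.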
For crisis communicators, the Zurich study is a five-alarm fire. When AI can persuade better than your best spokesperson and slip past detection, your response window is no longer measured in hours but in minutes.
How It Worked
What makes this experiment particularly alarming isn't just how persuasive these systems were but how they did it.
Unlike older bots that relied on volume and repetition, these succeeded through quality, personalisation, and psychological insight. The personalised approach, which performed best in the study, shows a major shift in AI persuasion tactics.
By analysing a user's digital history, the AI crafted arguments that hit home with their values, beliefs, and communication style. This is a type of influence that works at a scale traditional methods can't match.
Consequences for Society
The implications go well beyond academic interest. Based on the research findings, several key risks stand out:
Democracy at risk: AI systems can now persuade people in ways humans can't match, letting small groups potentially gain outsized influence over public opinion.
Undetectable infiltration: Throughout the experiment, no users spotted the AI accounts as non-human, suggesting we're bad at detecting these operations.
Trust breaking down: If we can't tell the difference between human and AI-generated persuasion, we lose the basis of real conversation—knowing who we're actually talking with.
That a personalised approach could rank in the 99th percentile among all users underscores how easily these capabilities could be misused.
For businesses, the threat extends to brand protection and reputation management. AI campaigns could quietly shape opinions about products, services, or company actions faster than PR teams could respond.
This creates a new kind of reputation risk where damaging narratives could spread before crisis managers even notice them.
Countermeasures: The Communication Professional's Playbook
For PR professionals and crisis communication managers, these findings are both a red flag and a major shift. The old approach to reputation management needs to change.

Here's how PR and crisis teams should adapt:
AI-Powered Monitoring: Set up systems that can spot unusual patterns in conversations about your brand. Look for sudden opinion shifts, coordinated messaging, or arguments that seem unusually persuasive.
Blockchain Verification: Consider using blockchain to verify your official communications. This creates a permanent record of what you said, when, and by whom. Each press release or statement gets a unique digital signature that people can verify, making it harder for AI to fake your company's voice.
Faster Response Plans: Create crisis plans specifically for suspected AI campaigns. These should include quicker response times (minutes, not hours) and pre-approved message templates ready to go.
Voice Authentication: Set up stronger verification systems so stakeholders can tell the difference between your real communications and possible AI fakes. This could include special verification channels or writing styles that are harder for AI to copy.
Real Relationships: Build stronger direct relationships with key stakeholders, journalists, and community members. These human connections become more valuable as digital channels become less trustworthy.
Truth Anchors: Include distinctive, verifiable details in your messaging that create anchor points, making it harder for AI systems to create believable counter-narratives.
Cross-Channel Consistency: Make sure your information is consistent across multiple channels, making coordinated AI campaigns harder to pull off.
Getting Ahead of Attacks: Don't just respond to misinformation—anticipate potential AI-driven attacks and prepare your audience for them before they happen.
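On the monitoring point above: you don't need an AI to catch the first symptom of a coordinated campaign – a sudden spike in conversation volume. Even a simple statistical baseline can flag days worth a human look. A toy sketch, assuming daily brand-mention counts (the threshold and numbers are illustrative, not a production detector):

```python
from statistics import mean, stdev

def spike_alert(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's brand-mention count if it sits more than `threshold`
    standard deviations above the recent baseline (a simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is unusual
    return (today - mu) / sigma > threshold

# Two weeks of normal chatter, then a surge worth investigating
baseline = [120, 115, 130, 118, 125, 122, 119, 121, 117, 128, 124, 120, 116, 123]
print(spike_alert(baseline, 310))  # True: possible coordinated activity
print(spike_alert(baseline, 131))  # False: within normal variation
```

A real system would track sentiment and argument similarity too, but the principle is the same: establish a baseline before the crisis, so anomalies stand out during one.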
Blockchain technology offers companies a practical way to verify their communications. (I wrote about this… 4 years ago 😅)2 By recording each official statement with a timestamp and digital signature that can't be altered, stakeholders can check if content really came from the organisation.
For many organisations, these solutions will mean adding technical experts to communication departments. PR professionals will need to understand both messaging and verification technology.
Creating a verification system where people can check the authenticity of company statements against blockchain-verified originals could become standard practice in high-risk industries.
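As a rough illustration of the idea, a hash chain captures the core property: each entry's hash covers the previous entry, so silently editing an old statement breaks every hash after it. This toy sketch omits the digital signatures and public ledger a real deployment would need, and all names in it are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def _digest(record: dict) -> str:
    # Canonical JSON so the same record always hashes to the same value
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class StatementLedger:
    """Toy tamper-evident ledger of official statements."""

    def __init__(self):
        self.entries = []

    def publish(self, author: str, text: str) -> dict:
        record = {
            "author": author,
            "text": text,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        record["hash"] = _digest({k: v for k, v in record.items() if k != "hash"})
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or e["hash"] != _digest(body):
                return False
            prev = e["hash"]
        return True

ledger = StatementLedger()
ledger.publish("Acme Comms", "We are investigating the reported incident.")
ledger.publish("Acme Comms", "Update: no customer data was affected.")
print(ledger.verify())  # True: chain is intact
ledger.entries[0]["text"] = "We deny everything."  # tampering with history
print(ledger.verify())  # False: tampering detected
```

The point for communicators isn't the code; it's that anyone holding the chain can independently prove what you said and when – exactly the property an AI impersonating your brand can't fake.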
The cost of these verification systems might seem high (they are not!), but as the Zurich study shows, the cost of unchecked AI persuasion campaigns could be much higher. Organisations caught unprepared risk not just temporary reputation damage but lasting loss of trust.
Looking Forward
Even more worrying is how quickly these capabilities are evolving. The research used AI models available in late 2024 and early 2025. Given how fast the technology is advancing, what might be possible with models released just months from now?
The Reddit experiment revealed more than just how persuasive current AI can be; it showed how vulnerable our information systems are to manipulation by AI. The findings suggest AI can persuade people in ways and at speeds humans can't match, creating new risks to privacy and increasing the potential for manipulation.
The fight for attention and belief has entered a new phase. We need to see if our democratic institutions, regulations, and organisational defences can develop quickly enough to preserve real human conversation in an age where machines can argue more persuasively than we can.
Stay safe!
PS: This experiment has ignited a significant ethical debate.3
Does science trump ethics? Researchers deployed bots posing as sexual-assault survivors and trauma counsellors without disclosing their artificial nature—a clear breach of ethical standards.
They neither sought informed consent nor heeded their ethics committee's full recommendations. Their methods—deception, potential emotional harm and privacy violations—prioritised data over dignity.
The case forces a question: can research into AI persuasion justify methods that themselves manipulate and deceive? Or does such an approach undermine the very science it claims to serve?4
PPS: The Zurich study’s covert deployment of AI bots on Reddit without user consent or disclosure likely breaches multiple GDPR provisions related to lawful, fair, and transparent processing of personal data, as well as the rights of data subjects.
Simultaneously, it violates the EU AI Act’s transparency obligations that require users be informed they are interacting with AI systems and that AI-generated content be clearly labeled. These combined breaches expose the researchers to potential legal liability under both frameworks.
References and further reading
1 Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment. (n.d.). Retrieved May 1, 2025, from https://regmedia.co.uk/2025/04/29/supplied_can_ai_change_your_view.pdf
2 IPRA | ITL #417 The transformative power of blockchain: opportunities for Communicators. (2025). Ipra.org. https://www.ipra.org/news/itle/itl-417-the-transformative-power-of-blockchain-opportunities-for-communicators/
3 META: Unauthorized experiment on CMV involving AI-generated comments. (2025). r/ChangeMyView, Reddit. https://www.reddit.com/r/changemyview/comments/1k8b2hj/meta_unauthorized_experiment_on_cmv_involving/
4 AI-Reddit study leader gets warning as ethics committee moves to “stricter review process.” (2025, April 29). Retraction Watch. https://retractionwatch.com/2025/04/29/ethics-committee-ai-llm-reddit-changemyview-university-zurich/
Sponsor
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
What I am reading/testing/checking out:
Research: Psychological resilience and crisis READINESS: connecting employee well-being and internal crisis communication
Article: Will the humanities survive artificial intelligence?
Article: How cybersecurity teams can involve HR to optimise incident response
Online Event: The RC3 Long Night of Research - Tackling Global Humanitarian Challenges through Research
Let’s meet!
Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up.
PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!
Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI law for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.
Reply