The European AI Act and Its Impact on Crisis Communication

Explore the European AI Act's risk-based approach to AI regulation, focusing on its impact on crisis communication.

Dear reader,

As you might have heard, the European AI Act 🇪🇺 was officially adopted last week, following a political agreement in December 2023.

It is the world's first comprehensive legal framework for artificial intelligence (AI), regulating AI systems across the European Union. This ground-breaking regulation aims to:

Ensure that AI systems deployed in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, while promoting innovation and competitiveness.

For crisis and risk communication professionals, the implications of this new legislation are significant and complex.

In this Wag The Dog edition, I’ll try to get my head around the implications, but of course… (here comes the disclaimer):

The information provided in this article does not, and is not intended to, constitute legal advice; instead, all the information included in this article is for general informational purposes only. 😅

Let me know how your organisation is tackling these AI-related legal implications.

Enjoy!

🇪🇺 The European AI Act: A Risk-Based Approach

At the heart of the AI Act is a system that categorises AI based on the risks it poses. It's a bit like how we assess risks in crisis management – we look at the potential impact on people's safety, health, and fundamental rights.

The AI systems that could have a big impact in these areas are categorised as "high risk" and come with a long list of obligations. Think of it as a thorough safety review before they reach the market, followed by ongoing monitoring to make sure they stay in line.

As crisis communicators, it's important to understand these categories and the rules associated with them. This is crucial when advising on or deploying AI in sensitive areas such as disaster response, public safety, or critical infrastructure management. We need to know what we're dealing with and what is required of us.

The AI Act also contains a list of AI practices that are simply not allowed. These include manipulating people by exploiting their vulnerabilities, social scoring systems, and certain unauthorised uses of biometric data.

There are also AI systems that pose only a limited risk, and for these, the main requirement is transparency – so that people know when they're interacting with AI.
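
If you find it easier to think about the tiers in a structured way, here is a rough sketch of them as a simple lookup in Python. To be clear: the tier names, examples, and obligation summaries below are my own simplification for discussion, not the Act's legal taxonomy.

```python
# A rough, illustrative sketch of the AI Act's risk-based approach.
# Tier names, examples, and obligation summaries are my own
# simplification for discussion purposes, not legal definitions.

RISK_TIERS = {
    "prohibited": {
        "examples": ["manipulation exploiting vulnerabilities",
                     "social scoring"],
        "obligation": "banned outright",
    },
    "high_risk": {
        "examples": ["critical infrastructure", "law enforcement",
                     "emergency services"],
        "obligation": "risk assessment, data governance, human oversight, "
                      "ongoing monitoring",
    },
    "limited_risk": {
        "examples": ["chatbots", "emotion detection"],
        "obligation": "transparency: tell people they are interacting with AI",
    },
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

# e.g. what applies to an AI system used by emergency services?
print(obligations_for("high_risk"))
```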

Now let's look at general-purpose AI and generative AI, like ChatGPT.

These systems must also comply with transparency rules, and models that could pose systemic risks must be assessed, with any serious incidents reported to the European Commission.

This is a big deal for us because it means that we need to know how AI is developing and how it could affect public discourse, misinformation, and our emergency communication strategies.

Finally, we can't forget the challenges of compliance and implementation. The AI Act has a wide reach and also applies to organisations outside the EU if they offer AI products or services to EU citizens.

In this respect, it's a bit like GDPR. This means that organisations need to be proactive and that we, as crisis communicators, have a role to play in developing responsible and ethical AI policies to stay ahead of these regulations.

🚨 Possible implications for Crisis Communication

Here’s an overview of how I think this new regulation could impact our work:

Enhanced Transparency and Trust

When it comes to transparency and trust, the AI Act lays out clear obligations for AI systems, especially the ones that interact directly with people, like chatbots and emotion detection systems.

Essentially, if you're using AI to engage with the public, you need to be crystal clear about it. This means labelling any AI-generated content.
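
As a toy example of what that labelling could look like in practice, here is a minimal Python sketch that appends a disclosure line to any message drafted with AI before it goes out. The wording of the label and the function name are my own invention, purely illustrative; the Act does not prescribe them.

```python
# Minimal, illustrative sketch: attach an AI-disclosure label to
# outgoing messages drafted with AI assistance. The label wording
# is my own example, not language prescribed by the AI Act.

AI_DISCLOSURE = "Note: parts of this message were drafted with AI assistance."

def label_if_ai_generated(message: str, ai_assisted: bool) -> str:
    """Append a disclosure line when a message was drafted with AI."""
    if ai_assisted:
        return f"{message}\n\n{AI_DISCLOSURE}"
    return message

draft = ("Flood warning: residents of the riverside district "
         "should move to higher ground.")
print(label_if_ai_generated(draft, ai_assisted=True))
```

The point is less the code than the workflow: the disclosure step happens automatically, so no one has to remember it mid-crisis.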

I think this is a real opportunity in the context of crisis communication. By ensuring that our communications are transparent about how we're using AI to disseminate information or analyse public sentiment during a crisis, we can actually enhance public trust. It's about being open and honest, showing that we have nothing to hide when it comes to our use of AI.

(Example: I have now included the AI "disclaimer" at the bottom of my newsletter instead of having it as a separate paragraph.)

But then… will content labelled as AI-generated be perceived differently by people at risk in the context of risk and crisis communication?

Research suggests it really doesn’t make a difference.

Surprisingly, our results demonstrate that regardless of the user interface presentation, participants tend to attribute similar levels of credibility. While participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content, they rate AI-generated content as being clearer and more engaging.

Huschens, M., Briesch, M., Sobania, D., & Rothlauf, F. (2023). Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content.

Prohibition of High-Risk and Manipulative AI Practices

Now, the AI Act isn't just about transparency – it also takes a hard line against certain AI practices that are deemed too risky.

This includes AI systems designed for manipulative practices that exploit vulnerabilities. In a crisis situation, where emotions are running high and people are particularly vulnerable, this is something we need to be extra cautious about.

As communication professionals, it's our responsibility to steer clear of any AI technologies that could cross this line, even inadvertently. The last thing we want is to undermine public trust or find ourselves on the wrong side of the law. It's about being vigilant and making sure that the AI we use is ethical and above-board.

Obligations for High-Risk AI Systems

For AI systems that are identified as high-risk, like those used in critical infrastructure, law enforcement, and emergency services, the AI Act has some pretty stringent requirements. We're talking risk assessments, data governance, human oversight – the whole nine yards.

What does this mean for us in crisis communication? Well, if we're using AI in these high-stakes areas, we need to make sure we're adhering to these regulations to the letter.

We’ll need to ensure that the AI we use in crisis response is safe, reliable, and respectful of people's rights. This regulatory framework is going to be a major factor in how we select and deploy AI technologies in our crisis management and communication strategies going forward.

Global Impact and Setting Standards

Even though the AI Act is an EU regulation, its impact is going to be felt globally. If you're a company outside the EU but you're offering AI products or services within the Union, you've got to comply with the Act.

This means that crisis communication professionals worldwide may need to adapt their practices to meet these standards, especially if they operate or communicate within the EU.

In a way, the AI Act is setting the tone for ethical AI use on a global scale. It could become the de facto standard for AI in crisis communication, no matter where you are in the world.

Preparation and Compliance Challenges

Most probably, meeting the AI Act's requirements for transparency, data governance, and human oversight is going to be a challenge for many of us in crisis communication. We may need to make significant changes to how we select, deploy, and manage AI technologies during emergencies.

But these efforts can also be an opportunity.

By aligning our practices with the AI Act, we can ensure compliance, and take a hard look at how we use AI ethically in crisis communication. It's a chance to enhance our practices and make sure they're in line with the EU's goals of trustworthy AI.

In my experience, being proactive and embracing these kinds of challenges is always better than being caught off guard.

It's about staying flexible and prioritising ethics in everything we do. So, how are you preparing to implement the guidance of the European AI Act?

SPONSOR

The Rundown is the world’s fastest-growing AI newsletter, with over 500,000 readers staying up to date with the latest AI news and learning how to apply it.

Our research team spends all day learning what’s new in AI, then distills the most important developments into one free email every morning.

What I am reading/testing/checking out:

  • New Book (pre-order): Co-Intelligence: Living and Working with AI by Ethan Mollick

  • Article: AI Predicts Flooding via PreventionWeb.

  • Research: Rage against the machine? Framing societal threat and efficacy in YouTube videos about artificial intelligence.

Newsletters

  • Techpresso: Get smarter about technology in 5 minutes.

  • Dear CEO: Learn the tactics the best leaders are using to navigate tricky workplace dilemmas.

Before you go…

If you have found my articles helpful or gained knowledge from them, I would greatly appreciate it if you could spare a few moments to write a review.

Your review will help me improve and also assist other communication professionals in finding the resources they need to do well in their roles.

Thank you for your support! It's readers like you that make Wag The Dog more than just a newsletter.

PS: I hope you've enjoyed this newsletter! Creating it each day is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Please click the share links below to spread the word with your friends and colleagues; it would mean so much to me. Thank you for reading!

Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These changes were made in line with the transparency requirements of the EU AI Act for AI-generated content.
