I Fed AI My Emergency Alert Playbook. Here's What Happened.

From flooding alerts to Twitter posts: can AI really help us communicate faster when lives are on the line?

Dear reader,

In this week's edition of the Wag The Dog Newsletter, I'm sharing results from a practical experiment that might interest anyone wrestling with time pressures in emergency communication.

Over the past few days, I've been testing how artificial intelligence can actually help us craft better emergency messages faster.

Using Google's new Opal platform, I built a simple application that takes emergency communication best practices and applies them systematically to create multi-format alerts.

The results? More promising than I expected, but with some clear limitations that point to interesting questions about where AI fits in our profession.

Below, I'll walk you through exactly what I built, how it performed, and what this thirty-minute experiment revealed about the current state of AI for crisis communicators.

Happy reading!

PS: this newsletter will soon pass the 1,200 active subscriber mark. You’re one of them! Thank you for your support. 🙏


What Happens When You Give AI Emergency Communication Guidelines?

Time saves lives. In emergency communication, those precious seconds between disaster striking and alerts reaching the public can mean the difference between safety and catastrophe.

But how much can artificial intelligence actually help us work faster without sacrificing quality? I decided to find out.

A Simple Question, A Quick Test

What if I took everything I know about emergency message best practices and fed it to an AI system? Could it actually produce something useful, or would it just churn out generic text that missed the nuances of effective crisis communication?

With access to Google's new Opal platform,1 I had a chance to test this hypothesis. Opal lets you create applications using natural language instead of code. Perfect for a quick experiment.

Building the Test: Half an Hour of Setup

The concept was straightforward. Take the research-backed frameworks for emergency messaging,2 upload the best practices I've compiled over the years, and see if an AI system could apply them systematically.

This wasn't about creating the next big thing in crisis communication. It was about understanding what current AI technology can and cannot do for our profession.

The setup took about thirty minutes. I created a workflow with interconnected tasks: gather input about the hazard, research current information, apply best practices from uploaded documents, and produce multiple message formats.

Each step had specific prompts telling the AI exactly what to do.
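Opal itself is no-code, so there's nothing to copy-paste here. But for the technically curious, here's a rough Python sketch of the same chained-prompt logic. The step structure mirrors my workflow; the generate() helper and the prompt wording are placeholders for whatever LLM access you have, not anything Opal exposes.

```python
# A sketch of the workflow's logic, not Opal itself (Opal is no-code).
# generate() stands in for any LLM call you have available.

def generate(prompt: str) -> str:
    """Placeholder: swap in a real call to your LLM of choice."""
    raise NotImplementedError

def build_alerts(hazard_input: dict, best_practices: str) -> dict:
    # Step 1: research current information about the hazard
    research = generate(
        f"Summarise current, verifiable information about: {hazard_input}"
    )
    # Step 2: apply the uploaded best practices to draft a master message
    master = generate(
        f"Using these guidelines:\n{best_practices}\n"
        f"Draft an emergency alert from this input: {hazard_input}\n"
        f"Verified context: {research}"
    )
    # Step 3: re-render the master message per channel, each with its own rules
    channel_rules = {
        "legacy_90": "Rewrite in at most 90 characters.",
        "modern_360": "Rewrite in at most 360 characters; keep source, danger, actions.",
        "web_long": "Expand with positive framing and specific safety details.",
        "twitter": "Rewrite with hashtags; include 'turn around, don't drown'.",
    }
    return {name: generate(f"{rule}\n\n{master}") for name, rule in channel_rules.items()}
```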

The Opal editor on the left, the preview on the right.

The Test Run: Flooding in Sao Silvestre

For the demonstration, I fed the system basic information about a fictional flooding scenario; a quick sketch of these inputs as structured data follows the list.

  1. The source: National Weather Service.

  2. The hazard: flooding.

  3. The location: three kilometres around the village of Sao Silvestre.

  4. The timeline: happening now, end time unknown.

  5. The protective actions: seek higher ground, don't drive through flooded areas.
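Here's that sketch: the five inputs as a small piece of structured data. The field names are mine, purely for illustration; Opal simply asked for them as free-text form fields.

```python
from dataclasses import dataclass

@dataclass
class HazardInput:
    """The five fields I fed the app; the names are illustrative, not Opal's."""
    source: str
    hazard: str
    location: str
    timeline: str
    protective_actions: list[str]

scenario = HazardInput(
    source="National Weather Service",
    hazard="flooding",
    location="three kilometres around the village of Sao Silvestre",
    timeline="happening now, end time unknown",
    protective_actions=["seek higher ground", "don't drive through flooded areas"],
)
```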

What happened next was interesting to watch. The system didn't just take my input and generate text. It actually searched for additional information, cross-referenced the documents I'd uploaded about message structure, and applied different formatting rules for different platforms as requested.

The AI produced four versions of the same alert:

A 90-character legacy format for older systems: "Flood warning flooding three kilometres south to west seek higher ground now info URL NWS." Bare bones, but it captured the essentials.

A 360-character version for modern phones that included source credibility, specific dangers, and clear actions. Better context, still concise.

A long-form web version that used positive framing, provided specific safety details (like the six-inch rule for moving water), and maintained professional tone while emphasizing urgency.

A Twitter-optimized format with hashtags and the "turn around, don't drown" messaging.
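One piece of this that's worth automating before a human even reviews a draft is the character budget. A throwaway helper along these lines would flag overruns instantly; the limits mirror the 90- and 360-character formats above plus Twitter's 280, and the function itself is my own sketch, not part of Opal.

```python
# Flag any draft that exceeds its channel's character budget.
# Limits mirror the formats described above; this helper is hypothetical.
LIMITS = {"legacy_90": 90, "modern_360": 360, "twitter": 280}

def over_budget(alerts: dict[str, str]) -> dict[str, int]:
    """Return each over-length message's channel and how far over it runs."""
    return {
        channel: len(text) - LIMITS[channel]
        for channel, text in alerts.items()
        if channel in LIMITS and len(text) > LIMITS[channel]
    }
```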

Two of the four messages the platform created.

What Worked and What Didn't

The results were better than I expected. Most of the messages could actually be used with minimal editing. The AI consistently followed the structural guidelines I'd provided: prioritising source credibility, using active voice, including specific location information, and maintaining coherent messaging across all four formats.

Some phrases were genuinely well-crafted. The long-form version's opening ("This active situation poses a risk of widespread flooding, potential drowning and damage to homes and structures. Take action immediately to protect yourself and your family") struck exactly the right balance between urgency and clarity. The safety details about six inches of moving water were contextually appropriate and actionable.

The system did reveal some areas for improvement. It mixed measurement units (kilometres in my input, inches in safety guidance; the result of a fractured global system driven by historical inertia and national sovereignty, if you ask me 😅).
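That particular slip is easy to catch mechanically. A crude check like this one, which simply flags any draft mentioning both metric and imperial units, would surface the inconsistency before a reviewer even reads the text (my own sketch, not an Opal feature):

```python
import re

# Crude unit-consistency check: flag drafts that mix measurement systems.
METRIC = r"\b(kilomet(re|er)s?|met(re|er)s?|centimet(re|er)s?)\b"
IMPERIAL = r"\b(miles?|feet|foot|inch(es)?|yards?)\b"

def mixes_units(text: str) -> bool:
    """True if the text mentions both metric and imperial units."""
    return bool(re.search(METRIC, text, re.I)) and bool(re.search(IMPERIAL, text, re.I))
```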

More significantly, it couldn't adapt tone or content for different audiences, a critical limitation since the same emergency affects children, elderly residents, tourists, and local workers differently.

This audience adaptation challenge represents my next testing phase. I'm planning to add an input field that specifies target audience, then see how well the AI can adjust language complexity, cultural references, and action recommendations accordingly.

Can it write differently for a university campus versus a retirement community? For visitors versus long-term residents? That's the test that will really show whether AI can handle the nuanced communication we need in emergencies.
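To make that next phase concrete, here's a purely speculative sketch of how an audience field might feed the drafting prompt. The audience notes and wording are my guesses at a next iteration, not anything Opal currently provides:

```python
# Speculative: how a target-audience input might shape the drafting prompt.
# The notes below are illustrative guesses, not Opal functionality.
AUDIENCE_NOTES = {
    "university campus": "short sentences, social-media-first, assume mobility",
    "retirement community": "plain large-type web copy, radio cues, "
                            "account for mobility and medical constraints",
    "tourists": "no local shorthand; name landmarks and routes explicitly",
}

def audience_prompt(master_alert: str, audience: str) -> str:
    notes = AUDIENCE_NOTES.get(audience, "general public")
    return (
        f"Adapt this alert for a {audience} audience ({notes}). "
        f"Keep every factual detail unchanged:\n\n{master_alert}"
    )
```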

Even without that refinement, though, these messages provided strong starting points that captured essential information clearly and followed established best practices consistently.

The Human-in-the-Loop Reality

This test reinforced something I believe strongly: AI should amplify human expertise, not replace it. The technology handled the systematic application of best practices efficiently. It remembered character limits, applied research-backed frameworks, and produced multiple formats quickly.

But it couldn't assess community context. It couldn't make strategic decisions about tone based on local relationships. It couldn't adapt messaging for specific cultural considerations or adjust language for audiences with varying literacy levels.

The principle of "human-in-the-loop" is essential. No emergency communicator should copy, paste, and send without review. What this test showed is that AI can create a much better starting point than a blank page, especially when time pressure is mounting.

Time Saved, Quality Maintained

In emergency management, time truly is the resource we cannot create. Every minute spent crafting messages is a minute when people remain unaware of danger.

This test suggests that AI tools could handle much of the initial composition work, freeing us to focus on the strategic and community-specific elements that require human insight.

The system took seconds to produce what might have taken me twenty minutes to draft manually. That's not because the AI is smarter than human communicators; it's because it can instantly apply systematic processes that we might forget under pressure.3

What This Test Reveals About AI's Current State

This wasn't a revolutionary breakthrough. It was a practical exploration of where AI technology stands today for our profession. The results show both promise and clear limitations.

AI can handle systematic tasks well. It can remember and apply multiple formatting requirements simultaneously. It can cross-reference information from various sources quickly. It can maintain consistency across different message formats.

But AI cannot replace the judgement that comes from understanding communities, assessing local context, or making strategic communication decisions. It produces good first drafts, not finished products.

Lessons for Our Profession

This test suggests several things about how we might integrate AI tools into emergency communication work:

✅ AI excels at applying known frameworks consistently. If we can articulate our best practices clearly enough for an AI system to follow, we can speed up the initial drafting process significantly.

✅ Human expertise becomes more valuable, not less. When AI handles the systematic work, human communicators can focus on strategy, community adaptation, and the nuanced decisions that truly impact public safety.

✅ Quality control remains essential. AI-created content needs human review, editing, and approval. The technology creates better starting points, not finished products.

✅ Training implications are significant. Future emergency communicators will need to understand both traditional communication principles and how to work effectively with AI tools.

Looking Forward: Realistic Expectations

This test wasn't meant to solve all our communication challenges. It was meant to understand what's possible with current technology when applied thoughtfully to our specific professional needs.

The results suggest that AI can be a useful tool in our toolkit, not the magic solution, but a practical aid that saves time and applies best practices consistently. Like any tool, its value depends on how skillfully we use it.

As we continue exploring these possibilities, the key is maintaining realistic expectations. AI won't revolutionize emergency communication overnight.

But it might help us work more efficiently when every second counts, allowing us to focus our human expertise where it matters most: understanding our communities and crafting messages that truly serve public safety.

The test continues. The learning never stops.

Time saves lives. Maybe AI can help us use that time more wisely.

References and further reading

1  Wright, W. (2025, July 25). Google’s new AI tool Opal turns prompts into apps, no coding required. ZDNET. https://www.zdnet.com/article/googles-new-ai-tool-opal-turns-prompts-into-apps-no-coding-required/

2  Content Guide | The Warn Room. (2019). The Warn Room. https://www.thewarnroom.com/contents-guide

3  Integrated Public Alert & Warning System. (2024, December 12). Fema.gov. https://www.fema.gov/emergency-managers/practitioners/integrated-public-alert-warning-system

What I am reading/testing/checking out:

  • Research article: Orru, K., Hansson, S., & Nero, K. (2025). Social vulnerability triage: a dynamic scenario-based system for disaster planning and response. Journal of Risk Research, 1–22.

  • Article: AI for good, with caveats: How a keynote speaker was censored during an international artificial intelligence summit

  • Tool: the new AI on the block, Kimi K2, an open-source trillion-parameter MoE model aimed at smarter coding and agentic task automation.

  • Research article: Ross AD, Siebeneck L, Wu H-C, Kopczynski S, Nepal S, Sauceda M. Seven Challenges for Risk Communication in Today’s Digital Era: The Emergency Manager’s Perspective. Sustainability. 2024; 16(24):11306.

Let’s meet!

Here are the events and conferences I'll be speaking at. If you're around, feel free to message me, and we can meet up.


PS: I hope you've enjoyed this newsletter! Creating it each weekend is a labour of love that I provide for free. If you've found my writing valuable, the best way to support it is by sharing it with others. Thank you for reading!

Parts of this newsletter were created using AI technology to draft content. In addition, all AI-generated images include a caption stating, 'This image was created using AI'. These disclosures are in line with the transparency requirements of the EU AI Act for AI-generated content. Some links in this newsletter may be affiliate links, meaning I earn a small commission if you click and make a purchase; however, I only promote tools and services that I have tested, use myself, or am convinced will make a positive difference.
