Can emergency management professionals trust AI-generated crisis messages — and more importantly, can the public?
That question sat at the heart of this webinar, recorded on February 24, 2026, and it's one the crisis communication field can no longer afford to sidestep.
Bringing together academic rigour and real-world practice, the session draws on an international research collaboration between Page Center scholars and practitioners to examine how emergency managers and their stakeholders actually perceive AI-produced content when the stakes are high.
Alice Cheng of North Carolina State University and Yan Jin of the University of Georgia share their findings, and I joined them as the practitioner voice — bridging the gap between what the research shows and what it means when you're the one drafting messages in the middle of a crisis.
Gary Sheffer, a member of the Page Center advisory board, moderated what turned out to be a rich and at times challenging conversation.
Whether you're a researcher, a communicator, or an emergency manager trying to figure out where AI fits into your next response plan, this one is worth your time.
Study Findings Overview
The primary finding of the study revolves around the "Trust Gap" between emergency management professionals and the general public regarding AI-generated crisis communication.
While AI can rapidly generate and disseminate safety alerts, the research indicates that the public's willingness to act on those alerts depends heavily on the perceived "humanity" and transparency of the source. If users suspect a message is fully automated, with no human oversight, their trust in the accuracy and urgency of the information drops significantly; during an active disaster, that loss of trust can be life-threatening.
Furthermore, the study explores the collaborative dynamic between AI tools and human crisis managers. Findings suggest that AI is most effective when used as a "co-pilot" rather than an autonomous actor.
Practitioners noted that while AI excels at data processing and identifying emerging risks in real-time, it lacks the nuanced ethical judgement and cultural sensitivity required to handle "high-emotion" crisis scenarios.
The research emphasises that "AI collaboration" must involve a human-in-the-loop system to verify facts and ensure that the tone of the communication remains empathetic and appropriate for the context.
Lastly, the researchers highlight the risks of AI-driven misinformation and "hallucinations" in high-stakes environments. The study shows that emergency responders are particularly concerned about the speed at which AI can inadvertently spread false information during a disaster.
The findings call for a standardised framework for "AI Transparency Labels", which would clearly indicate when AI has been used to generate or translate crisis messages.
By establishing these guardrails, organisations can leverage AI's speed while maintaining the credibility necessary to keep the public safe during a risk event.