I'm writing this from Riyadh, where I've spent the past two days at EPR2025, a gathering of emergency preparedness and response specialists from around the world.

Smart people. Serious people. People who have dedicated their careers to keeping populations safe when things go wrong. And nearly every one of them keeps invoking the same two words: Chernobyl and Fukushima.

It's understandable. These are the Mount Everests of nuclear accidents. Level 7 events on the INES scale. The worst that can happen.

But here's the problem: by making these catastrophes our default reference point for all radiological emergencies, we're teaching everyone—officials, media, and the public—to panic when they shouldn't.

Key Takeaways

  • Most radiological incidents involve sealed sources, not reactor meltdowns – yet emergency planning defaults to worst-case scenarios

  • Evacuation during Fukushima killed more people than radiation would have – over 50 elderly patients died from evacuation itself

  • The IAEA established five different Emergency Preparedness Categories – each requires different communication strategies, but we keep using the reactor playbook for everything

  • The availability heuristic distorts risk perception – decades of media coverage make us think "radiation emergency" automatically means apocalypse

  • Communication strategies must match the actual incident category – not every radiological event requires evacuation zones and mass panic

Why Do Emergency Responders Default to Worst-Case Scenarios?

Because Chernobyl and Fukushima dominate our mental model of what "radiation emergency" means. When you're planning for emergencies, the instinct is to prepare for the worst possible outcome. That makes sense, right?

Except this approach creates what I call a "Reference Accident" bias. Think about what happens in your head when someone says "radiation emergency." I'm sure you don't picture a small industrial gauge that fell off a truck. You see cooling towers. Hazmat suits. Evacuation zones.

The Red Forest at Pripyat. Hydrogen explosions at Reactor Unit 3.

This is what psychologists call the availability heuristic – we judge probability based on what easily comes to mind. And thanks to decades of media coverage, HBO miniseries, and those stock photos news outlets love, what comes to mind is apocalypse.

According to research published in Risk Analysis by Emir Efendić in 2021, "availability may upstage affect in the construction of risk judgments," meaning vivid, memorable events disproportionately shape how we assess actual risk (Efendić, E., 2021, Risk Analysis, 41(11), 2003–2015, https://doi.org/10.1111/risa.13729).

But the vast majority of radiological incidents look nothing like Chernobyl or Fukushima. Most involve sealed sources, medical devices, industrial radiography equipment, gauges. No reactor core. No pressure to evacuate. No plume. Just a discrete object that's dangerous if you're close to it, and basically harmless if you're not.

We've spent so much time preparing for the Big One that we've created structural ignorance about the incidents that actually happen.

What Is the Reference Accident Problem?

The Reference Accident problem occurs when emergency planners use catastrophic events as the template for all incidents in that category. It's like the old "Reference Man" problem in radiation protection, where dose limits were set based on a hypothetical 20-30-year-old Caucasian male. That standard underestimated risks to women and children.

Similarly, our Chernobyl/Fukushima-focused planning creates blind spots about the incidents that actually occur. Category IV transport incidents get treated with Category I reactor meltdown protocols. That's not just inefficient—it's dangerous.

The mental shortcut goes like this:

  • Radiation detected = potential meltdown scenario

  • Potential meltdown = evacuation required

  • Evacuation = only safe response

Wrong. Dead wrong, sometimes literally.

When Does the Wrong Template Kill People?

During Fukushima, over 50 elderly and hospitalized patients died during the rushed evacuation of the 20km zone. Hypothermia. Dehydration. Medical disruption.

Let that sink in. The response killed more people than the radiation would have.

Studies published after the Fukushima incident by Naito and colleagues in Radioprotection demonstrated that "the loss of life from the evacuation itself exceeded the theoretical radiation exposure risk had people sheltered in place" (Naito, W., et al., 2020, Radioprotection, 55(4), 297–307, https://doi.org/10.1051/radiopro/2020086).

This happens because when every radiological incident gets mentally mapped onto "reactor meltdown," evacuation becomes the automatic response. Never mind that for a sealed source in a transport accident, sheltering in place might be far safer.

Never mind that spontaneous "shadow evacuations"—where people outside the danger zone flee because they've seen the news—clog roads and create their own casualties.

We've trained everyone that "radiation emergency" equals "run." Sometimes running is not the best option. Sometimes running kills you.

What Is the IAEA's Graded Approach?

The International Atomic Energy Agency created GSR Part 7, which establishes five different Emergency Preparedness Categories—each requiring different responses. The IAEA figured out decades ago that not all radiological incidents are the same.

Here's the breakdown:

  • Category I: Nuclear power plants (the Chernobyls and Fukushimas)

  • Category II: Research reactors and nuclear fuel facilities

  • Category III: Industrial irradiation facilities

  • Category IV: Transport incidents and mobile sources

  • Category V: Small portable sources

Each category requires different communication strategies. Different protective actions. Different levels of public alarm.

A Category I event? Yes, talk about plumes and evacuation zones and potassium iodide. That's appropriate.

A Category IV event? You need to tell people: "Don't touch the strange metal cylinder. Call this number." That's it. (Yes, that's an oversimplified risk communication message, but you get the point.)
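The graded approach lends itself to a simple lookup from incident category to an opening public message. Here is a minimal sketch in Python; the messages are invented illustrations of the tone each category calls for, not official IAEA wording, and `first_public_message` is a hypothetical helper name:

```python
# Illustrative sketch: matching the first public message to the IAEA
# EPR category from GSR Part 7. Message texts are invented examples
# of category-appropriate tone, NOT official IAEA wording.

FIRST_MESSAGES = {
    "I": "Reactor emergency declared. Follow official instructions on "
         "sheltering, evacuation zones, and potassium iodide.",
    "II": "Incident at a research reactor or fuel facility. Protective "
          "actions, if any, are limited to the site vicinity.",
    "III": "Incident at an industrial irradiation facility. No off-site "
           "hazard is expected; stay clear of the facility area.",
    "IV": "Transport incident involving a sealed source. Do not touch "
          "unfamiliar metal objects near the scene. Call this number.",
    "V": "A small portable source is unaccounted for. Do not handle "
         "suspicious objects; report any sighting to authorities.",
}

def first_public_message(category: str) -> str:
    """Return a category-appropriate opening message."""
    try:
        return FIRST_MESSAGES[category]
    except KeyError:
        raise ValueError(f"Unknown EPR category: {category!r}")

print(first_public_message("IV"))
```

The point of the sketch is structural: the branch on category happens *before* any message goes out, so a transport incident never inherits the reactor-meltdown script by default.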

But watch what happens in real incidents.

Why Did Media Panic Over a 6mm Capsule in Australia?

In 2023, when a 6mm capsule of Cesium-137 fell off a truck in Western Australia, global media went into full nuclear alert mode. "Radioactive Capsule Lost!" The headlines screamed apocalypse.

The risk was real—high dose rate within one meter, but zero risk beyond five meters. It was a sealed source. No contamination. No cloud. No exclusion zone needed.

The operational response was brilliant. Search teams found the capsule using specialized detection equipment mounted on vehicles travelling at 70 kilometres per hour. Professional. Effective. Textbook Category IV response.

But the narrative control? Much harder when the global media frame is already "nuclear crisis."

This happens repeatedly with truck hijackings in Mexico. Since 2013, multiple trucks carrying radiography sources have been stolen. In almost every case, the thieves wanted the truck, not the radiation source. They had no idea what they'd stolen.

But report these as "nuclear security events" and suddenly you've triggered the Dirty Bomb script. You've granted common thieves the stature of nuclear terrorists. You've terrified millions of people unnecessarily.

The actual risk? Limited to the unfortunate thieves themselves and perhaps some contaminated soil where they ditched the source.

Treating every Category IV incident with Category I language doesn't just cause panic. It causes the wrong kind of response, wastes resources, and destroys trust when people eventually realize the response was disproportionate.

What Communication Lessons Did We Learn Wrong?

Both Chernobyl and Fukushima featured catastrophic communication failures, and we've learned the wrong lessons from both. This is where things get really problematic.

From Chernobyl's Soviet silence, we learned "transparency is paramount." Good lesson. But this has mutated into a demand for instantaneous omniscience. Any delay—even delays necessary to verify information—now gets interpreted as a cover-up.

We've overcorrected.

From Fukushima's SPEEDI controversy, where dispersion model data was withheld and its eventual release showed that evacuees had fled into the plume's path, we learned "release everything immediately."

But this creates the data deluge problem. Raw microsievert readings released to a panicked public primed by Fukushima imagery lead to misinterpretation. The challenge isn't secrecy anymore. It's contextualization.

The real lesson should be: prepare different communication strategies for different scenarios. Stop using the reactor meltdown playbook for the lost gauge.

How Should We Communicate Different Types of Incidents?

After 25 years working in emergency and risk communication, I can tell you the first challenge in any radiological incident is always fighting against the Chernobyl/Fukushima mental model.

It's the same pattern I've seen in pandemic communication, where every outbreak gets filtered through the "Spanish Flu killed 50 million" lens, even when you're dealing with something far more contained.

When you're planning communication for a transport incident, you need to inoculate against the catastrophe comparison.

Something like this works:

"We are responding to a transport incident involving a sealed industrial source. Unlike reactor accidents you may know from history, this source is solid, sealed, and cannot explode or create a cloud."

You lead with what it's NOT, because that's what people are already imagining.

Forget Saying "It's Safe"

An already distrustful public won't believe you. Instead, provide context people can actually use.

Don't say "0.1 millisieverts." That means nothing to most people.

Say "equivalent to the background radiation you'd experience in Denver" or "a fraction of the dose from a routine medical X-ray."

Give people the tools to assess their own risk rather than demanding they trust your reassurance.
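That translation from raw numbers to lived experience is simple arithmetic. A minimal sketch, using approximate reference values (a chest X-ray at roughly 0.1 mSv and average natural background of roughly 3 mSv per year; both are illustrative, and `contextualise_dose` is a hypothetical helper, not a real library function):

```python
# Illustrative sketch: turning a raw dose figure into comparisons
# people can actually use. Reference values are approximate and for
# illustration only; real communication should use vetted figures.

CHEST_XRAY_MSV = 0.1               # typical chest X-ray dose, approx.
DAILY_BACKGROUND_MSV = 3.0 / 365   # ~3 mSv/year average natural background

def contextualise_dose(dose_msv: float) -> str:
    """Express a dose in mSv as chest X-rays and days of background."""
    xrays = dose_msv / CHEST_XRAY_MSV
    days = dose_msv / DAILY_BACKGROUND_MSV
    return (f"{dose_msv} mSv is roughly {xrays:.1f} chest X-ray(s), "
            f"or about {days:.0f} day(s) of normal background radiation.")

print(contextualise_dose(0.1))
```

The output reframes "0.1 mSv" as about one chest X-ray, which is the kind of anchor a worried non-specialist can actually evaluate.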

Stop Using the Wrong Stock Photos

Please, let's stop illustrating stories about medical source incidents with stock photos of cooling towers and gas masks. Every time a news outlet does this, they're teaching the public that all radiation incidents are Chernobyl/Fukushima-class events.

Visual communication matters. Words matter. Context matters.

What Needs to Change in Emergency Preparedness?

The EPR2025 conference has been incredibly valuable. Important work is being done. But we need to stop treating the Big Two like sacred texts.

Yes, Chernobyl and Fukushima taught us crucial lessons about transparency, preparedness, and the psychosocial dimensions of radiological risk.

But they're teaching us to fight the last war—the worst war—when most of our battles are skirmishes.

Here's what needs to happen:

  1. Develop emergency communication strategies that match the graded approach in our operational planning

  2. Create public education that teaches the difference between a reactor accident and a transport spill—the same way people understand the difference between a house fire and a wildfire

  3. Recognize that psychosocial fallout is often more deadly than the radiation itself—the panic, the stigma, the unnecessary evacuations

We're not preparing the public for the emergencies that actually happen. We're preparing them for the apocalypse.

And that preparation is making us less safe, not more.

Time to step out of Chernobyl's and Fukushima's shadows.

Frequently Asked Questions

What makes Chernobyl and Fukushima different from most radiological incidents?

Chernobyl and Fukushima were Category I nuclear power plant accidents with reactor core damage, widespread contamination, and large-scale evacuations. Most radiological incidents involve sealed sources from medical devices, industrial equipment, or transport accidents—no reactor, no contamination spread, and very limited danger zones.

How many people died from the Fukushima evacuation versus radiation exposure?

Over 50 elderly and hospitalized patients died during the rushed evacuation of the 20km zone around Fukushima from hypothermia, dehydration, and medical disruption. Studies showed this exceeded the theoretical radiation exposure risk had people sheltered in place.

What are the IAEA's Emergency Preparedness Categories?

The IAEA established five categories in GSR Part 7: Category I (nuclear power plants), Category II (research reactors), Category III (industrial irradiation facilities), Category IV (transport incidents), and Category V (small portable sources). Each requires different response protocols and communication strategies.

Why do media reports use cooling tower images for non-reactor incidents?

This happens because cooling towers and hazmat suits have become visual shorthand for "radiation emergency" in media libraries. But this practice reinforces the Chernobyl/Fukushima mental model and teaches the public that all radiation incidents are catastrophic, which isn't accurate.

How should authorities communicate about sealed source incidents?

Lead with what it's NOT: explain that unlike reactor accidents, sealed sources are solid objects that cannot explode or create contamination clouds. Provide context-appropriate comparisons for radiation levels (like "equivalent to a chest X-ray" or "background radiation in Denver") rather than raw millisievert numbers.

What is the availability heuristic and how does it affect risk perception?

The availability heuristic is a psychological phenomenon where people judge the probability of events based on how easily examples come to mind. Decades of media coverage of Chernobyl and Fukushima make these catastrophes mentally "available," causing people to overestimate the likelihood of similar disasters.

What went wrong with communication during Fukushima?

Two main problems: First, SPEEDI dispersion model data was initially withheld, then released showing evacuees had fled into the plume's path. Second, the rush to demonstrate transparency led to releasing raw data that panicked publics couldn't properly interpret without context.

How can emergency planners avoid the Reference Accident trap?

Develop communication templates specific to each IAEA category. Train responders to identify incident type before defaulting to worst-case protocols. Create public education materials that distinguish between incident categories. And critically, challenge the assumption that every radiation detection requires evacuation.

Why does overcorrection on transparency create problems?

After Chernobyl's Soviet silence, there's now pressure for instantaneous disclosure of all information. But this creates expectations of immediate certainty when data is still being verified, and leads to releasing uncontextualized technical data that can mislead rather than inform the public.

What should be the priority in radiological incident communication?

Match the communication strategy to the actual incident category. Prevent the Chernobyl/Fukushima mental model from taking over. Provide context that helps people assess their actual risk rather than imagined worst-case scenarios. Remember that psychosocial harm from panic can exceed actual radiological risk.

References and Further Reading

  • Live stream of the EPR2025 conference (including archives of previous sessions): https://www.iaea.org/events/epr2025

  • Naito, W., Uesaka, M., Kuroda, Y., Kono, T., Sakoda, A., & Yoshida, H. (2020). Examples of practical activities related to public understanding of radiation risk following the Fukushima nuclear accident. Radioprotection, 55(4), 297–307. https://doi.org/10.1051/radiopro/2020086

  • Efendić, E. (2021). How do People Judge Risk? Availability may Upstage Affect in the Construction of Risk Judgments. Risk Analysis, 41(11), 2003–2015. https://doi.org/10.1111/risa.13729

  • International Atomic Energy Agency. (2017, June). Nuclear Communicator's Toolbox. https://www.iaea.org/resources/nuclear-communicators-toolbox

  • International Atomic Energy Agency. (2015). Preparedness and Response for a Nuclear or Radiological Emergency: General Safety Requirements Part 7. IAEA Safety Standards Series No. GSR Part 7. Vienna: IAEA.

About the Author: Philippe Borremans is a crisis communication expert and founder of RiskComms, with 25 years of experience in emergency and risk communication across pandemics, corporate crises, and all-hazards preparedness. He specializes in developing advanced crisis communication frameworks including the Universal Adaptive Crisis Communication (UACC) Framework and AI-Augmented Crisis Decision Matrix (ACDM).
