When the machine thinks for us

AI, cognitive friction and the future of fire service leadership

Technology has always changed the landscape of the fire service. But for all the waves created by motorized apparatus, SCBA, CAD, drones and every other innovation we've introduced, nothing will impact our decision-making quite like artificial intelligence (AI). Predictive dispatching, automated reporting and data-driven deployment models are now routine in many departments, and all of these tools help us manage risk, improve efficiency and stretch our limited resources. But they also raise an important question: What happens to our judgment when the tools start thinking for us?

That’s not a theoretical concern anymore. Researchers at MIT’s Media Lab recently found that people who rely heavily on AI tools to complete complex tasks show less brain activity in areas responsible for creativity, memory and focus. In simple terms, the more the system did the thinking, the less the human brain stayed engaged.

If that’s true in an office setting, imagine what it means for people who make life-or-death decisions under stress.

Cognitive offloading in the command post

Our fire officers are trained to think through complexity (or at least they should be). They must know how to synthesize incomplete information, balance risk and make sense of chaos through action. Those skills come from repetition, experience and mental friction.

But the “friction” is disappearing fast. AI-based systems are built to make things frictionless. They remove uncertainty, shorten analysis time, and package decisions neatly into dashboards or recommendations. It feels efficient, but there’s a hidden cost.

Psychologists call this cognitive offloading — letting a device or algorithm do the thinking for you. Researchers Risko and Gilbert describe it as “delegating mental processes to external systems.” We’ve all done it: a GPS replaces your mental map, a calendar reminds you of appointments, and so on.

But now, AI can suggest your next move at a fire scene, interpret an EKG, predict call volume and even draft post-incident analyses from radio traffic. The problem isn't that the data is wrong; it's that our brains stop wrestling with the problem. Over time, that can dull the very judgment that experience is supposed to sharpen (Risko & Gilbert, 2016).

The cost of convenience

Digital tools reward speed, not depth. They're designed to keep us moving, not thinking. You've probably seen it on the apparatus floor: younger officers who can run a simulation faster than anyone but freeze when something doesn't fit the template. That's not a skill issue; it's a cognitive one. They've grown up in systems that value efficiency over curiosity — YouTube tutorials over hands-on mistakes and actual learning.

The Organization for Economic Co-operation and Development (OECD) found that heavy digital use and "frictionless" learning environments corresponded with lower problem-solving scores among students worldwide (OECD, 2024). That's the same pattern we risk in our profession if we stop demanding analysis and let software do the reasoning for us. Additionally, technology writer Vauhini Vara warned that AI tools tend to "homogenize thought," producing average, consensus-driven answers optimized for agreement, not creativity (Vara, 2025).

In our terms, that's a department where everyone's plans look sharp on paper, but no one questions the assumptions underneath — nobody is checking the work to see whether reality actually matches the vision.

When the computer’s plan doesn’t match the street

Imagine this: A battalion chief is preparing for a downtown event. The AI-driven deployment software crunches five years of data — weather, traffic, call volume, demographics — and recommends a resource plan. On paper, it’s solid. But a veteran officer knows the neighborhood and feels uneasy. There’s a new construction site, a festival overlap and a lane closure the data doesn’t reflect. The model doesn’t know that, but the chief does.

If that officer ignores their gut because “the system says it’s fine,” the department has crossed a line. Technology has gone from being a partner to being the pilot. That shift — from sense-making to system-following — is subtle, but it’s the beginning of decision atrophy. We lose our edge one “click accept” at a time.

Friction is a feature, not a flaw

Fire service leaders before us have spent years teaching that friction — the hard work of analyzing, debating and second-guessing — is essential. We built experience through mistakes that left scars and impressions in our limbic systems. It’s how we build intuition and learn how to maintain discipline under stress. AI removes friction. It’s fast, confident and sometimes too persuasive.

Our challenge as the current and future fire service leaders is to put the friction back. We can do that by designing systems that require human review before implementation. Every AI-generated recommendation should have a checkbox for explanation: Why are we doing this? What did the machine miss? Put more eyes on the problem to gather different perspectives.
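To make that idea concrete, here is a minimal, hypothetical sketch of what such a review gate could look like inside a department's software. Everything in it (the AIRecommendation structure, the field names, the justification-length check, the names in the example) is an illustrative assumption, not any vendor's actual product or API; it simply shows a plan that cannot be marked approved until a named officer explains the decision and records what the model might have missed.

```python
# Hypothetical human-in-the-loop review gate. All names and fields are
# illustrative assumptions, not any vendor's real software.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIRecommendation:
    summary: str                          # what the model proposes
    approved: bool = False
    reviewer: str = ""
    justification: str = ""               # "Why are we doing this?"
    gaps_noted: str = ""                  # "What did the machine miss?"
    reviewed_at: Optional[datetime] = None


def approve(rec: AIRecommendation, reviewer: str,
            justification: str, gaps_noted: str) -> AIRecommendation:
    """Refuse to approve the plan until a named officer shows their work."""
    if not reviewer.strip():
        raise ValueError("A human reviewer must be identified.")
    if len(justification.strip()) < 20:
        raise ValueError("Justify the decision as if no computer were involved.")
    if not gaps_noted.strip():
        raise ValueError("Record what the model may have missed.")
    rec.approved = True
    rec.reviewer = reviewer
    rec.justification = justification
    rec.gaps_noted = gaps_noted
    rec.reviewed_at = datetime.now(timezone.utc)
    return rec


# Example: the deployment plan stays unapproved until the officer explains it.
plan = AIRecommendation(summary="Stage two engines and one medic unit downtown.")
approve(
    plan,
    reviewer="BC Smith",
    justification="Festival overlap and a lane closure will slow response; "
                  "staging closer to the venue offsets the delay.",
    gaps_noted="Model has no data on the new construction site near the venue.",
)
```

However a department actually builds it, the point is the same: the checkbox forces the thinking back onto the human before the plan moves forward.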

We can build tabletop drills where officers run “AI versus instinct” comparisons to see what’s overlooked. And we can make it a point to occasionally go “manual” — navigate without GPS, run operations without predictive models, or analyze incidents without software prompts. This is like mental fitness training — and there will be a time when we all need those muscles. Muscles only stay strong if they’re used.

The human role in a machine world

AI isn’t the end of the world, and it certainly is not the enemy. In fact, it can be one of the most valuable tools we’ve ever had. The National Institute of Standards and Technology (NIST) has shown how artificial intelligence can strengthen readiness when governed by transparency and human oversight (NIST, 2023). Predictive analytics can flag at-risk buildings, guide prevention efforts and help identify gaps before tragedy strikes. Automated maintenance and reporting save countless hours and reduce human error. Adaptive training platforms can tailor drills to individual performance. But those same systems can also reinforce bias, overlook emerging hazards, or create a false sense of certainty.

The danger isn’t in the technology itself — it’s in our willingness to let it replace thinking. Every fire chief and company officer has a responsibility to maintain the “human in the loop.” AI should support decision-making, not substitute for it. The IC or human decision-maker — not the computer code — owns the outcome and the consequences.

Building cognitive fitness in the fire service

So how do we keep our minds sharp while adopting smarter tools? Start by treating mental readiness like physical fitness.

Here are a few operational habits that help:

  • Human-in-the-loop governance: Gen X and older members probably remember the movie WarGames. Keep the human front and center: require a deliberate review of any AI-derived plan before implementation. Make the officer justify the decision as if there were no computer involved. Show your work.
  • “What-if” debriefs: After every AI-assisted decision, hold a quick discussion: What did the model miss? What would we have done differently without it? Fact-checking is essential.
  • Manual proficiency drills: Periodically practice basic operational and planning tasks (map reading, resource tracking, narrative report writing) without software to maintain independence.

These aren’t anti-technology moves. They’re leadership safeguards for everyone. They keep our people mentally strong, engaged, and ready when systems fail or assumptions fall apart.

Staying accountable

As AI becomes more embedded in emergency operations, questions of accountability grow. Who’s responsible if an AI-generated recommendation leads to a poor outcome? The vendor? The agency? The incident commander? There’s only one right answer: The human in charge. Every policy that integrates AI should make that explicit. Technology can inform the decision, but it cannot own it. That distinction preserves both trust and professionalism — two things that define the fire service at its best.

Leading the next era of command

The fire chief of tomorrow will spend less time collecting information and more time interpreting it. Let’s be honest: that reality is already here for many agencies in North America. AI will deliver instant analysis from drones, sensors, CAD feeds and other sources. The job will be to connect the dots — to see context, risk and opportunity through a human lens. Our value as leaders won’t come from processing speed; it will come from human judgment, from empathy and greater understanding, and from the ability to challenge the machine when the easy answer doesn’t fit the street reality. To do that, we have to think about thinking.

We need to teach our future officers how to question automation bias, recognize overconfidence in algorithms, and maintain moral clarity when data feels certain but the situation isn’t. This is metacognition for command — the skill that keeps leaders human in an increasingly digital fireground.

The real question

AI can process information faster than any of us. It can spot trends, predict outcomes, and even mimic the way we write or talk. But it can’t lead. It can’t feel the weight of accountability, read a firefighter’s expression, or hear the silence on the radio that tells you something’s wrong. The real question isn’t whether AI can think faster — it’s whether we can keep thinking deeply enough to lead it.

Brian Schaeffer retired as fire chief of the Spokane (Washington) Fire Department in 2024. His professional life has spanned over 30 years, serving in fire departments in the Midwest and Northwest. Schaeffer serves on numerous local, state and national public safety and health-related committees. In addition, he frequently lectures on innovation, leadership and contemporary urban issues such as the unhoused, social determinants of health, and multicultural communities.