Turning data into decisions: How Henderson Fire is scaling QA/QI with AI

The department is using AI to evaluate both narrative and structured ePCR data

Photo/Henderson Fire Department/Facebook

By Deputy Chief Richard Johnson

Fire departments have never struggled with a lack of data. The real challenge has always been turning that data into meaningful, actionable information that supports better decision-making.

At the Henderson (Nevada) Fire Department (HFD), we respond to approximately 45,000 incidents annually, each generating an electronic patient care report (ePCR). That translates into tens of thousands of narratives, clinical decisions and protocol applications every year.

Historically, reviewing that volume in a meaningful way required significant time from support personnel and still only captured a fraction of what was happening in the system. Like many agencies, we relied on targeted chart reviews of high-risk calls, specific protocols or random sampling. While effective in identifying isolated issues, that approach left most of our calls unreviewed.

At the same time, expectations around clinical performance, documentation accuracy and protocol compliance continue to increase. We recognized that our existing process had three inherent limitations: It could not scale to meet the volume of data we were generating; it lacked consistency across reviewers, who included lead medics (56-hour schedule) and medical services officers (38-hour schedule); and it limited our ability to identify system-wide trends.

AI assistance

We were not looking to replace our QA/QI process; we were looking to strengthen it. AI offered a way to evaluate 100% of our charts against established EMS protocols and HFD EMS Task Standards, not just a subset.

From the beginning, we understood this was not simply a technology project. Success depended on aligning the right stakeholders early: our EMS Division, where medical services officers conduct the majority of in-depth reviews; lead medics, who perform preliminary QA/QI while working 56-hour shift schedules; fire department chief officers; and our technology partners.

Those early conversations centered on a critical question: How do we use AI to support our providers while also identifying areas that need attention? That framing proved essential. It shaped how the system was introduced and how it was ultimately implemented.

EMS documentation at scale

Our focus has been on leveraging AI capabilities within First Due to analyze EMS documentation at scale. The system evaluates both narrative and structured ePCR data against our SNHD EMS protocols and HFD EMS Task Standards: It identifies protocol adherence and potential deviations, recognizes key clinical indicators such as airway management or trauma triage decisions, and flags documentation gaps or inconsistencies. Just as important, it highlights cases that warrant further QA/QI review.
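To make the screening idea concrete, here is a minimal sketch of rule-based gap detection on structured ePCR fields. The field names and rules are invented for illustration; they are not First Due's actual schema, SNHD protocol logic, or HFD's real task standards.

```python
# Hypothetical sketch: screening structured ePCR records for
# documentation gaps, so only flagged charts reach human QA/QI review.
# Field names and rules below are illustrative, not a real schema.

REQUIRED_FIELDS = ["chief_complaint", "vitals", "disposition", "narrative"]

def flag_chart(chart: dict) -> list[str]:
    """Return human-readable flags for one ePCR record."""
    flags = []
    # Flag any missing or empty required field.
    for field in REQUIRED_FIELDS:
        if not chart.get(field):
            flags.append(f"missing: {field}")
    # Example clinical-indicator rule: an airway intervention documented
    # without a post-intervention confirmation note.
    if chart.get("airway_intervention") and not chart.get("airway_confirmation"):
        flags.append("airway intervention lacks confirmation documentation")
    return flags

charts = [
    {"chief_complaint": "chest pain", "vitals": "recorded",
     "disposition": "transport", "narrative": "...", "airway_intervention": False},
    {"chief_complaint": "trauma", "vitals": "",
     "disposition": "transport", "narrative": "...", "airway_intervention": True},
]

# Only charts that produce flags enter the review queue.
review_queue = [(i, flags) for i, c in enumerate(charts) if (flags := flag_chart(c))]
print(review_queue)
```

In a production system these rules would be one layer alongside narrative analysis, but the routing idea is the same: every chart is screened, and human reviewers spend their time on the subset that triggers findings.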

The goal is not to generate findings that lead to discipline, but to illuminate patterns, prioritize review efforts, and provide timely, actionable feedback. This approach allows us to move away from random sampling and toward a more targeted, data-driven review process.

While the technology is powerful, implementation success depends far more on building trust than on deploying software. We made a deliberate effort to be transparent about what the system does and what it does not do: It does not replace clinical judgment, it does not automatically trigger discipline, and it is not designed to punish providers. Instead, it is a decision-support tool aimed at improving quality and consistency. That clarity helped providers understand how AI can support them and improve outcomes over time.

Early lessons learned

As with any new initiative, there have been important lessons along the way. One of the most significant is that AI is only as good as the data it receives. Inconsistent or incomplete documentation leads to inconsistent outputs, reinforcing the need to continue improving documentation practices across the organization. We also learned quickly that this requires ongoing refinement. Protocol mapping, alert thresholds and reporting outputs must be continuously evaluated and adjusted. Departments considering similar tools should set realistic expectations — AI is not perfect, and it is not a one-time implementation.

As we continue rolling out the system, we recognize that adjustments will be necessary. When too many cases are flagged, it becomes difficult to distinguish what truly requires attention. Fine-tuning the system to prioritize meaningful findings is critical to making it usable. Throughout the process, one principle has remained constant: Human oversight is essential. AI enhances clinical review, but it does not replace it. Every flagged case still requires context, experience and judgment from trained personnel.
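One common way to handle the over-flagging problem described above is to weight findings by severity and surface only cases above a tunable threshold. The sketch below uses invented severity weights, not actual HFD or First Due values:

```python
# Illustrative sketch: prioritizing flagged cases so reviewers see the
# most significant findings first. Severity weights are invented for
# illustration only.

SEVERITY = {
    "protocol_deviation": 3,
    "airway_documentation_gap": 3,
    "missing_vitals": 2,
    "narrative_inconsistency": 1,
}

def prioritize(flagged_cases, threshold=3):
    """Keep cases whose total severity meets the threshold, highest score first."""
    scored = [
        (sum(SEVERITY.get(flag, 1) for flag in flags), case_id)
        for case_id, flags in flagged_cases
    ]
    return [case_id for score, case_id in sorted(scored, reverse=True)
            if score >= threshold]

cases = [
    ("A101", ["narrative_inconsistency"]),               # low score: held back
    ("A102", ["missing_vitals", "protocol_deviation"]),  # highest: review first
    ("A103", ["airway_documentation_gap"]),              # meets threshold
]
print(prioritize(cases))
```

Raising the threshold shrinks the review queue; lowering it widens the net. Tuning that dial over time is the "fine-tuning" work, and a human reviewer still makes the call on every case that surfaces.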

Perhaps the most important lesson is that culture matters more than technology. The success or failure of AI implementation depends largely on how it is introduced and perceived. Leadership messaging, transparency and trust all play a critical role. If providers believe the system is punitive, adoption will fail regardless of how advanced the technology is.

Although we are still early in implementation, we are already seeing the direction this is heading. We expect increased visibility into system-wide trends, more consistent identification of documentation gaps, and an improved ability to prioritize high-impact reviews. We also anticipate identifying training needs earlier, allowing us to take a more proactive approach. Most importantly, we are shifting from a reactive QA/QI model to a proactive, data-driven one. Instead of asking what we missed, we can begin asking what we are seeing early and how we can act on it.

Tips for implementation

For departments considering a similar path, a few principles stand out:

  • Start with the problem you are trying to solve, not the technology itself.
  • Engage stakeholders early and ensure that clinical, operational and frontline perspectives are represented.
  • Be intentional about building trust and communicating clearly.
  • Pilot the system before scaling, and plan for ongoing refinement.
  • Most importantly, keep humans in the loop. AI should support decision-making, not replace it.

Better tools to succeed

AI isn’t going to solve every problem in the fire service, but it gives us a tool we have never had before. For HFD, this is just the beginning. As these tools continue to evolve, we will continue evaluating how AI can improve service delivery to the community.

The objective remains the same: Improve patient care, support our providers and reduce organizational risk. AI is not about replacing the firefighter or paramedic; it is about giving them better tools to succeed.

ABOUT THE AUTHOR
Richard Johnson is deputy chief for the City of Henderson (Nevada) Fire Department.
