By Jim Brown
Artificial intelligence (AI) has brought about significant advancements in text generation, data analysis, image creation and multimodal processing, powered largely by large language models (LLMs) — neural networks trained on enormous libraries of text to predict and generate human‑like language. You may know LLMs more casually as “AI systems” or by specific tools like ChatGPT, which is a product built on an LLM.
Integrating these new tools means acquiring new skills like structured prompt engineering, AI-assisted verification and real-time decision-making, while revisiting some old ones, like good old-fashioned editing and fact-checking.
| HOT TOPIC: Will AI ever take our jobs?
What follows is an attempt to capture updated skills by exploring the use of advanced priming and prompting, the importance of real-time fact-checking, the significance of editing AI-generated content, preparing and presenting data, and the role of an AI strategist in guiding the development and implementation of AI strategies — and how all this applies to the fire service. But first, some important definitions.
Prime and prompt
Priming is the process of providing context to the model (e.g., ChatGPT) before giving it a prompt. This can focus the model and generate more relevant responses. With the advent of structured prompts and multimodal inputs (e.g., text, image, document analysis), priming has evolved beyond text-based context setting. For example, if you are asking a question about a specific topic, you might prime the model by providing some background information on that topic before asking your question.
There are many ways to prime your chat for specific fields of expertise or styles. One way is to provide it with examples of the type of language or content you want it to generate (e.g., “ … in the style of an after-action review”); this is often called few-shot prompting. For more advanced use, explicit role assignments (e.g., “Act as a fire chief evaluating incident reports”) can lead to more precise outputs. You can also supply documents, data or other reference material for the model to draw on, an approach related to retrieval-augmented generation (RAG), in which relevant information is retrieved and added to the model’s context.
A prompt is a string of text that is given to the model as input. The model then generates a response based on the prompt and the prime. A prompt can be anything from a simple question to a more complex statement or scenario. The more detail you provide in your prompt, the more specific the response will be. Technically, this kind of descriptive, explanatory writing, the type you might find in a textbook or scientific article, is referred to as expository writing.
Your prompt will likely fall into one of a few categories: Zero Shot, Chain of Thought, and Least to Most, any of which can include a priming statement:
- Zero Shot: Simply ask the LLM what you want it to produce as the output. Again, this can be a simple request or a combination of related requests.
- Chain of Thought: Chain of Thought refers to prompting the model to explain or reason through its answer step by step. In practice, this approach can also be used to build outputs iteratively through a sequence of prompts in the same conversation. Because the LLM can remember your “chat” up to its token limit, it will continue to build on the original prompt. (LLMs with “reasoning” are using a type of Chain of Thought prompting internally.)
- Least to Most: This is a practical user strategy in which you have the LLM break down its Chain of Thought responses step by step, allowing you to get more detail for each of the points it created. It starts with high-level content, followed by progressively more detailed elaboration on request.
Let’s put it all together with some examples.
The Zero Shot prompt example we use in one of the classes I team-teach at the National Fire Academy combines a short prime and a longer prompt:
“Assume you are a fire department training officer. Create a one-hour lesson plan for new firefighters based on NFPA 1852. Include: Main points and sub-points; time frames for each main point; references for each main point; a five-question multiple choice quiz with answer key; an outline for a slide presentation; a manipulative drill; a skills check off sheet with pass/fail criteria and signature block for the student and the evaluator for the manipulative drill.”
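For readers who script their AI workflows, the prime-plus-prompt pattern above maps directly onto the message structure most chat APIs use. The following is a minimal sketch, not part of the original example: it only composes the messages, and the actual API call (client, model name, endpoint) is deliberately omitted, since those details vary by provider.

```python
# Sketch: composing a primed Zero Shot request as a chat-style message
# list. The "system" message carries the prime; the "user" message
# carries the full prompt. The API call itself is intentionally omitted.

PRIME = "Assume you are a fire department training officer."

PROMPT = (
    "Create a one-hour lesson plan for new firefighters based on "
    "NFPA 1852. Include: main points and sub-points; time frames for "
    "each main point; references for each main point; a five-question "
    "multiple choice quiz with answer key; an outline for a slide "
    "presentation; a manipulative drill; and a skills check-off sheet "
    "with pass/fail criteria and signature blocks for the student and "
    "the evaluator."
)

def build_messages(prime: str, prompt: str) -> list:
    """Pair a priming statement with a prompt in chat-message form."""
    return [
        {"role": "system", "content": prime},
        {"role": "user", "content": prompt},
    ]

messages = build_messages(PRIME, PROMPT)
```

Separating the prime from the prompt this way makes it easy to reuse one priming statement across many requests.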
The AI will create your training materials. The results will vary each time you make the request, and for each AI chat interface you use.
Keep in mind, your phrasing matters. Asking the AI to “create a 1-hour lesson plan for new firefighters based on NFPA 1852” is different from asking it to “create a 1-hour lesson plan based on NFPA 1852 for new firefighters.” The AI may read your prompt literally, looking for a version of NFPA 1852 specifically for new firefighters. Remember, everything AI creates needs to be edited and fact-checked. It does make mistakes and can “hallucinate,” which means generating outputs that are incorrect or even fabricated while still appearing coherent and plausible.
Using a Chain of Thought prompt, start with a priming statement, then ask the LLM to build the lesson plan. From there, make additional requests for the rest of the lesson materials:
“Assume you are a fire department training officer. I need a one-hour lesson plan for new firefighters based on NFPA 1852. Let’s work through this step by step. First, identify the key objectives and topics covered by NFPA 1852 that are essential for new firefighter training. Then, organize those into main points and sub-points. Next, assign time frames for each section based on instructional priorities. After that, list references for each point using NFPA 1852 and any supporting materials. Once that’s complete, generate a five-question multiple choice quiz with an answer key. Then, provide an outline for a supporting slide presentation. Finally, describe a manipulative drill and develop a skills check-off sheet with pass/fail criteria and signature blocks for the student and evaluator.”
What’s the difference between the two prompts? Zero Shot may give you a complete but shallow output. Chain of Thought increases the likelihood of structured, logically sound results by asking the model to “think before writing.”
Using Least to Most progressive prompting, start with a general request, then gradually ask the AI to expand on specific parts. For example, begin with a prompt for main lesson points, then follow up with additional prompts for time frames, references, and supporting materials. This helps control output quality and gives you more opportunities to guide the AI’s direction.
Priming statement: “I am a fire department training officer creating a one-hour lesson plan for new firefighters. The lesson will be based on NFPA 1852.”
Then issue prompts progressively:
- Initial prompt: “List the main points and sub-points that should be included in this lesson plan.”
- Follow-up prompt: “Now assign estimated time frames for each main point to fit a one-hour time block.”
- Next prompt: “Add references for each main point using NFPA 1852 and any relevant materials.”
- Next prompt: “Create a five-question multiple choice quiz based on the lesson content, with an answer key.”
- Next prompt: “Provide an outline for a slide presentation to accompany the lesson.”
- Next prompt: “Describe a manipulative drill to reinforce the key concepts.”
- Final prompt: “Develop a skills check-off sheet for the manipulative drill with pass/fail criteria and signature blocks for the student and evaluator.”
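The progressive sequence above is also easy to express in code: keep one growing conversation and append each follow-up prompt so the model builds on its earlier answers. Here is a minimal sketch of that loop; the `ask` function is a stand-in for whatever chat API you actually use (it just echoes here, since a real model call is outside the scope of this example).

```python
# Sketch of Least to Most prompting: one conversation, prompts issued
# in order, each reply appended so later prompts build on earlier ones.
# `ask` is a placeholder for a real chat-API call.

PRIMING = (
    "I am a fire department training officer creating a one-hour "
    "lesson plan for new firefighters. The lesson will be based on "
    "NFPA 1852."
)

PROMPTS = [
    "List the main points and sub-points that should be included in this lesson plan.",
    "Now assign estimated time frames for each main point to fit a one-hour time block.",
    "Add references for each main point using NFPA 1852 and any relevant materials.",
    "Create a five-question multiple choice quiz based on the lesson content, with an answer key.",
    "Provide an outline for a slide presentation to accompany the lesson.",
    "Describe a manipulative drill to reinforce the key concepts.",
    "Develop a skills check-off sheet for the manipulative drill with "
    "pass/fail criteria and signature blocks for the student and evaluator.",
]

def ask(history: list) -> str:
    """Placeholder for a chat-API call; returns a dummy reply."""
    return "[model reply to: " + history[-1]["content"][:40] + "...]"

history = [{"role": "system", "content": PRIMING}]
for prompt in PROMPTS:
    history.append({"role": "user", "content": prompt})
    reply = ask(history)  # the real API call would go here
    history.append({"role": "assistant", "content": reply})

# After the loop, `history` holds the prime plus seven prompt/reply pairs.
```

Because every reply stays in the conversation history, each new prompt refines the material the model has already produced, which is exactly the Least to Most pattern.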
Experiment with different prompts to determine the approach that works best for your request.
Let’s now turn to other important factors impacting AI use.
Fact-checking and trust verification
Although ChatGPT’s rate of providing false information (hallucinations) is comparable to human error rates, it can still wander off topic in longer chats. Because hallucinations remain possible, it is important to verify the information ChatGPT provides before using it. Recent advancements in real-time web browsing capabilities allow for quicker source verification.
Here are some techniques to help you fact-check ChatGPT’s statements:
- Use AI verification tools like Perplexity AI, SciSpace or Elicit for scientific fact-checking.
- Cross-reference AI-generated data with official sources, such as government reports, research papers and regulatory websites.
- Monitor AI-generated citations using AI citation tracking tools.
Bottom line: We must verify AI-generated citations and sources. These verification steps are critical not only for accuracy but also for building trust in AI-generated content within the fire service and beyond.
Editing and ethical AI use
As AI-generated content becomes more common, ensuring quality and ethical use is essential for maintaining professional standards. New AI-driven semantic editing tools can help keep content grammatically cohesive and logically structured.
- Style adaptation: AI can now adjust writing to match formal, casual or industry-specific tones with improved accuracy.
- AI disclosure compliance: Ethical AI use requires disclosure of AI-assisted content. Tools now support watermarking and metadata embedding for AI-generated documents.
- Editing: Human editing is still important. You have a unique way of expressing your ideas and AI can’t always capture that. Make sure your message/voice stays your own.
The American Psychological Association (APA) and other governing bodies have updated guidelines for citing AI-generated content, reinforcing the need for transparency.
AI-assisted data analysis and decision-making
AI is transforming how we handle data, offering faster insights and reducing manual effort through automation.
- Automated data cleaning: AI can now clean, structure and format large datasets in real time, reducing manual workload.
- Predictive AI models: AI-based forecasting in fire risk assessment has significantly improved, allowing for more precise resource allocation.
- Explainable AI (XAI): AI is now capable of justifying its decisions, making results easier to interpret and audit.
When used thoughtfully, these tools can support more efficient, evidence-based decision-making.
Data presentation and visualization
Beyond analysis, AI is enhancing how information is shared, making it easier to interpret complex data and communicate findings visually. Here’s how:
- AI-powered dashboards: New integrations allow real-time AI-enhanced dashboards in Power BI, Tableau, and Python-based tools.
- Interactive visualizations: AI can now generate custom visual explanations based on user queries, beyond traditional charts and graphs.
- Enhanced AI image generation: Tools like Midjourney, DALL-E and Stable Diffusion assist in creating customized training and public outreach materials.
These capabilities not only support internal decision-making but also improve public education, training and outreach efforts.
AI strategy and governance
Do you have an AI strategy? An AI strategy requires ethical considerations, regulatory compliance and risk mitigation, and it incorporates governance of accessible AI advances into daily operations and long-term planning. (Note: Governance is the term we use to describe our policies and practices around AI implementation and use.) Here are some factors to consider in your AI strategy:
- AI risk mitigation: Understanding AI biases, legal implications, and governance frameworks is essential.
- AI policy creation: The rise of regulations like the EU AI Act and emerging US standards means organizations need AI compliance guidelines.
- Incident-based AI documentation: Fire departments should document AI-driven decision-making for transparency and accountability.
- Data security: Keep public and private data separate to prevent unauthorized access and use.
Establishing a sound AI strategy with clear policies will position departments to take full advantage of emerging tools while maintaining operational integrity and public trust.
Emerging AI tools in fire and emergency services
New AI-driven technologies are quickly being integrated into fire and emergency services, reshaping everything from detection to post-incident analysis. Some examples:
- AI for real-time fire detection: Satellite and AI-powered tools enhance wildfire early detection.
- AI-driven dispatch: AI-optimized CAD systems improve response efficiency and accuracy.
- Post-incident AI analysis: AI models analyze pre-arrival and on-scene imagery, incident reports, and voice logs for improved after-action reviews.
- AI narrative incident report completion: Some software now offers report completion from your narrative.
- AI-assisted inspections: Technology is being developed to assist, and may eventually replace, fire safety inspectors. It is currently expensive, but prices are likely to come down quickly.
- Shadow AI: AI is being introduced into existing software, and you may not know it. Take the time to question your vendors about their AI use.
- Physical AI: Robotics and sensor technologies are advancing rapidly.
As these tools mature and become more accessible, departments must stay informed, evaluate their use cases critically and plan for thoughtful integration.
Next steps
To prepare for a future where AI is embedded in every level of service delivery, fire departments should take actionable steps today.
- Expand AI strategist role: With AI playing a larger role in decision-making, a dedicated AI strategist or team is now a necessity rather than an option.
- Integrate AI literacy training: Departments should include AI prompt engineering, fact-checking and risk assessment in officer training programs. All employees should be trained on appropriate and ethical use of AI.
- Develop an AI policy playbook: This should include ethical guidelines, AI usage transparency, and compliance with emerging regulations.
- Participate in organizations that promote AI governance: Engaging with national and international AI policy groups helps departments stay ahead of regulatory changes and adopt best practices.
- Don’t re-invent the wheel! Leverage existing frameworks, tools and case studies from other fire departments and public safety organizations already navigating AI adoption.
By proactively defining roles, training needs and policy frameworks, departments can harness AI’s potential while minimizing its risks.
In the future
As the AI landscape continues to evolve, adapting to rapidly developing AI capabilities will be crucial. Real-time AI governance, strategic AI deployment, and human-AI collaboration will define the most valuable skills moving forward. The role of the AI strategist will continue to grow in importance as organizations balance AI adoption with responsible implementation.
FireRescue1 is using generative AI to create some content that is edited and fact-checked by our editors.
ABOUT THE AUTHOR
Jim Brown is a retired division chief from Monterey, California, now living in Hawaii. He is a California-certified chief officer and master instructor and serves as a contract instructor teaching “Analytical Tools for Decision-Making” at the National Fire Academy. Brown is a member of the IAFC Technology Council’s AI subcommittee. He has a bachelor’s degree in fire science from Columbia Southern University.