By Oren Bersagel-Briese
Artificial intelligence has entered local fire department operations, and its adoption is outpacing our policies in a way that poses a risk to any agency. In response, the Castle Rock (Colo.) Fire and Rescue Department set out to develop an AI usage policy before usage outpaced governance.
Playing catch-up
Often, we are slow to adopt, understand or embrace emerging technologies. However, as OpenAI's ChatGPT entered its fourth year of public availability, it became increasingly clear that our hesitation to develop policy could put our agency at risk. What began as members independently finding ways to integrate AI into their work lives quickly escalated into an organizational concern when we learned that some people were potentially using the software to help write fire and medical report narratives.
Broadly speaking, firefighters don't find the administrative side of the profession to be the most rewarding. So, it seems obvious, in retrospect, that members would naturally turn to this software to save time and reduce errors in those aspects of the job. But having no way to manage the usage, control it or set expectations around it was an administrative gap that needed to be addressed quickly.
The bigger administrative challenge was to create a governance document that applied across all town departments, was pliable enough to evolve without constant updates, and still provided clear direction for employees. We immediately engaged with Castle Rock's IT department and helped draft a town-wide policy that allowed each department to incorporate its specific needs.
Building our policy
Once the decision was made that the organization would manage the use of AI chatbots, we needed to provide the required resources to the members. The town standardized on the ChatGPT platform and mandated the use of a paid business subscription, as OpenAI does not use data from paid business subscriptions to train future models.
The policy also clearly states that personally identifiable information (PII) and protected health information (PHI) must never be entered into the chatbot. We are used to protecting PHI, but it is important to consider how entering information that might seem benign can become PII. For example, we had to determine whether members could enter addresses, basic call information and incident outcomes, especially if those details could eventually be connected in a way that points to a specific individual. Our current records management system (RMS) vendor has AI report-writing integrations that members are able to use, but they cannot use any non-native chatbots to assist with incident or patient report writing.
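For departments that want a technical backstop behind this rule, a lightweight pre-screen can flag obvious PII patterns before a member pastes text into a chatbot. The sketch below is purely illustrative and is not Castle Rock's actual tooling; the pattern names and regular expressions are assumptions for demonstration, and no pattern list can catch every identifier, so it supplements rather than replaces member judgment.

```python
import re

# Illustrative sketch only -- not any department's production tooling.
# Flags common PII patterns in draft text before it is submitted to a
# chatbot. The pattern set below is an assumption for demonstration.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "street_address": re.compile(
        r"\b\d{1,5}\s+\w+\s+(?:St|Ave|Blvd|Dr|Rd|Ct|Ln)\b", re.I
    ),
    "date_of_birth": re.compile(r"\b(?:DOB[:\s]*)?\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Example: a draft narrative fragment containing several identifiers.
draft = "Pt found at 123 Main St, DOB 4/12/1958, contact 303-555-0199."
hits = screen_for_pii(draft)
if hits:
    print("Do not submit -- possible PII detected:", ", ".join(hits))
```

A screen like this catches only formatted identifiers; free-text details (a name, a rare medical condition, a distinctive location description) can still re-identify a patient, which is why the policy question of what members may enter matters more than any filter.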
Understanding that AI is being integrated into nearly all software, it was also important not to handcuff our processes in a way that was unrealistic. As such, the town's policy includes language that automatically allows the use of AI that has been integrated into previously approved software systems (RMS, Microsoft Office, etc.), as well as widely used integrations such as Google's.
With intention, our policy hints at — but does not fully define — situations where a chatbot can be used. There’s simply no way to capture all use cases, but we allow it in places like grant writing, personnel evaluations, project management, data analysis and daily communications. Another helpful tool is connecting the chatbot to a shared folder — in our case, SharePoint — that contains all SOGs, directives and memos. This allows users to conduct conversational searches of all documents at the same time.
Lastly, the policy emphasizes that the end product is the responsibility of the user. Because most chatbots rely on predictive modeling, they can produce inaccurate information. Any chatbot output should be thoroughly reviewed by the employee, including verifying that the source information is valid.
Take early action
As our department embraces this technology, we have seen the efficiency and communication benefits of AI, but there has also been a learning curve that we expect to continue well into the future. A few simple tips can help users minimize that curve, including taking the free introductory courses offered by the chatbot vendor and filling out the personalization settings in the system. For example, you can tell ChatGPT to always use the Oxford comma and to download all word processing documents in Microsoft Word without any automatic formatting.
As AI continues to evolve, fire departments that engage in early, collaborative and open-ended policy development will be best positioned to guide its use in ways that protect both the agency’s personnel and the organizations that they serve. It isn’t a technical exercise but rather a leadership responsibility where the fire department must play an active role.
ABOUT THE AUTHOR
Oren Bersagel-Briese is a 31-year member of the fire service, currently serving as the deputy chief of operations with the Castle Rock (Colorado) Fire and Rescue Department. He is one of the founders of the Denver 9/11 Memorial Stair Climb as well as a member of the NFFF 9/11 Stair Climb Steering Committee.