SPD mulls AI policy after a sergeant was found using ChatGPT to write reports
The Office of Police Accountability recommended that the Seattle Police Department draft an artificial intelligence policy after discovering that a sergeant used ChatGPT, Grammarly and other AI programs to write emails and reports. A complaint filed by another sergeant alleged that he saw Sergeant Jamin Dobson type a paragraph into ChatGPT and copy the output into Blue Team, SPD’s record-management software.
Two other officers also told the OPA that Dobson talked to them about using ChatGPT, encouraging them to use it to write force reports and help them study for the sergeant’s exam.
ChatGPT is a large language model that uses deep learning to generate text based on a user’s prompts. The makers of tools that SPD already uses have begun incorporating AI into their platforms. Bodycam producer Axon introduced software called Draft One that summarizes audio from body-worn cameras, and Cellebrite, an Israeli maker of phone-cracking software, recently added generative AI to its evidence-management platform.
While this technology has the potential to lighten the load in a paperwork-heavy field like policing, groups like the Electronic Frontier Foundation and the American Civil Liberties Union have warned about the risks of using AI, highlighting its tendency to “hallucinate” facts out of thin air.
Last year, the King County Prosecuting Attorney’s Office issued a memo forbidding officers in its jurisdiction from submitting police reports drafted with AI. Chief Deputy Prosecutor Daniel Clark noted concerns that generative AI wasn’t mature or reliable enough to use in the criminal legal system: “AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”
In his OPA interview, Dobson claimed that he only uses ChatGPT to interpret the SPD policy manual and create generic drafts without sensitive information. He said he added the details manually after the fact and proofread the final copy.
Dobson said that he attempted to comply with the city AI policy but didn’t feel the need to label the report as AI-generated—a requirement of that policy—because “it did not substantially generate the text.”
The OPA also interviewed Captain James Britt, who heads SPD’s technology and innovation section. Britt said the King County prosecutor’s memo would not prohibit using generative AI in force reports and emails, as they are not investigative records. He also noted that the city’s AI policy was lax and that the department had no specific AI policy.
OPA could not sustain allegations that Dobson’s use of ChatGPT violated the city’s policy on generative AI because it could not independently verify that he entered any confidential information into the software or prove that his use crossed the threshold of “substantive use,” which would require it to be labeled as AI-generated.
The same complaint about Dobson’s AI use alleged he stole time by marking himself as working on his time card when video evidence showed he was not at work. Dobson claimed he was sick that day and mistakenly marked himself as on duty. After receiving notice of the internal affairs investigation, he corrected his time card to use a sick day, and the OPA did not sustain the allegation.
It’s noteworthy that in 2017, Dobson submitted a police report containing false information that the other officer on the scene could not corroborate. Dobson was never interviewed about it because he resigned from SPD in 2018, returning a few years later. The OPA director wrote that while he could not “conclusively determine that [Dobson] was deliberately dishonest,” he was “greatly troubled.”
Dobson isn’t the only one recently investigated for allegedly abusing AI. SPD fired a student officer for using AI to complete an assignment at the academy. The officer thought he could get off the hook by saying he had given his homework to his wife to look over, and that she was the one who used ChatGPT.
The OPA issued a management action, recommending an AI policy outlining “a framework for AI, detailing whether AI use is permitted, the conditions under which AI may be used, approved AI programs, the nature of the information that may be entered into these programs, the permissible uses of AI-generated content, approved devices for AI use, source attribution guidelines, and any other pertinent policy considerations.”
Hired in 2010, Jamin Dobson made $165,029 in 2023.