Friday, December 20, 2024

Technology: (Gone wrong?): What's wrong with police forces using artificial intelligence technology to write crime reports? Law professor Cassandra Burke Robertson has lots of concerns about its accuracy in police reporting, and tells us about them on 'The Daily' (Case Western Reserve), noting that "AI’s predictive nature can generate plausible but potentially inaccurate text, which is problematic in criminal investigations."... Police officers have been impressed by the results, drafting reports in as little as 10 seconds. Yet legal experts are raising concerns over accuracy, transparency, and potential bias — challenges that could significantly shape the future of AI both in policing and in the courtroom.


PASSAGE OF THE DAY: "Axon’s Draft One has built-in safeguards that require officers to review and sign off on each report before submission, according to Smith. The system also includes controls, like placeholders for key information that officers must edit, ensuring that no critical details are missed. And beyond the officer’s review, the report undergoes multiple levels of human oversight by supervisors, report clerks, and others to ensure it meets agency standards before it’s finalized. Even so, some members of law enforcement — like Keith Olsen, a retired New York detective and president and CEO of consulting firm KO Solutions & Strategies, which advises police associations — don’t see the benefits of using AI for police reports. “It seems to be trying to solve a problem that just doesn’t exist,” Olsen said. “It doesn’t take that long to write a police report, and I think it’s going to miss the officer’s perspective, and the officer still has to add stuff, delete stuff. I don’t think there’s a saving-of-time claim. And if you get a clever defense attorney, I can see all kinds of problems with it.”"

——————————————————————

PASSAGE TWO OF THE DAY: "New Jersey-based lawyer Adam Rosenblum said “hallucinations” — instances when AI generates inaccurate or false information — that could distort context are another issue. “Courts might need new standards to require detailed, transparent documentation of the AI’s decision-making process before allowing the reports into evidence,” he said. Such measures, he added, could help safeguard due process rights in cases where AI-generated reports come into play."

----------------------------------------------------------

STORY: "Police departments across the U.S. are starting to use artificial intelligence to write crime reports," by Barbara Booth, published by The Daily on November 26, 2024. (Barbara Booth is an award-winning writer and editor whose work covers a wide range of business and social topics, including health care, work/life issues, international business and personal finance.)

SUB-HEADING: "Law's Cassandra Burke Robertson expresses concerns about AI in police reporting." (Cassandra Burke Robertson, the John Deaver Drinko-Baker Hostetler Professor of Law, expressed concerns about AI in police reporting, noting that AI’s predictive nature can generate plausible but potentially inaccurate text, which is problematic in criminal investigations. She emphasized the need for transparency and thorough vetting to ensure reliability, particularly in legal contexts.)

KEY POINTS

  • An increasing number of companies are stepping up to help police departments ease the burden of administrative tasks with AI tools.
  • Axon, widely recognized for its Taser devices and body cameras, was among the first companies to introduce AI specifically for the common police work of report writing, and its AI is being tested in California, Colorado and Indiana.
  • Police officers have been impressed by the results, drafting reports in as little as 10 seconds. Yet legal experts are raising concerns over accuracy, transparency, and potential bias — challenges that could significantly shape the future of AI both in policing and in the courtroom.


GIST: "With law enforcement focused on reducing crime rates, managing budget pressures, and recruiting and retaining staff, technology companies are having some early success selling artificial intelligence tools to police departments, especially to ease the burden of administrative work.


Axon, widely recognized for its Taser devices and body cameras, was among the first companies to introduce AI specifically for the most common police task: report writing. Its tool, Draft One, generates police narratives directly from Axon’s bodycam audio. Currently, the AI is being piloted by 75 officers across several police departments, including Fort Collins, Colorado; Lafayette, Indiana; and East Palo Alto, California.


Axon CEO Rick Smith said it is restricted to drafting reports for only minor incidents so agencies can get comfortable with the tool before expanding to more complex cases. Early feedback, he added, indicates that Draft One reduces report-writing time by more than 60%, potentially cutting the average time for report completion from 23 minutes to just 8 minutes. 

“The hours saved comes out to about 45 hours per police officer per month,” said Sergeant Robert Younger of the Fort Collins Police Department, an early adopter of the tool. “When I first tested it myself, I was absolutely floored, because the draft report was incredibly accurate. There weren’t any suppositions or guesses about what somebody was thinking or feeling or looked like or anything like that. The information it provided in that draft report was a very well-written, balanced report, chronological in order, based on facts, with an intro and an outcome,” he said, adding that the draft was produced in under 10 seconds.

Lawyers are concerned about AI reports in court

Yet as AI gains traction in police work, legal experts are raising concerns over accuracy, transparency, and potential bias — challenges that could significantly shape the future of AI both in policing and in the courtroom. Much of the impact, however, depends on how heavily these tools are relied upon and the ways in which they are implemented.

“For all of the potential issues that AI technology creates in terms of admissibility of evidence, in terms of being completely transparent, in terms of trying to mitigate the biases that can be introduced into the system, I just don’t know that it’s worth it,” said Utah State Senator Stephanie Pitcher, a defense attorney with Parker & McConkie.

Though Pitcher and other experts agree that AI in police reporting can offer benefits, they caution that it must be used with clear protocols and careful oversight to ensure accuracy.


“If the police officer is going to rely on artificial intelligence [to draft the report], that report should be reviewed,” said New York trial attorney David Schwartz. “The police officer should have to sign off and attest that the facts are truthful to the best of that police officer’s knowledge. So, if you have all that, it should be admissible. But it could create many, many problems for the police officer and the prosecution at trial [during cross-examination].”

Axon’s Draft One has built-in safeguards that require officers to review and sign off on each report before submission, according to Smith. The system also includes controls, like placeholders for key information that officers must edit, ensuring that no critical details are missed. And beyond the officer’s review, the report undergoes multiple levels of human oversight by supervisors, report clerks, and others to ensure it meets agency standards before it’s finalized.

Even so, some members of law enforcement — like Keith Olsen, a retired New York detective and president and CEO of consulting firm KO Solutions & Strategies, which advises police associations — don’t see the benefits of using AI for police reports.

“It seems to be trying to solve a problem that just doesn’t exist,” Olsen said. “It doesn’t take that long to write a police report, and I think it’s going to miss the officer’s perspective, and the officer still has to add stuff, delete stuff. I don’t think there’s a saving-of-time claim. And if you get a clever defense attorney, I can see all kinds of problems with it.”

Axon competitors Truleo and 365 Labs are positioning their AI tools as quality-focused aids for officers rather than as time savers.

Truleo, which launched its AI technology for auto-generated narratives in July, captures real-time recorded voice notes from the officer in the field rather than relying on bodycam footage like Axon. “We believe dictation and conversational AI is the fastest, most ethical, responsible way to generate police reports. Not just converting a body camera video to a report. That’s just nonsense. Studies show it doesn’t save officers any time,” said Truleo CEO Anthony Tassone.

365 Labs, meanwhile, uses AI primarily for grammar and error correction, with CEO Mohit Vij noting that human judgment remains essential for reports involving complex interactions. “If it’s burglary or assault, these are serious matters,” said Vij. “It takes time to write police reports, and some who join the police force are there because they want to serve the communities and writing is not their strength. So, we focus on the formulation of sentences and grammar.”

Accuracy in criminal investigations

Cassandra Burke Robertson, director of the Center for Professional Ethics at Case Western Reserve University School of Law, has reservations about AI in police reporting, especially when it comes to accuracy.

“Generative AI programs are essentially predictive text tools. They can generate plausible text quickly, but the most plausible explanation is often not the correct explanation, especially in criminal investigations,” she said, highlighting the need for transparency in AI-generated reports.

Still, she says, “I don’t think the genie is going back into the bottle. AI tools are useful and will be part of life going forward, but I would want more than just a simple reassurance that the reports are fully vetted and checked.”

In the courtroom, AI-generated police reports could introduce additional complications, especially when they rely solely on video footage rather than officer dictation. Schwartz believes that while AI reports could be admissible, they open the door for intense cross-examination. “If there’s any discrepancy between what the officer recalls and what the AI report shows, it’s an opportunity for the defense to question the report’s reliability,” he said.

This potential for inconsistency could create a perception of laziness or lack of diligence if officers rely too heavily on AI and don’t conduct thorough reviews.

New Jersey-based lawyer Adam Rosenblum said “hallucinations” — instances when AI generates inaccurate or false information — that could distort context are another issue. “Courts might need new standards to require detailed, transparent documentation of the AI’s decision-making process before allowing the reports into evidence,” he said. Such measures, he added, could help safeguard due process rights in cases where AI-generated reports come into play.

Axon and Truleo both confirmed their auto-generated reports include a disclaimer.

“I think it’s probably a uniform opinion from many attorneys that if we’re overcomplicating something or introducing potential challenges to the inadmissibility, it’s just not worth it,” said Pitcher.

But Sergeant Younger at Fort Collins remains optimistic: “The thing that’s crucial to understand with anything that involves AI is that it’s a process,” he said. “I’ve had officers tell me this makes the difference between deciding whether or not to continue in law enforcement, because the one thing they were not counting on when they became a cop was the incredibly huge amounts of administrative functions, and that’s not necessarily what they signed up for.”

The entire story can be read at:

https://thedaily.case.edu/laws-cassandra-burke-robertson-expresses-concerns-about-ai-in-police-reporting/

PUBLISHER'S NOTE:  I am monitoring this case/issue/resource. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher: The Charles Smith Blog.

SEE BREAKDOWN OF  SOME OF THE ON-GOING INTERNATIONAL CASES (OUTSIDE OF THE CONTINENTAL USA) THAT I AM FOLLOWING ON THIS BLOG,  AT THE LINK BELOW:  HL:


https://www.blogger.com/blog/post/edit/120008354894645705/4704913685758792985


———————————————————————————————


FINAL WORD:  (Applicable to all of our wrongful conviction cases):  "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."

Lawyer Radha Natarajan:

Executive Director: New England Innocence Project;


—————————————————————————————————

FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions.   They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!

Christina Swarns: Executive Director: The Innocence Project;