Tuesday, March 4, 2025

Technology: Lawyers, judges and litigants beware! Reuters (Reporter Sara Merken) reports that artificial intelligence's 'penchant' for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, noting that: "Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said. AI sometimes produces false information, known as 'hallucinations' in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets. Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to 'even an unintentional misstatement' produced through AI."


QUOTE OF THE DAY: The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.

—————————————————————————————

QUOTE TWO OF THE DAY: Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools." He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem. "Lawyers have always made mistakes in their filings before AI," he said. "This is not new."

---------------------------------------------------------

STORY: "AI 'hallucinations' in court papers spell trouble for lawyers," by Reporter Sara Merken, published by Reuters, on February 18 2025.  (Sara Merken reports on the business of law, including legal innovation and law firms in New York and nationally.,)

DIST: "U.S. personal injury law firm Morgan & Morgan sent an urgent email  this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart (WMT.N).


One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologized for what he called an inadvertent mistake.


AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.


The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.


A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.


Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. 


Sixty-three percent of lawyers surveyed by Reuters' parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.


Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said.


 AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.
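The passage above describes the mechanism in a single sentence; a toy sketch can make it concrete. The following Python snippet, with an invented vocabulary and made-up probabilities purely for illustration, shows how a language model assembles text by sampling a learned next-word distribution. Note what is absent: no step ever checks the output against a database of real cases, which is why a fluent but nonexistent citation can emerge.

import random

# Toy "language model": for each word, a distribution over plausible
# next words, standing in for patterns learned from training text.
# Every word and probability here is invented for illustration.
NEXT_WORD_PROBS = {
    "cites": {"Smith": 0.5, "Jones": 0.3, "Doe": 0.2},
    "Smith": {"v.": 0.9, "and": 0.1},
    "v.": {"Walmart": 0.4, "United": 0.35, "Jones": 0.25},
}

def generate(first_word, steps, seed=0):
    """Emit words by repeatedly sampling the learned distribution.
    There is no lookup against any authority of real cases."""
    rng = random.Random(seed)
    words = [first_word]
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no learned pattern to continue from
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("cites", 3))  # e.g. "cites Smith v. Walmart" - fluent, never verified

Real systems sample tokens with a neural network rather than a lookup table, but the structural point is the same: generation is driven by learned statistical patterns, not by fact-checking.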


Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to "even an unintentional misstatement" produced through AI.


The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work.


"When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.


'LACK OF AI LITERACY'

In one of the earliest court rebukes over attorneys' use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.



A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen's criminal tax and campaign finance case.


Cohen, who used Google's AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode "embarrassing."


In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field.


A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally citing fake, AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris.


Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools."


 He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem.


"Lawyers have always made mistakes in their filings before AI," he said. "This is not new."


The entire story can be read at:


https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/

PUBLISHER'S NOTE: I am monitoring this case/issue/resource. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award" - and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog.



———————————————————————————————


FINAL WORD:  (Applicable to all of our wrongful conviction cases):  "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."

Lawyer Radha Natarajan:

Executive Director: New England Innocence Project;


—————————————————————————————————


FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions. They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!


Christina Swarns: Executive Director: The Innocence Project;

----------------------------------------------------------------