Sunday, July 23, 2023

From our 'Technology (or humans?) Gone Wrong' department (Part 2): 'Business Insider' (Senior Reporter Grace Dean) reports that a New York law firm was fined $5,000 after one of its lawyers used ChatGPT to write a court brief riddled with fake case references..."P. Kevin Castel, US district judge for the Southern District of New York, wrote in a sanctions order on Thursday that suspicions over the use of artificial intelligence arose after both Avianca and the court itself had been unable to locate several of the cases cited in the filing. Condon & Forsyth, the law firm representing Avianca, said that its lawyers "were able to recognize right away that the cases were not real." Schwartz admitted in an affidavit on May 24 that he had used ChatGPT "to supplement the legal research performed" and find cases because he had been "unaware of the possibility that its content could be fake." "I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," Schwartz wrote in a declaration on June 6. "I deeply regret my decision to use ChatGPT for legal research, and it is certainly not something I will ever do again."


PASSAGE OF THE DAY: "ChatGPT was released by OpenAI in November and has since exploded in popularity. People have been using the AI chatbot for personal, professional, and academic purposes including writing letters, drafting work emails, and summarizing research for college assignments, and some studies suggest that generative AI could have huge effects on the legal industry, including the automation of jobs. In some cases, however, generative AI has been shown to "hallucinate," or make up information and repeatedly insist that it is correct. Castel, the judge, criticized Levidow, Levidow & Oberman for not "coming clean about their actions" quickly enough. He said that the firm and its lawyers "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question." Castel fined Levidow, Levidow & Oberman $5,000, and ordered the law firm to send letters to each judge falsely identified as an author of one of the fake opinions."


--------------------------------------------------------------------


KEY POINTS: 
  • A law firm was fined $5,000 after one of its lawyers used ChatGPT to write a court brief.
  • The document included references to cases and opinions that didn't exist.
  • The lawyer said he had "no idea" ChatGPT could fabricate information.

STORY: "A law firm was fined $5,000 after one of its lawyers used ChatGPT to write a court brief riddled with fake case references," by Senior Business Reporter (London office) Grace Dean, published by 'Business Insider' on June 23, 2023.


GIST: "A law firm was fined $5,000 after a court found that one of its lawyers had used ChatGPT to write a court brief which included false citations.


The initial lawsuit was filed last year on behalf of a passenger who claimed he was injured by a metal serving cart during an Avianca flight.


Steven Schwartz of New York law firm Levidow, Levidow & Oberman, P.C., which is representing the passenger, had fed prompts to the AI chatbot including "show me specific holdings in federal cases where the statute of limitations was tolled due to bankruptcy of the airline" as part of his research, court filings show.


Schwartz included references to a number of fake cases and opinions ChatGPT generated in an affirmation in opposition filed on March 1 this year, the court documents show.


Although fellow Levidow, Levidow & Oberman attorney Peter LoDuca had signed and filed the affirmation in opposition, Schwartz said that he had been the one to research and write the brief.


P. Kevin Castel, US district judge for the Southern District of New York, wrote in a sanctions order on Thursday that suspicions over the use of artificial intelligence arose after both Avianca and the court itself had been unable to locate several of the cases cited in the filing. 


Condon & Forsyth, the law firm representing Avianca, said that its lawyers "were able to recognize right away that the cases were not real."


Schwartz admitted in an affidavit on May 24 that he had used ChatGPT "to supplement the legal research performed" and find cases because he had been "unaware of the possibility that its content could be fake."


"I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic," Schwartz wrote in a declaration on June 6. "I deeply regret my decision to use ChatGPT for legal research, and it is certainly not something I will ever do again."


ChatGPT was released by OpenAI in November and has since exploded in popularity. People have been using the AI chatbot for personal, professional, and academic purposes including writing letters, drafting work emails, and summarizing research for college assignments, and some studies suggest that generative AI could have huge effects on the legal industry, including the automation of jobs.


In some cases, however, generative AI has been shown to "hallucinate," or make up information and repeatedly insist that it is correct.


Castel, the judge, criticized Levidow, Levidow & Oberman for not "coming clean about their actions" quickly enough.


He said that the firm and its lawyers "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question."



Castel fined Levidow, Levidow & Oberman $5,000, and ordered the law firm to send letters to each judge falsely identified as an author of one of the fake opinions.


"Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," Castel wrote. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings."


In a statement sent to Insider, Levidow, Levidow & Oberman said it had "reviewed the Court's order and fully intend to comply with it," but added that "we respectfully disagree with the finding that anyone at our firm acted in bad faith. We have already apologized to the Court and our client."


"We continue to believe that in the face of what even the Court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth." Attorneys for LoDuca declined to comment beyond Levidow, Levidow & Oberman's statement.


Separately, the judge dismissed the lawsuit against Avianca."


The entire story can be read at:


https://www.businessinsider.com/chatgpt-generative-ai-law-firm-fined-fake-cases-citations-legal-2023-6

PUBLISHER'S NOTE: I am monitoring this case/issue/resource. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;

SEE BREAKDOWN OF SOME OF THE ON-GOING INTERNATIONAL CASES (OUTSIDE OF THE CONTINENTAL USA) THAT I AM FOLLOWING ON THIS BLOG, AT THE LINK BELOW: HL

https://www.blogger.com/blog/post/edit/120008354894645705/47049136857587929

FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."

Lawyer Radha Natarajan;

Executive Director: New England Innocence Project;

—————————————————————————————————


FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions. They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!


Christina Swarns: Executive Director: The Innocence Project;


------------------------------------------------------------------


YET ANOTHER FINAL WORD:


David Hammond, one of Broadwater’s attorneys who sought his exoneration, told the Syracuse Post-Standard, “Sprinkle some junk science onto a faulty identification, and it’s the perfect recipe for a wrongful conviction.”


https://deadline.com/2021/11/alice-sebold-lucky-rape-conviction-overturned-anthony-broadwater-1234880143/

-------------------------------------------------------------