Monday, July 24, 2023

From our 'Technology (or humans?) Gone Wrong?' department: Part 2: In light of the scandal involving a New York law firm found to have submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, and then to have stood by the fake opinions after judicial orders called their existence into question, 'Rhode Island Lawyers Weekly' runs an important post by Pat Murphy, headed: "AI presents efficiencies, perils for legal practices: Object lesson: N.Y. lawyers burned by ChatGPT filing."..."Some practitioners are dumbfounded that the sanctioned attorneys placed so much faith in a new technology, failing to take the simple step of checking the ChatGPT work product for errors before filing the document in court. “When I first read about it, my impression was a mix of amusement and horror that it was allowed to happen,” says Timothy V. Fisher, a member of the AI and machine learning practice group at Pierce Atwood in Boston. John W. Weaver, who chairs the AI practice group at McLane Middleton in Woburn, Massachusetts, agrees. “From the court’s description of what occurred, the onus is clearly on the human beings. This is not a failure of technology,” says Weaver, who doesn’t advise attorneys to use ChatGPT — at least for now. “We’re overestimating what the technology will do in the next two years and underestimating what it is going to do in the next 10,” Weaver says. “Once we get over the first few bumps and we have a few iterations to the development cycle, these will become much more reliable. To me, the holy grail for attorneys is having some sort of AI akin to ChatGPT that lives in your server and has access to your documents.”"


PASSAGE OF THE DAY: "Schwartz would later testify that he used ChatGPT to draft the brief, operating under the belief that the website “could not possibly be fabricating cases on its own.” LoDuca filed the memorandum in court. Above LoDuca’s signature line, the memo stated: “I declare under penalty of perjury that the foregoing is true and correct.” LoDuca later testified at the hearing on sanctions that he did not review any authorities cited in the brief prepared by Schwartz, stating that he “was basically looking for a flow, [to] make sure there was nothing untoward or no large grammatical errors.” Craig R. Smith, a partner at the intellectual property firm Lando & Anastasi in Boston, finds that incredible. “I can’t even imagine having a brief that would go out the door where someone wouldn’t be checking those citations and making sure they’re accurate,” he says. Adds DeCarvalho: “If during your research you pull up a case that seems to be absolutely on point to an argument you’re trying to make, if you don’t Shepardize that case and determine that that holding is still the law of the land, that’s tantamount to malpractice.”"

------------------------------------------------------------

STORY: "AI presents efficiencies, perils for legal practices" Object lesson: N.Y. lawyers burned by ChatGPT filing," by Author Pat Murphy, published by 'Rhode Island Lawyers Weekly' on July 14, 2023;


GIST: As members of the legal profession figure out how best to tap into the potential of artificial intelligence in managing their practices, the recent misadventure of two New York personal injury lawyers sends a message that couldn’t be clearer: HANDLE WITH CARE.


Last month, a federal judge handed down a $5,000 penalty against a New York City personal injury firm and two of its lawyers as part of an order of Rule 11 sanctions. 


The fine was imposed for the attorneys’ filing of a ChatGPT-generated pleading that included six phony case citations.


“Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” U.S. District Court Judge P. Kevin Castel wrote in his June 22 order in Mata v. Avianca, Inc.


Kas R. DeCarvalho, a lawyer in Johnston, says the legal profession is just starting to feel the impact of the AI revolution affecting other sectors of the economy.


“What happened in New York is certainly a cautionary tale to the rest of us practicing law,” says the Pannone Lopes partner who represents technology companies and other businesses in a range of matters including artificial intelligence and cyber law. “The speed of development overall can’t be overstated. It’s absolutely exponential. We couldn’t have anticipated ChatGPT 24 months ago.”


To a large degree, attorneys should approach AI like they approach other technology tools they’ve become accustomed to using in their practices, DeCarvalho says.


“The overriding requirement to provide reasonable, researched legal advice to our clients is still very much in place,” he says.


Some practitioners are dumbfounded that the sanctioned attorneys placed so much faith in a new technology, failing to take the simple step of checking the ChatGPT work product for errors before filing the document in court.


“When I first read about it, my impression was a mix of amusement and horror that it was allowed to happen,” says Timothy V. Fisher, a member of the AI and machine learning practice group at Pierce Atwood in Boston.


John W. Weaver, who chairs the AI practice group at McLane Middleton in Woburn, Massachusetts, agrees.


“From the court’s description of what occurred, the onus is clearly on the human beings. This is not a failure of technology,” says Weaver, who doesn’t advise attorneys to use ChatGPT — at least for now.


“We’re overestimating what the technology will do in the next two years and underestimating what it is going to do in the next 10,” Weaver says. “Once we get over the first few bumps and we have a few iterations to the development cycle, these will become much more reliable. To me, the holy grail for attorneys is having some sort of AI akin to ChatGPT that lives in your server and has access to your documents.”


Boston family law attorney Jared D. Spinelli sees the ethical issues raised in the New York case as falling squarely within the Massachusetts Rules of Professional Conduct. Spinelli points to Rule 3.3, which addresses a lawyer’s obligation of candor before a tribunal. Rule 3.3(a)(1) states that a lawyer shall not knowingly “make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer.”


“The minute you realize that you either made a false statement or there’s a suspicion that something you said was false, you need to take immediate measures to correct yourself,” Spinelli says.


The lesson from Mata is that attorneys need to remind themselves to fact- and cite-check as if the AI program were an associate or paralegal, he adds.


What due diligence?

In February 2022, the plaintiff in Mata sued Avianca Airlines in New York state court, alleging that he sustained injuries when a metal serving cart struck his left knee during a flight from El Salvador to John F. Kennedy Airport. As the case involved injury on an international flight, the defendant removed the case to federal court pursuant to the Montreal Convention.


As later found by Judge Castel, while attorney LoDuca entered an appearance following removal, attorney Schwartz had filed the original complaint and continued to do the substantive work in the case.


Avianca filed a motion to dismiss, alleging that the plaintiff’s claims were time-barred under the Montreal Convention. In response, Schwartz drafted a memorandum in opposition, making the argument that, under the convention, the statute of limitations had been tolled by the airline’s filing for bankruptcy.


Schwartz would later testify that he used ChatGPT to draft the brief, operating under the belief that the website “could not possibly be fabricating cases on its own.”




LoDuca filed the memorandum in court. Above LoDuca’s signature line, the memo stated: “I declare under penalty of perjury that the foregoing is true and correct.”


LoDuca later testified at the hearing on sanctions that he did not review any authorities cited in the brief prepared by Schwartz, stating that he “was basically looking for a flow, [to] make sure there was nothing untoward or no large grammatical errors.”


Craig R. Smith, a partner at the intellectual property firm Lando & Anastasi in Boston, finds that incredible.


“I can’t even imagine having a brief that would go out the door where someone wouldn’t be checking those citations and making sure they’re accurate,” he says.


Adds DeCarvalho: “If during your research you pull up a case that seems to be absolutely on point to an argument you’re trying to make, if you don’t Shepardize that case and determine that that holding is still the law of the land, that’s tantamount to malpractice.”


After LoDuca filed the plaintiff’s memorandum of law, the defendant airline alerted the court that the plaintiff’s pleading cited cases that either could not be located or failed to stand for the propositions for which they were cited.


For example, the plaintiff’s memo cited “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)” — a case that was later established not to exist — for the proposition that the stay entered in the defendant’s bankruptcy case tolled the Montreal Convention’s limitations period.


Schwartz in an affidavit would describe an exchange he had with ChatGPT about the 11th Circuit cite once the defendant raised its concerns with the court.


“I asked ChatGPT directly whether one of the cases it cited, ‘Varghese v. China Southern Airlines Co. Ltd.’ … was a real case,” Schwartz wrote. “Based on what I was beginning to realize about ChatGPT, I highly suspected that it was not. However, ChatGPT again responded that Varghese ‘does indeed exist’ and even told me that it was available on Westlaw and LexisNexis, contrary to what the Court and defendant’s counsel were saying.”


‘Hallucinating’ AI models?

Under Federal Rule of Civil Procedure 11(b)(2), by “presenting to the court a pleading, written motion, or other paper,” an attorney “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances … the claims, defenses, and other legal contentions are warranted by existing law.”


In his June 22 sanctions order, the judge found that attorneys LoDuca and Schwartz had both acted with “subjective bad faith,” finding them jointly and severally liable with their law firm for the $5,000 penalty issued for their violations of Rule 11(b)(2).


DeCarvalho says he’s surprised the sanctions weren’t more severe.


“My personal opinion is that they got off light with a $5,000 fine,” DeCarvalho says. “They’ll be lucky if they don’t get sued by their client.”


Fisher has his own theory as to how ChatGPT got it so wrong in Mata.


“It’s a generative model, so its purpose is to ‘make things up’ and to do so by mimicking textual and semantic patterns it has seen in training data,” Fisher says. “But it’s a well-known phenomenon of these large AI models that they will ‘hallucinate,’ which means they will very confidently give an answer to a question that you ask them. What you get back may look plausible at first, but it’s actually wildly inaccurate.”
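
Fisher’s warning points to an obvious safeguard: never accept the model’s own assurances, and instead check every citation against an authoritative source. What follows is a minimal Python sketch of that idea, not anything described in the article; the VERIFIED_CITATIONS set is a hypothetical stand-in for a real citator such as Westlaw or LexisNexis, and the regular expression is deliberately rough.

import re

# Hypothetical allowlist standing in for an authoritative citator
# (Westlaw, LexisNexis, a court database). Empty here for illustration.
VERIFIED_CITATIONS: set = set()

# Rough pattern for a federal reporter citation such as "925 F.3d 1339".
CITATION_RE = re.compile(r"\b\d{1,4}\s+F\.\s?(?:2d|3d|4th)\s+\d{1,4}\b")

def flag_unverified(brief_text):
    # Return every citation in the brief that the trusted source cannot confirm.
    return [c for c in CITATION_RE.findall(brief_text)
            if c not in VERIFIED_CITATIONS]

brief = ("See Varghese v. China Southern Airlines Co., Ltd., "
         "925 F.3d 1339 (11th Cir. 2019).")
print(flag_unverified(brief))  # ['925 F.3d 1339'] is flagged, because no
                               # trusted source can confirm the case exists

Nothing about the sketch is sophisticated, and that is the point: the verification step is mechanical and cheap compared to a Rule 11 sanction.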


Smith offers another theory.


“It may just be a natural failing of these early systems where they may not understand when looking at a legal case the meaning of citations and what they’re referring to,” Smith says. “There’s the chance that [the program is] not making the proper association between a particular case and a particular quote to understand that it can only use that quote when referring to that specific case.”


Client confidentiality

As an attorney for clients in the business of developing AI products, Fisher says he sees the promise of the technology.


“I would love to leverage [AI] in my own practice to gain some efficiencies,” Fisher says. “But the technology’s clearly not ‘there’ yet.”


For Smith, though there are drawbacks, current AI systems are already a valuable tool.


“They can be helpful in doing things that are critical to every lawyer’s job: reviewing documents, synthesizing data, and being able to provide information culled out of a large data set,” he says. “Those are things that lawyers are going to continue to rely upon because AI has the benefit of being able to do some of these tasks incredibly fast and efficiently.”


Beyond the need to see further advances before embracing AI without reservation, lawyers’ biggest concern may be protecting the confidential client information that AI programs must process in order to generate usable results.


“Confidentiality is a huge issue,” Fisher says. “A lot of these AI models that you hear about in the news are open source, in other words [maintained by] a third party. There are a couple issues you have with that. You’re divulging confidential information outside of your firm and your client.”


The second problem, Fisher says, is “you don’t necessarily know whether they’re going to use your data to train their model. That model may then answer someone else’s question based on your confidential data.”


For that reason, Fisher says he expects law firms to try to install in-house programs that provide closed systems that protect client information.


“You can take an [AI] model that’s already pre-trained, move it in-house, or rent server space that only you have access to and that has all the necessary securities in place,” he says. “Then you can fine-tune that model to the tasks that you’d like it to perform.”
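
What Fisher describes is possible today with open-source tooling. Here is a minimal sketch using the Hugging Face "transformers" and "datasets" libraries; the file "firm_briefs.txt" is a hypothetical collection of the firm's own documents, and the small "gpt2" model stands in for whatever pre-trained model a firm would actually license.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

MODEL = "gpt2"  # stand-in for any locally hosted pre-trained model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# "firm_briefs.txt" is hypothetical: the firm's own work product, one
# passage per line, never leaving the firm's hardware.
dataset = load_dataset("text", data_files={"train": "firm_briefs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="inhouse-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("inhouse-model")  # stays on the firm's own server

Everything in the sketch runs on hardware the firm controls, so no client document leaves the building, which is precisely the confidentiality point Fisher raises.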


Smith likewise foresees in-house AI models becoming the standard in the legal profession.


“That way you could train an AI model based on the briefs you’ve already written, the research you have already done, and the work that you have already performed for many years,” Smith says. “Then, when you’re using and interacting with that AI system, you know that the information that you’re pulling from is information that you already consider reliable because it’s part of your own data set. That would be a fantastic way of leveraging all the data that the firm has already created.”
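
Smith speaks of training, but the "pulling from your own data set" behavior he describes is, in practice, often implemented as retrieval rather than training: embed the firm's documents once, then look up the most relevant passages for each question. That reading is mine, not the article's. A minimal sketch with the open-source sentence-transformers library, where the two documents are invented placeholders:

import numpy as np
from sentence_transformers import SentenceTransformer

# Runs entirely locally once the model weights are downloaded.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder stand-ins for the firm's own work product.
firm_docs = [
    "Memo on tolling of limitations periods during a bankruptcy stay.",
    "Brief opposing a motion to dismiss under the Montreal Convention.",
]
doc_vecs = model.encode(firm_docs, normalize_embeddings=True)

query = "Does a bankruptcy stay toll the statute of limitations?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = doc_vecs @ q_vec
print(firm_docs[int(np.argmax(scores))])  # the firm's most relevant document

Because every answer traces back to a document the firm already vetted, the reliability Smith describes comes built in.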


The entire post can be read at:


https://rilawyersweekly.com/blog/2023/07/14/ai-presents-efficiencies-perils-for-legal-practices/


PUBLISHER'S NOTE: I am monitoring this case/issue/resource. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;


FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."

Lawyer Radha Natarajan;

Executive Director: New England Innocence Project;

—————————————————————————————————


FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions. They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!


Christina Swarns: Executive Director: The Innocence Project;


------------------------------------------------------------------


YET ANOTHER FINAL WORD:


David Hammond, one of Broadwater’s attorneys who sought his exoneration, told the Syracuse Post-Standard, “Sprinkle some junk science onto a faulty identification, and it’s the perfect recipe for a wrongful conviction.”


https://deadline.com/2021/11/alice-sebold-lucky-rape-conviction-overturned-anthony-broadwater-1234880143/

-------------------------------------------------------------