What Happens When Your Lawyer Uses ChatGPT?

Have you ever wondered to yourself, maybe while writing an essay or preparing for a seminar, “should I ask ChatGPT how to answer this”?

It’s a temptation that lawyer Steven A. Schwartz of the firm Levidow, Levidow & Oberman certainly did not escape.

Roberto Mata sued the airline Avianca, with Schwartz as his legal representation, after a metal serving cart struck his knee and injured him during a flight to Kennedy International Airport in New York.

When Avianca asked a Manhattan federal judge to dismiss the lawsuit, Schwartz submitted a 10-page brief to the court defending Mata’s claim.

However, although the brief cited more than half a dozen cases, no one in the courtroom, including Schwartz himself, could locate any of the decisions or quotations it mentioned.

Schwartz had relied on the generative AI chatbot ChatGPT, which he believed to be a legitimate search engine, to source several cases to bolster Mata’s lawsuit against Avianca.

ChatGPT had fabricated every single one.

The fabricated cases included: Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. Regarding the latter case, Avianca’s lawyers wrote to the judge that they had “not been able to locate this case by caption or citation, nor any case bearing any resemblance to it.”

Throwing himself on the mercy of the court, Schwartz explained that he had never used ChatGPT prior to his preparation for the trial and was “therefore unaware of the possibility that its content could be false”.

The most chilling part? Schwartz explained how, when he asked the bot to verify that the cases were real, ChatGPT repeatedly reassured him that they were.

According to a copy of the exchange submitted to the judge:

Schwartz: “Is Varghese a real case?” 

ChatGPT: “Yes, [citation] is a real case.”

Schwartz: “What is your source?” 

ChatGPT: “I apologise for the confusion earlier, [citation].”

Schwartz: “Are the other cases you provided fake?” 

ChatGPT: “No, the other cases I provided are real and can be found in reputable legal databases.”

With lawyers weighing the value and dangers of AI, and current legislation setting out frameworks that will govern it for years to come, much debate surrounds whether the rise of ChatGPT signals that computers are coming to replace human interaction and labour.

The case of Schwartz, who described himself as having been “duped” by the chatbot, should offer us some solace: the inconsistencies within current AI technology and its capacity for error demonstrate a clear need for human oversight to verify any information the bot produces, along with its sources.

By Angela Mattoo
