ChatGPT is known for making up information, and a lawyer in the United States has learned this the hard way. Steven A. Schwartz relied on ChatGPT's responses and cited what he believed were prior cases in a court filing; every one of them had been fabricated by the chatbot.
The New York Times revealed today that lawyers suing Colombian airline Avianca filed a document full of prior cases that ChatGPT had made up. US District Judge Kevin Castel stated, "Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," after opposing counsel called attention to the nonexistent cases. He then scheduled a hearing to consider sanctions against the plaintiff's lawyers.
Steven A. Schwartz, a lawyer with more than 30 years of experience in New York, represented Roberto Mata in his lawsuit against Avianca over an incident in which a service trolley allegedly struck and injured his knee. After Avianca's attorneys asked a federal judge to throw the case out, Mr. Schwartz, of the firm Levidow, Levidow & Oberman, drafted a brief that cited precedent to argue why the case should proceed.
The airline's legal counsel, however, raised concerns about the brief in a letter to the judge, stating that they were unable to locate some of the cited cases. The judge wrote in an order that he had been presented with an "unprecedented circumstance" and ordered Mr. Schwartz and one of his colleagues, Peter LoDuca, to explain why they should not be sanctioned.
Schwartz says he was unaware ChatGPT could make up answers
Schwartz claimed that he was "unaware of the possibility that its content could be false." The lawyer even gave the judge screenshots of his conversations with ChatGPT in which he asked about the validity of one of the cases. The chatbot confirmed that the case was real and even claimed the cases could be found in "reputable legal databases." None of them could be located, however, as they had all been invented by OpenAI's ChatGPT.
This is not the first such incident, as ChatGPT's answers are not always right. Taro Kono, Japan's digital minister, said that in a recent conversation with OpenAI's lauded chatbot, it wrongly identified him as Fumio Kishida, Japan's prime minister and the very person to whom he lost a leadership election in 2021.
"ChatGPT gave the incorrect response when I asked him who Kono Taro is, so you need to be careful," I said. According to Kono, the Japanese prime minister, Bloomberg.