USA: Lawyers fined for filing fake legal research created by ChatGPT

A federal judge on Thursday imposed $5,000 fines on two lawyers and a law firm in an unprecedented instance in which ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

Judge P. Kevin Castel said they acted in bad faith, but he credited their apologies and the remedial steps they had taken in explaining why harsher sanctions were not necessary to ensure that they or others would not again let artificial intelligence tools prompt them to produce fake legal history in their arguments.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The judge said the attorneys and their firm, Levidow, Levidow & Oberman, P.C., “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

In a statement, the law firm said it would comply with Castel’s order, but added: “We respectfully disagree with the finding that anyone at our firm acted in bad faith. We have already apologized to the court and to our client. We continue to believe that, in the face of what even the court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”

The firm indicated it was weighing whether to appeal.

Castel said the bad faith resulted from the lawyers’ failures to respond adequately to the judge and their legal adversaries after it was noted that six legal cases cited in support of their March 1 written arguments did not exist.

The judge cited “shifting and contradictory explanations” offered by attorney Steven A. Schwartz. He said attorney Peter LoDuca lied about being on vacation and was dishonest about confirming the truth of statements submitted to Castel.

At a hearing this month, Schwartz said he used the AI-powered chatbot to help him find legal precedents to support a client’s case against Colombian airline Avianca for an injury he suffered on a 2019 flight.

Microsoft has invested approximately $1 billion in OpenAI, the company behind ChatGPT.

The chatbot, which generates essay-like answers to prompts from its users, suggested several cases involving aviation mishaps that Schwartz had been unable to find through the usual methods used at his law firm. Several of those cases were not real, misidentified judges or involved airlines that did not exist.

In a separate written opinion, the judge dismissed the underlying aviation claim, saying the matter was time-barred.

Attorneys for Schwartz and LoDuca did not immediately respond to a request for comment.

