What happened when a US lawyer used ChatGPT to draft a court filing? The artificial intelligence programme invented fictitious cases and rulings, leaving the lawyer deeply embarrassed.
Steven Schwartz, a lawyer from New York, expressed regret to the judge this week for filing a brief that was filled with fabrications produced by the OpenAI chatbot.
“I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic,” Schwartz wrote in a court document.
The error arose in a case before the federal court in Manhattan brought by a man suing the Colombian airline Avianca.
Roberto Mata says he was injured when a metal serving dish struck his leg on a flight from El Salvador to New York in August 2019.
After the airline's lawyers asked for the case to be dismissed, Schwartz responded with a brief citing more than half a dozen decisions to argue why the lawsuit should proceed.
These included Shaboon v. Egyptair, Varghese v. China Southern Airlines, and Petersen v. Iran Air. The Varghese decision even contained internal citations and quotations.
There was one significant issue, though: neither Avianca’s counsel nor the presiding judge, P. Kevin Castel, could locate the cases.
Schwartz had to acknowledge that everything had been made up by ChatGPT.
“The court is presented with an unprecedented circumstance,” Judge Castel wrote last month.
He ordered Schwartz and his law firm to appear before him to face possible sanctions.
Prior to the hearing on Tuesday, Schwartz stated in a document that he wished to “sincerely apologise” to the court for his “deeply regrettable mistake.”
He claimed that ChatGPT was a tool his college-educated children had introduced him to, and that this was the first time he had ever used it for work-related purposes.
“At the time that I performed the legal research in this case, I believed that ChatGPT was a reliable search engine. I now know that was incorrect,” he wrote.
Schwartz continued, “It was never my intention to mislead the court.”
Since its launch late last year, ChatGPT has drawn widespread attention for its ability to produce human-like content — essays, poems, and conversations — from simple prompts.
The resulting explosion in generative AI content has left lawmakers scrambling to work out how such bots should be regulated.
An OpenAI spokesperson did not immediately respond to a request for comment on Schwartz’s mishap.
The story was first reported by The New York Times.
Schwartz said the media coverage had “publicly ridiculed” both him and his firm, Levidow, Levidow & Oberman.
He added that it was “deeply embarrassing on a personal and professional level” that the articles would remain available for years to come.
In his closing statement, Schwartz said, “This matter has been an eye-opening experience for me and I can assure the court that I will never commit an error like this again.”