Steven Schwartz said he had never used ChatGPT as a source of legal research before and was therefore “unaware of the possibility that its content could be false.”
Steven Schwartz, a US attorney with more than 30 years of experience, used ChatGPT, the artificial intelligence (AI) chatbot developed by the American company OpenAI, to research a court case.
Acting as the legal representative for Roberto Mata, who has sued an airline for injuries he says he suffered from a service cart, Schwartz set out to cite similar past cases to argue that the lawsuit should go forward.
However, Judge Kevin Castel held that the court was faced with an “unprecedented circumstance,” as six of the legal cases cited by the attorney appeared to be “bogus court decisions with bogus quotes and bogus internal citations.”
When the court demanded an explanation, it emerged that the research had not been prepared by Schwartz himself; instead, he had enlisted the help of ChatGPT. During a hearing, the lawyer stated that he had never used the technology before and was therefore “unaware of the possibility that its content could be false.”
Likewise, Schwartz said that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research” […] “and will never do so in the future without absolute verification of its authenticity.” At the same time, he accepted responsibility for not confirming the sources provided by the chatbot.
As a result of his actions, Schwartz was ordered to show cause why he should not be sanctioned “for the use of a false and fraudulent notarial certification.” The hearing is scheduled for June 8.