A New York lawyer is facing his own court hearing after his firm used the artificial intelligence platform ChatGPT for legal research.
In a lawsuit filed against the airline Avianca by Roberto Mata, the plaintiff claims he was wounded when a metal service cart hit his knee on a flight to New York.
When Avianca urged Judge P. Kevin Castel to dismiss the case, Mr. Mata’s lawyers objected forcefully, filing a 10-page brief that cited more than a half-dozen pertinent judicial decisions: Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and, of course, Varghese v. China Southern Airlines, with its sophisticated discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
No one, however, could find the decisions cited in the brief.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in an order demanding that Mata’s legal team explain itself.
Court filings revealed that the research had not been prepared by Peter LoDuca, the plaintiff’s lawyer, but by a colleague at the same law firm. Steven Schwartz, an attorney for over 30 years, had used ChatGPT to search for similar past cases.
Schwartz clarified in his written statement that LoDuca had played no part in the research and had no knowledge of how it was carried out. He went on to say that he “greatly regrets” relying on the chatbot, which he had never before used for legal research and was “unaware that its content could be false.”
He has pledged that in the future he will never use AI to “supplement” his legal research “without absolute verification of its authenticity.”
Screenshots attached to his filing appear to show Schwartz and ChatGPT conversing.
“Is varghese a real case,” one message asks, referring to Varghese v. China Southern Airlines Co Ltd, one of the cases no other lawyer could locate. ChatGPT responds that it is, prompting “S” to inquire, “What is your source?” After “double checking,” ChatGPT responds again that the case is legitimate and can be found on legal reference databases such as LexisNexis and Westlaw.
Since its launch in November 2022, ChatGPT, developed by OpenAI, has been used by millions of people. It can respond to questions in natural, human-like language and mimic various writing styles. Its training data comes from the internet as it was in 2021.
Concerns have been raised about the possible hazards of artificial intelligence (AI), such as the spread of misinformation and bias.
This isn’t ChatGPT’s first legal foray
As for other uses of AI tools in the legal world, DoNotPay, the New York-based startup behind the app that bills itself as “the world’s first robot lawyer” (it is not actually a lawyer), is riding OpenAI’s GPT-4 wave. The app’s founder and CEO, Joshua Browder, announced in March that the company is “working on using GPT-4 to generate ‘one click lawsuits’ to sue robocallers for $1,500.”
The new feature would automatically transcribe the spam phone call and generate a 1,000-word lawsuit.
Curiously, in Browder’s accompanying video demo of the announcement, the chatbot gives a fair warning: “As an AI language model, I am not an attorney and cannot provide legal advice. However, I can provide you with a general outline of a complaint for a violation of the Telephone Consumer Protection Act (TCPA) based on the context I have learned from other cases. Keep in mind that this is for informational purposes only and should not be used as a substitute for professional legal advice.”
After GPT-3.5 was released late last year, Browder boldly announced that DoNotPay would provide the first robot lawyer to defend a client in a live courtroom. The AI bot would listen to the proceedings and generate responses, which the client would hear through AirPods and repeat in front of the court.
Browder had to pull the plug on the endeavor after he was warned by multiple state prosecutors, one of whom noted that the unauthorized practice of law is a misdemeanor in various jurisdictions, punishable by up to six months in county jail.
A few weeks later, a lawsuit was filed against DoNotPay in San Francisco Superior Court by Jonathan Faridian of Yolo County. Faridian sought damages for alleged violations of California’s unfair competition law, claiming that he would not have subscribed had he known DoNotPay was not genuinely a lawyer.
Faridian claimed that he engaged the San Francisco-based DoNotPay to create demand letters, a small claims court filing, and LLC operating agreements, and received “substandard and poorly done” results.
“Providing legal services to the public, without being a lawyer or even supervised by a lawyer is reckless and dangerous. And it has real world consequences for the customers it hurts,” Faridian argued.
Information for this briefing was found via The New York Times, BBC, and the sources mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.