Another law firm has come under fire after it was discovered a US lawyer cited a fictitious case generated by ChatGPT, prompting concern from AI experts that the technology is being used in the wrong way.
US media outlet Fox News reported that Jae Lee, a lawyer at New York firm JSL Law Offices, was found to have used OpenAI’s ChatGPT to research past cases for a medical malpractice lawsuit.
However, Lee included an AI-generated case that claimed a US doctor had carried out a botched abortion. When questioned, Lee was unable to provide any evidence that the case existed.
Lee was referred to a grievance panel at the 2nd US Circuit Court of Appeals last week, which concluded her conduct fell “well below the basic obligations of counsel”.
The case follows several similar reported mishaps from lawyers who have used ChatGPT for legal research.
Is AI appropriate for the legal industry?
Jaeger Glucina, managing director and chief of staff at Luminance, an AI platform for the legal industry, told Verdict that AI hallucinations mean the legal industry should treat the implementation of GenAI with caution.
“This infamous case is a perfect example why, instead of a source of fact, ChatGPT and other generalist models should be thought of as a well-read friend who can converse on a wide range of subjects but not an expert in any particular field,” Glucina said.
Luminance specialises in providing a legal co-pilot, giving lawyers AI tools to automate the generation, negotiation and analysis of contracts.
Despite ChatGPT’s potential for the legal industry, it falls short of the levels of accuracy and reliability that the legal field demands, Glucina said.
“In 2024, we will see lawyers placing their trust in specialised AI that has been intensively trained over verified data – this will be the true blueprint for an AI-enabled future,” she told Verdict.
Simon Thompson, head of AI, machine learning and data science at digital consultancy GFT, told Verdict that AI systems should only be used in the industries and applications they were specifically designed for, rather than those they merely appear capable of handling.
“Without proper oversight, deploying AI prematurely or inappropriately could lead to catastrophic failures, much like approving a new pharmaceutical drug without adequate clinical trials,” Thompson said.
Thompson believes that AI and large language models (LLMs) have an issue with overconfidence.
“Models like these assume that they know the answer often when they do not,” Thompson said.
“In an attempt to answer every question an LLM is asked, it often strays towards providing misinformation or inaccurate responses, which can prove to be quite harmful in the long run,” he added.