Mistake-filled legal briefs show the limits of relying on AI tools at work

Photo credit: AP News / AP Illustration / Peter Hamlin

NEW YORK (AP) — Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence and submitted with errors such as citations to cases that don’t exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI tools at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they're also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained “hallucinations,” which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said.

“Even the more sophisticated player can have an issue with this,” Charlotin said. “AI can be a boon. It’s wonderful, but also there are these pitfalls.”

Charlotin, a senior research fellow at HEC Paris, a business school just outside the French capital, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.

But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and its founder, Michael Lindell.

The legal profession isn’t the only one wrestling with AI’s foibles. The AI overviews that appear at the top of web search result pages frequently contain errors.

And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the details they upload or put into prompts to ensure they're safeguarding the confidential information of employers and clients.

Legal and workplace experts share their experiences with AI’s mistakes and describe perils to avoid.

Think of AI as an assistant

Don’t trust AI to make big decisions for you. Instead, treat the tool like an intern: assign it tasks, then check the completed work.

“Think about AI as augmenting your workflow,” said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don't think of it as a substitute that can do all of the work, she said.

When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team.

“Some of the questions it proposed weren’t the right context really for our organization, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions,” she said.

Check for accuracy

Flynn also has found problems in the output of the AI tool, which is still in a pilot stage. She once asked it to compile information on work her organization had done in various states, but the tool treated completed work and funding proposals as the same thing.

“In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed,” Flynn said.

Luckily, she had the institutional knowledge to recognize the errors. “If you’re new in an organization, ask coworkers if the results look accurate to them,” Flynn suggested.

While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what AI generates, even if it's tempting to skip that step.

“People are making an assumption because it sounds so plausible that it’s right, and it’s convenient,” Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson, said. “Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that’s a little inconvenient and time-consuming, but that’s what you have to do. As much as you think the AI can substitute for that, it can’t.”

Be careful with notetakers
