A lawyer is facing potential sanctions for using ChatGPT to create inaccurate legal documents in support of a court case.

Roberto Mata sued an airline, claiming he was injured by a serving cart. When the airline responded that it had no liability to him, his lawyers submitted a 10-page brief summarising several relevant court decisions.

But the opposing side, and for that matter the judge, couldn’t find the details of the cited decisions. Because they didn’t exist. The cases were fictional. The lawyer had gotten them from ChatGPT, probably quite innocently. To be fair, he’d even specifically asked the system to confirm that they were real cases, which it confidently did. But this was one more example of the tendency of these large language models to “hallucinate”: to produce fluent bullshit that sounds legitimate, is formally referenced and so on, but simply isn’t real.

The lawyer, Steven Schwartz, claims not to have known about this aspect of ChatGPT. That seems like something society is really going to have to address big-time once this technology is fully integrated into standard office software, assuming these issues persist.

Lawyers are not the only courtroom officials leveraging this new technology even now. There are a couple of reports of judges using it to aid their decision-making.

Firstly, a ChatGPT user from Colombia:

A judge in Colombia has caused a stir by admitting he used the artificial intelligence tool ChatGPT when deciding whether an autistic child’s insurance should cover all of the costs of his medical treatment.

His decision wasn’t entirely based on ChatGPT output, and the verdict doesn’t seem to have been all that contentious, but the fact that the tool was used to support it has raised some concern.

Judge Padilla thinks the tech can make the legal system more efficient, which would be in line with a 2022 law requiring public lawyers to use technologies that facilitate this. The whole “use AI to make important things cheaper” move is one that particularly worries me. It doesn’t feel like efficiency - which, let’s face it, almost certainly means or proxies for cost - should be the top priority for court decisions.

Secondly, one from India:

An Indian judge used ChatGPT to make a decision on the bail plea of a man accused of murder…A bench of Justice Anoop Chitkara at the Punjab and Haryana High Court in the northern city of Chandigarh on Monday sought the AI tool’s help while hearing the bail application of Jaswinder Singh, accused of rioting, criminal intimidation, criminal conspiracy and murder.

These of course are just a couple of cases that happened to make the news for one reason or another. No doubt chatbots are being used by many more professionals in many more courtrooms. Their use will probably become more widespread over time, short of legislation prohibiting it, which seems unlikely (and potentially not even desirable, at least at some future point).

They are, after all, a potentially useful tool for many professions. And no-one really quibbles with, for example, allowing lawyers to search the web, no matter how much misinformation it contains. But people tend to have an appreciation of the limits of a web search; I doubt many lawyers paste the first result of a Google search unchecked into their court submissions.

The new chatbots, which deliver what appears to be a single statement of undisputed truth in a confident manner, are likely more dangerous to use in these settings without a thorough understanding of how they work and what their limitations are - an understanding that probably only a small proportion of users have.

Of course, in a perfect world we’d like to imagine that professionals take care to understand the tools they choose to use. But for extremely complicated systems, subject to extreme hype, that appear to make one’s life easier at no cost, that’s probably an unrealistic expectation.