Here's one for the AI sceptics - and anyone else cursed with basic common sense

TLDR: lawyers in England used AI to create fake cases to provide precedents to support their client’s case. Fakery wasn’t necessary - client’s case was strong and defence was weak anyway. And the judge saw through it.

More detail and analysis here from an excellent legal commentator:

I don’t think it’s paywalled

I don’t think the suggestion is that they did it to support their case. Rather, they didn’t realise that the LLM had hallucinated.

Which is the easiest thing in the world to avoid, but you know, we’re still in the infancy of using these tools & people are making basic errors.


I have to object to the above assertion. The lawyers did not create fake cases, in the sense that they did not fabricate case histories. Rather, the AI (not the lawyers) generated citations to nonexistent cases. As the article points out, the citations look legitimate at a glance (the format is correct), but when you try to look them up, they don’t exist. There is no indication that the lawyers intentionally tried to mislead the Court with the fake citations.

It seems they simply failed to fact-check the AI’s output, or failed to recognize that an AI model is not concerned with accuracy or truthfulness. This has been a recurring problem for lawyers using AI, with many reports of it happening in jurisdictions around the world. Any output from an AI model needs to be fact-checked thoroughly.

Of course, even with a good citation, any good lawyer should actually confirm that the case supports their position. I have personally seen multiple instances where a case is cited and the case either supports the opposing position or is completely unrelated to the issue at hand (no AI involved). In every instance, it is clear that the lawyer didn’t actually read the case through before including the citation. That’s just poor lawyering. AI has simply made it easier for those same lazy lawyers to hide their laziness behind what looks like good work to those who don’t know what to look for.


We’re seeing the same inventiveness in coding tools. The AI assistant sometimes just makes up a reference to a nonexistent external library to be pulled in at build time. Weirdly, this has happened often enough, and with the same made-up library names recurring, that malware actors have published their own code under those names.

So now the AI-generated code builds without any missing-dependency errors and happily pulls in the malware actor’s code.
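For anyone who wants something concrete to try: here’s a rough sketch that vets each dependency name an assistant suggests against the real PyPI index before you add it to a build. A name that isn’t on the index at all is a hallucination; a name that exists but was first published very recently is worth treating as a possible slopsquat (since, as above, attackers register the hallucinated names). The package names and the 90-day threshold below are made-up illustrations, not recommendations.

```python
# A rough sketch, not a complete defence: vet AI-suggested dependency
# names against PyPI before adding them to a build. Python 3.10+.
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

def pypi_metadata(package: str) -> dict | None:
    """Fetch PyPI's JSON metadata for a package, or None if no such project."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None  # 404 from the index: the package does not exist

def first_upload(meta: dict) -> datetime | None:
    """Earliest file upload time across all releases, if any."""
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta["releases"].values()
        for f in files
    ]
    return min(times) if times else None

# Hypothetical assistant output: one real package, one invented name.
for name in ["requests", "totally-made-up-helper-lib"]:
    meta = pypi_metadata(name)
    if meta is None:
        print(f"{name}: not on PyPI, likely hallucinated, don't add it blindly")
        continue
    uploaded = first_upload(meta)
    age_days = (datetime.now(timezone.utc) - uploaded).days if uploaded else 0
    flag = "  <- very new, possible slopsquat?" if age_days < 90 else ""
    print(f"{name}: exists, first upload {age_days} days ago{flag}")
```

Existence alone proves nothing, of course (that’s the whole point of slopsquatting), so the recency flag is only a heuristic. You’d still want to eyeball the maintainer, release history, and source before pulling anything into a build.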


Got a source for further reading?

It’s called “slopsquatting” in this article:

This article refers to the same research:

(Both link to the original research paper if you want to dive in deep)


Thanks @rob for the links!

And there’s also this:

Wife files for divorce after ChatGPT ‘reads’ Greek coffee cup and predicts affair

"Her husband recounted the episode on Greek morning show To Proino, saying his wife often follows new trends.

He explained that she made Greek coffee, photographed their cups, and decided it would be fun to have ChatGPT “read” the images.

According to the AI’s interpretation, his cup revealed fantasies about a mysterious woman with the initial “E” and a destined relationship with her.

Despite his protests and dismissal of the reading as nonsense, his wife asked him to leave, told their children about the divorce, and served legal papers within days.

The husband’s lawyer stated that AI-based claims have no legal weight, while traditional coffee readers pointed out that proper tasseography includes reading the foam and saucer, not just the grounds.”
