Stop using AI for important stuff. Sullivan & Cromwell law firm apologizes for AI 'hallucinations' in court filing
This would be funny if it weren't happening all the time. The client in the case should sue for every dollar they've spent with S&C in the past 3 years. And then some.
April 21 (Reuters) - Sullivan & Cromwell, a premier Wall Street law firm, apologized to a federal judge for submitting a court filing with inaccurate citations and other errors generated by artificial intelligence.
The unstated undercurrent... "Please don't disbar us for being stupid!"
This is from X. (The NYT article is behind a paywall - as they all are)
- Fictitious Case Names: The filing included names of legal cases that do not exist
- Fabricated Quotes: The document contained direct quotes that were never actually spoken or written
- Non-existent Statutes: The AI incorrectly analyzed or entirely invented provisions within the U.S. Bankruptcy Code
Are you using AI to do your work? God help you, because the AI will not.
I don't actually know if the partners charge that $2000 per hour, but I wouldn't be surprised. This is how S&C describes itself:
Sullivan & Cromwell LLP provides the highest quality legal advice and representation to clients around the world. The results the Firm achieves have set it apart for more than 140 years and have become a model for the modern practice of law.
They might need to rethink that "modern practice of law" part.
Large Language Models are NOT Truth models. They do not know what is true. They know what sounds good.

A few months back, I experimented with three different LLM chatbots. A simple request: Tell me about Carl Bussjaeger.
None of them were accurate, but ChatGPT took the cake: It told me that I died in WW2, then went on to become an award-winning novelist.
-blink-
A few days ago, something I read prompted me to hold my nose and go to Google to play with their AI. It told me that I wrote a trilogy which I -- Carl Bussjaeger myself -- call "Capitalist Space."
I never wrote a trilogy; I shelved the third novel before the first draft was completed. I never even heard the fictional universe called "Capitalist Space" by anyone, much less titled it that myself. To the extent I called it anything, it was merely the Net Assets universe. (Yes, there were also a couple of anthologies' worth of related short stories.)
I'm looking forward to someone putting an LLM in charge of a municipal water system. When it hallucinates appropriate chlorine and/or fluoride levels, or bacteria counts, people will die.
Then we can put one in charge of a regional power grid and enjoy the fireworks.
After that, with companies laying off workers and letting AI do their jobs, we can watch the *AI provider companies'* -- none of which are making any profit, and which are burning through millions to billions a year -- excrement show when the AIs are turned off and the customer companies race against bankruptcy to rehire the workers they screwed over.
It doesn't take much research into AI experiences to see this "hallucination" problem, and it seems that if the AI spent even 10% of its time checking whether its claims were real things in the source material, it wouldn't happen as often.
I see people talk about AI needing Asimov's three laws of robotics, but it just doesn't seem like enough to me. AI needs an ethical system, more like the 10 commandments than the three laws.
The 3 laws are a utopian pipe dream. The first implementations of robotics and AI are really happening in military applications. (The new Air Force hotness is the replacement for the A-10 Warthog, the A-11 Thunderhog, which will have some AI features.)
The problem with LLMs is that they do not understand the words True and False. They only know patterns that seem correct. There are people who don't believe you can fix that by doing "more of the same," and indeed some models have become less accurate as time goes by.
The last time I checked, the big LLMs couldn't accurately tell you whether or not a given number was prime, and the error rate was increasing. Wolfram-Alpha can tell you that.
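To illustrate the contrast: primality is an exact computation, and a few lines of ordinary code (a minimal trial-division sketch; the function name is mine, not from any of the tools mentioned) settle it deterministically, where an LLM is only pattern-matching on what an answer usually looks like.

```python
def is_prime(n: int) -> bool:
    """Return True if n is prime, by trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:
        return True  # 2 and 3 are prime
    if n % 2 == 0:
        return False
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

print(is_prime(97))  # True
print(is_prime(91))  # False: 91 = 7 * 13, a classic "looks prime" number
```

The point isn't that this algorithm is fast (symbolic engines use far better ones); it's that the answer comes from arithmetic that can be checked, not from statistical plausibility.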