Large Language Models are pathological liars in a box.
Unlicensed law clerk fired after ChatGPT hallucinations found in filing - Ars Technica
Last month, a recent law school graduate lost his job after using ChatGPT to help draft a court filing that ended up being riddled with errors.
The consequences arrived when a court in Utah ordered sanctions: the filing included the first fake citation hallucinated by artificial intelligence ever discovered in the state.
Also problematic: the Utah court found that the filing included "multiple" mis-cited cases, in addition to "at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT)."
So much for that education he paid tens (hundreds?) of thousands of dollars for.
Oh, and it was Pixy Misa at Ambient Irony who called the LLM "a pathological liar in a box."
Language models like ChatGPT are just that: language models. They are not truth models or fact models. They really only know when the language sounds right.
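To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the small GPT-2 model; the case citation below is invented purely for illustration, like the ones in that filing). A language model will happily score a fabricated citation as perfectly good language, because fluency is the only thing it measures.

```python
# A minimal sketch: a language model scores how *fluent* text is,
# not how *true* it is. Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = the model finds the text more 'language-like'."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean cross-entropy per token.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# A made-up citation (hypothetical case, chosen for this example) vs. word salad:
fake_citation = "See Smith v. Jones, 842 F.3d 1017 (10th Cir. 2016)."
gibberish = "Cir. v. 1017 Jones F.3d See, (10th. 842 Smith 2016)."

print(perplexity(fake_citation))  # low: reads like fluent legal boilerplate
print(perplexity(gibberish))      # high: same words, broken language
# Nothing in either score checks whether the case actually exists.
```

The fabricated citation gets a good score and the shuffled version gets a terrible one, even though both are equally false. That's the whole problem in two numbers: the model is grading grammar, not facts.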