01 June 2023

AI Can't Tell Fact from Fiction

ChatGPT and the rest are language models, NOT truth models. AI Makes S**t Up | Power Line

A guy who writes about the governors of South Dakota asked ChatGPT to write something on the subject. It came up with Chet Taylor: dates in office, alma mater, etc.

Remarkably, however, Crawford H. “Chet” Taylor is entirely a figment of ChatGPT’s imagination.

Not only was he never governor of South Dakota, he never existed.

That isn't the only instance. It seems some lawyers thought using ChatGPT to write a brief was a good idea.

Some lawyers in New York relied on AI, in the form of ChatGPT, to help them write a brief opposing a motion to dismiss based on the statute of limitations. Chat GPT made up cases, complete with quotes and citations, to support the lawyers’ position. The presiding judge was not amused.

Hat tip to Pixy Misa at Ambient Irony - Daily News Stuff 30 May 2023: Euphemistic Eucalypt Edition

And Chet Taylor looks good too. He's just not real. And since ChatGPT can't sustain a hallucination long enough to form a coherent short story, only for a couple of paragraphs, it's utterly valueless.
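
To make the "language model, not truth model" point concrete, here is a minimal toy sketch in Python. It is nothing like ChatGPT's actual architecture or training data, and the mini corpus, the names, and the generate() helper are all made up for illustration; but the failure mode is the same in spirit: the model is rewarded for producing a likely next word, never for producing a true one.

```python
import random
from collections import defaultdict

# Toy sketch only -- not ChatGPT's implementation. A language model learns
# which words tend to follow which and samples accordingly; nothing in that
# process checks whether the output is true.

# Made-up mini "corpus". The fictional governor is planted right alongside
# a real one (Bill Janklow); the model has no way to tell them apart.
corpus = (
    "bill janklow was governor of south dakota . "
    "chet taylor was governor of south dakota . "
    "bill janklow attended the university of south dakota . "
    "chet taylor attended a small midwestern college ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, max_words=12):
    """Produce a fluent-sounding continuation, one likely word at a time."""
    word, out = start, [start]
    for _ in range(max_words):
        if word not in follows:
            break
        word = random.choice(follows[word])  # chosen by frequency, not by truth
        out.append(word)
    return " ".join(out)

print(generate("chet"))
# e.g. "chet taylor was governor of south dakota ." -- grammatical, confident,
# and false, because "plausible continuation" is the only thing being optimized.
```

Scale that idea up by a few hundred billion parameters and you get text that reads like an encyclopedia entry, with exactly the same indifference to whether Chet Taylor ever drew breath.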

Nvidia still has a valuation greater than AMD and Intel combined, but at least one corner of that market cap is built on sand.

3 comments:

  1. People can't tell fact from fiction, by and large. They also lie, make stuff up, and have biases. Why does it seem to surprise people that an AI trained on human output, and designed to function like a human mind, DOES the same?

    Everything that an AI does is 'made up' for values of made up, since it has no way to discriminate good input from bad. No one ever smacked it for lying, or using swear words, or talking bad about someone. It's like a clever toddler, completely self-centered and amoral.

    I also think the denunciations and comments like "so it's utterly valueless" are REALLY REALLY wrong. Like IBM's Thomas Watson saying he thought the worldwide market for computers was "5 at most" wrong. Hard to see where the journey ends when you've only taken the first step.

    nick

    Replies
    1. People asked to create biographies of people who were governor of a particular state don't usually make shit up, including fake college experience. Yes, people lie, but they usually understand that they ARE lying. AI doesn't understand the difference between true and false.

  2. And yet we have numerous examples in the press of people doing exactly that: fabricating jobs on their resumes, claiming an ethnicity they are not, or affecting an accent they don't actually have, including some prominent politicians... who don't seem to know they are lying.

    We think, and hope, that AI isn't self-aware, but without self-awareness you can't have the sort of self-judgements that let a person make a moral judgement. And without life experience, not just reading about stuff, it is hard to tell fact from fiction. My 12yo is very widely read, yet gets really basic things wrong because she just doesn't have the experience to know that they aren't possible, or real.

    It's very early days for real AI, and like any new technology, a degree of caution seems prudent. And since it's computer-based, history tells us that at some point the curve will go exponential. Could we have predicted quant-based financial strategies, and the algos front-running the market, from knowing about the ballistic calculating machines of WWII? Or a tweet of a fake photo causing market disruption, like we saw in the last week?

    It's not something that keeps me awake at night, but I've read enough scifi to know it might not be all skittles and beer when the machines take the form of a mind.
    n

