When German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot's results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • Terrasque · 2 hours ago

    The fix is not that hard; it's just a matter of having the chatbot answer "I don't know" when its confidence in an answer isn't high enough.

    This has been tried; it's helping, but it's not enough by itself. It's one of the mitigation steps I was thinking of. And companies do work very hard to reduce hallucinations, just look at Microsoft's newest thing.

    From that article:

    “Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” said Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech. “It’s an essential component of how the technology works.”

    Text-generating models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on.

    It follows that a model’s responses aren’t answers, but merely predictions of how a question would be answered were it present in the training set. As a consequence, models tend to play fast and loose with the truth. One study found that OpenAI’s ChatGPT gets medical questions wrong half the time.
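
    As a rough illustration of that "predict the next word" point, here's a toy sketch in plain Python (a made-up mini corpus and a simple bigram counter, nothing like a real LLM's architecture). It just counts which word follows which in its "training data" and always emits the most common continuation, true or not:

    from collections import Counter, defaultdict

    # Toy "training data": the model only ever sees word sequences, never facts.
    corpus = (
        "the capital of france is paris . "
        "the capital of france is paris . "
        "the capital of norway is oslo . "
    ).split()

    # Count which word follows which word (a bigram table).
    following = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        following[word][next_word] += 1

    def predict_next(word):
        # Return the statistically most common continuation - no notion of truth involved.
        return following[word].most_common(1)[0][0]

    prompt = "the capital of australia is".split()
    prompt.append(predict_next(prompt[-1]))
    print(" ".join(prompt))  # -> "the capital of australia is paris", a confidently wrong completion

    The point isn't the toy model itself; it's that nothing in this pipeline ever consults a fact, and scaling the same idea up to billions of parameters doesn't change that.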

    • daniskarma@lemmy.dbzer0.com · 1 hour ago (edited)

      The hydrogen-from-water thing is simply wrong, if it's supposed to mean that hallucinations are an inseparable part of generative LLM technology that cannot be solved.

      They are not inherent to the technology. They are a product of a lack of control over the statistical output, of prioritizing giving any answer over giving no answer.

      As with any statistics, you have a confidence in how true something is based on your data. It's just a matter of putting the threshold higher or lower.

      If you ask an easy question like "What is the capital of France?" you won't ever get a hallucination, because all models will have that answer with very high confidence. You just have to make it so that if that level of confidence is not reached, the model defaults to an "I don't know" answer. But, once again, this will make the chatbots seem very dumb, as they will answer with lots of "I don't know".

      The problem here is the amount of data and the efficiency of the model. In order to get a usable general-purpose model with a confidence threshold high enough to not hallucinate, at today's model efficiency it would need to be a humongous model, too big and with too much training data even for big tech. So either we go that big, or we try to improve efficiency (which is proving very hard for general models), or we do both. Time will tell, but I'm quite confident that we will reach a general-use model without hallucinations sooner or later.
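
      To make that thresholding idea concrete, here's a rough sketch (pure Python with invented scores and an arbitrary 0.9 threshold; a real system would need well-calibrated probabilities from the model, which is the genuinely hard part): turn the model's scores for candidate answers into probabilities and fall back to "I don't know" when even the best one isn't confident enough.

      import math

      def softmax(logits):
          # Convert raw scores into probabilities that sum to 1.
          exps = [math.exp(x) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      def answer_with_threshold(candidates, logits, threshold=0.9):
          # Keep the best candidate only if the model is sufficiently sure; otherwise refuse.
          probs = softmax(logits)
          best = max(range(len(candidates)), key=lambda i: probs[i])
          if probs[best] < threshold:
              return "I don't know"
          return candidates[best]

      # Easy question: one answer dominates the distribution.
      print(answer_with_threshold(["Paris", "Lyon", "Marseille"], [9.0, 2.0, 1.0]))  # Paris

      # Murky question: the probability mass is spread out, so the bot declines to answer.
      print(answer_with_threshold(["1912", "1913", "1915"], [2.0, 1.9, 1.8]))  # I don't know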

      • jj4211@lemmy.world · 5 minutes ago

        This article is an example where statistical confidence doesn't help. The model has lots of data, so it likely has high confidence, but it had no understanding of the nature of the relationship in the data.

      • Terrasque · 30 minutes ago

        As with any statistics, you have a confidence in how true something is based on your data. It's just a matter of putting the threshold higher or lower.

        You just have to make it so that if that level of confidence is not reached, the model defaults to an "I don't know" answer. But, once again, this will make the chatbots seem very dumb, as they will answer with lots of "I don't know".

        I think you misunderstand how LLMs work. They don't have a confidence; it's not like the model looks at its data and says "hmm, yes, most sources say Paris is the capital of France, so that's the answer". It "just" puts weights on the next token depending on its internal statistics, then one of those tokens is picked, and the process starts anew.
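
        Roughly, the generation loop is just this (a bare-bones sketch with a fake_model stand-in and invented scores, not any real model or API): score every token, turn the scores into a weighted distribution, sample one, append it, and go again. Note that no step anywhere checks a fact or produces a statement-level confidence.

        import math
        import random

        vocab = ["Paris", "Lyon", "Berlin", "."]

        def fake_model(context):
            # Stand-in for the neural net: one raw score (logit) per vocabulary token.
            # A real model computes these from its weights and the context; these are invented.
            return [2.5, 0.4, 0.1, 1.0]

        def sample_next(context, temperature=0.8):
            logits = fake_model(context)
            weights = [math.exp(l / temperature) for l in logits]  # higher score -> more likely
            return random.choices(vocab, weights=weights, k=1)[0]  # pick one token, weighted

        context = ["The", "capital", "of", "France", "is"]
        for _ in range(3):
            context.append(sample_next(context))  # ...and the process starts anew
        print(" ".join(context))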

        Teaching the model to say "I don't know" helps a bit, and was lauded as "The Solution" a year or two ago, but it turned out it didn't really help that much. Then you got grounded approaches, RAG, CoT, and so on, all with the goal of making the LLM more reliable. None of them solves the problem because, as the PhD candidate said, it's inherent in how LLMs work.

        And no, local llm’s aren’t better, they’re actually much worse, and the big companies are throwing billions on trying to solve this. And no, it’s not because “that makes the llm look dumb” that they haven’t solved it.

        Early on I was looking into making a business of providing local AI to businesses, especially RAG. But no model I tried - even with the documents being part of the context - came close to being reliable enough. They all hallucinated too much. I still check on this now and then out of my own interest, and while it's become a lot better, it's still a big issue. Which is why you see it in the news again and again.
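
        For anyone unfamiliar, the RAG setup is basically this (a minimal sketch with a naive word-overlap retriever and made-up documents; real systems use embedding search and then send the prompt to an actual model): fetch the most relevant document, paste it into the prompt, and hope the model sticks to it - which, as said, it often doesn't.

        # Naive retrieval-augmented generation (RAG) sketch: build the grounded prompt only.
        documents = {
            "leave_policy.txt": "Employees get 25 days of paid leave per year.",
            "expense_policy.txt": "Expenses above 500 EUR need manager approval.",
        }

        def retrieve(question):
            # Toy retriever: pick the document sharing the most words with the question.
            # Real systems use embedding similarity instead of raw word overlap.
            words = set(question.lower().split())
            return max(documents.values(),
                       key=lambda text: len(words & set(text.lower().split())))

        def build_prompt(question):
            context = retrieve(question)
            return ("Answer using ONLY the document below. "
                    "If the answer is not in it, say \"I don't know\".\n\n"
                    f"Document: {context}\n\nQuestion: {question}")

        print(build_prompt("How many days of paid leave do employees get?"))
        # This prompt then goes to the LLM - which can still ignore the document and hallucinate.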

        This is the single biggest hurdle for the big companies in turning their AIs from a curiosity that assists a human into a full-fledged autonomous / knowledge system they can sell to customers, so you can bet your dangleberries they're trying everything they can to solve this.

        And if you think you have the solution that every researcher, developer, and machine learning engineer has missed, then please go prove it and collect some fat checks.