A new study from the Columbia Journalism Review found that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok, and Google’s Gemini, are wrong far too often.

  • TommySoda@lemmy.world · 13 days ago

    I miss the days when Google would just give a snippet of a Wikipedia article at the top and you’d click the “read more” button. It may not have been exactly what you were looking for, but at least it wasn’t blatantly wrong. Nowadays you almost have to scroll to the bottom just to find something relevant.

    • bdullanw@lemm.ee · 13 days ago

      i almost think this is getting worse as the internet grows. there’s so much more information out there now, and it’s easier and easier to push content further. i’m not surprised it’s more and more difficult to filter through the bs

      • njordomir@lemmy.world · 13 days ago

        To add to this: while there is “more information,” that information is increasingly locked down and unsearchable. Things that used to be easy to find are now hidden in the walled gardens of sites like Facebook, X (formerly Twitter), etc. Google Search and similar engines basically only search ads now, as everything else is locked down. It’s an internet full of data… that we can’t easily access.

  • criitz@reddthat.com · 13 days ago

    When LLMs are wrong they are only confidently wrong. They don’t know any other way to be wrong.

    • 4am@lemm.ee · 13 days ago

      They do not know right from wrong; they only know the probability of the next word (see the toy sketch below).

      LLMs are a brute-forced imitation of intelligence. They do not think, and they are not intelligent.

      But I mean, people today believe that 5G vaccines made the frogs gay.
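
      A toy illustration of that point, with completely invented probabilities (this is not any real model’s code): the model’s only primitive is “pick a likely next token,” so a wrong continuation comes out exactly as confidently as a right one.

      ```python
      # Toy sketch with invented numbers: an LLM's only operation is choosing
      # a probable next token -- "true" and "false" never enter the picture.
      import random

      # Hypothetical probabilities after the prompt "The capital of France is"
      next_word_probs = {"Paris": 0.92, "Lyon": 0.05, "London": 0.03}

      words = list(next_word_probs)
      weights = list(next_word_probs.values())

      # Sampling still emits "London" ~3% of the time, stated just as confidently.
      print(random.choices(words, weights=weights, k=1)[0])
      ```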

    • kubica@fedia.io · 13 days ago

      We only notice when they are wrong, but they can also be right just by accident.

    • Imgonnatrythis@sh.itjust.works · 13 days ago

      This does seem to be exactly the problem. It is solvable, but I haven’t seen any that do it. They should be able to calculate a confidence value based on the number of corroborating sources, a quality ranking of those sources, and how much interpolation of data is being done vs. straightforward regurgitation of facts (rough sketch below).
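
      A minimal sketch of the heuristic described above, with made-up weights and inputs (`source_count`, `avg_source_quality`, and `interpolation_fraction` are illustrative names, not any shipped API):

      ```python
      # Hypothetical heuristic, invented weights: score confidence from how many
      # sources agree, how reputable they are, and how much the answer
      # interpolates beyond what the sources literally say.
      def confidence_score(source_count: int,
                           avg_source_quality: float,     # 0.0-1.0 reputation score
                           interpolation_fraction: float  # 0.0 = quoted, 1.0 = pure guess
                           ) -> float:
          agreement = min(source_count, 5) / 5  # saturate past 5 corroborating sources
          grounding = 1.0 - interpolation_fraction
          return round(agreement * avg_source_quality * grounding, 2)

      # A well-sourced, mostly quoted claim vs. a single-source guess:
      print(confidence_score(5, 0.9, 0.1))  # 0.81 -> safe to display
      print(confidence_score(1, 0.4, 0.7))  # 0.02 -> flag as low confidence
      ```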

      • TaviRider@reddthat.com · 13 days ago

        I haven’t seen any evidence that this is solvable. You can feed in more training data, but that doesn’t mean generative AI technology is capable of using that in the way you describe.

      • xthexder@l.sw0.com · 13 days ago

        I’ve been saying this for a while. They need to train it to be able to say “I don’t know.” They need to add questions to the dataset that don’t contain enough information to solve, so the model can learn the difference between stating facts and hallucinating (see the sketch below).
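
        A sketch of what such training examples might look like; the record format here is invented for illustration, not drawn from any actual fine-tuning pipeline:

        ```python
        # Invented example format: unanswerable prompts paired with explicit
        # refusals, mixed with answerable ones, so "I don't know" becomes a
        # learnable output rather than a failure mode.
        unanswerable = [
            {"prompt": "What number am I thinking of?",
             "completion": "I don't know. The question doesn't contain enough information."},
            {"prompt": "What will this stock's closing price be next year?",
             "completion": "I don't know. That can't be determined from available facts."},
        ]

        answerable = [
            {"prompt": "What is 2 + 2?", "completion": "4"},
        ]

        # The fine-tuning mix deliberately interleaves both kinds of question.
        for example in answerable + unanswerable:
            print(example["prompt"], "->", example["completion"])
        ```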

    • SlopppyEngineer@lemmy.world · 13 days ago

      They are, in the end, BS-generation machines that have been trained so much they accidentally happen to be right often enough.

  • venotic@kbin.melroy.org · 13 days ago

    Then again, search engines themselves have been shown to be wrong, inaccurate, and just plain irrelevant. I’ve asked Google questions about things I wanted to know about my state, out of curiosity, and its results always pull up different states that don’t apply to mine.

    • TheFogan@programming.dev · 13 days ago

      Well, that’s common, but the big thing is, you can see what you’re working with. There’s a big difference in at least knowing you need to try a different site. Say you search:

      Google: Law about X in state1

      Top result: Law about X in state3: It’s illegal

      Result 2 pages in: here’s a list of each state and whether law X is legal there… (state1: legal)

      Versus ChatGPT:

      Is X legal in state1?

      ChatGPT: No

    • catloaf@lemm.ee · 13 days ago

      Yeah, because you’re not supposed to ask search engines questions; you’re supposed to use keywords.

  • barraformat@lemm.ee · 13 days ago

    Always ask the AI for sources and validate them. You can also ask the AI to use only certain sources of your choosing. Never take those answers blindly (a simple first-pass check is sketched below).
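
    As a first pass on “validate them,” you can at least confirm mechanically that the cited URLs resolve. A hedged sketch (resolving is necessary but nowhere near sufficient, and the URL here is a placeholder):

    ```python
    # Minimal sketch: check that URLs an AI cited actually resolve.
    # A 200 only proves the page exists -- you still have to read it.
    import urllib.request

    cited_urls = ["https://example.com/"]  # placeholder; use the AI's citations

    for url in cited_urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(url, "->", resp.status)
        except Exception as err:
            print(url, "-> FAILED:", err)
    ```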

  • Riddick3001@lemmy.world · 13 days ago

    Nah, it’s just the ghost in the machine.

    Tip: always add a “True” string to the algorithm(s)