• casmael@lemm.ee · 30 points · 8 months ago

      Wow it’s so realistic and smart and easy to use I can feel my knowledge being revolutionised

    • A_Very_Big_Fan@lemmy.worldM · 5 points · 8 months ago

      Tbf I’m sure this is an unpaid version of some online LLM, you can only expect so much lol.

      When I use GPT3.5 for things like finding specific quotes from famous books, it’s excellent… but asking it to play chess gives you blatantly illegal moves. Then GPT4 kicks my ass in chess.

  • huntrss@feddit.de · 100 points · 8 months ago

    It’s so human how - instead of admitting its error - it’s pulling this bs right out of its ass 🤣

      • Duranie@literature.cafe · 28 points · 8 months ago

        Growing up in an environment where mistakes were unacceptable sets the stage. Our willingness and ability to understand that that’s fucked up and change our attitudes about mistakes takes more growth.

        For some people it’s easier to dig in their heels and double down.

        • darthfabulous42069@lemm.ee · 11 points · 8 months ago

          🤔🤔🤔 I guess I can empathize. People are always traumatized by whatever their parents tell them. What a shame.

    • fidodo@lemmy.world · 16 points · 8 months ago

      More like large guessing models. They have no thought process, they just produce words.

      • TotallynotJessica@lemmy.world · 14 points · 8 months ago

        They don’t even guess. Guessing would imply them understanding what you’re talking about. They only think about the language, not the concepts. It’s the practical embodiment of the Chinese room thought experiment. They generate a response based on the symbols, but not the ideas the symbols represent.

        • fidodo@lemmy.world · 7 points · 8 months ago

          I’m equating probability with guessing here, but yes there is a nuanced difference.

  • megopie@lemmy.blahaj.zone · 32 points · 8 months ago

    Yah, people don’t seem to get that LLMs cannot consider the meaning or logic of the answers they give. They’re just assembling bits of language in the patterns most likely to come next based on their training data.

    The technology of LLMs is fundamentally incapable of weighing choices or doing critical thinking. Maybe new types of models will be able to do that, but those models don’t exist yet.
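    A toy sketch of what “likely to come next” means in practice; the vocabulary and probabilities here are made up purely for illustration:

    ```python
    import random

    # Made-up next-token probabilities after a prompt like "the cat sat on the"
    next_token_probs = {"mat": 0.6, "floor": 0.25, "moon": 0.15}

    def sample_next_token(probs):
        # Weighted random pick: pure likelihood, no notion of meaning or logic
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights)[0]

    print(sample_next_token(next_token_probs))
    ```

    Real models do this over tens of thousands of tokens with learned probabilities, but the mechanism is the same: pick a plausible continuation, not a reasoned answer.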

    • CurlyMoustache@lemmy.world · 13 points · 8 months ago

      A grown man I work with, in his 50s, tells me he asks ChatGPT stuff all the time, and I can’t for the life of me figure out why. It’s a copycat designed to beat the Turing test. It’s not a search engine or Wikipedia; it just gambles that it can pass the Turing test after every prompt you give it.

      • megopie@lemmy.blahaj.zone · 6 points · 8 months ago

        People want functioning web search back, but rather than address the industry problems that broke an otherwise functional concept, they want a fancy new technology to make the problem go away.

      • qGuevon@lemmy.world · 1 point · 8 months ago

        It works well if you know what to use it for. Ever had something you wanted to Google, but couldn’t figure out the keywords? Ever seen someone use a specific technique that you could describe, but would never find unless someone on a forum had asked the same question? That’s where ChatGPT shines.

        Also for code it’s pretty sweet

        But yeah, it’s not a wiki or a hard-knowledge retriever; it might help connect the dots, though.

    • fidodo@lemmy.world · 2 points · 8 months ago

      There are already techniques that make these kinds of errors less common. For example, you can ask it to think through its answer step by step from first principles. Ask an LLM to do that and it will write out the letters line by line, which puts enough context in the window to improve the probability of a correct answer. You can even ask it to write a program to answer the question, so it could knock out a quick script to count the letters programmatically.

      The main reason you don’t see AIs doing this today is that producing all that extra context is slow and expensive, and it’s unnecessary for most prompts. As the technology gets faster and cheaper and the use cases get more complex, these techniques will be used more and more often.

      While the technology does have fundamental flaws, that doesn’t mean there aren’t ways to work around those flaws instead of relying on the raw output.
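      For the letter-counting case in this thread, the “write a program” route is a one-liner. Using “mayonnaise” as a stand-in, since the original prompt isn’t quoted here:

      ```python
      # Count the letter directly instead of asking the model to eyeball it
      word = "mayonnaise"  # stand-in example word
      print(word.count("n"))  # prints 2
      ```

      Code-interpreter-style setups do essentially this: the model writes the script, something deterministic runs it, and the guesswork drops out of the answer.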

  • Miss Brainfarts@lemmy.blahaj.zone · 31 points · 8 months ago

    The funniest thing is that even when the answer is correct, asking an LLM to explain its reasoning step by step can produce the dumbest results

  • Wilzax@lemmy.world · 21 points · 8 months ago

    The letter n appears twice in the letter m. The count is correct, the reasoning is not