• chaogomu@kbin.social · 1 year ago

    The thing is, the LLM doesn’t actually know anything, and lies about it.

    So you go to How Stuff Works now and you get bullshit lies instead of real information. You’ll also get nonsense that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topics midway through and doesn’t correct itself. It can’t correct itself; it doesn’t actually know what it’s saying.

    See, these language models are pre-trained; that’s the P in ChatGPT (Generative Pre-trained Transformer). They just regurgitate the training data, put together in ways that sort of look like more of the same training data.
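
    As a toy sketch of that idea (a bigram model; nowhere near a real transformer, and the tiny corpus here is made up, but it shows the same “predict the next token from training statistics” mechanism):

    ```python
    import random
    from collections import defaultdict

    # Toy "pre-training": count which word follows which in the training text.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    followers = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        followers[a].append(b)

    # "Generation": repeatedly sample a statistically plausible next word.
    # No meaning, no world model, just patterns lifted from the training data.
    word, out = "the", ["the"]
    for _ in range(10):
        options = followers.get(word)
        word = random.choice(options) if options else random.choice(corpus)
        out.append(word)

    print(" ".join(out))  # e.g. "the cat sat on the rug the dog sat on the mat"
    ```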

    There are some hard-coded filters and responses, but other than that, nope, just a spew of garbage out from the random garbage in.

    And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.

    • Nonameuser678@aussie.zone · 1 year ago

      I’ve graded papers from students who obviously used ChatGPT to write them. They were a pass at best: zero critical synthesis of ideas and no application of them to the topic. I’m sure ChatGPT has its uses, but people really overhype its writing ability. There’s more to writing than putting words in the right places.

        • And009@reddthat.com · 1 year ago

        It could become an AI’s sport once we actually have a general-purpose AI. Based on what the people working on LLMs and GPT say, that would take anywhere between six years and never.

        It’s not easy to create a super AI that’s realistically smarter than humans in every aspect.

    • Blackmist@feddit.uk · 1 year ago

      Literally predictive text but for whole articles.

      It doesn’t know the limits of its knowledge, or indeed know anything. It just “knows” what an answer smells like. It even “knows” what excuses are supposed to look like when you call it out.
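
      A minimal sketch of that “smells like an answer” behaviour (everything here is invented for illustration; a real model scores continuations over a learned vocabulary, not a hard-coded list):

      ```python
      # Toy "predictive text": pick whichever canned continuation overlaps
      # most with the prompt. There is no "I don't know" branch; something
      # always smells most like an answer, so something is always emitted.
      def complete(prompt: str) -> str:
          continuations = [
              "the answer is yes, according to experts",
              "studies show this has always been the case",
              "sorry for the confusion, here is a corrected answer",
          ]
          words = set(prompt.lower().split())
          return max(continuations, key=lambda c: len(words & set(c.split())))

      print(complete("is the answer yes"))  # confidently "answers"
      print(complete("qwzx blorp"))         # still "answers"; it never abstains
      ```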

    • sugar_in_your_tea@sh.itjust.works · 1 year ago

      Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.

      Some areas I can see it being really useful are:

      • generating believable text - scams, placeholder text, and general structure
      • distilling existing information - especially if it can actually cite sources, but even then I’d take it with a grain of salt
      • trolling people/deep fakes

      That’s about it.

    • gapbetweenus@feddit.de · 1 year ago

      The thing is, the LLM doesn’t actually know anything, and lies about it.

      Just like your average human journalist. If you’ve ever read an article in a non-specialist outlet on a topic you’re familiar with, you know. This actually seems to be where LLMs are very similar to how the human brain works: if we don’t know something, we come up with some bullshit.

      • Ech@lemm.ee · 1 year ago

        Even mediocre human writers can comprehend their work as a whole, though. There is a cohesiveness even to the bullshit. The LLM is just putting down words that match the prompt. It’s RNG-driven, readable Lorem Ipsum.
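
        The “RNG-driven” part is literal: the last step of generating each word is a pseudo-random draw. A rough sketch (the word list and weights are invented; in a real model the probabilities come from the network, but the sampling step looks like this):

        ```python
        import random

        # Invented next-word distribution for some context; a real LLM
        # computes these probabilities from the prompt and its training.
        next_word_probs = {"growth": 0.4, "decline": 0.3, "stability": 0.3}

        def sample_next(seed: int) -> str:
            rng = random.Random(seed)
            words = list(next_word_probs)
            weights = list(next_word_probs.values())
            return rng.choices(words, weights=weights, k=1)[0]

        # Different seeds can pick different words; each output reads fine,
        # and none of them is checked against anything.
        print(sample_next(1), sample_next(2), sample_next(3))
        ```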

        If the results were still edited afterwards, there might be some merit to the output, but any company going full LLM isn’t looking for quality. They want to use it to churn out endless content that they simply can’t get from even a team of humans; more than could be edited even if they kept editors on staff.

        • gapbetweenus@feddit.de · 1 year ago

          Even mediocre human writers can comprehend their work as a whole, though

          Sure, but a lot of humans are rather bad writers.

          but any company going full LLM isn’t looking for quality.

          That is true for the 24-hour news cycle of online media, LLMs or not.

          • Ech@lemm.ee · 1 year ago

            Sure, but a lot of humans are rather bad writers.

            Bad writing is still a step above RNG junk, imo.

            but any company going full LLM isn’t looking for quality.

            That is true for the 24-hour news cycle of online media, LLMs or not.

            Yes, that was my point. Setting up your company to put out more content than can possibly be processed by humans is a glaring sign of their values - i.e. quantity far above quality.

            • gapbetweenus@feddit.de · 1 year ago

              Bad writing is still a step above RNG junk, imo.

              I’ve read writing worse than GPT’s. I had to help someone write an essay - and in the end I just wrote it for him, because he absolutely lacked the skills to write a long, meaningful text. And at the same time, he’s a genius of a percussionist.

              • Ech@lemm.ee · 1 year ago

                Do you think that person was signing up for jobs writing for blogs or content farms?

                • gapbetweenus@feddit.de · 1 year ago

                  Have you read some low-quality journalism? The whole yellow press could be replaced with GPT and no one would ever see the difference.

                    • Ech@lemm.ee · 1 year ago

                    Ok, so do you wanna talk about your terrible writing partner in school? Or “yellow press”? Or maybe the topic of the article, which isn’t journalism in the slightest? Or how about my point, which was, again, that even bad writers have context, as opposed to an LLM which is just filling in the arbitrary patterns it’s programmed to delineate. Readability is not what I’m talking about.

      • tryptaminev 🇵🇸 🇺🇦 🇪🇺@feddit.de · 1 year ago

        So modern journalists were redundant all along?

        But yeah, the quality of what passes for journalism now is often ridiculous. The only way to combat this is to have editors who are knowledgeable about the topics, but it seems editors were the first people laid off when internet articles became a thing.

        • gapbetweenus@feddit.de · 1 year ago

          So modern journalists were redundant all along?

          The 24-hour news cycle of online media creates junk journalism on a new level. Good journalism needs time and can’t spit out news articles every minute of the day. Editors won’t help, because it’s just not possible to do good journalism at that scale. But yeah, in general with AI, the jobs will shift more toward editing. Which will be extremely soul-draining: wading through tons of AI-generated bullshit.