I asked ChatGPT about cognitive dissonance in the context of climate change. I thought the answer was interesting.

    • cerement@slrpnk.net · 1 year ago

      another metaphor to throw into the mix – “it’s just a fancy version of the predictive text on your phone”

    • Solar Bear@slrpnk.net · 1 year ago

      LLMs can be really good at answering simple to moderate questions on a subject if you prime them with a bunch of relevant material first or, even better, finetune them on that material if you have the time, compute, and access to do so. There’s a really good case to make for using them in search. But general models are pretty spotty at answering anything but the most basic questions on the most common topics.
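To make the "priming" idea concrete, here's a minimal sketch (the function name and prompt wording are my own, and the actual LLM call is omitted): you stuff trusted passages into the prompt ahead of the question, so the model answers from the supplied text rather than from its weights alone.

```python
# Sketch of priming a general model with relevant material: assemble a
# context-stuffed prompt from trusted passages before asking the question.
# The resulting string could be sent to any chat-style LLM API (that call
# is deliberately left out here).

def build_primed_prompt(passages, question):
    """Build a prompt that asks the model to answer only from the given passages."""
    context = "\n\n".join(
        f"[Source {i + 1}] {text}" for i, text in enumerate(passages)
    )
    return (
        "Answer using ONLY the sources below. "
        "Cite the source number for each claim.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```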

      • silence7@slrpnk.netM · 1 year ago

        E.g., LLMs are good at answering simple to moderate questions if you first do the work of figuring out what the truthful, relevant material is. But you can’t really do that if you don’t already have the background knowledge to write the answer yourself.

        • Solar Bear@slrpnk.net · 1 year ago

          Well, yes and no. You don’t have to be an expert in a subject to be able to find trustworthy sources. And sometimes all you’re really lacking is just the correct vocabulary to describe a concept.

          You can feed the model known, trusted info, have it process that info looking for specific things, and then use the output to reference back to the source material and find exactly where it came from. That could dramatically cut the time spent looking things up, without having to hunt for exact keywords, as long as it’s used correctly and verified afterwards. It’s a tool to speed things up, not a replacement for a human brain.
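A toy sketch of that verify-against-source workflow (all names here are my own): search known, trusted documents for a query and return matching passages with their locations, so each result can be traced back to the source and checked by a human. Plain keyword overlap stands in for the language model, purely for brevity; the point is the traceability, not the scoring method.

```python
import re

def _words(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def find_passages(documents, query, top_k=2):
    """documents: {source_name: text}. Returns (location, passage, score)
    tuples so every hit points back to where it came from."""
    terms = _words(query)
    hits = []
    for source, text in documents.items():
        # Treat blank-line-separated paragraphs as passages.
        for idx, passage in enumerate(text.split("\n\n")):
            score = len(terms & _words(passage))
            if score:
                hits.append((f"{source}#para{idx + 1}", passage, score))
    hits.sort(key=lambda h: -h[2])
    return hits[:top_k]
```

The returned `source#paraN` location is what makes the verification step possible: you go back to that exact paragraph instead of trusting the summary.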

          Unfortunately, people keep trying to use it as exactly that. And it doesn’t help that the people leading the hype are trying to sell it as exactly that.

          • silence7@slrpnk.netM · 1 year ago

            If you don’t have basic competence in a subject, you can’t really tell which sources are reliable. And since everybody starts out ignorant, there are always going to be a lot of people who can’t do that.

            You need to bootstrap with some basic knowledge and then have an epistemic model in order to get anywhere, and that’s pretty much impossible with the current generation of machine learning tools.

            • Solar Bear@slrpnk.net · 1 year ago

              That’s not true, though. If it were, nobody would be able to reliably learn something new without direct instruction from an experienced person. Basic media literacy is often all you need to get started.

              • silence7@slrpnk.netM · 1 year ago

                A huge chunk of the population doesn’t have basic media literacy, though, and for that population, trying to use an LLM probably makes things worse.