• QHC@lemmy.world
    11 months ago

It’s just a natural extension of the principle that creators have some kind of ownership of their creations and thus some say over how they’re used. We already apply this to humans and human-run organizations, so why wouldn’t a program have to follow the same rules?

    • FaceDeer@kbin.social
      11 months ago

      Because we don’t already do this. In fact, the raw knowledge contained in a copyrighted work is explicitly not copyrighted and can be used however people please. Only the specific expression of that knowledge can be copyrighted.

      An AI model doesn’t contain the copyrighted works that went into training it. It only contains the concepts that were learned from them.

      • BURN@lemmy.world
        11 months ago

        There’s no learning of concepts. That’s why models hallucinate so frequently: they don’t “know” anything. They’re doing a lot of math over what they’ve seen before and essentially making a best guess at what the next word should be.
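        The “best guess at the next word” idea can be sketched with a toy bigram model. This is a deliberate simplification for illustration, not how a real LLM works internally (those use neural networks over token embeddings), and all names here are made up:

        ```python
        from collections import Counter, defaultdict

        # Toy next-word predictor: count which word follows which in the
        # training text, then always guess the most frequent follower.
        # Real models rank probabilities over tokens in a loosely
        # analogous way, just with vastly more sophisticated math.
        def train_bigrams(text):
            counts = defaultdict(Counter)
            words = text.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
            return counts

        def predict_next(counts, word):
            if word not in counts:
                return None  # nothing seen before: no basis for a guess
            return counts[word].most_common(1)[0][0]

        model = train_bigrams("the cat sat on the mat the cat ran")
        print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
        ```

        The point of the sketch is that the predictor only ever replays statistics of its training text; whether that amounts to “learning concepts” is exactly what’s being argued here.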

        • MickeySwitcherooney@lemmy.dbzer0.com
          11 months ago

          There very much is learning of concepts, and it’s demonstrable: you can give a model problems it has never seen before and it will come up with good solutions.