In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.

    • Big P@feddit.uk

      You wouldn’t be saying that if it was your content that was being ripped off

        • Niello@kbin.social

If you read copyrighted material without paying, and a month later you've forgotten most of it and have only a vague recollection of what you read, the fact is you still accessed and used the copyrighted material without paying.

Now let's go a step further: you write something inspired by that copyrighted material, and what you wrote becomes successful to some degree, with eyes on it, but you refuse to admit that's where you got the idea because you only have a vague recollection. The fact is you got the idea from the copyrighted material.

            • Niello@kbin.social

Except for the part about illegally obtaining the copyrighted material, which is the main point. And definitely not on this scale.

        • Kichae@kbin.social

          That’s, uh, exactly how they work? They need large amounts of training data, and that data isn’t being generated in house.

          It’s being stolen, scraped from the internet.

          • Chozo@kbin.social

            If it was publicly available on the internet, then it wasn’t stolen. OpenAI hasn’t been hacking into restricted content that isn’t meant for public consumption. You’re allowed to download anything you see online (technically, if you’re seeing it, you’ve already downloaded it). And you’re allowed to study anything you see online. Even for personal use. Even for profit. Taking inspiration from something isn’t a crime. That’s allowed. If it wasn’t, the internet wouldn’t function at a fundamental level.

            • HeartyBeast@kbin.social

              I don’t think you understand how copyright works. Something appearing on the internet doesn’t give you automatic full commercial rights to it.

              • Chozo@kbin.social

                An AI has just as much right to web scrape as you do. It’s not a violation of copyright to do so.

    • Ferk@kbin.social

Note that what the EU is requesting is for OpenAI to disclose information. Nobody is saying (yet?) that they can't use copyrighted material; what they are asking is for OpenAI to be transparent by sharing the training method and what material is being used.

      The problem seems to be that OpenAI doesn’t want to be “Open” anymore.

      In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

Of course, openly disclosing what materials are being used for training might leave them open to lawsuits. But whether or not it's legal to use copyrighted material for training is still up in the air, so it's a risk either way, whether they disclose it or not.

    • PabloDiscobar@kbin.social

      Your first comment and it is to support OpenAI.

      edit:

Haaaa, OpenAI, this famous hippie-led, non-profit firm.

      2015–2018: Non-profit beginnings

      2019: Transition from non-profit

      Funded by Musk and Amazon. The friends of humanity.

      • Chozo@kbin.social

        With replies like this, it’s no wonder he was hesitant to post in the first place.

        There’s no need for the hostility and finger pointing.

  • chemical_cutthroat@kbin.social

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright?

    Now, if the information they have used is locked behind paywalls and was obtained illegally, then sure, fire ze missiles. But if it is readily accessible and not being reprinted wholesale by the AI, then it doesn't seem that different from any of the other millions of ways we use data in everyday life. Just because a machine learned it instead of a human doesn't make it inherently wrong.

    I am very open to discussion on this, and if anyone has a counter-argument, I'd love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

    Edit: I asked GPT-4 what it thought about this, and here is what it said:

    As an AI developed by OpenAI, I don’t access any copyrighted databases, subscription sites, confidential information, or proprietary databases for my learning or response generation. I was trained on a diverse range of internet text, which includes publicly available data in multiple languages and formats. The training also involves learning from large-scale datasets that contain parts of many kinds of texts (books, websites, scientific articles, etc.). However, I don’t know specifics about which documents were in my training set or have access to any specific documents or sources.

    Your question seems to pertain to how copyright law applies to AI. This is a complex and evolving area of law and policy, largely because AI systems like me do not “read” or “use” material in the same way a human would. A human might read a book and then consciously recall information from it when writing or speaking. In contrast, AI models do not have consciousness or awareness, they analyze statistical patterns in the data they were trained on and use these patterns to generate responses.

    When a human uses copyrighted material to create something new, the law often considers this to be a derivative work. It is typically necessary to obtain permission from the copyright holder to create derivative works. However, AI doesn’t use or recall copyrighted material directly; instead, it learns patterns from a huge amount of text data to predict or generate text based on a given prompt.
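    The "learns patterns to predict text" idea above can be illustrated with a deliberately tiny sketch: a bigram model that just counts which word tends to follow which, then predicts the most frequent successor. This is an illustrative toy only; real LLMs use neural networks trained on vastly more data, not raw counts.

    ```python
    # Toy sketch of "learning statistical patterns from text":
    # count word-to-next-word frequencies, then predict the most
    # likely next word. Illustrative only, not how real LLMs work.
    from collections import Counter, defaultdict


    def train_bigrams(corpus):
        """Count, for each word, how often each following word appears."""
        words = corpus.split()
        follows = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
        return follows


    def predict_next(follows, word):
        """Return the statistically most likely next word, if any."""
        if word not in follows:
            return None
        return follows[word].most_common(1)[0][0]


    model = train_bigrams("the cat sat on the mat and the cat slept")
    print(predict_next(model, "the"))  # "cat" follows "the" most often
    ```

    The point of the toy: the model never stores or reproduces the source text as a document; it keeps aggregate statistics and generates from those, which is roughly the distinction the quoted answer is drawing.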

    Dissemination of copyrighted material, such as through forum posts or Q&A platforms, without proper licensing or outside the bounds of fair use, can be a violation of copyright law. AI doesn't 'disseminate' copyrighted material; it generates responses based on patterns it has learned.

    That said, the legal interpretation of how copyright law applies to machine learning and AI is still evolving, and there can be a lot of complexities and nuances. Therefore, the information here should not be taken as legal advice.

    As of my last update in September 2021, this was the case. However, I recommend checking the most recent legal standpoints and opinions, as this area is still evolving rapidly.

  • stravanasu@lemmy.ca

I think it's a basic requirement that the data upon which a large language model is trained be publicly disclosed. It's the same as requiring ingredients to be listed on packaged food, or knowing where your lawyer got their degree. You want to know where what you're using comes from.

  • LegendOfZelda@kbin.social

    I disagree with the “they’re violating copyright by training on our stuff” argument, but I’ve turned against generative AI because now automation is taking art from us, and we’re still slaving away at work, when automation was supposed to free up time for us to pursue art.

  • bedrooms@kbin.social

    Read the whole thing. The reason OpenAI is opposing the law is not necessarily copyright infringement.

    One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”)

    This is the more likely problem.

    • jcrm@kbin.social

Given that their name is "OpenAI" and they were founded on the idea of being transparent about those exact things, I'm even less impressed that that's what they're upset about. They keep saying they're "protecting" us by not releasing details, which just isn't true. They're protecting their profits and valuation.

  • StarServal@kbin.social

This is one of those cases where copyright law works opposite to how it was intended, in that it is supposed to drive innovation. Here we have an example of innovation, but copyright holders want to (justifiably) shut it down.

    • cmhe@lemmy.world

I think this is actually a case where copyright works correctly. It is protecting individuals from having their work, which in many cases they provided for free, 'stolen' by a more powerful party that makes money from it without paying the creators.