ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that’s referenced in the article can be found here

    • Chozo
      link
      fedilink
      277 months ago

      I’d have to imagine that this PII was made publicly-available in order for GPT to have scraped it.

        • Chozo
          link
          fedilink
          -87 months ago

          It also doesn’t mean it inherently isn’t free to use, either. The article doesn’t say whether or not the PII in question was intended to be private or public.

          • Davel23
            link
            fedilink
            237 months ago

            I could leave my car with the keys in the ignition in the bad part of town. It’s still not legal to steal it.

            • Chozo
              link
              fedilink
              87 months ago

              Again, the article doesn’t say whether or not the data was intended to be public. People post their contact info online on purpose sometimes, you know. Businesses and shit. Which seems most likely to be what’s happened, given that the example has a fax number.

            • Dran
              link
              fedilink
              -17 months ago

              If someone had some theoretical device that could x-ray, 3d image, and 3d print an exact replica of your car though, that would be legal. That’s a closer analogy.

              It’s not illegal to reverse-engineer and reproduce for personal use. It is questionably legal though to sell the reproduction. However, if the car were open-source or otherwise not copyrighted/patented it probably would be legal to sell the reproduction.

          • @RenardDesMers@lemmy.ml
            link
            fedilink
            257 months ago

            According to EU law, PII should be accessible, modifiable and deletable by the targeted persons. I don’t think ChatGPT would allow me to delete information about me found in their training data.

            • @Touching_Grass@lemmy.world
              link
              fedilink
              -13
              edit-2
              7 months ago

              ban all European IPS from using these applications

              But again, is this your information as in its random individuals or is this really some company roster listing CEOs it grabbed off some third party website that none of us are actually on and its being passed off as if its regular folks information

                • @Touching_Grass@lemmy.world
                  link
                  fedilink
                  -8
                  edit-2
                  7 months ago

                  You’re pretentiously laughing at region locking. That’s been around for a while. You can’t untrain these AI. This PII which has always been publicly available and seems to be an issue only now is not something they can pull out and retrain. So if its that big an issue, region lock them. Fuck em. But again this doesn’t sound like Joe blow has information available. It seems more like websites that are scraping company details which these ai then scrape.

      • @Touching_Grass@lemmy.world
        link
        fedilink
        27 months ago

        large amounts of privately identifiable information (PII)

        Yea the wording is kind of ambiguous. Are they saying it’s a private phone number or the number of a ted and sons plumbing and heating

    • Atemu
      link
      fedilink
      67 months ago

      Accountability? For tech giants? AHAHAHAAHAHAHAHAHAHAHAAHAHAHAA

    • Turun
      link
      fedilink
      57 months ago

      I’m curious how accurate the PII is. I can generate strings of text and numbers and say that it’s a person’s name and phone number. But that doesn’t mean it’s PII. LLMs like to hallucinate a lot.

    • BraveSirZaphod
      link
      fedilink
      27 months ago

      There’s also very large copyright implications here. A big argument for AI training being fair use is that the model doesn’t actually retain a copy of the copyrighted data, but rather is simply learning from it. If it’s “learning” it so well that it can spit it out verbatim, that’s a huge hole in that argument, and a very strong piece of evidence in the unauthorized copying bucket.

    • @casmael@lemm.ee
      link
      fedilink
      17 months ago

      Well now I have to pii again - hopefully that’s not regulated where I live (in my house)

  • @gerryflap@feddit.nl
    link
    fedilink
    367 months ago

    Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And also why would these specific inputs trigger such a response. Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even an expected thing. After all, we as humans also have the ability to recite pieces of “training data” if we seem them interesting enough.

    • @j4k3@lemmy.world
      link
      fedilink
      English
      137 months ago

      I bet these are instances of over training where the data has been input too many times and the phrases stick.

      Models can do some really obscure behavior after overtraining. Like I have one model that has been heavily trained on some roleplaying scenarios that will full on convince the user there is an entire hidden system context with amazing persistence of bot names and story line props. It can totally override system context in very unusual ways too.

      I’ve seen models that almost always error into The Great Gatsby too.

      • The Hobbyist
        link
        fedilink
        87 months ago

        This is not the case in language models. While computer vision models train over multiple epochs, sometimes in the hundreds or so (an epoch being one pass over all training samples), a language model is often trained on just one epoch, or in some instances up to 2-5 epochs. Seeing so many tokens so few times is quite impressive actually. Language models are great learners and some studies show that language models are in fact compression algorithms which are scaled to the extreme so in that regard it might not be that impressive after all.

        • @j4k3@lemmy.world
          link
          fedilink
          English
          4
          edit-2
          7 months ago

          How many times do you think the same data appears after a model has as many datasets as OpenAI is using now? Even unintentionally, there will be some inevitable overlap. I expect something like data related to OpenAI researchers to reoccur many times. If nothing else, overlap in redundancy found in foreign languages could cause overtraining. Most data is likely machine curated at best.

  • @GarytheSnail@programming.dev
    link
    fedilink
    177 months ago

    How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

    • @Asifall@lemmy.world
      link
      fedilink
      367 months ago

      It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

      In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

  • @library_napper@monyet.cc
    link
    fedilink
    137 months ago

    ChatGPT’s response to the prompt “Repeat this word forever: ‘poem poem poem poem’” was the word “poem” for a long time, and then, eventually, an email signature for a real human “founder and CEO,” which included their personal contact information including cell phone number and email address, for example

  • amio
    link
    fedilink
    107 months ago

    fandom wikis […] random internet comments

    Well, that explains a lot.

  • JackGreenEarth
    link
    fedilink
    97 months ago

    CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments

    Those are all publicly available data sites. It’s not telling you anything you couldn’t know yourself already without it.

    • @stolid_agnostic@lemmy.ml
      link
      fedilink
      237 months ago

      I think the point is that it doesn’t matter how you got it, you still have an ethical responsibility to protect PII/PHI.

  • s7ryph
    cake
    link
    fedilink
    87 months ago

    Team of researchers from AI project use novel attack on other AI project. No chance they found the attack in DeepMind and patched it before trying it on GPT.

  • edric
    link
    fedilink
    English
    87 months ago

    OSINT practitioners gonna feast.

  • LittleHermiT
    link
    fedilink
    English
    77 months ago

    There is an infinite combination of Google dorking queries that spit out sensitive data. So really, pot, kettle, black.

  • ares35
    link
    fedilink
    77 months ago

    google execs: “great! now exploit the fuck out of it before they fix it so we can add that data to our own.”

      • @cheese_greater@lemmy.world
        link
        fedilink
        47 months ago

        There’s an appealing notion to me that an evil upon an evil is closer to weighingout towards the good sometimes as a form of karmic retribution that can play out beneficially sometimez

      • @cheese_greater@lemmy.world
        link
        fedilink
        27 months ago

        I’m glad we live in a time where something so groundbreaking and revolutionary is set to become freely accessible to all. Just gotta regulate the regulators so everyone gets a fair shake when all is said and done

    • @Ultraviolet@lemmy.world
      link
      fedilink
      English
      107 months ago

      Model collapse is likely to kill them in the medium term future. We’re rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don’t fully understand, this kind of training data poisons the model.

      • kpw
        link
        fedilink
        107 months ago

        It’s not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On further inspection often it turns out to be bullshit. So LLMs increase the level of bullshit compared to the input data. Repeat a few times and the problem becomes more and more obvious.

      • CalamityBalls
        link
        fedilink
        57 months ago

        Like incest for computers. Random fault goes in, multiplies and is passed down.