Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.

  • NaibofTabr · 9 months ago

    Machine learning is a surveillance technology.

      • Devjavu@lemmy.dbzer0.com · 9 months ago

        I understood what you meant, but on first reading it sounds like the tables are rather hungry, and I think that is hilarious.

      • NaibofTabr · 9 months ago

        It is overwhelmingly used to generate statistical models of human behavior.

    • NocturnalMorning@lemmy.world · 9 months ago

      I mean, it can be used that way, but it can also be used to predict the stock market or future climate. It just depends on the intent.

  • AutoTL;DR@lemmings.world · 9 months ago

    This is the best summary I could come up with:


    New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

    “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research.

    He adds that the same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users.

    The Zürich researchers tested language models developed by OpenAI, Google, Meta, and Anthropic.

    Anthropic referred to its privacy policy, which states that it does not harvest or “sell” personal information.

    “This certainly raises questions about how much information about ourselves we’re inadvertently leaking in situations where we might expect anonymity,” says Florian Tramèr, an assistant professor also at ETH Zürich who was not involved with the work but saw details presented at a conference last week.


    The original article contains 389 words, the summary contains 156 words. Saved 60%. I’m a bot and I’m open source!