Nobody needs to use AI to bug our phones, or to build a sprawling nervous system to track our vitals, because our phones are already bugged. Everything we do on them is recorded a dozen times over: by our wireless carriers, by the websites we visit and the apps we use, by the vendors and ad networks those companies send that data to, and in the marketplaces that sell that data. We built the eyes of the Greco decades ago.

But that data has remained relatively secure—or maybe more precisely, its potential energy has remained relatively buried—largely because it’s tedious to work with. It’s messy; it’s scattered across different sources and in different formats; combining it is a pain; and most of us are simply not interesting enough to investigate. Data analysts who work at shadowy government agencies have lives too, and they do not want to write 595-line SQL queries either.

But AI doesn’t mind. And that’s the boring danger of what happens next: not that AI becomes a superintelligent Sherlock Holmes finding impossible patterns in its enormous mind palace, but that it becomes a million monkeys at a million typewriters, doing the grunt work no person wanted to do. Because when prying questions are a prompt away—rather than 24 hours of work away—who wouldn’t be tempted to pry?
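To make that grunt work concrete, here is a minimal sketch of the kind of cross-source join an analyst would have to write by hand. Every file name and column below is invented, and real broker data is far messier; the point is only that an LLM will happily generate hundreds of lines of this on request.

```python
# Hypothetical sketch: join scattered surveillance exhaust into one answer.
# All sources, schemas, and the "tower-Y" question are invented.
import pandas as pd

# Carrier location pings: who was near which cell tower, and when.
pings = pd.read_csv("carrier_pings.csv", parse_dates=["ts"])    # user_id, ts, cell_tower
# Purchase events from an ad network, keyed by device rather than person.
purchases = pd.read_csv("ad_network.csv", parse_dates=["ts"])   # device_id, ts, merchant
# Identity resolution is the messy part; assume a data broker sells the mapping.
id_map = pd.read_csv("broker_id_map.csv")                       # user_id, device_id

# Stitch the three sources together on their shared keys.
linked = purchases.merge(id_map, on="device_id").merge(
    pings, on="user_id", suffixes=("_buy", "_ping")
)

# "Who bought something within an hour of being near tower Y?" -- the kind of
# prying question that used to cost a day of query-writing.
nearby = linked[
    (linked["cell_tower"] == "tower-Y")
    & ((linked["ts_buy"] - linked["ts_ping"]).abs() <= pd.Timedelta("1h"))
]
print(nearby[["user_id", "merchant", "ts_buy"]].drop_duplicates())
```

None of these steps is clever; the deterrent was only ever the tedium of writing, debugging, and maintaining them at scale.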

  • AA5B@lemmy.world · 18 hours ago

    This was likely the case before AI as well: collect the data, aggregate the data, and we’ll find uses for it later.

    I actually had this conversation with a startup company in the 2000s. Their user profile forms were a mess, so they were looking for help to fix them and secure the data. But the root cause was that they were collecting a ton of unnecessary data with no validation, verifiability, or constraints (a sketch of what those could look like follows the exchange below).

    Me: Why are you collecting all this data?

    Them: we might need it later

    Me: So you don’t have a use now, and you’re not making any effort to make the data clean enough to be useful. The best fix is to just stop collecting most of it.

    Them: no
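
    A minimal sketch of what validation plus data minimization could have looked like (the field names and rules here are made up for illustration):

    ```python
    # Hypothetical sketch: collect only fields with a current use, and
    # enforce constraints so whatever is stored is actually verifiable.
    import re
    from dataclasses import dataclass

    ALLOWED_FIELDS = {"email", "display_name"}   # data minimization: nothing "for later"
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    @dataclass(frozen=True)
    class Profile:
        email: str
        display_name: str

    def clean_profile(form: dict) -> Profile:
        extra = set(form) - ALLOWED_FIELDS
        missing = ALLOWED_FIELDS - set(form)
        if extra or missing:
            # The cheapest fix for junk data: refuse to store it at all.
            raise ValueError(f"extra={sorted(extra)}, missing={sorted(missing)}")
        if not EMAIL_RE.fullmatch(form["email"]):
            raise ValueError("invalid email")
        name = form["display_name"].strip()
        if not 1 <= len(name) <= 64:
            raise ValueError("display_name must be 1-64 chars")
        return Profile(email=form["email"], display_name=name)
    ```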

    • kungen@feddit.nu · 9 hours ago

      That’s also why the NSA and such store sooo much encrypted data. They most likely don’t have the power to break it yet, but one day they will, and/or they’ll exploit flaws in the encryption as they’re found.

  • gressen@lemmy.zip · 19 hours ago

    How can you argue that trivial data somehow implies private data?

    • freshcow@lemmy.world · 18 hours ago

      The term you’re looking for is “security through obscurity”. The effort required to create a coherent picture from that scattered information is more than it’s worth, so it doesn’t get done. The argument here is that AI changes that calculation because it removes the effort part.

      • gressen@lemmy.zip · 18 hours ago

        Security through obscurity was discredited long ago, and AI sifting through your obscure data with ease is a great example of how that approach doesn’t work.

    • Hond@piefed.social · 18 hours ago

      Trivial-looking data can be very relevant and private. E.g., I saw a talk on how just scraping the names of authors and the times they publish articles on a news site can be used to draw likely conclusions about who is taking vacations “together”. That’s just two parameters out of the probably dozens or even hundreds of data points we leave behind by using tech.
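
      A rough sketch of how little code that idea takes (the gap and overlap thresholds are invented, and the talk’s actual method may differ):

      ```python
      # Hypothetical sketch: infer who vacations "together" from nothing but
      # bylines and publish dates scraped from a news site.
      from collections import defaultdict
      from datetime import date, timedelta
      from itertools import combinations

      # (author, publish_date) pairs; a real scrape would hold thousands.
      articles = [
          ("alice", date(2024, 7, 1)), ("alice", date(2024, 7, 14)),
          ("bob", date(2024, 7, 2)), ("bob", date(2024, 7, 15)),
      ]

      def silences(rows, gap=timedelta(days=5)):
          """Per author, the unusually long pauses between their articles."""
          by_author = defaultdict(list)
          for author, day in rows:
              by_author[author].append(day)
          return {
              a: [(s, e) for s, e in zip(sorted(ds), sorted(ds)[1:]) if e - s > gap]
              for a, ds in by_author.items()
          }

      def co_absent(quiet, overlap=timedelta(days=3)):
          """Author pairs whose silent periods overlap substantially."""
          hits = []
          for (a, ga), (b, gb) in combinations(quiet.items(), 2):
              for s1, e1 in ga:
                  for s2, e2 in gb:
                      if min(e1, e2) - max(s1, s2) >= overlap:
                          hits.append((a, b, max(s1, s2), min(e1, e2)))
          return hits

      print(co_absent(silences(articles)))   # flags ('alice', 'bob', ...)
      ```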