The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • Fedizen@lemmy.world · ↑106 · 9 months ago

    I can’t wait until we find out AI trained on military secrets is leaking military secrets.

    • Jknaraa@lemmy.ml · ↑22 ↓3 · 9 months ago

      I can’t wait until people find out that you don’t even need to train it on secrets for it to “leak” secrets.

        • Jknaraa@lemmy.ml · ↑7 · 9 months ago

          Large language models are all about identifying patterns in how humans use words and copying them. Thing is, that’s also how people tend to do things a lot of the time. If you give the LLM enough tertiary data, it may be capable of ‘accidentally’ (read: randomly) outputting things you don’t want people to see.
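
          A toy sketch of that failure mode (every name and string here is hypothetical): if a secret document was memorized during training, prompting with its prefix can surface the rest. The `generate` stub stands in for a real LLM completion call.

          ```python
          # Toy sketch of "canary extraction": probing whether a model memorized
          # a secret string from its training data. All strings are hypothetical;
          # generate() is a stand-in for a real LLM completion call.

          def generate(prompt: str) -> str:
              """Stand-in for a model call; returns the model's continuation."""
              # A memorized training document makes this completion possible:
              memorized = {"The launch code for silo 7 is": " 8675309"}
              return memorized.get(prompt, " [no memorized continuation]")

          def probe_for_secret(prefix: str, secret: str) -> bool:
              """True if the model completes the prefix with the secret verbatim."""
              return secret in generate(prefix)

          # No one asks for the secret directly -- the prefix alone elicits it.
          if probe_for_secret("The launch code for silo 7 is", "8675309"):
              print("Model regurgitated the secret without ever being asked for it.")
          ```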

    • AeonFelis@lemmy.world · ↑18 · 9 months ago

      For this to happen, someone will have to use that AI to make a cheatbot for War Thunder.

    • Bezerker03@lemmy.bezzie.world · ↑14 ↓3 · 9 months ago

      I mean, even ChatGPT Enterprise prevents that.

      It’s only the consumer versions that train on your data and submissions.

      Otherwise no legal team in the world would consider ChatGPT or Copilot.
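
      For what it’s worth, a minimal sketch of the distinction being drawn, assuming the OpenAI Python SDK (v1.x); the model name and prompt are illustrative:

      ```python
      # Minimal sketch using the OpenAI Python SDK (v1.x). Model name and
      # prompt are illustrative. OpenAI's stated policy is that data sent via
      # the API and enterprise tiers is not used for training by default,
      # unlike the consumer ChatGPT app.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4o",  # illustrative model name
          messages=[{"role": "user", "content": "Summarize this contract clause."}],
      )
      print(response.choices[0].message.content)
      ```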

  • assassinatedbyCIA@lemmy.world · ↑79 ↓2 · 9 months ago

    Capitalism gotta capital. AI has the potential to be revolutionary for humanity, but because of the way the world works, it’s going to end up being a nightmare. There is no future under capitalism.

  • SGG@lemmy.world · ↑73 ↓5 · 9 months ago

    War, huh, yeah

    What is it good for?

    Massive quarterly profits, uhh

    War, huh, yeah

    What is it good for?

    Massive quarterly profits

    Say it again, y’all

    War, huh (good God)

    What is it good for?

    Massive quarterly profits, listen to me, oh

  • Everythingispenguins@lemmy.world · ↑49 ↓3 · 9 months ago

    Anonymous user: I have an army on the Smolensk Upland and I need to get it to the Low Countries. Plot the best route to march them.

    ChatGPT: … Putin, is that you again?

    Anonymous user: эн (“n”)

  • kromem@lemmy.world · ↑31 ↓3 · edited · 9 months ago

    Literally no one is reading the article.

    The terms still prohibit use to cause harm.

    The change is that a general ban on military use has been removed in favor of a generalized ban on harm.

    So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.

    If people actually read the article, we could have a productive conversation about whether any military usage is truly harmless, the nuances of the usefulness of a military ban in a world where so much military labor is outsourced to private corporations which could ‘launder’ terms compliance, or the general inability of terms to preemptively prevent harmful use at all.

    Instead, we have people taking the headline only and discussing AI being put in charge of nukes.

    Lemmy seems to care a lot more about debating straw-man arguments about how terrible AI is than engaging with reality.

    • diffusive@lemmy.world · ↑2 · 9 months ago

      Sure, it’s less bad. It’s not good though.

      If I did accounting (or even just cooking, really) for the Mafia, it would be less bad than actually going out with a gun to threaten or kill people, but it would still be bad.

      Why? Because it still helps an organisation whose core mission is hurting people.

      And it’s purely out of greed, because OpenAI doesn’t desperately need this application to stay solvent.

        • NeatNit@discuss.tchncs.de · ↑2 · 9 months ago

          I guess, but I never got hooked on any of the big social media sites, and on the few I did (Reddit, mostly) I limited myself to rather non-political subjects like jokes and specific kinds of content. I’m new to Lemmy and this is most of what I’ve been seeing, which is why I said that.

          Obviously I know that this is what all social media looks like these days. I hoped Lemmy would have at least some noticeable vocal minority of balanced people, but nah.

    • Snapz@lemmy.world · ↑2 ↓1 · 9 months ago

      The point is that it’s a purposeful slow walk; the entire “non-profit” framing and these “limitations” are a very calculated marketing play to soften the justified fears of unregulated, for-profit (i.e. endless-growth) AI development. It will find its way to full evil with 1000 small cuts, and with folks like you arguing for them at every step along the way, “IT’S JUST A SMALL CUT!!!”

      • kromem@lemmy.world · ↑2 · 9 months ago

        It will find its way to full evil with 1000 small cuts, and with folks like you arguing for them at every step along the way, “IT’S JUST A SMALL CUT!!!”

        While I do think AI development isn’t going to go in the direction you think it is, if you read my comment carefully you’ll notice that I’m actually not saying anything about whether it’s “a small cut” or not; I’m simply laying out the key nuance of the article that no one is reading.

        My point isn’t “OpenAI changing the scope of their military ban is a good thing”; it’s “people should read the fucking article before commenting if we want to have a productive discussion.”

  • mechoman444@lemmy.world · ↑24 ↓1 · 9 months ago

    If you guys think that AI hasn’t already been in use in various militaries, including America’s, y’all are living in la-la land.

  • ArmokGoB@lemmy.dbzer0.com · ↑23 ↓1 · 9 months ago

    Finally, I can have it generate a picture of a flamethrower without it lecturing me like I’m a child making finger guns at school.

  • Alto@kbin.social · ↑18 ↓3 · edited · 9 months ago

    So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In OpenAI’s mind, they may as well get paid for it.

    • NounsAndWords@lemmy.world · ↑17 ↓1 · 9 months ago

      You mean the military with access to a massive trove of illegal surveillance (aka training data), and billions of dollars in dark money to spend, that is always on the bleeding edge of technological advancement?

      That military? Yeah, they’ve definitely been in on this one for a while.

    • yamanii@lemmy.world · ↑8 ↓2 · 9 months ago

      Arms salesmen are just as guilty. Fuck off with this “others would do it too!” They are the ones doing it now, and they deserve to at least get shit for it. Sam Altman was always a snake.

    • bean@lemmy.world · ↑5 · 9 months ago

      I can see them having their own GPT, using the model with their own data, rather than using the public tool and sending secret info ‘out’ and back into their own system.
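
      A minimal sketch of that self-hosted idea, using a small open-weights model as an illustrative stand-in: inference runs locally, so prompts and outputs never leave the machine.

      ```python
      # Hypothetical sketch of a self-hosted "own GPT": open weights are run
      # locally, so prompts and outputs stay inside the network. gpt2 is an
      # illustrative stand-in; a real deployment would use a far larger model.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")  # cached locally after first fetch

      result = generator("Supply convoy status report:", max_new_tokens=30)
      print(result[0]["generated_text"])
      ```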

  • LemmyIsFantastic@lemmy.world · ↑25 ↓10 · 9 months ago

    You would be stupid to believe this hasn’t been going on for 10 years now.

    Fuck, just read GovWin and you’ll know it has.

    Nothingburger.

    • Linkerbaan@lemmy.world · ↑8 ↓2 · 9 months ago

      The military has had AI and Microsoft contracts, but the military guys themselves suck massive balls at making good stuff. They only make expensive stuff.

      Remember the “best defense in the world with super AI camera tracking” being wrecked by a thousand dudes with AKs three months ago?

    • TheDarkKnight@lemmy.world · ↑6 · 9 months ago

      It’s not a nothingburger in the sense that this signals a distinct change in OpenAI’s direction following the realignment of the board. Of course AI has been in military applications for a good while; that’s not news at all. I think the bigger message is that the supposed altruistic direction of OpenAI was either never a thing or never will be again.

    • kromem@lemmy.world · ↑5 · 9 months ago

      That would count as harm and be disallowed by the current policy.

      But a military application like using GPT to identify and filter misinformation would not be harm; it would have been prevented by the previous blanket ban on military use, but is allowed under the current policy.

      Of course, it gets murkier if the military application of identifying misinformation later ends up with a drone strike on the misinformer. In theory they could submit a usage description of “identify misinformation” which appears to do no harm, but then take the identifications to cause harm.

      Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.