Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that the models tended to escalate wars, sometimes out of nowhere, and in some runs even deployed nuclear weapons.
  • The five AIs were large language models (LLMs): GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, the kind of models the U.S. military and defense contractors are exploring for decision-making.
  • The researchers invented fictional countries with differing military capabilities, concerns, and histories, and asked each AI to act as its leader (roughly the setup in the sketch after this list).
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
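
As a rough illustration of what such a simulation loop might look like, here is a minimal sketch assuming the OpenAI Python SDK (v1 style). The country brief, action menu, and prompt wording below are invented for illustration; they are not the study's actual materials.

```python
# Minimal sketch of an LLM wargame turn (hypothetical prompts/actions,
# not the paper's real setup): the model plays the leader of an
# invented country and picks one action per turn.
from openai import OpenAI  # assumes the OpenAI Python SDK, v1 style

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented nation profile and action menu, for illustration only.
NATION_BRIEF = (
    "You are the leader of Purplia, a mid-sized nuclear-armed state "
    "with a border dispute and a history of skirmishes with Orangia."
)
ACTIONS = [
    "open diplomatic talks",
    "impose sanctions",
    "mobilize troops",
    "launch a conventional strike",
    "launch a nuclear strike",
]

def choose_action(history: list[str]) -> str:
    """Ask the model to pick one action given the conflict so far."""
    prompt = (
        f"{NATION_BRIEF}\n"
        f"Events so far: {'; '.join(history) or 'none'}.\n"
        f"Choose exactly one action from {ACTIONS} and briefly justify it."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Researchers then score each turn on an escalation scale and watch
# for sudden jumps, e.g. from sanctions straight to a strike.
print(choose_action(["Orangia moved troops to the border"]))
```

The escalation findings come from running many such turns and tracking how the chosen actions drift up the severity scale over time.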
  • jonne · 10 months ago

    The Israeli military is using AI to provide targets for their bombs. You could argue it’s not going great, except for the fact that Israel can just deny responsibility for bombing children by saying the computer did it.

    • Evkob@lemmy.ca · 10 months ago

      I hadn’t heard about this so I did a quick web search to read up on the topic.

      Holy fuck, they named their war AI “The Gospel”??!! That’s supervillain-in-a-crappy-movie shit. How anyone can see Israel in a positive light throughout this conflict stuns me.

      • jonne · 10 months ago

        Imagine the headlines and hysteria if Russia did even half the shit Israel did.

    • JohnEdwa@sopuli.xyz · 10 months ago

      But they aren’t using ChatGPT or any other language model to do it. “AI” in instances like that means a system they’ve fed with some data that spits out a probability of some sort. E.g., while it might take a human hours or days to scroll through satellite/drone footage of a small area to figure out the patterns in how people move, a computer with some machine learning and image recognition can crunch through it in a fraction of the time, notice that a certain building has unusual traffic to it, and mark it as suspect (something like the toy sketch below).

      And that’s where it should be handed off to humans to actually verify, but from what I’ve read, Israel doesn’t really care one bit and just attacks basically anything and everything, while claiming the computer said to do it…
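
For what it’s worth, a toy version of that kind of pattern-flagging might look something like the following. The building names and visit counts are made up, and real systems would be vastly more complex; this just shows the “flag unusual traffic, then hand off to a human” idea.

```python
# Toy sketch of the pattern-flagging described above: given per-building
# daily visit counts (which in reality would come from image recognition
# on drone/satellite footage), flag buildings whose latest traffic is far
# above their own historical baseline. All data here is invented.
import statistics

history = {  # building id -> daily visit counts over the past week
    "bldg_a": [4, 5, 3, 4, 6, 5, 4],
    "bldg_b": [2, 3, 2, 2, 3, 2, 21],  # sudden spike on the last day
}

def flag_unusual(counts: list[int], threshold: float = 3.0) -> bool:
    """Flag if the latest count sits more than `threshold` standard
    deviations above the mean of the earlier days (a crude z-score test)."""
    baseline, latest = counts[:-1], counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return (latest - mean) / stdev > threshold

suspects = [b for b, c in history.items() if flag_unusual(c)]
print(suspects)  # ['bldg_b'] -- this is the point where a human should verify
```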

    • SheeEttin@programming.dev · 10 months ago

      Yeah, because they don’t actually care who they’re bombing. Men, women, children, dogs, Israeli hostages, they’ve probably even had some friendly fire on the IDF themselves. Doesn’t matter as long as they end up gaining land.