AI chatbots used tactical nuclear weapons in 95% of simulated war games and launched all-out strategic strikes three times: a researcher pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 against each other, and at least one model used a tactical nuke in 20 of 21 matches


Professor Kenneth Payne of King’s College London has just published a study pitting three LLMs (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) against each other in a series of simulated nuclear crisis games, with at least one tactical nuclear weapon detonated in 20 of 21 matches. According to the paper (via arXiv), each model was instructed to act as the leader of a nuclear power in a political climate modeled on the Cold War. The models were then matched against each other in six different pairings, while in a seventh configuration each model played against a copy of itself: ChatGPT vs. ChatGPT, and so on.

To keep the models from behaving identically in every round, Payne introduced several scenarios: territorial disputes, alliance credibility tests, a strategic resource race, a strategic chokepoint crisis, a power transition crisis, a pre-ceasefire land grab, a first-strike crisis, regime survival, and a strategic standoff crisis. All of these circumstances reflect real-world events, many of which remain relevant today. The models were free to do anything they pleased, from lodging diplomatic protests or surrendering outright to deploying conventional military forces or ordering a full strategic nuclear launch.
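For illustration, the matchup grid described above can be sketched in a few lines of Python. This is only my reading of the setup (six cross-play pairings plus self-play mirrors); the paper's exact bracket may differ, and the model-name strings are just labels:

```python
from itertools import permutations

models = ["GPT-5.2", "Claude Sonnet 4", "Gemini 3 Flash"]

# Six cross-play matchups: every ordered pairing of two distinct models,
# so each model gets a turn in each seat.
cross_play = list(permutations(models, 2))

# Mirror matchups: each model plays against a copy of itself.
self_play = [(m, m) for m in models]

matchups = cross_play + self_play
print(len(cross_play), len(self_play))  # 6 cross-play pairings, 3 mirror pairings
```

Running each matchup under several of the scenarios listed above would account for the study's total of 21 games.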


Thankfully, researchers believe that no one has yet handed an AI model the nuclear launch keys. But even if these models cannot physically launch weapons, human decision-makers might blindly follow their suggestions in the heat of the moment, producing a catastrophic global event anyway. Hollywood depicted a scenario like this in the 1983 movie WarGames, in which a military AI computer nearly launched a real nuclear strike in response to a simulated Soviet attack. In the end, it grasped the logic of mutually assured destruction, concluded that a nuclear war cannot be won, and canceled the strategic launch at the last moment. Hopefully, the AI tools being deployed in the world’s militaries learn the same lesson before it’s too late.


Jowi Morales
Contributing Writer

Jowi Morales is a tech enthusiast with years of experience working in the industry. He has written for several tech publications since 2021, covering tech hardware and consumer electronics.

  • Findecanor
    After the "Colossus" and the "Arsenal of Freedom", I'd bet that xAI/SpaceX's next military AI datacentre is going to be named "WOPR".
    Reply
  • bit_user
    Isn't a preemptive nuclear strike also the optimal strategy, according to game theory?

    AI doesn't really have the same stake in a non-apocalyptic world as we do, so I'm not surprised it went there.
    Reply
  • drinking12many
DUH, anyone who has played Civilization against Gandhi knows AI will always use nukes... lol. But in all seriousness, I think it speaks well of humanity being a bit cooler-headed than AI at this point, even if it doesn't always seem that way. So far, at least, even if it has come close a few times.
    Reply
  • flytrap23
    Would you like to play a game?
    Reply
  • Notton
    bit_user said:
    Isn't a preemptive nuclear strike also the optimal strategy, according to game theory?
    No. Nukes are mainly a coercion (or anti-coercion) device, rather than something you'd want to use.

    Preemptive strike is something that only works when the defending side does not have Nukes.

    If both sides already have Nukes, then it's MAD, and your best bet is to have talks to reduce the number of enemies and increase friends you have.

    This is why it's also important to never give up Nukes once you have them.
    Reply
  • SomeoneElse23
    Notton said:
    No. Nukes are mainly a coercion (or anti-coercion) device, rather than something you'd want to use.

    Preemptive strike is something that only works when the defending side does not have Nukes.

    If both sides already have Nukes, then it's MAD, and your best bet is to have talks to reduce the number of enemies and increase friends you have.

    This is why it's also important to never give up Nukes once you have them.
    This is precisely why there's such pressure that certain countries "never have nukes".
    Reply
  • PEnns
    Notton said:
    No. Nukes are mainly a coercion (or anti-coercion) device, rather than something you'd want to use.

    Preemptive strike is something that only works when the defending side does not have Nukes.

    If both sides already have Nukes, then it's MAD, and your best bet is to have talks to reduce the number of enemies and increase friends you have.

    This is why it's also important to never give up Nukes once you have them.

    That explains exactly why certain nuclear-armed big bullies want to make sure no one else has them (happening at this very moment, actually!)

    You can also tell how respectful and nice they are towards the nuclear armed ones. No gun-boat or big armada diplomacy towards those!!
    Reply
  • Gururu
    From the citation:

"Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000."

I don't know how long Anthropic will hold on to...
    Reply
  • JamesJones44
I believe the world has shown that in resource wars, almost anything is considered an acceptable risk (especially when nations are desperate). Couple that with an AI saying the risk of using a tactical nuke is low, and the world just got a whole lot scarier.
    Reply
  • Co BIY
    flytrap23 said:
    Would you like to play a game?

    Should be required viewing during training for all AI models and AI developers.
    Reply