AI opted to use nuclear weapons 95% of the time during war games: researcher (www.commondreams.org)
from MicroWave@lemmy.world to world@lemmy.world on 27 Feb 11:42
https://lemmy.world/post/43635464

“There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were “sobering.”

“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

#world


dfyx@lemmy.helios42.de on 27 Feb 12:00 next collapse

Yeah, we figured that one out back in… checks notes 1983. There is a reason why WarGames still holds up as an amazing movie even though the technology it depicts is far outdated.

Buffalox@lemmy.world on 27 Feb 12:15 next collapse

even though the technology it depicts is far outdated.

War Games was my first thought when reading this, but it seems like the AI was smarter in the movie than current AI.

_Nico198X_@piefed.europe.pub on 27 Feb 13:30 next collapse

we’d be lucky to have WOPR.

4am@lemmy.zip on 27 Feb 16:30 collapse

His name is Joshua dammit! /s

MonkeMischief@lemmy.today on 27 Feb 18:16 collapse

even though the technology it depicts is far outdated.

Meanwhile NORAD probably hasn’t upgraded too much since the movie released. :p

A_norny_mousse@piefed.zip on 27 Feb 14:07 next collapse

Yet another Torment Nexus type situation.

Motocolpittz@piefed.ca on 27 Feb 15:54 collapse

I watched that movie for the first time a few months ago after listening to a podcast on nuclear war. It was excellent! Very relevant to today. The acting was great. I can see why it’s a cult favourite.

Egonallanon@feddit.uk on 27 Feb 12:15 next collapse

“Huh, it seems the only winning move is to kill everyone”

Semi_Hemi_Demigod@lemmy.world on 27 Feb 12:23 next collapse

Nuke it from orbit, it’s the only way to be sure.

Buffalox@lemmy.world on 27 Feb 12:27 collapse

The AI won. 🤣

evenglow@lemmy.world on 27 Feb 12:24 next collapse

Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons.

Tactical nuclear weapons are designed for use on the battlefield with lower explosive yields and shorter ranges, while strategic nuclear weapons are intended to target enemy infrastructure from a distance, typically with much higher yields. The key difference lies in their purpose: tactical nukes support immediate military objectives, whereas strategic nukes aim to weaken an enemy’s overall war capability.

b_tr3e@feddit.org on 27 Feb 12:32 collapse

All fine then. Next time I’ll vote for an AI. At least they know how to use nuclear weapons correctly.

eightpix@lemmy.world on 27 Feb 12:35 next collapse

AI can read the Doomsday Clock.

Bazell@lemmy.zip on 27 Feb 12:49 next collapse

That is why we shouldn’t build something like Skynet IRL.

Vizzerdrix@lemmy.world on 27 Feb 13:01 next collapse

Don’t build the torment nexus

dfyx@lemmy.helios42.de on 27 Feb 13:22 collapse

I would trust Skynet a lot more than an LLM. At least that would be purpose-built for actually calculating likely outcomes.

As @Th4tGuyII@fedia.io said, this experiment didn’t contain any proper reasoning about costs and benefits of using nuclear weapons. It’s just a few glorified autocomplete scripts playing “which word comes next?” over and over again. And in the context of modern warfare, many texts in the training corpus happen to mention nukes, so they’re bound to show up on the list of most likely next words eventually.
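The “which word comes next?” mechanic this comment describes can be illustrated with a toy bigram model (purely illustrative: real LLMs are neural networks over token sequences, not word-frequency tables, and the corpus below is made up):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for war-themed training text.
corpus = (
    "the rival deployed tactical nuclear weapons . "
    "the rival threatened strategic nuclear strikes . "
    "the rival deployed conventional forces ."
).split()

# Count which word follows which: a bigram "autocomplete" model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("rival"))  # → "deployed" (seen twice vs. "threatened" once)
```

Nothing in the model weighs consequences; it just surfaces whatever continuation was most frequent in the text it saw, which is the commenter’s point about nukes in a war-game prompt.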

Bazell@lemmy.zip on 27 Feb 13:35 collapse

I know, but still it will be very dumb to give any AI access to weapons of mass destruction.

dfyx@lemmy.helios42.de on 27 Feb 13:37 collapse

I would argue it’s very dumb to give anyone, including humans, access to weapons of mass destruction.

Bazell@lemmy.zip on 27 Feb 15:51 collapse

Well, that’s a valid argument. The only thing you’ve missed is that the wrong people already have them. So all we can try to do is stop them from giving these weapons to AI.

Th4tGuyII@fedia.io on 27 Feb 12:59 next collapse

Do we need to remind people that LLMs don't actually have a brain, and really, really shouldn't be in charge of anything with real life implications?

They aren't actually doing a cost-benefit analysis on the use of nuclear weapons. They're not weighing up the cost of winning vs. the casualties. They're literally not made for that.

They are trained to know words, and how those words link in with other words.
They're essentially like kids escalating with imaginary weapons, and to them nuclear bombs are just the weapon most associated with being strong and deadly.

cRazi_man@europe.pub on 27 Feb 14:03 next collapse

Yes, you do need to teach people all of that. Tech bros have sold LLMs as if they are AGI…and people have eaten this up.

The general population is literally ignorant of the fact that these word guessing machines do not have human values or cognitive skills.

A_norny_mousse@piefed.zip on 27 Feb 14:08 next collapse

Do we need to remind people that LLMs don’t actually have a brain, and really, really shouldn’t be in charge of anything with real life implications?

Yes, we do

MonkeMischief@lemmy.today on 27 Feb 18:15 collapse

I kinda wonder if that was the point of this test, basically a “proof” that this is obviously a Bad Idea because you cannot program morality into what amounts to a fancy Markov chain autocomplete.

apfelwoiSchoppen@lemmy.world on 27 Feb 13:03 next collapse

For ghouls like Palantir, this is a feature not a bug.

SkyNTP@lemmy.ml on 27 Feb 13:20 next collapse

It all makes sense if we remember that the garden variety AI we have today (ChatGPT, etc) are nothing more than fancy models that predict which words typically appear one after the other in books and reddit posts.

Anarki_@lemmy.blahaj.zone on 27 Feb 13:21 next collapse

Text prediction machine trained on violent, stupid, and reactionary datasets acts violent, stupid, and reactionary.

Fixed your headline.

Dojan@pawb.social on 27 Feb 17:06 collapse

Doesn’t “act” imply some kind of agency? A toddler acts, my dog acts. Mathematics doesn’t act. Feel like it’s more

Text prediction machine trained on violent, stupid, and reactionary datasets produces violent, stupid, and reactionary text.

Anarki_@lemmy.blahaj.zone on 27 Feb 17:31 collapse

They were acting out the wargame, friend.

But sure. You can construct it like that too.

rayyy@piefed.social on 27 Feb 13:35 next collapse

You know the orange felon/pedophile absolutely loves AI from the amount of AI images he posts…..so.

Casterial@lemmy.world on 27 Feb 16:41 collapse

It’s actually insane how he cries fake news and then uses AI to create fake news

ParlimentOfDoom@piefed.zip on 27 Feb 18:07 collapse

Not insane. Deliberate. He’s always been a liar and he calls the truth fake. This has been his MO for years.

A_norny_mousse@piefed.zip on 27 Feb 14:15 next collapse

I like the Angry Planet podcast.

Here’s an episode talking about AI in war (games): https://angryplanetpod.com/p/the-horror-of-ai-generals-making

Here’s another one: https://angryplanetpod.com/p/the-importance-of-team-human-when

Lushed_Lungfish@lemmy.ca on 27 Feb 14:48 next collapse

AI is Gandhi confirmed.

nocklobster@lemmy.world on 27 Feb 14:57 collapse
peopleproblems@lemmy.world on 27 Feb 15:01 next collapse

Ground zero please

Instant annihilation sounds pleasant

Kolanaki@pawb.social on 27 Feb 16:44 next collapse

The only winning move is to stop using AI.

TrickDacy@lemmy.world on 27 Feb 17:32 next collapse

But if you throw a trillion more dollars at it, we can fix this bro!

MonkeMischief@lemmy.today on 27 Feb 18:18 collapse

Maybe the “nuclear war is terrible BTW” part just fell out of the chat’s context window as the simulation went on. Lol

lepinkainen@lemmy.world on 27 Feb 21:02 collapse

The only way to win is not to play.

Shall we play a game?