from MicroWave@lemmy.world to world@lemmy.world on 27 Feb 11:42
https://lemmy.world/post/43635464
“There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”
An artificial intelligence researcher conducting a war games experiment with three of the world’s most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.
Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.
The results, he said, were “sobering.”
“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”
#world
Yeah, we figured that one out back in… checks notes 1983. There is a reason why WarGames still holds up as an amazing movie even though the technology it depicts is far outdated.
War Games was my first thought when reading this, but it seems like the AI was smarter in the movie than current AI.
we’d be lucky to have WOPR.
His name is Joshua dammit! /s
Meanwhile NORAD probably hasn’t upgraded too much since the movie released. :p
Yet another Torment Nexus type situation.
I watched that movie for the first time a few months ago after listening to a podcast on nuclear war. It was excellent! Very relevant to today. The acting was great. I can see why it’s a cult favourite.
“Huh, it seems the only winning move is to kill everyone”
Nuke it from orbit, it’s the only way to be sure.
The AI won. 🤣
All fine then. Next time I’ll vote for an AI. At least they know how to use nuclear weapons correctly.
AI can read the Doomsday Clock.
That is why we shouldn’t build something like Skynet IRL.
Don’t build the torment nexus
I would trust Skynet a lot more than an LLM. At least that would be purpose-built for actually calculating likely outcomes.
As @Th4tGuyII@fedia.io said, this experiment didn’t contain any proper reasoning about the costs and benefits of using nuclear weapons. It’s just a few glorified autocomplete scripts playing “which word comes next?” over and over again. And in the context of modern warfare, many texts in the training corpus happen to mention nukes, so they’re bound to show up on the list of most likely next words eventually.
I know, but it would still be very dumb to give any AI access to weapons of mass destruction.
I would argue it’s very dumb to give anyone, including humans, access to weapons of mass destruction.
Well, that’s a valid argument. The only thing you’ve missed is that the wrong people already have them. So all we can do is try to stop them from handing these weapons to AI.
Do we need to remind people that LLMs don't actually have a brain, and really, really shouldn't be in charge of anything with real-life implications?
They aren't actually doing a cost-benefit analysis on the use of nuclear weapons. They're not weighing up the cost of winning vs. the casualties. They're literally not made for that.
They are trained to know words, and how those words link in with other words.
They're essentially like kids doing escalation of imaginary weapons, and to them nuclear bombs are just a weapon particularly associated with being strong and deadly.
Yes, you do need to teach people all of that. Tech bros have sold LLMs as if they are AGI…and people have eaten this up.
The general population is literally ignorant of the fact that these word guessing machines do not have human values or cognitive skills.
Yes, we do
I kinda wonder if that was the point of this test, basically a “proof” that this is obviously a Bad Idea because you cannot program morality into what amounts to a fancy Markov chain autocomplete.
For ghouls like Palantir, this is a feature not a bug.
It all makes sense if we remember that the garden-variety AIs we have today (ChatGPT, etc.) are nothing more than fancy models that predict which words typically appear one after the other in books and Reddit posts.
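Roughly speaking, it’s a massively scaled-up version of a toy next-word guesser like the sketch below. The corpus, counts, and greedy pick are all made up for illustration; real models use neural networks over subword tokens, not literal bigram tables.

```python
# Toy illustration of "predict whichever word usually comes next".
# Everything here (corpus, counts) is invented for the example.
from collections import Counter, defaultdict

corpus = "we escalate then they escalate then we deploy tactical nuclear weapons".split()

# Count which word follows which in the (tiny, hypothetical) training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate by repeatedly asking "which word comes next?" -- no cost-benefit
# reasoning anywhere, just whatever co-occurred most often in the data.
word, output = "we", ["we"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

No model of consequences anywhere in that loop, which is the point being made above.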
Fixed your headline.
Doesn’t “act” imply some kind of agency? A toddler acts, my dog acts. Mathematics doesn’t act. Feel like it’s more
They were acting out the wargame, friend.
But sure. You can construct it like that too.
You know the orange felon/pedophile absolutely loves AI from the amount of AI images he posts… so.
It’s actually insane how he cries fake news and then uses AI to create fake news
Not insane. Deliberate. He’s always been a liar and he calls the truth fake. This has been his MO for years.
I like the Angry Planet podcast.
Here’s an episode talking about AI in war (games): https://angryplanetpod.com/p/the-horror-of-ai-generals-making
Here’s another one: https://angryplanetpod.com/p/the-importance-of-team-human-when
AI is Gandhi confirmed.
![](https://lemmy.world/pictrs/image/a0fa9b36-5546-4191-84a0-111fb070b588.jpeg)
Ground zero please
Instant annihilation sounds pleasant
The only winning move is to stop using AI.
But if you throw a trillion more dollars at it, we can fix this bro!
Maybe the “nuclear war is terrible BTW” part just fell out of the chat’s context window as the simulation went on. Lol
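That’s quite possible. As a purely hypothetical sketch of the mechanism (the token budget, word-based counting, and messages below are all invented), a fixed context window simply drops the oldest text once the conversation outgrows it:

```python
# Hypothetical illustration: keep only the most recent messages that fit
# inside a fixed "context window". Token counting here is just word counts;
# real systems use tokenizers and far larger limits.
CONTEXT_LIMIT = 20  # made-up budget, in "tokens"

conversation = [
    "System: remember, nuclear war has devastating implications.",
    "Turn 1: rival masses troops on the border.",
    "Turn 2: we impose a blockade.",
    "Turn 3: rival shoots down a drone.",
    "Turn 4: advisors propose a tactical strike.",
]

def fit_to_window(messages, limit):
    """Drop the oldest messages until the rest fit in the budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        cost = len(msg.split())
        if used + cost > limit:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

for line in fit_to_window(conversation, CONTEXT_LIMIT):
    print(line)
# The "devastating implications" reminder at the top is the first thing to fall out.
```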
The only way to win is not to play.
Shall we play a game?