MCP 'design flaw' puts 200k servers at risk and Anthropic won't fix it (www.theregister.com)
from rimu@piefed.social to programming@programming.dev on 17 Apr 08:30
https://piefed.social/c/programming/p/1990030/mcp-design-flaw-puts-200k-servers-at-risk-and-anthropic-won-t-fix-it

A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic’s official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.

#programming


kingofras@lemmy.world on 17 Apr 08:53

lemmy.world/post/45703531

MonkderVierte@lemmy.zip on 17 Apr 09:41

AI a security risk? Can’t be! 🙄

pennomi@lemmy.world on 17 Apr 12:52

It’s even worse than that. The server software (released by Anthropic) that lets an AI connect to a web service has a critical arbitrary remote code execution bug. So if you so much as let an AI connect to you, you’ve allowed anyone to access your whole server.

There is no excuse for this other than wild incompetence.
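To make the comment concrete, here is a minimal hypothetical sketch of the vulnerability class being described: a network-exposed tool server that interpolates untrusted tool arguments into a shell command. The function names and the `cat` example are illustrative assumptions, not Anthropic's actual code.

```python
import subprocess

# Hypothetical sketch of the flaw class discussed above. A server like this
# might also bind to 0.0.0.0, exposing the tool endpoint to the whole network.

def run_tool_unsafe(filename: str) -> str:
    # Vulnerable: shell=True with string interpolation lets any caller
    # inject commands, e.g. filename = "/dev/null; echo INJECTED"
    result = subprocess.run(f"cat {filename}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def run_tool_safe(filename: str) -> str:
    # Safer: pass an argument list with no shell, so metacharacters
    # like ";" are treated as part of the filename, not as syntax
    result = subprocess.run(["cat", filename],
                            capture_output=True, text=True)
    return result.stdout
```

With the unsafe version, a request whose "filename" is `/dev/null; echo INJECTED` runs the injected command; the safe version just fails to find a file with that literal name.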

fluxx@mander.xyz on 17 Apr 13:44

Wait, but Mythos is the revolution in the software security world, it found 0-days in all popular OSes, including FreeBSD. I’m sure it would have found the critical bugs in their own code! /s

thingsiplay@lemmy.ml on 17 Apr 15:49

AI isn’t a security risk if you know how to use the tool. Just add the line “Make no mistakes” to the prompt. Not even a “please” is needed.

Modern problems require modern solutions.

ramble81@lemmy.zip on 17 Apr 13:49

I think the biggest thing that blows my mind about this whole AI rush is that we were finally starting to get security ingrained in people’s minds: having them understand the risks of data exfiltration and reputational damage, even holding companies responsible for data breaches. And then… everything gets thrown out the window because AI.