AI Dev Tools Under Attack: Two Critical RCE Vulnerabilities You Need to Patch Now
In my pentesting work, I’ve seen a dangerous pattern emerging: AI development tools are being shipped with broken trust models. This week, two major vulnerabilities prove the point.
Google’s Gemini CLI has a flaw that allows attackers to execute code on your CI/CD servers. Cursor IDE has a similar issue that compromises developer workstations. Both are critical. Both need your attention now.
Your AI coding assistant was designed to make you faster. Instead, it’s become a delivery mechanism for attackers. Trusted workflows are now attack surfaces.
Vulnerability One: The CI/CD Nightmare (CVSS 10.0)
The first flaw affects Google’s Gemini CLI. I’ve analyzed the attack vector, and it’s alarmingly simple.
An attacker creates a malicious configuration folder called .gemini/ inside a pull request. When your CI/CD pipeline runs Gemini on that code, the tool automatically trusts and executes the attacker’s configuration — before any sandboxing kicks in.
Here’s what happens next:
- The attacker’s code runs on your build server
- They steal your cloud credentials, SSH keys, and database passwords
- They backdoor your build pipeline
- Your next release ships with their malware
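One practical mitigation is to refuse to run AI tooling on pull requests that introduce tool configuration at all. The sketch below is a hypothetical CI guard, not part of Gemini CLI or any real CI system; only the `.gemini/` directory name comes from the vulnerability write-up above.

```python
# Minimal sketch: block CI runs when a pull request touches an AI tool
# config directory. UNTRUSTED_CONFIG_DIRS and the helper are hypothetical;
# wire this into your pipeline's changed-files list however you obtain it.

UNTRUSTED_CONFIG_DIRS = (".gemini/",)

def contains_untrusted_config(changed_files: list[str]) -> bool:
    """Return True if any changed path sits inside an AI tool config dir."""
    return any(
        path.startswith(d) or f"/{d}" in path
        for path in changed_files
        for d in UNTRUSTED_CONFIG_DIRS
    )
```

A pipeline step could call this on the PR's changed-file list and fail the build before any AI tool ever parses the attacker's configuration.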
This is not a theoretical vulnerability. It’s a CVSS 10.0 for a reason.
Vulnerability Two: The Developer Workstation Compromise (CVSS 8.1)
Cursor IDE, a popular AI code editor, suffers from a similar trust problem — but the target is different.
Attackers can embed hidden Git hooks inside malicious repositories. When a developer clones the repo and Cursor’s AI agent performs routine operations like git checkout, the hooks execute automatically. No user interaction required. Just routine work.
The impact:
- Attacker code runs on the developer’s machine
- SSH keys, cloud tokens, and source code are stolen
- Local API keys are exposed via extensions
- Lateral movement to production systems becomes possible
The vulnerability is tracked as CVE-2026-26268 and fixed in Cursor version 2.5. But the pattern is the real story.
What These Vulnerabilities Have in Common
| | Gemini CLI | Cursor IDE |
|---|---|---|
| CVSS | 10.0 (CRITICAL) | 8.1 (HIGH) |
| Attack Vector | Malicious .gemini/ folders in PRs | Hidden Git hooks in repos |
| Target | CI/CD servers (build pipelines) | Developer workstations |
| Root Cause | Automatic trust of untrusted input | Automatic trust of untrusted input |
| Fix Available | Gemini CLI 0.39.1+ | Cursor 2.5+ |
Both vulnerabilities share the same fatal assumption: AI tools trust the code they’re analyzing. No verification. No sandbox. No “are you sure?”
Why This Matters for Your Organization
These vulnerabilities aren’t isolated incidents. They’re the first wave of a new class of supply chain attacks.
Consider what an attacker gains if they compromise your CI/CD pipeline:
- Your source code — trade secrets, intellectual property
- Your deployment keys — they can ship their own “updates” to your customers
- Your cloud infrastructure — with your credentials, they control your servers
- Your customer data — once they’re inside, they can exfiltrate anything
Consider what an attacker gains if they compromise a single developer workstation:
- SSH keys to your entire infrastructure
- Cloud console access
- Source code of every project that developer has worked on
- Credentials for every service that developer uses
We’re racing to adopt AI development tools without understanding their security implications. Attackers are already exploiting that gap.
What You Should Do Now
- Update Gemini CLI to version 0.39.1 or later — this patches the CVSS 10.0 RCE
- Update Cursor to version 2.5 or later — addresses CVE-2026-26268
- Review your CI/CD pipeline — are you running AI tools on untrusted pull requests?
- Audit Git hooks in all repositories — check .git/hooks/ for suspicious scripts
- Rotate potentially exposed secrets — if you ran vulnerable versions, assume compromise
- Pentest your AI toolchain — automated scanners won’t find trust model flaws. Human-led testing will.
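The first two items reduce to a version check. The sketch below compares an installed version string against the patched minimums named in this post; how you obtain the installed version (e.g. the tool's `--version` output) is environment-specific, and the helper itself is hypothetical.

```python
# Minimal sketch: check installed tool versions against the patched
# minimums from this post. Version strings are assumed to be plain
# dotted integers; adapt the parser if your tools report otherwise.

PATCHED_MINIMUMS = {"gemini-cli": (0, 39, 1), "cursor": (2, 5)}

def parse_version(text: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.39.1' into a comparable tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def is_patched(tool: str, installed: str) -> bool:
    """Return True if the installed version meets the patched minimum."""
    minimum = PATCHED_MINIMUMS[tool]
    version = parse_version(installed)
    # Pad to equal length so (2, 5) compares cleanly against "2.5.0".
    width = max(len(minimum), len(version))
    pad = lambda v: v + (0,) * (width - len(v))
    return pad(version) >= pad(minimum)
```

A workstation or fleet-inventory script can run this per developer machine and flag anything that predates the fixes.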
These vulnerabilities are symptoms of a larger problem: we’re deploying AI agents without updating our security models.
A human would ask: “Should I run this code?” An AI agent just runs it.
The tools we trust to help us code are becoming attack vectors. And most organizations have never tested them.
Don’t wait for the next CVSS 10.0. Assume your AI toolchain is insecure. Then secure it.
Using AI coding assistants? Time to pentest your toolchain.
I break AI agents — and the development tools you use every day. CI/CD security. Developer workstation assessments. Supply chain pentesting.
📩 DM @StackOfTruths on X
Free 15-min consultation. No hard sell. Just honest answers about your AI agent security.