AI for Cybersecurity Defenders: What They’re Not Telling You
“Hackers are already using AI. Are you?”
It’s a compelling message. And it’s not wrong. Attackers are using AI to find vulnerabilities faster, write malicious code, and scale their operations.
But the message leaves out something critical:
AI finds vulnerabilities. It also creates them.
Hackers are using AI. That’s true.
Defenders should use AI. That’s also true.
But AI-generated code has a 91.5% hallucination flaw rate. 60% of it leaks API keys. And attackers are testing for these flaws right now.
What the “Use AI for Defense” Advocates Won’t Tell You
There’s a popular pitch going around: don’t wait for Mythos. Use DeepSeek. Run it locally. Save money. Find vulnerabilities faster.
Here’s what that pitch leaves out:
- 91.5% of AI-generated code has hallucination-related vulnerabilities
- 60% expose API keys or database credentials
- 40-62% of AI-generated code contains security flaws (2.74x more than human-written code)
- 35 CVEs from AI-generated code in March alone (up from 6 in January)
- Open-source models from unknown providers could contain backdoors
- Running AI on your own infrastructure doesn’t guarantee security — misconfigurations are common
AI can find vulnerabilities. That’s great. But if you deploy AI-generated patches without review, you’re introducing new vulnerabilities. If you trust AI-generated code without pentesting, you’re creating your own compliance nightmare.
The Truth: Attackers vs. Defenders
| Attackers | Defenders |
|---|---|
| Use AI to find one exploitable hole | Need to find ALL holes |
| Don’t care about false positives | Need to triage and verify |
| Move fast, break things | Move carefully, follow compliance |
| Don’t test their own code | Must test every AI-generated patch |
The asymmetry is brutal. Attackers need one win. Defenders need zero failures.
What the AI Defense Pitch Gets Right
- ✅ AI can find vulnerabilities faster than humans
- ✅ Open-source models like DeepSeek are nearly as good as frontier models
- ✅ You don’t need Mythos-level access to start
- ✅ Running models locally or on your own infrastructure addresses privacy concerns
- ✅ Attackers are already using AI — defenders must catch up
What the Pitch Leaves Out
- ❌ AI-generated patches can introduce new vulnerabilities
- ❌ Open-source models from unknown providers could be backdoored
- ❌ Bypassing terms of service (using Claude with third-party models) is legally gray
- ❌ AI finds vulns — but doesn’t fix culture, process, or governance
- ❌ No mention of pentesting or validation requirements
- ❌ No warning about supply chain risks
What Your Organization Should Actually Do
- Use AI for vulnerability scanning — Yes, start today. DeepSeek, OpenRouter, or local models are fine.
- Don’t trust AI-generated code without review — 91.5% hallucination rate is real. Review everything.
- Pentest your AI-generated patches — Just because AI wrote it doesn’t mean it’s secure.
- Validate your open-source models — Where did they come from? Who trained them? Any backdoors?
- Don’t bypass terms of service for enterprise work — Legal risk is real. Know what you’re signing.
- Build governance, not just tools — AI finds vulns. Humans manage risk.
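One concrete review step implied by the checklist above: before shipping AI-generated code, scan it for hardcoded credentials. A minimal sketch — the regex patterns and filenames here are illustrative only; production scanners such as gitleaks or truffleHog cover hundreds of credential formats:

```python
import re
import sys

# Illustrative patterns for a few common credential formats.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding_label) pairs for suspected hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py generated_patch.py
    text = open(sys.argv[1]).read()
    for lineno, label in scan_source(text):
        print(f"line {lineno}: possible {label}")
```

A check like this is a gate, not a guarantee: it flags the obvious leaks so a human reviewer can focus on the subtler flaws a regex can’t see.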
“Hackers are already using AI” is not an argument to rush. It’s an argument to be smart.
AI can find vulnerabilities. AI can also introduce them. The difference between attacker and defender isn’t the tool — it’s the process.
Attackers need one hole. Defenders need zero.
Don’t just use AI. Test what AI produces.
That’s where pentesting comes in.
Already using AI for defense? Good. Now pentest what it produces.
AI finds vulnerabilities. I find what AI misses — and what AI creates. Regular pentesting isn’t optional. It’s how you stay ahead of attackers who are already using the same tools.
📩 DM @StackOfTruths on X. Free 15-min consultation. No hard sell. Just honest answers about your AI agent security.