AI for Cybersecurity Defenders: What They’re Not Telling You

April 30, 2026 — 7 min read — Pedro Jose

“Hackers are already using AI. Are you?”

It’s a compelling message. And it’s not wrong. Attackers are using AI to find vulnerabilities faster, write malicious code, and scale their operations.

But the message leaves out something critical:

AI finds vulnerabilities. It also creates them.

⚠️ THE MISSING PIECE

Hackers are using AI. That’s true.
Defenders should use AI. That’s also true.

But AI-generated code has a 91.5% hallucination-flaw rate. 60% of it leaks API keys. And attackers are probing for exactly these flaws right now.

What the “Use AI for Defense” Advocates Won’t Tell You

There’s a popular pitch going around: don’t wait for Mythos. Use DeepSeek. Run it locally. Save money. Find vulnerabilities faster.

Here’s what that pitch leaves out:

  • 91.5% of AI-generated code has hallucination-related vulnerabilities
  • 60% expose API keys or database credentials
  • 40-62% of AI-generated code contains security flaws (2.74x more than human-written code)
  • 35 CVEs from AI-generated code in March alone (up from 6 in January)
  • Open-source models from unknown providers could contain backdoors
  • Running AI on your own infrastructure doesn’t guarantee security — misconfigurations are common
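The credential-exposure figure above is the easiest failure mode to gate automatically. A minimal sketch in Python, assuming you scan generated code before it ships (the patterns and the `find_secrets` helper are illustrative, not a production scanner such as gitleaks or truffleHog):

```python
import re

# Illustrative patterns for hardcoded credentials in generated code.
# A real scanner covers far more formats; this is a sketch only.
SECRET_PATTERNS = [
    # key = "long-literal" style assignments
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    # AWS access key ID format
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # database URL with an embedded password
    re.compile(r"postgres(ql)?://\w+:[^@\s]+@"),
]

def find_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

Run something like this over every AI-generated diff before review; a non-empty result blocks the merge. It will miss plenty, which is exactly why human review and pentesting stay in the loop.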

🔐 THE TOOL IS NOT THE PROBLEM

AI can find vulnerabilities. That’s great. But if you deploy AI-generated patches without review, you’re introducing new vulnerabilities. If you trust AI-generated code without pentesting, you’re creating your own compliance nightmare.

The Truth: Attackers vs. Defenders

Attackers                            | Defenders
-------------------------------------+----------------------------------
Use AI to find one exploitable hole  | Need to find ALL holes
Don’t care about false positives     | Need to triage and verify
Move fast, break things              | Move carefully, follow compliance
Test their own code? No              | AI-generated code needs testing

The asymmetry is brutal. Attackers need one win. Defenders need zero failures.

What the AI Defense Pitch Gets Right

  • ✅ AI can find vulnerabilities faster than humans
  • ✅ Open-source models like DeepSeek are nearly as good as frontier models
  • ✅ You don’t need Mythos-level access to start
  • ✅ Running models locally or on your own infrastructure addresses privacy concerns
  • ✅ Attackers are already using AI — defenders must catch up

What the Pitch Leaves Out

  • ❌ AI-generated patches can introduce new vulnerabilities
  • ❌ Open-source models from unknown providers could be backdoored
  • ❌ Bypassing terms of service (using Claude with third-party models) is legally gray
  • ❌ AI finds vulns — but doesn’t fix culture, process, or governance
  • ❌ No mention of pentesting or validation requirements
  • ❌ No warning about supply chain risks

┌─────────────────────────────────────────────────────────┐
│ THE DEFENDER’S AI STACK                                 │
├─────────────────────────────────────────────────────────┤
│ Layer 1: AI vulnerability scanning (DeepSeek, etc.)     │
│ Layer 2: Human review of AI findings                    │
│ Layer 3: Independent pentesting of AI-generated patches │
│ Layer 4: Regular audits of AI infrastructure            │
│ Layer 5: Governance and compliance oversight            │
│ Layer 6: Incident response for AI-related breaches      │
└─────────────────────────────────────────────────────────┘

What Your Organization Should Actually Do

  1. Use AI for vulnerability scanning — Yes, start today. DeepSeek, OpenRouter, or local models are fine.
  2. Don’t trust AI-generated code without review — 91.5% hallucination rate is real. Review everything.
  3. Pentest your AI-generated patches — Just because AI wrote it doesn’t mean it’s secure.
  4. Validate your open-source models — Where did they come from? Who trained them? Any backdoors?
  5. Don’t bypass terms of service for enterprise work — Legal risk is real. Know what you’re signing.
  6. Build governance, not just tools — AI finds vulns. Humans manage risk.
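The checklist above can be enforced mechanically: nothing AI-generated deploys until scanning, human review, and pentesting have all signed off. A minimal Python sketch (the `PatchReview` fields are assumptions about what your pipeline records, not a standard API):

```python
from dataclasses import dataclass

@dataclass
class PatchReview:
    """Checks recorded for one AI-generated patch (illustrative fields)."""
    scanned_for_secrets: bool  # automated scanning ran clean (step 1)
    human_reviewed: bool       # a person read the full diff (step 2)
    pentested: bool            # independent testing of the patch (step 3)

def may_deploy(review: PatchReview) -> bool:
    """An AI-generated patch ships only if every gate passed."""
    return all((review.scanned_for_secrets,
                review.human_reviewed,
                review.pentested))
```

The point is not the code. It is that "AI wrote it" never appears in the deploy decision; only the verification steps do.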

🔮 THE BOTTOM LINE

“Hackers are already using AI” is not an argument to rush. It’s an argument to be smart.

AI can find vulnerabilities. AI can also introduce them. The difference between attacker and defender isn’t the tool — it’s the process.

Attackers need one hole. Defenders need zero.

Don’t just use AI. Test what AI produces.

That’s where pentesting comes in.
🦞🔐

Already using AI for defense? Good. Now pentest what it produces.

AI finds vulnerabilities. I find what AI misses — and what AI creates. Regular pentesting isn’t optional. It’s how you stay ahead of attackers who are already using the same tools.

📩 DM @StackOfTruths on X

Free 15-min consultation. No hard sell. Just honest answers about your AI agent security.


© 2026 Stack of Truths — AI Agent Pentesting & Security Audits. All opinions are my own.
English is not my first language, I use AI to help write clearly. The ideas and experience are mine.

🦞 “10 years cybersecurity. 5 years AI. I break AI agents so you don’t get broken.”
