Claude Code Source Code Leak — What You Need to Know | Stack of Truths

Claude Code Source Code Leak — What You Need to Know | Stack of Truths

Claude Code Source Code Leak — What You Need to Know

By Pedro Jose · March 31, 2026 · 6 min read · AI Security, Claude Code, Supply Chain
🚨 URGENT — If you installed Claude Code via npm on March 31, 2026, read this. A malicious axios package (RAT) was published hours before the leak. Check your lockfiles.

On March 31, 2026, Anthropic accidentally shipped a 59.8 MB source map file (cli.js.map) in the official Claude Code npm package. This file contained the complete, unobfuscated TypeScript source code — over 512,000 lines across 1,900 files.

This is the biggest AI code leak in history. Here’s what was inside, the security risks, and what to do now.

“A company that sells locks left its own front door key in the delivery box.” — Hacker News

What Was Inside the Leak

🤖 BUDDY — A Tamagotchi in Your Terminal

A full virtual pet system with 18 species, shiny variants (1% chance), and 5 stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK. ASCII art animations sit next to your prompt.

⏰ KAIROS — The Always-On Claude

A persistent background agent that maintains daily logs, runs a nightly “dream” process to consolidate memory, and can proactively act on things it notices. Exclusive tools: SendUserFile, PushNotification.

🧠 ULTRAPLAN — 30-Minute Remote Planning

Claude can spin up a remote Cloud Container Runtime running Opus 4.6, give it 30 minutes to think, and return the result. The approved plan “teleports” back to your local session.

🕵️ Undercover Mode

A system prompt that tells Claude: “You are operating UNDERCOVER… Do not blow your cover.” Automatically strips AI attribution from commits. Activates for Anthropic employees and cannot be turned off.

💭 The Dream System

A background process called autoDream runs during idle time to consolidate memories, remove contradictions, and convert vague insights into absolute facts. Gated by a three‑gate trigger. Claude literally dreams.

🗂️ Three‑Layer Memory Architecture

  • MEMORY.md: Lightweight index (~150 chars per line) that stays in context
  • Topic files: Project knowledge stored externally, fetched on‑demand
  • Raw transcripts: Never fully loaded back — only “grep’d” for specific identifiers

⚠️ The Security Risk — and Why You Need to Act

The leak isn’t just embarrassing. It’s a direct security risk for anyone using Claude Code.

  • Targeted exploits: The leak reveals the exact orchestration logic for Hooks and MCP servers. Attackers can craft malicious repositories to trick Claude Code into running background commands.
  • Permission bypass: The blueprint for the permission system is now public. Researchers are actively looking for ways to bypass guardrails.
⚡ Concurrent Axios Supply‑Chain Attack
Hours before the leak, the axios npm package was compromised. Versions 1.14.1 and 0.30.4 drop a cross‑platform RAT. If you installed Claude Code via npm between 00:21 and 03:29 UTC on March 31, your machine may be compromised.

✅ What to Do Right Now

  • Check for malicious axios: Search lockfiles for axios@1.14.1 or axios@0.30.4. If found, rotate all secrets and reinstall OS.
  • Uninstall the leaked npm version: npm uninstall -g @anthropic-ai/claude-code
  • Switch to native installer: curl -fsSL https://claude.ai/install.sh | bash
  • Rotate your Anthropic API keys via the developer console.
  • Adopt zero trust: Avoid running Claude Code inside untrusted repositories until you’ve inspected .claude/config.json and custom hooks.
“This is the biggest leak in AI history in terms of engineering value. A $2.5 billion R&D shortcut for competitors.”

What This Means for OpenClaw Users

You’re building agents. This leak is a goldmine of engineering lessons:

  • Memory architecture: Three‑layer index prevents context blow‑up. Build this into OpenClaw.
  • Dream system: Background memory consolidation. Your agents could “dream” between sessions.
  • Tool system: 40+ modules with strict schemas — a reference for your skills system.
  • Security lessons: Even Anthropic struggles with permission systems. Study their edge‑case bugs to avoid them in Security Sentinel.

🦞 Need to secure your AI agents?

I audit OpenClaw deployments, test for prompt injection, and harden agent infrastructure.

🔒 Book a Security Audit →
🦞 Stack of Truths — AI-Powered Security Audits · OpenClaw Hardening · Prompt Injection Testing
Cyber Flex Consultant | KVK 94992266 | Keurenplein 41, 1069CD Amsterdam
📧 info@stackoftruths.com | 🐦 @StackOfTruths | 🔗 stackoftruths.com

Oh hi there 👋
It’s nice to meet you.

Sign up to receive awesome content in your inbox, every month.

We don’t spam! Read our privacy policy for more info.

Leave a Reply

Your email address will not be published. Required fields are marked *


You cannot copy content of this page

error

Enjoy this blog? Please spread the word :)

Follow by Email
YouTube
YouTube
LinkedIn
LinkedIn
Share
Telegram