TY Wang · April 1, 2026 · 4 min read · Last updated: April 10, 2026

What I actually learned from the Claude Code source leak

The real lesson was not the drama. It was how harness, CLAUDE.md, parallel agents, and context compression shape the product.

Claude Code · AI Agent · Workflow · Planning

TL;DR

> The most important lesson from the Claude Code source leak is not gossip, but what it reveals about harness, memory, permissions, and workflow design.

> The incident reinforces that the model is only one layer. Product quality often comes from the operating framework around it.

> For heavy Claude Code users, the most useful response is usually not following the drama, but rewriting their own CLAUDE.md.


The Claude Code source leak looked like gossip on the surface, but the more valuable part for me was what it revealed about how the product is actually operated.

Once 510,000 lines of code and 1,902 files were laid out in the open, the thing that stood out was not simply model strength. It was how much effort Anthropic had put into designing the environment around the model.

1. Less than 5% of the code is really about calling the model

After reading the main analyses, my strongest reaction was simple: the model is not the whole product. The harness is.

If Claude Code is a car, the model is more like the engine. The part that actually lets you drive is everything around it: the brakes, steering, instruments, permissions, memory, and tool orchestration.

That is why products built on the same model can feel wildly different. The gap is often not intelligence. It is the quality of the control layer around the intelligence.

2. CLAUDE.md is not just read once at startup

One of the most actionable discoveries was that CLAUDE.md is not a startup-only note.

The leaked behavior made it clear that the system reloads relevant instructions on every new turn. In other words, this is not decorative documentation. It is a working manual the system keeps consulting.

And it exists in layers:

  • ~/.claude/CLAUDE.md for global habits
  • ./CLAUDE.md for project rules
  • .claude/rules/*.md for modular guidance
  • CLAUDE.local.md for private notes that should not go into git

If all you put there is "please answer in Traditional Chinese," you are wasting one of the highest-leverage surfaces in the workflow.
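To make the layering concrete, here is one plausible split across those files. The file paths are the ones listed above; the rules inside each are my own illustrative examples, not anything taken from the leak:

```markdown
<!-- ~/.claude/CLAUDE.md — global habits, applied to every project -->
- Answer in Traditional Chinese; keep code identifiers and commit messages in English.

<!-- ./CLAUDE.md — this project's rules, committed to git -->
- Run the project's test suite after any change under src/; never claim success without it.

<!-- .claude/rules/db.md — modular guidance on one topic -->
- All schema changes go through a migration file; never edit tables directly.

<!-- CLAUDE.local.md — private notes, git-ignored -->
- Staging credentials live in a local .env file; never print or commit them.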

3. The real value of built-in agents is role separation

What impressed me most was not the number of agents. It was how clearly each role was bounded.

The Explore Agent leans read-only. The Plan Agent leans toward structuring work. The Verification Agent leans toward trying to break things. That is much closer to the division of labor inside a mature team than to the fantasy of one brilliant actor doing everything alone.

This is also a reminder that AI workflow failures often come less from weak capability and more from mixed responsibilities. When one system is asked to generate, review, and approve at the same time, optimism bias becomes almost inevitable.
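You can apply the same role separation yourself. Claude Code supports custom subagents defined as markdown files under `.claude/agents/`; the sketch below assumes the frontmatter format with `name`, `description`, and `tools` fields, and the read-only explorer role is my own example, not a definition from the leaked source:

```markdown
---
name: explorer
description: Read-only codebase exploration. Use for "where is X defined" questions.
tools: Read, Grep, Glob
---

You explore the codebase and report findings with file paths and line references.
You never edit files. If asked to change code, summarize what you found and stop.
```

Leaving write tools out of the frontmatter is the point: the boundary is enforced by permissions, not by asking the agent nicely.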

4. Child agents share cache, so parallel work is not pure waste

Another important design choice is how context copies and caching work together.

When the main thread forks multiple child agents, they do not all start from zero. They share a very similar context base. That means parallel work does not necessarily multiply cost the way many people assume.

It also explains why Claude Code often feels more natural in a multi-threaded workflow than in a single long session that tries to carry everything alone.
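A rough sketch of the arithmetic, with made-up token counts and an illustrative 10% cache-read discount (prompt-caching pricing is in that ballpark, but check current rates before relying on the numbers):

```python
def fork_cost(prefix_tokens: int, per_agent_tokens: int, n_agents: int,
              cache_read_discount: float = 0.1) -> float:
    """Relative input cost of forking n child agents that share a cached prefix.

    Illustrative model only: the shared context prefix is paid for in full once,
    then each remaining child reads it from cache at a discounted rate and adds
    its own fresh tokens.
    """
    first = prefix_tokens + per_agent_tokens  # first agent pays full price
    rest = (n_agents - 1) * (prefix_tokens * cache_read_discount + per_agent_tokens)
    return first + rest

# Naive assumption: every child re-sends the full prefix at full price.
naive = 4 * (100_000 + 5_000)           # 420,000 token-equivalents
shared = fork_cost(100_000, 5_000, 4)   # 150,000 token-equivalents
```

Under these assumptions, four parallel agents cost well under half of what a "four full copies" mental model predicts, which is why forking feels cheaper in practice than people expect.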

5. Conversations get compressed, so memory deserves skepticism

Another useful reminder from the leak is that context compression is more aggressive than many users realize.

When the conversation grows long and tokens get expensive, the system preserves certain files and summaries, but not every intermediate detail survives intact. The AI may sound like it remembers more than it actually does.

That is why re-reading files, refreshing context, and compacting intentionally are not signs of paranoia. They are normal maintenance for longer-running work.
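The shape of the problem is easy to see in a toy version. This is not Claude Code's actual compaction algorithm, just a minimal sketch of the general pattern: pinned items and a summary survive, the middle of the conversation does not:

```python
def compact(turns: list[dict], keep_last: int = 2) -> list[dict]:
    """Illustrative context compression: keep pinned items (e.g. CLAUDE.md,
    key files) and the most recent turns; replace everything else with a
    one-line summary placeholder."""
    pinned = [t for t in turns if t.get("pinned")]
    recent = [t for t in turns if not t.get("pinned")][-keep_last:]
    dropped = len(turns) - len(pinned) - len(recent)
    summary = {"role": "system",
               "content": f"[summary of {dropped} earlier turns]"}
    return pinned + [summary] + recent
```

Whatever the real algorithm looks like, the consequence is the same: anything that lived only in those dropped middle turns is now a paraphrase, which is exactly why re-reading files beats trusting the model's recollection.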

6. I rewrote my own CLAUDE.md because of this

The most direct thing I did after reading about the leak was not to keep following the drama. It was to change my own working rules.

I made several instructions much more explicit:

  • after editing files, run verification instead of trusting that writes succeeded
  • read large files in chunks instead of assuming one pass is complete
  • after long conversations, re-read before editing instead of trusting memory
  • if search results look too small, suspect truncation and search again

These sound like small rules, but they directly change output quality.
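For reference, this is roughly how those four rules read in my CLAUDE.md; the exact wording is mine, so treat it as a starting point rather than a template:

```markdown
## Verification habits
- After editing a file, read it back and confirm the change landed; do not trust that the write succeeded.
- Read large files in chunks; never assume a single pass captured the whole file.
- In long conversations, re-read a file immediately before editing it instead of relying on memory.
- If search results look suspiciously small, assume truncation and search again with a narrower query.
```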

Closing note

The biggest lesson I took from the Claude Code leak is that even very strong AI still depends on a good operating framework.

Reliable AI tools are not built only by upgrading the model. They are built by getting permissions, memory, verification, and tooling to work together as one system.

PS

If the long-term result of this incident is that more people start taking their own CLAUDE.md seriously, then it may end up being an expensive but useful public lesson.


