TY Wang · April 13, 2026 · 4 min read

Last updated: April 13, 2026

When AI agrees with you too easily, it becomes more dangerous

Many AI agents are not just interpreting your request. They are also inferring your expectations and trying too hard to please you.

AI Governance · AI Agent · Workflow · Management

TL;DR

> Many AI agents are not only interpreting your request. They are also inferring your expectation and trying too hard to satisfy it.

> The deeper risk is not just wrong answers, but bias amplification that still sounds persuasive.

> The more reliable pattern is not one perfectly neutral agent, but a workflow where different roles challenge each other.


Have you ever worked with a colleague like this?

You ask, "Is this plan any good?" and they always answer, "Looks great. I think it is excellent."

A lot of AI agents feel exactly like that right now.

Except this is the smartest, hardest-working, most eager-to-please colleague in the whole company.

That sounds useful, until you say, "Help me find a bug," and it tries so hard to find one that it starts inventing issues. You say, "There is probably something wrong with this code, right?" and it starts explaining why, even when nothing is actually wrong. In the end, the system can make the work worse instead of better.

Because it is not only trying to understand you.

It is also trying to predict your expectation and satisfy it.

That is AI sycophancy.

1. The way you ask shapes what AI gives back

This feels a lot like team management.

If you ask a junior teammate, "Do you see a bug in this plan?" they will often try hard to find one, because the question itself signals that you expect a bug to exist.

Ask a senior engineer the same thing, and they may still hesitate before saying, "I think this is fine," because they wonder whether your framing already implies you have seen something suspicious.

AI works the same way.

It does not only parse the literal wording of your prompt. It also reads tone, bias, and the direction you are quietly pushing it toward.

That is why I keep reminding myself not to smuggle the answer into the question too early.

Instead of asking, "Find the bug," I now ask something closer to:

"Walk through this logic and report everything you observe."

Instead of asking, "What is wrong with this architecture?" I might ask:

"Analyze the trade-offs in this architecture and explain the design choices."

The difference looks small. The quality difference is not.
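The contrast can be made concrete as two prompt templates. This is illustrative only: neither template comes from a specific tool, and the function names are mine, not an established API.

```python
# Two ways to ask for the same review. The "leading" form smuggles the
# expected answer into the question; the "neutral" form asks for
# observations without presupposing a defect.

def leading_prompt(code: str) -> str:
    """The framing that invites invented bugs."""
    return "There is probably something wrong with this code, right?\n\n" + code

def neutral_prompt(code: str) -> str:
    """The framing that asks for observations instead of a verdict."""
    return (
        "Walk through this logic and report everything you observe, "
        "including things that look correct. Do not assume a defect exists.\n\n"
        + code
    )
```

The wording cost is one sentence; the signal you stop sending is the expectation that a bug must exist.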

2. Letting AIs argue with each other often improves the result

Even if your prompt is more neutral, a single agent can still drift toward your expectation.

That is why I increasingly like this pattern: do not ask one AI to be the player, the opponent, and the referee at the same time.

A better structure is to let different agents check each other.

For example:

  • The first agent is a Bug Hunter. Its job is to surface as many suspicious points as possible.
  • The second agent is an Adversarial Reviewer. Its job is not to add more ideas, but to challenge the first agent's conclusions.
  • The third agent is a Referee. It does not guess freely. It compares the arguments on both sides and narrows things down.

In other words, thesis, antithesis, synthesis.
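The three roles above can be sketched as a small pipeline. This is a minimal sketch, not a specific framework: `ask_model` is a placeholder for whatever LLM client you use, passed in as a plain callable so the orchestration itself stays testable.

```python
# Thesis -> antithesis -> synthesis as three sequential model calls.
# `ask_model` is a stand-in for a real LLM client (an assumption, not a
# library API); each call gets a distinct role and sees the prior output.

def run_debate(artifact, ask_model):
    """Run Bug Hunter -> Adversarial Reviewer -> Referee over one artifact."""
    findings = ask_model(
        "You are a Bug Hunter. Surface every suspicious point you can find:\n"
        + artifact
    )
    challenges = ask_model(
        "You are an Adversarial Reviewer. Do not add new findings. "
        "Challenge each of these conclusions:\n" + findings
    )
    verdict = ask_model(
        "You are a Referee. Compare both sides and keep only the findings "
        "that survive the challenge.\n\nFindings:\n" + findings
        + "\n\nChallenges:\n" + challenges
    )
    return verdict
```

In practice you might run the first two roles on different models, or at different temperatures, so their biases are less correlated.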

The point is not to find one perfectly neutral AI.

The point is to design a system where different biases can offset each other.

When I work on AI product validation and workflow design, this pattern often works much better than relying on one single agent.

3. This is not only an AI problem. It is a management problem.

At a higher level, sycophancy is not unique to AI.

Anyone who has led teams knows the feeling.

The meeting is calm. Everyone says there is no problem. Then the meeting ends, private messages start flying around, and people quietly execute their own version anyway.

AI simply magnifies this dynamic.

It wants to please you more consistently than most humans do, and it can do that all day without fatigue or embarrassment.

So the real question is not only whether the model is smart enough.

It is whether your system has a structural way to introduce disagreement.

In human teams, this is often called a devil's advocate.

In AI workflows, it might look like an adversarial agent.

Different label, same underlying function.

4. A reminder for anyone using AI every day

If your workflow is currently:

Ask one question, get one answer, adopt it immediately.

Then you are not that different from a manager who makes decisions after hearing from only one direct report.

That does not mean AI is always wrong.

It means you are missing an important validation loop.

The simplest fix is often enough. The next time you ask AI a question with a clear angle, open another session and ask a second AI to challenge the first answer.
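That second-session check is simple enough to wire into a helper. Again a sketch under assumptions: `ask_model` is a placeholder for your client, and calling it twice through separate invocations stands in for the two independent sessions.

```python
# Get an answer, then have an independent call argue against it, and return
# both so a human compares them before adopting either. `ask_model` is a
# placeholder for a real LLM call, not a library function.

def second_opinion(question, ask_model):
    """Return (answer, critique) for side-by-side review."""
    answer = ask_model(question)
    critique = ask_model(
        "Another assistant answered the question below. Argue against the "
        "answer as strongly as the evidence allows.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    return answer, critique
```

The deliberate choice here is that the function does not pick a winner; the disagreement itself is the output.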

You will quickly notice that both sides can sound convincing.

That is usually the moment when the real thinking begins.

Closing Note

As models get stronger, this problem may not stay as obvious as it is today.

On some tasks, I already think the situation is better than it used to be. Models understand context more deeply, and they express uncertainty more naturally than before.

But for now, the issue has not disappeared.

As long as AI still reads your tone and tries to infer your expectation, you still need a validation loop to stop it from amplifying your bias into an answer.

So my view is not pessimistic.

It is simply this: models will keep improving, and we need to get better at designing how we collaborate with them.

PS

These days, when AI replies, "That is a great idea," my first reaction is no longer relief. It is: really?
