Is spending more than NT$10,000 a month on AI coding tools worth it?
After using ChatGPT Pro, Claude Code Max, and Google AI Pro side by side for over a year, I no longer think of them as interchangeable.
TL;DR
Key takeaways first
> ChatGPT Pro, Claude Code Max, and Google AI Pro are not interchangeable once you use them in real daily work.
> Their differences show up in planning depth, context handling, and how quickly they help you move through code tasks.
> This comparison is designed to help readers choose a fit for their workflow, not crown a universal winner.
I have been heavily using all three major tools, so here is my honest take.
Since last year, I have been subscribing to ChatGPT Pro, Claude Code Max, and Google AI Pro at the same time. That means a lot of daily use, a lot of comparison, and more than a few mistakes along the way.
These tools are all still improving quickly, and the ranking can change every few months. So instead of pretending this is a final answer, I would rather share what the current experience feels like.
1. Claude Code is still my main tool
Short version: Claude Code is still where most of my usage goes.
The model quality has stayed in the top tier, and combined with the broader Claude Cowork ecosystem (especially plugins, memory, and workflow automation), the productivity gains are broad. With Opus 4.6 pushing the context window to 1M, working on larger codebases and longer documents has also become much easier.
There are downsides too. There were a few service interruptions in the past, and stability can occasionally make you a little nervous. But overall, it is still the tool I rely on the most.
2. Codex App is the one that surprised me recently
Codex App from OpenAI has probably been the most pleasant surprise lately.
On Mac it feels very stable, and the combination of GPT-5.4 + Extra High + Fast Mode has been smooth in actual use. I would especially recommend it to people who are new to vibe coding because the learning curve feels low and the interaction is fast.
Also, GPT-5.4 has supported a 1M context window for a while. It just is not enabled by default. You need to add this to config.toml:
```toml
model_context_window=1000000
model_auto_compact_token_limit=9000000
```
Once that is turned on, the difference is noticeable.
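For context, here is a minimal config.toml sketch showing where those two keys might sit. Only the two context-related keys come from my notes above; the model and reasoning-effort lines are assumptions meant to mirror the GPT-5.4 + Extra High setup I described, so check the exact key names and values against your own install before copying them:

```toml
# Minimal sketch of a Codex config.toml.
# The first two keys below are assumed names/values (verify locally);
# the last two are the context-window settings discussed above.
model = "gpt-5.4"
model_reasoning_effort = "xhigh"

# Enable the 1M context window (not on by default).
model_context_window = 1000000
model_auto_compact_token_limit = 9000000
```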
If I had to point out a weakness, it would be front-end design work. I do not think it is bad there, but the UI output can feel less impressive than the rest of the experience.
3. Antigravity is the one I feel a bit bad for
Then there is Gemini 3.1 Pro + Antigravity.
When Antigravity first launched, I actually used it a lot. Its plan mode felt strong, and it handled front-end tasks pretty well too.
The problem was scale. It launched with a generous free tier, usage exploded, and the experience gradually became more unstable. Errors became more common, updates slowed down, and usage limits got tighter. So ironically, it went from being one of my most-used tools to one of my least-used.
I still like Gemini 3.1 Pro quite a bit in plain web chat. But Antigravity as a product feels like it still needs time.
4. If I had to recommend something right now
Here is how I would currently break it down:
- Heavy developers who want a full AI workflow: Claude Code + Cowork
- People new to vibe coding who want smoothness and stability: Codex App + GPT-5.4
- Watch list: Antigravity still has potential, but I would wait for it to get more stable
That ranking could absolutely flip again. All three are moving fast.
Closing note
This is the second post in my small series on AI tools in practice. The first one was about Claude Cowork. This one is more of a side-by-side comparison.
If I get the time, I want to go deeper into actual AI coding workflows and the mistakes that only show up after long-term use.
PS
Even though I call this an AI tools series, it is really a series about how I try to make work feel lighter. That is the standard I actually care about.


