Excess Was the Right Choice
Anthropic made the right bet. OpenAI made the bigger one.
OpenAI has received a ton of criticism—much of it earned—for being unfocused. They’ve tried anything and everything: bought Jony Ive’s hardware startup for a few billion and launched a string of apps. Recently they’ve started cleaning house, shuttering Sora, their AI TikTok clone, along with several other projects. But getting their act together has been a bumpy road. The recent acquisition of TBPN, a tech podcast, left analysts utterly confused. Still, despite the missteps, OpenAI does look more focused, and that’s because Anthropic gave them no choice.
A Brief History
It started with pasting code into ChatGPT and hoping to get something useful back. Sometimes you hit gold; many times it made up some function that seemed like it should exist but ultimately didn’t. Then came GitHub Copilot, tab completion that was hit or miss. Then came the VSCode forks, Cursor and Windsurf, first offering better tab completion, then introducing agents, the end state. Many more have since entered the fray, but the name of the game has been clear ever since: lean on the models as much as possible.
Anyone who has consistently used generative AI coding tools over the last few years will tell you the same story. The end of 2025 was an inflection point. Before then, it was a bad sign if a model worked on a task for more than a few minutes. You could safely assume it had gone down some strange rabbit hole, and you should probably rush to pull the plug before things got too messy. But then, in the last days of November, Anthropic released Claude Opus 4.5. That model, combined with the Claude Code harness, was a godsend. You could hand it large, fairly complex tasks. It would chug along, and a surprising amount of the time, it all just worked. Absolutely mind-blowing, and a gut punch to developers who thought, maybe hoped, that AI coding might soon hit a wall. It seemed the labs were right. There may not be a wall.
The Right Bet
Even before Opus 4.5, developers loved Claude. Anthropic consistently, one release after the next, produced the best coding model, because that was their focus. They made the right bet: coding is the killer app. AI will undoubtedly reshape many industries, but programming is its first clear, dramatic win. It’s what people are willing to pay for and where the results are most visible. And not only is it a winner on the balance sheet, it also has a recursive benefit: a better coding model builds better tools, including the tools used to build the next model.
OpenAI called a code red. At first they were slow off the mark. They released GPT-5.2-Codex the following December. Though some reported decent results, the model was too slow to be useful. But finally, in early February, they pulled it off. GPT-5.3-Codex wasn’t just good, it was better. It handled complex, long-horizon tasks with more patience. Where Claude was too eager to start writing code, GPT-5.3-Codex took its time gathering the right context, and the code it produced was tighter. Though Claude still has an edge in frontend design and the general art of making things look nice, OpenAI has kept the lead. Many have soured on recent Claude releases, 4.6 and 4.7, while others sing the praises of GPT-5.4, the latest from OpenAI, released just a month after 5.3-Codex.
Compute
Anthropic has always been stingy with compute. The $20 Claude plan never got you as much as the equivalent from ChatGPT. But recently it’s been getting worse. They banned using Claude subscriptions with OpenClaw, and they’re testing removing Claude Code from their cheapest plan. OpenAI is doing just the opposite. They welcome OpenClaw, and their cheap plan keeps Codex. Every time Codex hits another million weekly users, and sometimes just because, they reset limits. Anthropic offers no such token jubilees. It can’t. All the outages and rationing make it obvious that they barely have the compute to handle current demand. Meanwhile OpenAI is wielding its compute advantage to keep existing users happy and entice new ones.
The Scientist and the Dealmaker
Early this year, Dario Amodei, CEO of Anthropic, went on the Dwarkesh Patel podcast and explained his reluctance to nab all the compute. Yes, revenue was scaling spectacularly well, but if growth came in even slightly below what he expected, and he’d bought too much compute, nothing in the world could stop him from going bankrupt. Reasonable, sensible answer. Exactly the kind of cautious, calculated response that makes me believe Dario has a pretty good relationship with his CFO.
Sam Altman, according to reporting, has a terrible relationship with his CFO, and it’s hard not to imagine that’s because he hasn’t been as restrained. He’s spent his time making every possible deal to get more compute, with seemingly little regard for the financials.
Only time will tell who comes out on top, or even survives. But right now this is one area where OpenAI’s recklessness was the right choice. Anthropic is running out of capacity to serve the users it already has, while OpenAI is raining down tokens from on high. Developers notice. They switch. And once they do, the recursive flywheel that made Claude the best coding model in the first place starts turning for the other team.
Arcnem AI — Software development and AI consulting, based in Tokyo. Get in touch
