
Casey Muratori Doesn't Care About AI (Here's Why)

A Life Engineered — Casey Muratori · April 13, 2026

Most important takeaway

Casey Muratori’s stance on AI isn’t a prediction or a value judgment; it’s a philosophical position: if the activity of programming is what you love (the journey), then an AI that can do it for you is irrelevant; if the end product is what you’re in it for (the destination), then AI is potentially great. Identify which kind of engineer you are, because that determines whether AI is a tool, a threat, or simply uninteresting to you.

Summary

Actionable insights and career advice:

  • Know which kind of engineer you are. Casey draws a sharp line between people who care about the craft of programming (he calls himself a “programmer/coder/educator”) and people who care about shipping the product (“software developers”). The latter group can rationally embrace AI and may become the most valuable hires going forward because they have deep skills but no emotional attachment to writing code by hand. The former group should expect their work to become economically marginalized and plan accordingly.
  • Don’t try to predict where AI is going. Even experts like Casey’s co-host Demetri (a long-time AI researcher) refuse to make firm predictions. Decide whether you want to use it based on whether it serves what you actually want to do, not based on a bet about future capability.
  • Stay slightly behind the curve, not in front of it. Casey and the host agree the danger zone is the X/Twitter crowd “way out over their skis.” Forward-deployed enthusiasts experimenting are useful, but executives who fire teams to switch wholesale to AI right now are reckless. Wait for results before betting your org.
  • Babysitting AI may be the new senior-engineer job description. The host describes how at Amazon, post-outage, all AI-generated code now requires senior review. If that’s not the job you signed up for, plan an exit. If you don’t mind reviewing code more than writing it, you’re well-positioned.
  • The mid-level engineer is being squeezed. Juniors are being replaced by AI for entry-level work, while larger classes of “AI-native” interns are being hired instead (Cloudflare’s 111 interns is cited). Mid-levels who don’t aggressively adopt AI tools risk being left behind, because management dashboards now track token spend per engineer.
  • Don’t game the token-burn metric naively. Whatever you measure becomes what gets optimized. Burning tokens to look productive is the current incentive but will flip in a few years to “reduce token cost.” Build skill, not metric performance.
  • Demand a productivity story before you spend. If a developer’s token bill is 10–50% of their salary cost (Jensen Huang’s quote: a $500K engineer not spending $250K on tokens makes him “go apeshit”), there must be a measurable productivity gain to justify it. The honest AI researchers only claim agentic workflows started actually working in late 2025; give it through ~March 2027 before judging whether the promised gains are real.
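The break-even logic in the last bullet is simple arithmetic, and a back-of-envelope sketch makes it concrete. The $500K salary figure comes from the Jensen Huang quote above; the 10% and 50% spend shares are the range cited in the episode, used here purely for illustration.

```python
# Back-of-envelope check: if token spend is some fraction of an engineer's
# salary cost, that fraction is the minimum productivity gain needed just
# to break even on the spend. Figures are the ones cited in the episode.
def required_productivity_gain(salary: float, token_spend: float) -> float:
    """Fraction of extra output needed to break even on token cost."""
    return token_spend / salary

salary = 500_000.0
for share in (0.10, 0.50):                # low and high ends of the cited range
    tokens = salary * share
    gain = required_productivity_gain(salary, tokens)
    print(f"${tokens:,.0f} in tokens -> need >= {gain:.0%} more output")
```

At the Huang-quote extreme ($250K of tokens on a $500K engineer), the engineer must produce at least 50% more to justify the bill, which is exactly why the “measurable productivity story” demand matters.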

Tech patterns and observations:

  • Demetri’s “both things are true” framing on the AI bubble: AI will change everything AND there will be a financial crash, because most of the companies invested in will go to zero while one or two winners will dominate (the Google-vs-AltaVista pattern). The winner may not even be a software company — could be a hardware/fab/datacenter play.
  • Jevons Paradox applied to AI: as AI makes coding cheaper, we’ll do vastly more low-value coding work that previously wasn’t worth doing. Productivity gains may be diffuse and hard to see at the top level even if real.
  • The Anthropic C-compiler experiment is a useful case study. The blog post was honest engineering; the marketing video oversold it. The AI’s stochastic search through compiler code over $20K of compute is less impressive than people think (you could just download GCC for free), but the natural-language-to-intent capability remains genuinely remarkable. The system shipped without a type checker because the test suite didn’t require one — a reminder that AI-built systems will quietly omit anything not explicitly tested.
  • The AWS October 2025 outage came from a Route 53 alias behavior: updating an alias to point at a non-existent name hard-fails rather than retrying. AI agents driving config changes are exactly the wrong fit for high-dimensional configuration spaces; this kind of brittleness will keep biting AI-managed infrastructure.
  • The pipeline problem: if AI replaces juniors, where do future seniors come from? No one has a good answer.
  • Configuration/control-plane work and large-scale code refactors are where AI currently fails worst (creates new bugs faster than it fixes old ones). Localized bug-fixing should be a measurable win by 2027 — if it isn’t, that’s damning.
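The hard-fail-versus-retry brittleness in the Route 53 bullet above can be sketched in miniature. This is a hypothetical illustration, not the Route 53 API: the registry, exception, and function names are all invented, and the point is only the failure shape, where one hard failure in an agent-driven update loop leaves state stale instead of converging.

```python
# Illustrative sketch (invented names, not the Route 53 API): an alias update
# that hard-fails on a dangling target, versus a wrapper that retries until
# the target record exists. An unsupervised agent chaining many such updates
# stalls in a stale state at the first hard failure.
import time

class DanglingTarget(Exception):
    """Raised when an alias points at a name that does not (yet) exist."""

def update_alias(registry: dict, alias: str, target: str) -> None:
    if target not in registry:
        raise DanglingTarget(target)      # hard-fail: no retry, state is stale
    registry[alias] = target

def update_alias_with_retry(registry: dict, alias: str, target: str,
                            attempts: int = 3, delay: float = 0.0) -> bool:
    for _ in range(attempts):
        try:
            update_alias(registry, alias, target)
            return True
        except DanglingTarget:
            time.sleep(delay)             # wait for the target to converge
    return False
```

In a high-dimensional config space the number of such cross-references explodes, which is why the episode argues current AI agents are exactly the wrong fit for control-plane changes.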

Casey’s rapid-fire predictions: AI-generated game assets going up (yes, but neither he nor gamers will like it); Rust replacing C/C++ (no, both stick around — “we’re still running COBOL”); mass return of performance-focused programming (sadly no); Casey himself using an AI coding assistant in five years (no — because the activity itself is what he values).

Chapter Summaries

Why Casey still doesn’t care about AI — Despite hosting an AI podcast with Demetri, Casey hasn’t changed his position from a year ago. He plays “the straight man” on his own show. His disinterest is philosophical, not pessimistic: like someone who wants to play drums rather than own a drum machine, he wants to do the activity of programming, regardless of whether AI can do it better.

Programmer vs. software developer — The critical distinction: a “software developer” is in it for the end product; a “programmer/coder” is in it for the activity. Casey identifies as the latter and sees this as why the AI debate fractures the way it does. People focused on the result can rationally change their minds as AI improves; people focused on the craft cannot.

Concerns about premature deployment — Casey is uneasy watching X/Twitter discourse. He’s open to AI being useful, but worried about people deploying it ahead of where it should be deployed. The forward-deployed enthusiasts are harmless and informative; the executives firing teams to bet the company are dangerous.

What Casey learned from Demetri — The “both things are true” framing of the AI bubble: the technology will succeed AND a financial crash is coming because only one or two companies in each category will survive (Google vs. all the other 90s search engines). Winners may be in hardware/fabs/datacenters rather than software.

Anthropic’s C compiler experiment — Read the blog, skip the marketing video. The natural-language understanding (“write me a C compiler”) is the genuinely impressive part; the stochastic code-search over $20K of compute less so. The result lacked a type checker because the test suite didn’t require one. Good experiment, oversold by marketing.

The Turing Test was quietly demolished — Casey’s most-impressed-by AI moment is conversational fluency on novel scenarios (the purple-dinosaur-in-a-mirror-that-changes-color test). Coding feels mechanical by comparison; NLP was a 30-year hard problem we suddenly blew through.

The AWS Kiro/Route 53 outages — Casey wouldn’t expect AI to delete environments under normal code review, but in an unsupervised agent loop touching control-plane configs, absolutely. High-dimensional config spaces are a terrible fit for current AI. The October 2025 outage came from Route 53 alias updates hard-failing instead of retrying.

The senior-as-babysitter problem and the pipeline problem — Senior engineers’ new job is reviewing AI code, which many didn’t sign up for. AI is at “junior engineer” capability per Demetri (corroborated by DHH). Cloudflare hired 111 AI-native interns this cycle. Mid-level engineers who don’t go all-in on AI risk being squeezed out. With juniors automated, there’s no obvious training pipeline for future seniors.

Token economics and productivity — Jensen Huang’s quote: a $500K engineer should burn $250K in tokens or he goes apeshit. Token cost is now 5–50% of an engineer’s loaded cost. There must be a productivity gain to justify it. Honest AI researchers only claim agentic workflows started working in late 2025; give it through March 2027 before judging.

The 10% productivity scenario and Jevons Paradox — A real 10% gain would be huge under any other framing but feels like a letdown after 10x hype. Jevons Paradox: cheaper AI coding means we do vastly more low-value coding, so gains diffuse and become hard to measure at the top level. The proof would be measurable bug-rate reductions on localized issues by 2027.

Rapid-fire predictions — More AI game assets (yes, but disliked); Rust replacing C/C++ (no); mass return of performance-focused programming (sadly no); Casey using AI in 5 years (no). The deeper point: Casey’s self-described value-add is being in the code himself, thinking through new approaches; he prefers programming languages over English because they’re more precise.