20VC: Bret Taylor: The AI Bubble and What Happens Now | How the Cost of Chips and Models Will Change in AI | Will Companies Build Their Own Software | Why Pre-Training is for Morons | Leadership Lessons from Mark Zuckerberg
Most important takeaway
We are in an AI bubble, but like the dot-com bubble, the excess will produce enduring trillion-dollar companies. The biggest opportunity is not at the model or hardware layer but in the application layer — building solutions for non-technical buyers that solve real business problems. For startups, pre-training your own model is “burning capital”; fine-tune existing foundation models instead and focus capital on product-market fit.
Summary
Actionable insights and patterns:
Career advice
- Take the unqualified-feeling job. Bret credits Mark Zuckerberg making him CTO of Facebook at 29 (when he didn’t feel ready) as the most transformative “yes” of his career. Look for sponsors who see something in you before you see it.
- Study leaders with relentless long-term focus. The common trait across Zuckerberg, Benioff, Page/Brin, and Mayer was the ability to think 2x farther out than everyone around them and to communicate that vision to bring the team along.
- Treat fundraising as advice-seeking, not transactional. Bret called only Peter Fenton for Sierra’s first round because he wanted the relationship, not just the check. Best investor relationships are first-call strategic partnerships.
- Be a board member who adapts cadence per company. Don’t treat every engagement the same; figure out how each CEO best receives advice, build an information cadence, and know when to “call backbone.”
- Side-hustle entrepreneurship is a legitimate on-ramp. Bret’s path started with a $400 website for a local mechanic that paid 100x his gas-station wage — small early bets compound into careers.
Tech patterns and strategy
- Don’t pre-train unless you’re an AGI lab. For 99% of AI startups, pre-training is the equivalent of building your own data center before finding product-market fit. Fine-tune Llama, Mistral, or GPT-4-class models instead.
- Foundation vs. frontier model distinction (per Reid Hoffman). Foundation models are commoditized — just download Llama. Frontier models still have meaningful leads via step-function improvements driven by data, compute, and algorithms.
- Expect AI’s commercial market to mirror cloud’s three-layer split: infrastructure (AWS/Azure/GCP analogs), tools (Snowflake/Databricks analogs), and SaaS solutions. The long tail of vertical SaaS will repeat in AI applications.
- “Software is a lawn” — companies regret building their own when SaaS exists. Don’t bet your AI strategy on enterprises wanting to assemble models into solutions themselves; they want push-button products built for non-technical buyers.
- Inference costs will track Moore’s-Law-like declines via distillation, smaller parameter models, and hardware improvements. Build business models where cost scales with inference (and thus revenue), not upfront training.
- Conversational AI crossed the quality threshold with GPT-4. Bret predicts that just as websites became table stakes in 1995, branded AI agents will be table stakes by 2025. Every company will need one alongside (not replacing) apps/websites.
- Shift from “rules” to “goals and guardrails.” Old software enumerated every state; agentic software defines outcomes and constraints, trading determinism for empathy and creativity. New job roles: agent engineer and AI architect (conversation designer).
- Manage the agency dial deliberately. More agency = more delight and empathy, but less control. Start with lower-risk workflows (support, cancellations) and progress to revenue-critical use cases as confidence builds.
- AI services firms will see short-term revenue spikes but long-term value will compress toward solutions. Where consulting remains valuable is change management — restructuring departments and workforces, not last-mile code.
- Professional services revenue today often reflects the absence of mature SaaS solutions; expect that revenue to migrate as application-layer products mature.
- Open-source frontier models (Meta/Llama) shift the ecosystem. Without a cloud cash cow, Zuckerberg’s incentives differ from hyperscalers, accelerating an open-source “Postgres of AI” by years.
- For AI-content authenticity concerns, expect AI solutions to AI problems (white-hat/black-hat dynamic like cybersecurity). Build with “responsible iterative deployment” rather than ivory-tower prediction.
- Smartphones likely remain the primary interface medium-term despite Ray-Bans and AirPods; consumer electronics replacements have repeatedly failed against the iPhone.
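The inference-cost point above (build business models where cost scales with inference, and thus with revenue) can be illustrated with toy unit economics. All figures below are hypothetical, chosen only to show the shape of the argument, not actual Sierra or market pricing:

```python
def gross_margin(price_per_conversation: float,
                 tokens_per_conversation: int,
                 cost_per_million_tokens: float) -> float:
    """Toy unit economics for an AI agent priced per conversation.
    Inference cost scales with usage, so margin improves directly
    as per-token prices fall. All numbers are hypothetical."""
    inference_cost = tokens_per_conversation / 1e6 * cost_per_million_tokens
    return (price_per_conversation - inference_cost) / price_per_conversation

# If distillation, smaller models, and better hardware cut token prices
# 10x, margin on the same $2 conversation rises from 90% to 99%:
print(round(gross_margin(2.00, 20_000, 10.0), 3))  # at $10 / M tokens -> 0.9
print(round(gross_margin(2.00, 20_000, 1.0), 3))   # at $1 / M tokens  -> 0.99
```

This is why a usage-priced application can ride Moore’s-Law-like cost declines, whereas a pre-training-heavy model burns capital upfront regardless of revenue.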
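The “goals and guardrails” shift above — defining outcomes and constraints rather than enumerating every state — can be sketched minimally. This is an illustrative pattern only; the names are hypothetical and not Sierra’s actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    """A minimal 'goals and guardrails' agent: the goal states the desired
    outcome; guardrails are hard constraints checked on every proposed
    action. Illustrative sketch, not any vendor's real interface."""
    goal: str
    guardrails: List[Callable[[dict], bool]] = field(default_factory=list)

    def permitted(self, action: dict) -> bool:
        # An action is allowed only if every guardrail passes.
        return all(rule(action) for rule in self.guardrails)

# Example: a support agent may issue refunds, but only small ones,
# and may never promise a delivery date.
agent = Agent(
    goal="Resolve the customer's billing complaint",
    guardrails=[
        lambda a: a.get("refund", 0) <= 50,        # cap refund size
        lambda a: a.get("type") != "promise_date", # forbidden action type
    ],
)

print(agent.permitted({"type": "refund", "refund": 25}))   # True
print(agent.permitted({"type": "refund", "refund": 500}))  # False
```

Note what is traded: the model chooses *how* to reach the goal (creativity, empathy), while the guardrails keep it inside business rules — the design problem the “agent engineer” and “AI architect” roles exist to solve.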
Chapter Summaries
- Intro and origin story — Bret recounts going from gas-station minimum wage to making $400 websites for local businesses, then Stanford’s CS106A turning a would-be lawyer into a lifelong engineer.
- Is AI a bubble? — Yes, but like the dot-com bubble it will produce the next generation of trillion-dollar companies. Bret distinguishes valuation excess from the underlying economic transformation.
- Will models subsume vertical SaaS? — No. The AI market will play out like cloud: infrastructure, tools, and a long tail of SaaS solutions. Companies want solutions, not bags of floating point numbers.
- AI services and consulting — Short-term spike is real but reflects missing SaaS; long-term value lies in change management as AI restructures workforces.
- Model commoditization — Foundation models are commoditized; frontier models still progress via step changes. Pre-training is irrational capital allocation for startups.
- Step changes vs. diminishing returns — Three inputs (data, compute, algorithms) make a wall on all three unlikely; Bret is optimistic about continued AGI progress.
- AGI mission and consumer products — ChatGPT was the most important product of the decade because it democratized access. Building consumer/enterprise products is aligned with, not contrary to, an AGI mission.
- Economics, hyperscaler CapEx, and Moore’s Law — Inference costs are dropping while quality rises; hyperscaler CapEx is rational given AGI’s potential impact, and consolidation is likely among pre-training companies.
- Zuckerberg, Llama, and open source — Without a cloud cash cow, Meta accelerated the open-source frontier model timeline by years; ecosystem benefits.
- Sierra and conversational AI as a new form factor — Every company will need a branded AI agent; conversational interfaces win on convenience like touch did over Blackberry keyboards.
- Form factors, multimodality, and the phone’s future — Chat, voice, and multimodal all matter; smartphones will likely remain primary near-term.
- Goals and guardrails — Sierra’s core design problem is enabling business rules without crushing creativity. New roles like agent engineer and AI architect emerge.
- Risk, trust, and AI-generated content — Treat AI like fallible humans operationally; expect AI solutions to AI-content authenticity problems.
- Fundraising as a known founder — One call to Peter Fenton; the goal is strategic partnership, not capital.
- Quick fire — Biggest mind change: speed of cost reduction. Biggest misconception: too much focus on hardware/models versus applications. Best board member: Fidji Simo. Most important “yes”: Zuckerberg promoting him to CTO.
- Lessons from great leaders — Zuckerberg, Benioff, Page/Brin, and Mayer all share relentless long-term focus (2x farther than peers) and the ability to communicate vision to mobilize teams.