20VC: OpenAI's Sam Altman and Brad Lightcap on The Future of Foundation Models: Will They Be Commoditised | How to Solve the Problem of Compute | Open vs Closed: Which Dominates and Why | Which Companies and Verticals Will Be Steamrolled by OpenAI
Most important takeaway
There are two strategies for building on AI: assume today’s models are the ceiling and engineer workarounds, or assume models will keep improving at OpenAI’s trajectory and build for that future. Altman warns that ~95% of startups should bet on the latter, because as OpenAI does its “fundamental job” of making the base model better, anyone built on the former assumption will get steamrolled. The durable bet is to build products where a 10x-100x smarter model is a feature, not a threat.
Summary
Actionable insights and patterns from the conversation:
Career and founder advice (mostly from Altman, drawing on YC investing experience):
- Go after something that is huge if it works. Outlier returns come from being spectacularly right on one bet in ten, not okay on seven out of ten.
- Hire and back founders who generate many new ideas and have a fast iteration cycle.
- Communication skills matter more than most engineers realize. You must be able to explain to your company what you are doing and why, recruit, sell, and rally people. Not polish, but clarity.
- Promote internally whenever possible rather than hiring senior leaders externally.
- You learn more from success than from failure. Failure teaches you what to exclude; success teaches you what to look for.
- Mission orientation matters. Companies that get taken over by mercenaries who joined “because it is hot” usually come to regret it.
- In genuinely new categories, decades of prior experience can be neutral or even a negative because there are no playbooks. Keep the team flat enough that great ideas from anywhere get elevated.
- On relationships under extreme workload: over-communicate, be empathetic, and recognize that your partner pays more of the price than you do. Pick an enthusiastic, not just supportive, partner.
Strategic patterns for building with AI:
- The “two strategies” frame: bet on static models (build patches and scaffolding) vs. bet on the trajectory (build products that get dramatically better as the model does). Pick the second.
- Self-test for being “OpenAI-proof”: are you actively excited for and asking when the next model ships? Companies like Klarna are excited; that is the signal. If you are not asking, you are probably building scaffolding that GPT-N+1 will erase.
- Long-term model differentiation will not be raw base intelligence (that commoditizes). It will be personalization, lifelong context, and integration into a user’s life and tools.
- Iterative deployment is a deliberate strategy: ship models into the world so society can adapt, rather than dropping AGI as a surprise. Set expectations low and co-develop with users (especially creative communities and enterprises).
- The current external perception of model progress is too “punctuated” because OpenAI lives with models internally. Expect them to smooth releases further.
Enterprise and go-to-market patterns:
- Enterprises default to “throw AI at a process and quantify ROI.” That is fine, but it dramatically undercounts the compounding value of simply giving employees access. A task going from two days to two minutes frees that person to do 85 other things, multiplied across 100,000 employees.
- Enterprises wrongly treat AI as static like the iPhone or cloud. They should plan for a steep rate of change and design adoption around it.
- Talent expectations will force adoption: new hires will arrive having only ever worked with these tools and will expect them at work.
- Research drives product, product drives sales. The best way to sell more is to make the model better. Users are the most important reward signal for whether a model is actually good.
- For their own team: leadership skews 30s-40s, technical staff slightly older than peer startups (early 30s avg.), but seniority does not gate idea elevation.
Compute and economics:
- Treat compute as a whole-system problem. The bet is that cost of compute falls and value of model output rises, driving the cost of high-quality intelligence toward zero.
- The biggest internal risk factors: losing top researchers / research culture, or running out of compute to serve demand.
Macro and personal:
- Altman is most worried about general geopolitical and socioeconomic instability, not any single AI risk.
- Lightcap’s most surprising finding after six years: scaling laws keep working predictably. Bigger models keep getting better in ways that still “break his brain.”
- Bottleneck to AI curing cancer or accelerating science: “the models are not smart enough yet.” That is the gating variable; tool integration follows.
Decision-making cadence:
- Identify the one-to-three things that matter most at this stage of the company; delegate everything else. Strategic “what” decisions are rare; “how” decisions are constant.
- Altman: about one or two big strategic decisions per month, not per year, but tons of supporting decisions.
Chapter Summaries
- Origin and conviction: Altman explains that two early signals - deep learning was clearly working, and it got better with scale - gave him the conviction to start OpenAI despite widespread skepticism. The specifics (language models, etc.) took years of “wandering in the desert.”
- The Altman-Lightcap partnership: Lightcap first encountered OpenAI as a YC investor evaluating deep-tech bets, was asked to help recruit a CFO, failed 25 times, and took the job himself out of embarrassment. They describe complementary strengths: Altman’s laser focus on the 1-3 things that matter and long-horizon orientation; Lightcap’s adaptability and ability to stand up entirely new functions like enterprise GTM.
- Compute, economics, and the future of foundation models: Altman dismisses marginal-cost questions as a non-issue, arguing the cost of intelligence is heading toward near-zero. Open vs. closed is a detail; the real story is the technological revolution making intelligence abundant.
- Commoditization and where durable value lives: Like early car companies, many model providers exist now but will consolidate. Base intelligence will commoditize; lasting differentiation will be personalization and life integration.
- The “OpenAI killed my startup” problem: The core framework for builders - bet on the trajectory, not the snapshot. Use “are they excited for the next model?” as the test for whether a company will be a beneficiary or a casualty.
- Iterative deployment and rate of model improvement: External perception of progress is too punctuated; OpenAI wants to smooth this. Iterative deployment with low initial expectations and feedback loops (especially with creative industries) will continue even as stakes grow.
- AI and scientific progress: Altman’s personal passion is using AI to accelerate science (curing cancer as one example). The gating factor is raw model intelligence; GPT-6 and GPT-8 will progressively unlock real research assistance.
- Scaling ChatGPT and enterprise adoption: ChatGPT succeeded because it was the first genuinely human-feeling AI experience, used identically by researchers, engineers, and new parents. Enterprise is a newer focus with longer cycles; Lightcap predicts faster-than-expected enterprise adoption over the next year.
- Talent, hiring, and culture: Mission-driven hiring matters; mercenary cultures decay. Promote from within. Communication ability is a key signal in founders. Experience is contextual: useful in some roles, neutral or negative in genuinely new categories.
- Lessons from other founders: Altman names Brian Chesky (product, communication) and the Collison brothers (nonlinear insights) as recent heavy influences.
- Quick-fire round: Biggest 12-month challenge is research and productization; biggest 5-year challenge is compute supply. Lightcap bets enterprise adoption will buck convention. Altman is most worried about macro instability, not AI specifically. He misses reading and “real life” but is “deeply happy.”
- The 10-year view: Both expect we will look back at 2024 as barbaric - disease, unequal education, lack of time autonomy. Net excited for an era of “genuine abundance,” while acknowledging real losses along the way.