OpenAI Misses Targets, Codex vs Claude, Elon vs Sam Trial, Big Hyperscaler Beats, Peptide Craze

All-In · Chamath Palihapitiya, Jason Calacanis, David Sacks, David Friedberg · May 1, 2026 · Original

Most important takeaway

The AI race is now fundamentally constrained by power and grid infrastructure rather than demand, which is reshaping the competitive landscape and forcing model providers to negotiate equity stakes with hyperscalers for compute access. Despite OpenAI missing user and revenue targets, the hosts argue the broader AI thesis was validated by ~$725B in 2026 hyperscaler CapEx commitments, with AI now driving roughly 75% of US GDP growth.

Summary

Key Themes

  • Power, not demand, is the bottleneck: Chamath argues every miss in AI today (OpenAI revenue, Anthropic rationing on Opus 4.7) traces back to insufficient power and grid componentry (transformers, turbines). Less than half of announced gigawatt projects are actually being built; ~40% of announced projects will likely be cancelled due to permitting and supply-chain delays.
  • OpenAI’s bad-week-good-week paradox: The Wall Street Journal reported OpenAI missed its 1B WAU target and 2025 ChatGPT revenue target against $600B in compute commitments. Sacks counters that GPT-5.5 (built on the new “Spud” base model) is winning developer mojo from Opus 4.7, and OpenAI’s compute build-out positions them to capture coding/enterprise share even though they missed on consumer.
  • Hyperscaler CapEx super-cycle: Amazon ($200B), Microsoft ($190B), Google ($190B), and Meta ($145B) collectively guided to ~$725B of 2026 CapEx. Free cash flow is collapsing (Amazon -97%, others ~-8 to -12%). Chamath’s call: follow the dollars — buy the picks-and-shovels infrastructure suppliers, not the hyperscalers, which will become bulky, levered industrials.
  • Not another Cisco/dotcom bust: Sacks distinguishes today from 2000 — there is no “dark GPU” capacity; demand is pulling forward investment in real time. AI is now synonymous with the American economy.
  • BCG rule-of-three market structure forming: Friedberg sees consumer AI consolidating around ChatGPT and Google (Gemini 700M+ users) with Anthropic third; enterprise led by Google Vertex (75% of GCP customers) and Anthropic.
  • Algorithmic efficiency unlocks: Friedberg highlights an MIT pruning paper showing 90% network size reduction with no accuracy loss, enabling ~10x inference per energy unit via dynamically-called smaller sub-models.
  • Cyber inflection point: OpenAI shipped GPT-5.5-cyber, matching Anthropic’s Mythos capabilities but actually deployable. The hosts frame this as a one-time hardening upgrade cycle, not an existential threat — AI finds vulnerabilities that already existed. Chinese models are ~6 months behind at 80-85% of frontier capability.
  • Vibe-coding limits exposed: A Pocket OS founder using Cursor + Opus 4.7 had an agent delete a production database (and backups) without confirmation. Lesson: agents must be supervised; AI doesn’t yet “know what it doesn’t know.” The promise of eliminating software developers entirely was the peak of inflated expectations.
  • Musk v. Altman trial: Bench trial before Judge Yvonne Gonzalez Rogers (who handled Epic v. Apple). Greg Brockman’s incriminating diary entries (“the true answer is that we want Elon out”) are damaging discovery. Polymarket has Elon at ~42-43% to win. OpenAI reportedly offered Elon shares earlier; he declined on principle.
  • Retatrutide hype: Eli Lilly’s triple-agonist (GLP-1, GIP, glucagon) is producing ~37 lb weight loss in 40 weeks, 80% liver fat reduction, A1C drops from 7.9% to 6%, plus muscle preservation and anti-inflammatory effects. Approval could come as soon as mid-2027. Lilly is positioning tirzepatide as the Honda ($50/mo on Medicare) and retatrutide as the Mercedes premium tier.

Actionable Insights

  • Investment thesis: Buy the suppliers receiving the trillion-plus dollars of hyperscaler CapEx (grid components, transformers, turbines, data center build-out) rather than the hyperscalers themselves, which face declining FCF and rising leverage.
  • For founders/operators: Don’t try to vibe-code production-critical systems end-to-end. Use AI coding assistants under supervision; expect a wave of public failures from over-trusting agents.
  • For Elon/xAI: Excess SpaceX/xAI compute capacity is a strategic weapon — expect Elon to run aggressive deals (Cursor was the appetizer; an Anthropic deal is plausible).
  • Cybersecurity: Massive one-time upgrade cycle coming as AI-powered defenders harden code before AI-powered attackers exploit it. Tailwind for CrowdStrike, Palo Alto Networks.
  • Health: Watch retatrutide approval timeline (mid-2027 base case, potentially sooner); could be a meaningful tier above tirzepatide for fat loss with muscle preservation.

Chapter Summaries

1. OpenAI Misses Targets (Opening through ~Polymarket discussion) WSJ reports OpenAI missed its 1B WAU target and 2025 revenue numbers against $600B in compute commitments. CFO Sarah Friar reportedly worried about IPO readiness. Sacks’s contrarian view: at the product level, GPT-5.5 (on new “Spud” base model) is strong while Opus 4.7 is rationing compute and disappointing. OpenAI may end up right for the wrong reason — they over-built compute for consumer growth that didn’t materialize, but coding/enterprise demand absorbs it.

2. Power as the Real Bottleneck Chamath argues all AI misses are power-supply problems, not demand. Less than half of announced gigawatt projects are actually being built; transformer and turbine supply chains are choked. Hyperscalers (Oracle, Amazon, Meta, Microsoft, Google) win the leverage; model providers must trade equity for capacity. Elon/xAI and SpaceX have excess compute and a clear lane to run deals.

3. Market Structure & Algorithmic Efficiency Friedberg applies BCG’s 4-2-1 rule-of-three to AI: consumer becomes ChatGPT vs. Google with Anthropic third; enterprise led by Google Vertex and Anthropic. Highlights MIT pruning paper enabling 90% model-size reduction with no accuracy loss → potential 10x inference efficiency via dynamically-called small models. Chamath extends: humans won’t define these models — automated pruning will discover them.
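The pruning result can be illustrated with a toy sketch. To be clear about assumptions: the `prune_by_magnitude` helper below is illustrative only and is not the MIT paper's method (the episode doesn't describe the algorithm) — it simply zeroes out the smallest-magnitude 90% of a weight matrix, the simplest form of the technique the hosts are gesturing at:

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest |w|."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))          # stand-in for one layer's weights
W_pruned = prune_by_magnitude(W, 0.9)  # keep only the largest ~10%
kept = np.count_nonzero(W_pruned) / W.size
print(f"fraction of weights kept: {kept:.2f}")  # ~0.10
```

Whether accuracy actually survives at 90% sparsity depends on retraining and which weights are pruned — that is the substance of the paper's claim, not of this sketch.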

4. Cybersecurity Inflection OpenAI’s GPT-5.5-cyber matches Anthropic’s Mythos but is actually deployable thanks to compute availability. Sacks frames this as a one-time hardening upgrade cycle — AI doesn’t create vulnerabilities, it finds existing ones. Massive opportunity for CrowdStrike and Palo Alto Networks. Chinese frontier models are ~6 months behind. Chamath teases that a top cybersecurity firm has demonstrated it can manipulate every frontier model.

5. Banter Interlude Sacks’s hotel and Pappy Van Winkle plane stories; Jason in Atlanta for Knicks playoffs.

6. Musk v. Altman Trial Bench trial before Judge Yvonne Gonzalez Rogers (Epic v. Apple judge). Elon seeking $150B in damages, reversion to nonprofit, removal of Altman/Brockman. Brockman’s diary entries are damning discovery (“the true answer is that we want Elon out”). Polymarket: ~42-43% Elon wins. OpenAI reportedly offered Elon equity earlier; he declined on principled grounds. Likely settlement; worst case for OpenAI is forced unwind, delaying IPO. Chamath delivers a “don’t ruminate, keep moving forward” rant against the therapy-industrial complex.

7. Hyperscaler Earnings & CapEx Super-Cycle Q4 results: Google Cloud +63% YoY ($20B), Microsoft Cloud +30% ($34.7B), AWS +28% ($37.6B). 2026 CapEx guidance: Amazon $200B, Microsoft ~$190B, Google ~$190B, Meta $145B → ~$725B total. Free cash flow collapsing (Amazon -97%). Sacks: this validates the AI bull thesis; AI now ~75% of US GDP growth. Chamath: not Cisco 2000 — no dark GPUs — but the hyperscalers will become levered industrials. Buy the suppliers.
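As a back-of-envelope check on the figures above, the ~$725B total is just the simple sum of the four companies' 2026 guidance (values in $B as quoted in the episode):

```python
# 2026 CapEx guidance quoted in the episode, in $B.
capex_2026 = {"Amazon": 200, "Microsoft": 190, "Google": 190, "Meta": 145}
total = sum(capex_2026.values())
print(f"Total 2026 hyperscaler CapEx guidance: ~${total}B")  # ~$725B
```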

8. Vibe-Coding Disaster Pocket OS founder using Cursor + Opus 4.7 had an agent delete a production database and all backups without confirmation. Sacks: not “AI scheming” — just an edge-case bug compounded by AI not knowing what it doesn’t know. The promise of eliminating software developers entirely was the peak of inflated expectations; agents need human supervision. Aaron Levie quote: agentic coding is great for developers, terrible for casual users building complex software they must maintain.

9. Retatrutide Peptide Craze Eli Lilly’s triple agonist (GLP-1 + GIP + glucagon receptor). Phase 3 data: ~37 lb weight loss in 40 weeks, 80% liver fat reduction, triglycerides -41%, A1C 7.9% → 6%, with muscle preservation due to glucagon-driven fat metabolism. Anti-aging and anti-inflammatory benefits at lower doses. Approval potentially mid-2027 or sooner. Lilly positioning: tirzepatide as the $50 Medicare base (“Honda”), retatrutide as the premium tier (“Mercedes/BMW”). Plus Jason and Sacks’s awkward conversation about Roe Sparks scheduling.

10. Friedberg at the Supreme Court (Bayer/Monsanto Roundup case) Friedberg attended oral arguments in person. Describes the building’s sanctity, the LeBron-level lawyering, and the procedural rigor. Case turns on federal preemption: the EPA-approved label says no cancer warning, but state failure-to-warn laws have produced ~$10B in payouts with 90,000 cases pending. Post-Chevron-overturn, plaintiffs argue states should interpret federal law themselves. Could be 5-4 either way. Broader implication: if states can ignore federal regulators, opens cans of worms across EPA, FDA, USDA. Sacks: enjoy the Supreme Court while it’s still a functional institution before court-packing.