SpaceX And Anthropic Partnership | The Brainstorm EP 131

The Brainstorm · ARK Invest — Frank Downing, Sam Korus, Brett Winton · May 13, 2026 · Original

Most important takeaway

SpaceX and Anthropic have struck a deal in which SpaceX/xAI is leasing the 300 MW, 220,000-GPU Colossus One data center to Anthropic for inference, immediately relieving Anthropic’s acute compute shortage and reinforcing SpaceX’s emerging story as a vertically integrated AI infrastructure provider ahead of its anticipated IPO. The deeper investment thesis: compute is so scarce relative to monetizable AI demand that model providers will pay premium prices, and SpaceX’s space-based compute (enabled by Starship economics) becomes economically compelling once launch costs fall to roughly $300/kg, plausibly in the late 2020s to early 2030s.

Summary

Actionable insights and investment angles:

  • SpaceX (private, pre-IPO): The deal de-risks SpaceX’s IPO narrative by showing it can monetize compute infrastructure as a service today, even before its space-based compute ambitions scale. SpaceX’s vertical integration (launch, satellites, chips, models via xAI/Grok) is the central bull case. Watch for IPO timing and any disclosed compute-as-a-service revenue.
  • Anthropic (private): Demand is so far outstripping supply that Anthropic was forced to throttle usage and seek emergency compute from a former competitor. A bullish signal on Anthropic’s revenue trajectory and pricing power, though it also exposes a structural compute bottleneck.
  • Nvidia (NVDA): Each new GPU generation delivers 2–3x training gains and 10–30x inference performance per watt, and Nvidia captures some of that productivity uplift via pricing. Continued strong demand from hyperscalers, model labs, and now space-based deployments supports the bull case.
  • CoreWeave and neocloud infrastructure plays: A pure infrastructure-as-a-service operator can pay back a gigawatt-scale facility in roughly four years at current rates of ~$15B/GW/year in revenue; vertically integrated operators in ~two years. Useful framework for evaluating data center REITs and AI infrastructure names.
  • OpenAI (private): Sarah Friar’s disclosure implies ~$30B revenue per gigawatt of inference compute today after the recent frontier-model price doubling — a benchmark for sizing the overall AI software market.
  • Crusoe (private): Cited as a reference operator for gigawatt-scale data center economics; one to watch if it comes to market.
  • Small modular reactor (SMR) thesis: The hosts remain bullish on nuclear but expect SMRs and smaller chunky projects (tens to low hundreds of MW) to dominate in the U.S. over gigawatt-scale plants, because permitting and execution risk on large projects is too high. Implication: favor SMR-exposed names over large-reactor pure plays in the U.S. market.
  • Starship economics are the linchpin for space compute: A gigawatt of orbital compute needs to come in under ~$20B of launch cost to beat terrestrial. At $1,000/kg (single-use Starship) that’s $25B (uneconomic); at $300/kg (5x reuse) it’s ~$7.5B (clearly economic). Track Starship reuse cadence as the key leading indicator for the space-compute thesis.
  • Timing: Hosts expect meaningful space-based compute deployment to begin in 2028–2029, scaling to tens of gigawatts of launch capacity per year by the early 2030s. At 10 GW/year, infrastructure-as-a-service revenue alone could exceed $150B annually for SpaceX.
  • Pricing power inflection: AI capability has crossed a threshold where enterprise knowledge workers actively demand the tools, shifting budgets from “interesting software” line items toward wage-comparable spend. Margins are inflecting up at both the model and infrastructure layers — a tailwind for the entire AI stack.
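The payback figures cited above follow from simple arithmetic on the hosts’ round numbers. A minimal sketch (all dollar figures are the episode’s estimates, not company filings):

```python
# Back-of-envelope payback math for gigawatt-scale data-center economics,
# using the hosts' round numbers from the episode.

CAPEX_PER_GW = 60e9         # ~$60B: ~$19B facility/power/cooling, ~$30B GPUs, ~$10B other IT
IAAS_REV_PER_GW_YR = 15e9   # ~$15B/GW/year, pure infrastructure-as-a-service operator
MODEL_REV_PER_GW_YR = 30e9  # ~$30B/GW/year, vertically integrated model provider

def payback_years(capex: float, annual_revenue: float) -> float:
    """Simple payback period, ignoring opex, financing, and depreciation."""
    return capex / annual_revenue

print(payback_years(CAPEX_PER_GW, IAAS_REV_PER_GW_YR))   # 4.0 -> ~4 years for a neocloud
print(payback_years(CAPEX_PER_GW, MODEL_REV_PER_GW_YR))  # 2.0 -> ~2 years vertically integrated

# At 10 GW/year of deployed capacity, infrastructure revenue alone:
print(10 * IAAS_REV_PER_GW_YR / 1e9)  # 150.0 -> ~$150B/year
```

The simple-payback framing intentionally omits opex and cost of capital; it is the same rough framework the hosts apply to CoreWeave-style neoclouds versus integrated operators.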

Why these matter: Compute is the binding constraint on AI monetization. Owning, building, or supplying compute (chips, power, cooling, launch, satellites, SMRs) is where excess returns are accruing. Velocity-to-market currently outweighs marginal cost, meaning premium-priced compute providers (including future space-based options) can command strong unit economics for years.
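The Starship launch-cost threshold can be checked with the same quick arithmetic. The implied payload mass per orbital gigawatt (~25,000 tonnes) is backed out of the episode’s $25B single-use figure rather than stated directly:

```python
# Starship launch-cost threshold for orbital compute, using the hosts'
# figures: a gigawatt must launch for under ~$20B to beat terrestrial.

THRESHOLD_PER_GW = 20e9   # ~$20B launch budget per orbital gigawatt
COST_SINGLE_USE = 1000.0  # $/kg, single-use Starship
COST_5X_REUSE = 300.0     # $/kg at ~5x reuse

# Implied payload mass per gigawatt, derived from the $25B single-use figure:
mass_kg = 25e9 / COST_SINGLE_USE  # 25,000,000 kg (~25,000 tonnes)

cost_single = mass_kg * COST_SINGLE_USE  # $25B  -> above threshold, uneconomic
cost_reuse = mass_kg * COST_5X_REUSE     # $7.5B -> well under threshold

print(cost_single / 1e9, cost_single < THRESHOLD_PER_GW)  # 25.0 False
print(cost_reuse / 1e9, cost_reuse < THRESHOLD_PER_GW)    # 7.5 True
```

This is why the hosts treat Starship reuse cadence, not GPU cost, as the leading indicator: the same payload at $300/kg clears the economic threshold by nearly 3x.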

Chapter Summaries

  • The Deal: Anthropic, facing severe compute shortages and forced usage caps, signed a deal with SpaceX/xAI to lease the Colossus One data center (300 MW, 220,000 GPUs) for inference. Anthropic is lifting capacity restrictions and has expressed interest in multi-gigawatt space-based compute later. Notably, Anthropic had previously cut off xAI from its models via Cursor — now the two are on speaking terms again.
  • Why It Makes Sense For Both Sides: Colossus One has a mixed chip set better suited to inference than training, so xAI is monetizing an underused asset while focusing its own training on Colossus Two. The deal cleans up SpaceX’s financials ahead of its IPO and demonstrates infrastructure-as-a-service capability. Trade-off: SpaceX gives up optionality on 220,000 GPUs but, given vertical integration, can rebuild faster than competitors.
  • Gigawatt Economics: A 1 GW data center costs ~$60B (≈$19B facility/power/cooling, ~$30B GPUs, ~$10B other IT). Infrastructure-as-a-service generates ~$15B/GW/year; model providers can generate ~$30B/GW/year. Payback: ~2 years vertically integrated, ~4 years for a pure neocloud. OpenAI is generating roughly $30B per gigawatt of inference after recent price hikes.
  • Performance Per Watt And Pricing Power: Each Nvidia generation delivers large performance-per-watt gains (2–3x training, 10–30x inference). Productivity gains are split across the stack, but model providers pass most through as consumer surplus while training larger models. AI has crossed a utility threshold; enterprises now treat AI spend like wage budget, driving major pricing power and margin expansion at both model and infrastructure layers.
  • Space Compute Economics: GPU and AI-infrastructure costs are roughly the same in space as on Earth; the question is whether you can launch a gigawatt for under ~$20B. At Starship reuse of 5x and $300/kg, it’s ~$7.5B — clearly economic. Satellite manufacturing benefits from production-line efficiencies versus bespoke terrestrial data centers.
  • Earth Scarcity vs. Space Scalability: Terrestrial compute scarcity (power, permitting, siting) means model providers will pay 50–100% premiums for available compute. Space provides strategic deployment certainty even at a cost premium, especially as performance per watt and revenue per watt keep climbing.
  • Timing And Trillion-Dollar Question: Repeatable space-compute deployment likely begins in 2028–2029, scaling to tens of gigawatts per year of launch capacity by the early 2030s. At 10 GW/year, infrastructure revenue alone could top $150B for SpaceX, with software monetization potentially doubling that.
  • Nuclear And SMR Outlook: Hosts remain bullish on nuclear but expect small modular reactors and chunky sub-gigawatt projects to win in the U.S. due to easier siting and modular fabrication. Large-scale gigawatt nuclear builds remain dominant only in China.
  • Wrap-Up And Disclaimers: Hosts conclude the deal benefits both parties and tee up future episodes; standard ARK Invest investment-advice disclaimers follow.