Anthropic, the Pentagon, and the Future of Autonomous Weapons

Odd Lots · Joe Weisenthal, Tracy Alloway, with Paul Scharre · March 28, 2026

Most important takeaway

The dispute between Anthropic and the Pentagon is not really about fully autonomous weapons today — no one is seriously proposing that current AI make life-or-death targeting decisions on its own. The real fight is about who sets the rules for how AI can be used by the military going forward, and the Pentagon’s push for unrestricted “any lawful use” contracts threatens to erode the safeguards AI companies have put in place. This creates a race-to-the-bottom dynamic where the least safety-conscious AI lab wins the contract, as demonstrated by OpenAI stepping in after Anthropic’s refusal.

Chapter Summaries

Introduction and Context (Opening)

Joe and Tracy set the scene: just before the US war with Iran began, the biggest story was the public falling-out between Anthropic and the Department of Defense over how AI could be used in military operations, particularly around autonomous weapons and surveillance.

Defining Autonomous Weapons

Paul Scharre explains there is no universally agreed-upon definition. Conceptually, an autonomous weapon chooses its own targets. He draws an analogy to self-driving cars — current military AI has increasing levels of automated features, but humans still make final targeting decisions. The spectrum ranges from missile defense intercepts (broadly accepted autonomy) to fully independent target selection (not currently in use).

How AI Is Being Used in the Iran War

The Pentagon uses AI in two main ways: narrow image classification systems (Project Maven, nearly a decade old) that identify objects in drone and satellite feeds, and newer large language model tools (including Anthropic’s Claude) integrated through Palantir’s Maven Smart System. These LLMs help intelligence analysts sift through massive datasets, find pattern intersections, and build strike packages by matching targets to available aircraft and munitions.

The Human-in-the-Loop Question

Tracy and Joe press on how meaningful the human role actually is. Scharre acknowledges the risk of “rubber stamping” AI outputs but says current use involves humans giving specific guidance. The accidental school strike — caused by outdated DIA targeting data — highlights the danger of poor data quality going into AI systems, regardless of how much human oversight exists at the output stage.

Paul Scharre’s Background

Scharre led the Pentagon effort to develop DoD policy on autonomy in weapons (still in effect today), starting around 2011. He worked in the Office of the Secretary of Defense and served as an Army Ranger. He continued this work through his books and at the Center for a New American Security.

Why the Government Cannot Build AI In-House

The military cannot compete with the private sector for scarce technical talent, and private enterprise can mobilize far more capital for data centers and model training than defense budgets allow. The Anthropic contract was reportedly $200 million, not significant money for a major AI company; the defense sector is a small customer for these firms.

The Real Anthropic-Pentagon Dispute

The Pentagon’s January AI strategy demanded contracts allowing “any lawful use” of AI tools. This conflicts with AI companies’ acceptable use policies that restrict activities like offensive cyber operations. The core disagreement is about who sets the rules, not about deploying autonomous weapons today. When Anthropic balked, OpenAI stepped in, creating a race-to-the-bottom dynamic on safety.

Technical Safeguards and Their Limits

There are three ways AI companies enforce safety: training models to refuse certain requests, placing input/output classifiers that screen interactions, and monitoring user behavior for suspicious activity. However, if the military hosts models on its own infrastructure, the company may lose the ability to enforce these safeguards — making contract details critical.
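A minimal sketch of how these three layers might compose, assuming hypothetical stand-ins (classify_input, classify_output, generate) rather than any vendor's actual API: the input screen, the model's own refusal behavior, the output screen, and the usage log are separate controls, which is why a self-hosted deployment could strip them out independently.

```python
# Illustrative sketch only: hypothetical classifier-gated inference pipeline,
# not any AI company's real safeguard implementation.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def classify_input(prompt: str) -> Verdict:
    # Stand-in for a learned input classifier; here, a trivial keyword screen.
    for term in ("offensive cyber operation",):
        if term in prompt.lower():
            return Verdict(False, f"input policy: {term}")
    return Verdict(True)

def generate(prompt: str) -> str:
    # Stand-in for the model call; a refusal-trained model may also decline here.
    return f"[model response to: {prompt}]"

def classify_output(response: str) -> Verdict:
    # Stand-in for an output classifier screening the completion.
    return Verdict(True)

def guarded_completion(prompt: str, audit_log: list) -> str:
    """Run all three layers: input screen, model (with its own refusals), output screen."""
    pre = classify_input(prompt)
    audit_log.append(("input", prompt, pre.allowed))  # usage-monitoring layer
    if not pre.allowed:
        return f"Request declined ({pre.reason})."
    response = generate(prompt)
    post = classify_output(response)
    audit_log.append(("output", response, post.allowed))
    return response if post.allowed else "Response withheld by output classifier."

if __name__ == "__main__":
    log = []
    print(guarded_completion("Summarize these logistics reports.", log))
    print(guarded_completion("Plan an offensive cyber operation.", log))
```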

The Path Toward Greater Autonomy

Trends point toward more autonomous systems: multimodal AI that integrates diverse data, networked AI agents that could gradually push humans out of the loop, and embodied AI in drones or munitions with onboard autonomy. Loitering munitions that hunt and attack targets independently are not yet in widespread use, though narrow historical examples date to the 1980s.

Risks of Escalation and Flash Crashes in War

Scharre compares the risk of autonomous systems interacting unpredictably to financial flash crashes. Unlike markets, there is no referee to call timeout in war. Autonomous cyber operations are especially concerning — defending at machine speed requires autonomy, but interactions between competing AI systems could produce escalatory behavior no one intended.

AI and Ethical Decision-Making

AI could make warfare more precise (flagging strikes near protected sites, recommending smaller munitions) or less humane (reducing human moral engagement). There is a real concern that if no one feels morally responsible for killing, wars could become more destructive. The hosts draw a parallel to Ender’s Game.

The Stanislav Petrov Lesson

Petrov’s 1983 decision not to report an apparent launch of incoming US missiles, which he judged a false alarm from faulty Soviet sensors, likely averted nuclear war. An AI system would have followed its programming. This underscores why human judgment, instinct, and contextual understanding of the stakes remain essential even as AI grows more capable.

Closing Discussion

The hosts reflect on the novelty of commercially developed technology being essential to military operations (paralleling Starlink in Ukraine), the politics of Anthropic being perceived as the “last lib tech company,” and the likelihood that debates over fully autonomous weapons will soon intensify.

Summary

The podcast explores the growing role of AI in US military operations through the lens of the Anthropic-Pentagon dispute and the ongoing Iran war, featuring Paul Scharre, a leading expert who helped write the Pentagon’s still-active policy on autonomous weapons.

Key themes and actionable insights:

The Anthropic-Pentagon split is about governance, not technology. The Pentagon wants unrestricted “any lawful use” access to AI tools. AI companies maintain acceptable use policies that restrict certain applications. This fundamental disagreement about who controls the rules will shape every future defense-AI contract.

AI is already deeply embedded in military operations. Through Palantir’s Maven Smart System, large language models including Claude are helping analysts process targeting data, identify patterns across intelligence sources, and build strike packages for the Iran conflict. This is not speculative — it is happening now.

The race to the bottom is real and accelerating. When Anthropic refused the Pentagon’s terms, OpenAI immediately stepped in. This dynamic means the least safety-conscious AI lab wins defense contracts. The same competitive pressure exists internationally, with China and Russia unlikely to adopt comparable safeguards.

Data quality is the underappreciated risk. The accidental strike on a school traced back to outdated DIA targeting data, not an AI failure. AI systems are only as good as the data fed into them, and thousands of pre-built targets may contain similar errors.

Investment-relevant observations:

  • Palantir (PLTR) is the integration layer for military AI through its Maven Smart System — it is the infrastructure through which LLMs connect to defense data, making it a key beneficiary regardless of which AI company holds the contract.
  • OpenAI is positioning itself as the defense-friendly alternative to Anthropic, potentially capturing significant government contracts.
  • Defense AI spending remains relatively small ($200M Anthropic contract) compared to commercial AI budgets, but the strategic importance and growth trajectory are significant.
  • Companies building autonomous drone and loitering munition technology are positioned for long-term growth as the military moves toward greater autonomy at the tactical edge.
  • The humanoid robotics sector (with heavy Chinese investment) intersects with future military applications, though practical battlefield deployment of robot soldiers remains distant.

The safeguard problem is structural. When AI models are hosted on military infrastructure rather than company servers, the AI company loses its ability to monitor usage and enforce policies. Contract architecture — not just policy language — determines whether safeguards actually function.

Autonomous weapons are coming incrementally, not suddenly. The path runs through multimodal AI agents handling longer task chains, networked AI systems where human oversight becomes nominal, and edge-deployed distilled models on drones. The concern is not a sudden leap to Terminator but a gradual erosion of meaningful human control.