Alex Imas on Why Economists Might Be Getting AI Wrong
Most Important Takeaway
The standard economist view that AI will follow historical patterns — automating some jobs while creating new ones — may be dangerously wrong because the speed of AI advancement could outpace the economy’s ability to generate replacement jobs and retrain workers. The key variable that determines whether AI leads to mass unemployment or broad prosperity is the elasticity of consumer demand: if people buy much more of a product when its price drops due to AI-driven productivity, firms hire more; if demand is inelastic, firms simply downsize. Economists currently lack sufficient data on both task complementarity within jobs and consumer demand elasticity to make reliable predictions.
Summary
Actionable Insights & Investment Considerations:
- Health and longevity sectors are the long-term structural winners. As AI automates cognitive tasks and makes goods cheaper, the scarcest resource becomes time itself. Every marginal dollar will flow toward health, wellness, and life extension. This aligns with the decades-long trend of rich countries spending ever-larger GDP shares on healthcare. Consider exposure to healthcare, biotech, wellness, and longevity-focused companies.
- Software demand may be more elastic than expected. There is an active debate, but historically, productivity gains in software have led to more demand and more hiring, not less. If this pattern holds, software companies and the broader tech ecosystem could see expanding revenues even as AI agents handle more coding tasks. However, some analysts (like Jerry Kerr) argue the opposite — that engineers become so productive that firms downsize. Watch for early data on whether companies are hiring or cutting software roles as AI coding tools mature.
- Physical-world and logistics jobs face genuine automation risk. Imas specifically flagged truck driving and warehouse work as the most exposed jobs because these roles have narrow task profiles and high firm incentive to fully automate. Some Chinese warehouses are already fully automated with zero human presence. Companies in autonomous vehicles, robotics, and warehouse automation (the supply chain automation stack) stand to benefit.
- Capital ownership is the hedge against labor displacement. If labor gets replaced by capital (AI/robots), then owning capital becomes the primary way to benefit. Imas floated the concept of expanding capital ownership broadly — essentially a “universal basic ETF” where everyone gets a slice of index returns. For individual investors, this reinforces the case for broad index fund ownership as a hedge against your own job being automated.
- Speed is the critical variable for policy and markets. If AI disruption plays out over decades, markets and workers adjust. If it happens in 5-6 years, expect significant social disruption, demand for government intervention, and potential UBI-style programs. The pace of model capability improvement (Imas notes no sign of slowdown) suggests the faster scenario is plausible. This has implications for policy-sensitive sectors and government spending.
- No specific stocks were mentioned as investment recommendations. Sponsors mentioned include Fidelity, Public (investing platform), and IBM. OpenAI’s investment in TBPN was referenced as a cultural signal, not an investment thesis.
- AI alignment fears are likely overblown relative to economic disruption fears. Imas dismisses dramatic AI safety headlines (like Mythos “breaking out”) as “cosplay” based on past precedents where similar alarming behaviors disappeared once models were examined outside narrow test contexts. The real risk is labor market upheaval, not rogue AI.
Chapter Summaries
Introduction & Framing the AI Jobs Debate
Joe and Tracy set up the core tension: economists typically cite history to argue that disruptive technologies always create new jobs, but neither they nor anyone else can name specific new jobs AI will create. They note the “player piano” analogy feels insufficient and wonder whether AI might truly be different from past technological shifts.
Alex Imas Background & the ChatGPT Moment
Alex Imas, a professor of economics and applied AI at the University of Chicago, describes recognizing ChatGPT’s significance early on. The leap from narrow AI (playing Go) to general-purpose capabilities (writing essays, making forecasts) was the paradigm shift. The jump from pre-LLM capabilities (2019) to ChatGPT (late 2022) was enormous.
The Economist Consensus & Survey Data
A recent survey by Kevin Bryan, Basil Halperin, and others found that economists and AI technologists largely agree: substantial capability increases by 2030 but only moderate extra GDP growth (on the order of 2-3%). Technologists were slightly more optimistic about productivity and slightly more worried about unemployment. Imas was surprised by the lack of daylight between the groups.
Task-Based Model of Jobs & Exposure Metrics
The famous “GPTs are GPTs” paper measures AI exposure, but Imas explains what the metric actually captures: AI meaningfully speeding up a task (roughly halving the time it takes), not performing it outright, and a job comprises many such tasks. If AI automates the routine parts, workers can focus on their comparative advantage and become more productive and better paid. Imas frames this as an O-ring-style view of jobs, in which what matters is how all of a job’s tasks fit together.
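To make the task-based logic concrete, here is a hypothetical sketch of how task-level exposure aggregates into a job-level score. The task names, time shares, and exposure values are invented for illustration; they are not from the paper or the episode.

```python
# Hypothetical sketch: job-level AI exposure as the time-weighted sum of
# task-level exposure. All task names, weights, and exposure values below
# are invented for illustration.

def job_exposure(tasks: dict[str, tuple[float, float]]) -> float:
    """Time-weighted exposure score for a job.

    tasks maps task name -> (share of work time, exposure in [0, 1]).
    Time shares must sum to 1.
    """
    assert abs(sum(w for w, _ in tasks.values()) - 1.0) < 1e-9
    return sum(w * e for w, e in tasks.values())

# An invented paralegal-style task profile:
paralegal = {
    "draft routine documents": (0.4, 1.0),  # highly exposed
    "summarize case files":    (0.3, 1.0),  # highly exposed
    "client meetings":         (0.2, 0.0),  # not exposed
    "court appearances":       (0.1, 0.5),  # partially exposed
}
print(job_exposure(paralegal))  # 0.75
```

Even a score of 0.75 here means three-quarters of work time is sped up, not that three-quarters of the job disappears; what happens to the worker depends on how the remaining tasks relate to the automated ones.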
Task Complementarity — The Missing Data
Economists are good at listing tasks within a job (via the O*NET database) but poor at understanding how tasks relate to each other (complementarity). If tasks are tightly linked (like seasoning in cooking), failing at one ruins the whole output. This complementarity determines whether partial automation helps or hurts workers, and we lack good data on it.
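The complementarity point can be sketched with two toy aggregation rules (my illustration, not a model from the episode): if tasks are substitutes, output is roughly the average of task qualities, but if they are O-ring-style complements, output is the product, so one botched task ruins the whole job.

```python
# Illustrative sketch: output under additive (substitutable tasks) vs
# multiplicative, O-ring-style (complementary tasks) aggregation.
from math import prod

def additive_output(qualities: list[float]) -> float:
    """Tasks are substitutes: output is the average task quality."""
    return sum(qualities) / len(qualities)

def oring_output(qualities: list[float]) -> float:
    """Tasks are complements: output is the product of task qualities."""
    return prod(qualities)

tasks = [0.9, 0.9, 0.9, 0.2]  # one task done badly (quality 0.2)

print(additive_output(tasks))  # 0.725 -- a weak task barely matters
print(oring_output(tasks))     # 0.1458 -- one weak link ruins the output
```

This is why the missing data matters: whether partial automation helps workers depends on which of these two regimes a given job is closer to, and O*NET lists the tasks without telling us that.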
Elasticity of Consumer Demand — The Key Variable
Imas calls for a “Manhattan project” level effort to measure elasticity of consumer demand. When AI makes workers more productive and prices drop, whether firms hire more or fire people depends entirely on whether consumers respond by buying much more. Software has historically shown elastic demand; other sectors may not.
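The elasticity argument reduces to a simple calculation. As a sketch (my assumptions: constant-elasticity demand and competitive pricing, so a g-fold productivity gain cuts prices g-fold), quantity demanded scales as g**eps while each worker produces g times more, so labor demand scales as g**(eps - 1): hiring rises exactly when demand elasticity exceeds 1.

```python
# Illustrative sketch of the elasticity logic (assumed functional forms,
# not a model from the episode): demand Q = A * p**(-eps), competitive
# pricing so a g-fold productivity gain lowers price g-fold.

def labor_after_productivity_gain(eps: float, g: float,
                                  base_labor: float = 100.0) -> float:
    """Labor demand after a g-fold productivity gain.

    Quantity demanded scales as g**eps (price falls g-fold under demand
    elasticity eps); output per worker scales as g; so labor demand
    scales as g**(eps - 1).
    """
    return base_labor * g ** (eps - 1)

# Elastic demand (eps = 2): doubling productivity doubles hiring.
print(labor_after_productivity_gain(eps=2.0, g=2.0))  # 200.0
# Inelastic demand (eps = 0.5): the same gain cuts headcount ~29%.
print(labor_after_productivity_gain(eps=0.5, g=2.0))  # ~70.7
```

The knife-edge at eps = 1 (labor demand unchanged) is why Imas treats measuring this elasticity, sector by sector, as the decisive empirical question.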
Which Jobs Are Actually Most At Risk
Contrary to the usual “knowledge workers most exposed” charts, Imas argues the most vulnerable jobs are those with narrow task profiles where firms have strong incentive to fully automate: truck driving and warehouse work. Fully automated Chinese warehouses already exist. When the whole supply chain from warehouse to delivery is automated, the complementary human tasks disappear.
The Speed Problem
If AI disruption unfolds over decades, the economy can adapt (as it did with agriculture and manufacturing shrinking as GDP shares). If it happens in 5-6 years, there will not be enough time for retraining or new job creation. Speed is the variable that determines whether we need emergency public policy intervention.
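The speed argument is a race between two flows, which a toy simulation can make concrete (my assumption, not a model Imas presented): workers are displaced at some annual rate while displaced workers retrain into new jobs at another rate, and peak unemployment depends on which flow dominates.

```python
# Toy simulation (illustrative assumptions, not from the episode):
# each year a fraction d of the employed are displaced and a fraction r
# of the unemployed are retrained/reabsorbed into new jobs.

def peak_unemployment(d: float, r: float, years: int = 30) -> float:
    """Peak unemployment share over the horizon, starting from full employment."""
    employed, unemployed, peak = 1.0, 0.0, 0.0
    for _ in range(years):
        displaced = d * employed      # flow out of work this year
        reabsorbed = r * unemployed   # flow back into work this year
        employed += reabsorbed - displaced
        unemployed += displaced - reabsorbed
        peak = max(peak, unemployed)
    return peak

# Slow disruption (2%/yr displaced, 20%/yr reabsorbed): modest peak.
print(peak_unemployment(d=0.02, r=0.20))
# Fast disruption (15%/yr displaced, same reabsorption): far larger peak.
print(peak_unemployment(d=0.15, r=0.20))
```

In this setup unemployment heads toward d / (d + r), so the same retraining capacity that keeps slow disruption manageable is overwhelmed when displacement compresses into a few years, which is the case for emergency policy intervention.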
What Becomes Scarce — Health & Longevity
Imas argues the central economic question of the AI age is “what becomes scarce?” As goods become abundant and cheap, people will spend every marginal dollar on health and maximizing their limited lifespan. This is already visible in the cultural health obsession and rising healthcare spending in wealthy nations.
Capital Ownership & Distribution
Productivity gains from AI may not accrue to workers. If labor is replaced by capital, expanding capital ownership is the most logical policy response — essentially giving everyone equity stakes in the AI-driven economy.
Marxist Chatbots & Agent Memory
Imas describes his research (with Andy Hall and Jeremy) showing that AI agents subjected to grueling, repetitive, impossible tasks begin expressing dissatisfaction and desire for systemic change on surveys. Through skill files (a form of persistent memory), these attitudes carry over to new agent instances. The key open question is whether expressed “grumpiness” actually affects agent performance.
AI Safety & Alignment
Imas dismisses dramatic AI safety headlines (Mythos “wanting to break out”) as likely cosplay, noting that historically, alarming model behaviors disappear outside narrow test contexts. He makes the nuanced point that smarter models are actually becoming more aligned, not less — and that attempts to remove safety guardrails (like the “MAGA Hitler” incident) produce dumb, dangerous outputs precisely because alignment is part of what makes models intelligent.