Humanize AI before it dehumanizes us, with Dr. Rana el Kaliouby at SXSW
Most important takeaway
AI progress has been overwhelmingly focused on IQ (cognitive capability), but the next frontier is building machines with EQ—emotional and social intelligence—so technology augments rather than replaces humans. Individuals, investors, and organizations should vote with their feet and their dollars for human-centric AI, leaning in to experimentation while demanding guardrails around ethics, bias, safety, and environmental impact.
Summary
Key Themes
- Human-centric AI as the next frontier: Rana argues the industry is lagging on emotional and social intelligence. Since only ~7% of human communication is verbal and 93% is nonverbal (facial expressions, tone, gestures, posture), AI that ignores that dimension will never reach true AGI. “We only build what we measure for,” so she calls for new EQ benchmarks alongside the IQ benchmarks that dominate today.
- Augmentation over replacement: AI should amplify human capability, not substitute for it. Robots should take dangerous/repetitive jobs (e.g., ship welding), and digital twins should extend reach (her own twin speaks Mandarin for her), but human relationships, creativity, and intuition must be protected.
- The boys' club problem: AI is currently a boys' club, and the economic gap will widen sharply if women founders are excluded from this wave. Three of her four Blue Tulip investments are women CEOs.
- Bubble vs. real opportunity: There are froth signals (pre-product, pre-revenue unicorns; circular money flows between Nvidia/OpenAI/hyperscalers), but the underlying application layer across industries is in very early days and likely undervalued long-term.
- Intuitive intelligence: In an AI world, humans should double down on what machines can’t access—gut feeling, goosebumps, body wisdom, lived experience, and originality. Arianna Huffington’s framing: “Let AI be more intelligent than humans and let humans be wiser than AI.”
Actionable Insights
- Vote with your feet and your $20/month: Before subscribing to an AI tool, ask whether the company is addressing bias (data and algorithmic), trust, security, privacy, and appropriate use cases. If the founders can’t answer, walk away.
- Lean in with curiosity and play: Experiment with new tools even when they’re imperfect. Rana’s small fund built “Blue,” a chief-of-staff AI agent that handles research and CRM updates.
- Demand guardrails and transparency: Every model release should be tested against AI safety benchmarks (especially around mental health and vulnerable users) and environmental impact benchmarks.
- AI therapy and companions: Useful as supportive tools when you’re ruminating, but should never replace a human therapist. Human oversight and human-in-the-loop are essential.
Career Advice
- Human skills that will rise in value: collaboration (with humans and machines), original communication (readers increasingly detect AI-written prose), critical thinking, and creativity.
- For young/junior workers: become AI-native. Top CEOs (Accenture’s Julie Sweet, Cloudflare’s Matthew Prince) are hiring more new grads than expected because they are fluent with these tools. The middle layer of organizations is most at risk.
- For leaders: rethink workflows as human-AI collaboration rather than incremental tool adoption. Redefine what junior roles even mean.
- Tap intuitive intelligence: meditate, get off screens, and cultivate the body-based wisdom that IQ-focused tech cannot replicate.
Business Strategies
- Investment thesis (Blue Tulip): back founders building in three verticals where AI is transformative—(1) the health-span revolution (sensors, data, AI for healthcare), (2) future of work (physical AI, AI co-workers, agentic AI, especially in antiquated industries), and (3) sustainable living (food innovation, manufacturing, climate, energy).
- Defensibility test for founders: Rana separates signal from noise with three questions. Don’t ask “are you defensible today” (the next Gemini/Anthropic/OpenAI release can make you obsolete); instead ask “are you defensible in one to five years,” “how complex is the real problem you solve,” and “what IP or moats surround it?”
- AI-native devices: a major investment area. The smartphone is not AI-native. The next platform must be perceptual, conversational, empathetic, contextual, memory-enabled, and ambient—form factor unknown (glasses, pins, phones).
- Build human-centric into your rubric: as an investor, Rana won’t fund founders who haven’t thought through ethics, bias, privacy, and deployment boundaries.
Chapter Summaries
- Intro and origin story: Bob Safian welcomes Rana on stage at SXSW. She describes her tech-forward childhood in Egypt and Kuwait (father taught COBOL, mother was one of the first female programmers in Cairo), Atari evenings with her sisters, and the throughline of her career—building technology that brings people together.
- From Affectiva to Blue Tulip: After selling Affectiva in 2021, Rana set three goals—invest in human-centric founders, tell stories of underrepresented AI voices via her Pioneers of AI podcast, and convene cross-disciplinary communities. Her kids embody the societal debate: her 17-year-old son is AI-forward (using AI to translate 1930s Giza workers’ Arabic diaries), her daughter is a food anthropologist running an IRL cultural salon and refuses to use AI.
- EQ as the next AI frontier: IQ benchmarks dominate; EQ is ignored. Leading humanoid robotics companies build impressive dishwasher-unloaders that are “big and scary” because teams obsess over functionality, not social integration.
- Fact or fiction game: (1) AI bubble—mostly fiction; valuations are frothy and the money loop between hyperscalers is suspicious, but application-layer opportunity is genuinely early. (2) Robots taking all jobs—partly real but not Terminator-style; they should absorb dangerous/repetitive work. (3) AI bad for creators—false; AI democratizes creation and raises the premium on human originality and lived experience. (4) AI outsmarting humans (Arianna Huffington)—true and good if we use the moment to deepen intuitive human wisdom. (5) AI is a boys’ club—unambiguously true and economically dangerous.
- Audience Q&A: valuable human skills (collaboration, communication, critical thinking, creativity); Meta glasses and AI-native hardware (very early); world models (multimodal foundation models grounded in real-world physics, trained from people walking around capturing environmental data); AI therapy (supportive but not a replacement, guardrails needed, tragic cases of young users being harmed); keeping employees current (lean in, experiment, redefine junior roles); digital twins and AI co-founders (Evan Ratliff’s Shell Game experiment with AI CEO Kyle and CMO Megan); picking real founders vs. hype riders (defensibility over 1–5 years, complexity of the problem, IP moats).
- Closing: Lean in with curiosity, but be collectively vocal about guardrails, safety benchmarks, environmental impact benchmarks, and transparency in how models are built, validated, and deployed. Rana: “It’s not too late to take agency over what we build.”