Who's Really Running AI? Inside the Billion-Dollar Battle Over Regulation, with Alex Bores

Equity · TechCrunch / Equity Podcast — Rebecca Bellan · February 27, 2026

Chapter Summaries

Chapter 1: The Political Landscape of AI in Early 2026

Host Rebecca Bellan sets the scene: the DoD is pressuring Anthropic to allow unrestricted military AI use, communities are protesting data center buildouts, and the public is split between “doomers” and “boomers.” She frames the episode around the growing regulatory fight and introduces Alex Bores, NY Assembly Member running for NY’s 12th congressional district, who has become the biggest target of Silicon Valley’s anti-regulation money.

Chapter 2: Who Is Alex Bores and Why Is He a Target?

Bores has a master’s in computer science (one of the first members of his party elected in New York with that degree) and previously worked at Palantir, which he left in 2019 specifically over its work with ICE. He argues he’s a threat to the anti-regulation camp for two reasons: (1) he actually understands the technology and can’t be dismissed as a non-expert, and (2) he successfully passed the RAISE Act into law — the only state AI bill explicitly targeted by Trump’s executive order on state AI regulation that was still enacted after that order.

Chapter 3: What Is the RAISE Act?

The RAISE Act (NY’s AI safety law, signed by Governor Hochul in December 2025) closely mirrors California’s SB 53 but is slightly stronger. It applies only to the largest frontier AI labs (those with $500M+ in revenue building sufficiently large models — currently only a handful: Google, Meta, OpenAI, xAI, Anthropic). Requirements: (1) publish a safety plan and actually adhere to it (amendments must be made in advance), and (2) report “critical safety incidents” — events that directly caused or could imminently cause injury or death — to the government. Bores frames this as light-touch compared to other industry regulations (e.g., autonomous vehicle incident reporting).

Chapter 4: The Super PAC Money War

Leading the Future PAC — backed by Joe Lonsdale, a16z (Marc Andreessen, Ben Horowitz), and OpenAI President Greg Brockman — has committed at least $10M to defeat Bores and has already spent $1.3M. Meta separately invested $65M into two super PACs (American Technology Excellence Project: $45M; Mobilizing Economic Transformation Across Meta: $20M) targeting state races in California and New York. On the other side, Public First Action PAC (backed by Anthropic’s $20M investment) supports candidates who favor reasonable AI guardrails. Bores notes that the average NY Assembly race raises ~$100K total — making $10M+ in opposition spending a deliberate intimidation tactic to chill other legislators.

Chapter 5: State vs. Federal Regulation

Bores says the majority of AI rules should be set federally, and he released an 8-topic, 43-sub-point national AI framework. However, he argues states must act when the federal government doesn’t. He notes that the RAISE Act (NY) and SB 53 (CA) were developed in coordination, creating the seeds of a de facto nationwide standard across two of the largest tech markets. The political reality: AI companies don’t want any regulation; the “only federal, not state” argument is a tactical delay, not a genuine preference.

Chapter 6: Legislation in the Pipeline

Bores has two additional state bills near passage: (1) a bill requiring large AI models to disclose training data types, including whether they used copyrighted material or PII (a similar version passed the CA Assembly unanimously but stalled in the Senate); and (2) a content provenance bill requiring C2PA metadata to be attached to AI-generated content. He calls deepfakes “an incredibly solvable problem” if C2PA (a free, open-source industry standard) is universally required. Governor Hochul has included a version of the provenance bill in her budget. His full national AI framework is publicly available at AlexBores.nyc/ai-framework.


Summary

The billion-dollar battle over AI regulation is no longer just a policy debate — it has become a raw political power struggle, with Silicon Valley money attempting to drown out state-level democratic accountability. This episode profiles Alex Bores, the NY Assembly member who successfully passed the RAISE Act into law and is now facing an unprecedented $10M+ campaign funded by the venture capital backers of major AI labs.

Actionable insights:

The most important structural insight is that the fight is not “regulation vs. no regulation” in good faith — it’s a small minority of extremely well-funded industry actors trying to prevent any binding accountability standards while the federal government remains paralyzed. States (NY and CA in particular) are the only active regulatory venues. For anyone working in AI policy, following the RAISE Act and SB 53 is essential — these two laws are functionally converging into what may become a de facto national minimum standard, regardless of federal action.

For AI companies and practitioners: the RAISE Act is far narrower than most public perception suggests. It only covers labs with $500M+ in revenue building very large frontier models. Its requirements — publish a safety plan, adhere to it, report catastrophic incidents — are comparable to safety reporting regimes in autonomous vehicles and pharmaceuticals. Companies not near that revenue/model threshold face no obligations under this law. Understanding this distinction matters for business planning and policy engagement.

The C2PA content provenance standard is worth watching closely. Bores calls deepfakes “incredibly solvable” with mandatory C2PA metadata — and New York looks likely to pass legislation requiring it. If this passes and propagates to other states, it could create a compliance requirement for any AI content generation tool deployed to consumers. Companies building generative media products should begin evaluating C2PA implementation now.
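For teams starting that evaluation, here is a minimal, hedged sketch in Python of a triage check for whether a media file appears to embed a C2PA manifest. It relies only on the fact that the C2PA specification labels its embedded JUMBF manifest store "c2pa"; a raw byte scan for that label is a heuristic, not a validator, and production systems should parse and cryptographically verify manifests with a full C2PA SDK. The function name is our own illustration, not part of any standard API.

```python
# Naive triage: does a file appear to embed a C2PA manifest?
# The C2PA spec stores manifests in a JUMBF box labeled "c2pa";
# scanning raw bytes for that label catches typical embeddings but
# proves nothing about validity -- use a real C2PA parser to verify.

def may_contain_c2pa_manifest(path: str) -> bool:
    """Return True if the file's raw bytes contain the b'c2pa' label."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data


if __name__ == "__main__":
    import sys

    for p in sys.argv[1:]:
        verdict = (
            "possible C2PA manifest"
            if may_contain_c2pa_manifest(p)
            else "no C2PA marker found"
        )
        print(f"{p}: {verdict}")
```

A script like this is only useful for a first-pass inventory (e.g., "which of our generated assets carry any provenance marker at all?"); actual compliance would hinge on whether the embedded manifest validates against the spec.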

The training data disclosure bill (requiring AI models to disclose data types, copyright usage, and PII) narrowly missed passage in California and is moving in New York. If enacted, it would create new transparency obligations for frontier labs and potentially expose legal risk for companies that cannot document their training data provenance.

Career note: Bores’ background is instructive — a CS master’s degree combined with industry experience at Palantir gave him the technical credibility to pass legislation that industry couldn’t dismiss on technical grounds. He is one of only a handful of elected officials nationally with hands-on AI industry experience. For technical professionals considering public policy careers, this is an increasingly important lane as the regulatory environment matures.

No stocks or investment recommendations were made in this episode. However, Anthropic’s $20M investment in Public First Action PAC signals it is differentiating itself from OpenAI and Meta on regulatory posture — a strategic positioning worth monitoring for anyone tracking the competitive dynamics of major AI labs.