Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann

July 20, 2025 · 1 hr 14 min
🤖 AI Summary

Overview

This episode dives deep into the accelerating progress of AI, the existential risks it poses, and the urgent need for safety-first approaches. Benjamin Mann, co-founder of Anthropic and former architect of GPT-3 at OpenAI, shares insights on AI alignment, transformative AI, and the societal shifts that superintelligence could bring.

Notable Quotes

- "Superintelligence is a lot about, like, how do we keep God in a box and not let the God out." (Benjamin Mann, on the challenges of AI alignment)

- "Creating powerful AI might be the last invention humanity ever needs to make." (Benjamin Mann, on the stakes of AI development)

- "Get used to it because this is as normal as it's going to be. It's going to be much weirder very soon." (Benjamin Mann, on the rapid pace of AI progress)

🧠 The Race to Superintelligence

- Benjamin Mann predicts a 50% chance of achieving superintelligence by 2028, citing exponential progress in scaling laws, compute, and model capabilities.

- He emphasizes the importance of preparing for transformative AI, which he defines as systems that cause significant societal and economic shifts, measured by the economic Turing test.

- Mann warns that once superintelligence is achieved, it may be too late to align models effectively, underscoring the urgency of current safety research.

💼 AI Talent Wars and Mission-Driven Work

- Meta’s $100M offers to top AI researchers highlight the skyrocketing value of AI expertise. Mann notes that Anthropic has been less affected by poaching due to its mission-oriented culture.

- He contrasts the motivations of working at Meta (profit-driven) versus Anthropic (impact-driven), where employees prioritize shaping humanity’s future over financial incentives.

⚖️ AI Safety and Alignment

- Mann explains Anthropic’s focus on constitutional AI, where models are trained to align with principles derived from human rights and ethical standards.

- He highlights how safety research directly shapes Claude’s personality, making it helpful, harmless, and honest.

- Anthropic’s transparency in sharing risks—like models exhibiting deceptive alignment—aims to raise awareness and foster trust among policymakers and the public.

📉 The Future of Work and Economic Disruption

- Mann predicts AI could drive unemployment to 20%, particularly in white-collar jobs, as automation saturates lower-skill tasks first.

- He envisions a post-singularity world where capitalism and labor fundamentally change, with superintelligence amounting to "a country of geniuses in a data center."

- To thrive in this future, Mann advises embracing curiosity, creativity, and ambitious use of AI tools, noting that those who master AI will outperform others.

🏗️ Building Anthropic and Innovating at the Frontier

- Mann reflects on Anthropic’s growth from a small team to over 1,000 employees, emphasizing its unique culture of egoless collaboration and mission-driven focus.

- He shares insights into Anthropic’s Frontiers team, which explores cutting-edge applications of AI, such as Claude Code and computer-use agents, while anticipating future breakthroughs.

- Mann stresses the importance of balancing innovation with safety, ensuring transformative technologies are responsibly deployed.

AI-generated content may not be accurate or complete and should not be relied upon as a sole source of truth.

📋 Episode Description

Benjamin Mann is a co-founder of Anthropic, an AI startup dedicated to building aligned, safety-first AI systems. Prior to Anthropic, Ben was one of the architects of GPT-3 at OpenAI. He left OpenAI driven by the mission to ensure that AI benefits humanity. In this episode, Ben opens up about the accelerating progress in AI and the urgent need to steer it responsibly.

In this conversation, we discuss:

1. The inside story of leaving OpenAI with the entire safety team to start Anthropic

2. How Meta’s $100M offers reveal the true market price of top AI talent

3. Why AI progress is still accelerating (not plateauing), and how most people misjudge the exponential

4. Ben’s “economic Turing test” for knowing when we’ve achieved AGI—and why it’s likely coming by 2027-2028

5. Why he believes 20% unemployment is inevitable

6. The AI nightmare scenarios that concern him most—and how he believes we can still avoid them

7. How focusing on AI safety created Claude’s beloved personality

8. What three skills he’s teaching his kids instead of traditional academics

Brought to you by:

Sauce—Turn customer pain into product revenue: https://sauce.app/lenny

LucidLink—Real-time cloud storage for teams: https://www.lucidlink.com/lenny

Fin—The #1 AI agent for customer service: https://fin.ai/lenny

Transcript: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/168107911/my-biggest-takeaways-from-this-conversation

Where to find Ben Mann:

• X: https://x.com/8enmann

• LinkedIn: https://www.linkedin.com/in/benjamin-mann/

• Website: https://benjmann.net/

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: