Why I don’t think AGI is right around the corner

July 03, 2025 · 14 min

🤖 AI Summary

Overview

This episode explores why Dwarkesh Patel believes Artificial General Intelligence (AGI) is not imminent, delving into the limitations of current AI systems, the challenges of continual learning, and predictions for transformative AI capabilities over the next decade.

Notable Quotes

- "The fundamental problem is that LLMs don't get better over time the way a human would." - Dwarkesh Patel, on the limitations of current AI systems.

- "An AI that is capable of online learning might functionally become a superintelligence quite rapidly without any further algorithmic progress." - Dwarkesh Patel, on the transformative potential of solving continual learning.

- "AI progress over the last decade has been driven by scaling training compute for frontier systems, but this cannot continue beyond this decade." - Dwarkesh Patel, forecasting the shift from compute scaling to algorithmic innovation.

🌟 Limitations of Current AI Systems

- Large Language Models (LLMs) struggle with continual learning, making it hard for them to improve over time or adapt to user preferences.

- Patel highlights that while LLMs can perform tasks at a baseline level, they lack the ability to build context, interrogate failures, and refine their performance like humans.

- Attempts to use LLMs for tasks like rewriting transcripts or co-writing essays reveal their inability to retain session-specific learning beyond immediate interactions.

- Patel compares teaching LLMs to teaching a student saxophone with written instructions—an ineffective approach that underscores the gap between human and AI learning modalities.

📈 Predictions for AGI and AI Progress

- Patel predicts transformative AI capabilities, such as end-to-end tax preparation, by 2028, and AI systems capable of learning on the job as effectively as humans by 2032.

- He emphasizes that solving continual learning will lead to a broadly deployed intelligence explosion, where AI systems can amalgamate learnings across all copies, accelerating their utility.

- Patel notes that his AGI timelines follow a log-normal distribution, with a high probability of a breakthrough this decade but diminishing odds after 2030 due to limits on scaling compute and on algorithmic progress.
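The shape of a log-normal timeline can be made concrete with a toy calculation. This is a minimal sketch with made-up parameters (a median of roughly 7 years and a spread of 0.8; these are illustrative assumptions, not Patel's numbers), showing how the probability mass per five-year window peaks early and then tails off:

```python
# Hypothetical illustration: a log-normal distribution over "years until
# AGI". The per-window probability is largest early on, then declines,
# which is the shape of the "diminishing odds after 2030" claim.
from math import erf, log, sqrt

MU, SIGMA = log(7.0), 0.8  # assumed median ~7 years; spread is made up

def lognormal_cdf(t: float, mu: float = MU, sigma: float = SIGMA) -> float:
    """P(arrival within t years) under log-normal(mu, sigma)."""
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

# Probability mass falling in successive 5-year windows:
for lo_t in (0.01, 5, 10, 15, 20):
    p = lognormal_cdf(lo_t + 5) - lognormal_cdf(lo_t)
    print(f"years {round(lo_t):2d}-{round(lo_t) + 5:2d}: {p:.2f}")
```

With these assumed parameters, roughly two-thirds of the probability lands in the first decade, after which each successive window carries less mass; any heavy-tailed distribution on a log scale behaves this way.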

🛠️ Challenges in Computer Use Agents

- Patel is skeptical about reliable computer-use agents emerging by 2026, citing three key obstacles:
  - Long rollout times for complex tasks like tax preparation.
  - Lack of multimodal pre-training data for computer-use tasks.
  - The difficulty of the algorithmic innovations these capabilities require.

- He contrasts the rapid progress in natural language processing with the slower pace of advancement on multimodal tasks, where training data is sparser and less familiar.

🤔 Advances in AI Reasoning

- Patel praises the reasoning capabilities of newer models like Gemini 2.5, which can break down problems, react to internal monologues, and self-correct.

- He describes the experience of using advanced models for coding tasks as "wild," showcasing their ability to deliver functional applications with minimal input.

- Despite these advancements, Patel remains cautious about overestimating AI's current capabilities, emphasizing the need for sober assessments of its limitations.

📉 The Future of AI Progress Post-2030

- Patel argues that AI progress driven by scaling compute will plateau after 2030, shifting the focus to algorithmic innovation.

- He warns that the low-hanging fruit of deep learning will be exhausted, reducing the yearly probability of an AGI breakthrough.

- While transformative outcomes are possible this decade, Patel suggests that a relatively normal world could persist into the 2030s or 2040s if key bottlenecks remain unsolved.

AI-generated content may not be accurate or complete and should not be relied upon as a sole source of truth.

📋 Episode Description

I’ve had a lot of discussions on my podcast where we haggle out timelines to AGI. Some guests think it’s 20 years away; others, 2 years.

Here’s an audio version of where my thoughts stand as of June 2025. If you want to read the original post, you can check it out here.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe