Adam Marblestone – AI is missing something fundamental about the brain
🤖 AI Summary
Overview
This episode explores the intersection of neuroscience and artificial intelligence, focusing on the brain's unique mechanisms for learning and decision-making. Adam Marblestone discusses how the brain's architecture, reward functions, and learning algorithms differ fundamentally from current AI systems. The conversation also delves into the potential of mapping the human brain, the role of formal mathematics in AI, and the future of AI development.
Notable Quotes
- "Evolution may have built a lot of complexity into the loss functions, encoding a specific curriculum for what different parts of the brain need to learn." – Adam Marblestone, on the brain's unique learning mechanisms.
- "The brain cannot be copied, and you don't have external read-write access to every neuron and synapse. That's very annoying." – Adam Marblestone, on the limitations of biological hardware.
- "If every iPhone was also a brain scanner, you could train AI with brain signals." – Adam Marblestone, on the potential of brain data to enhance AI training.
🧠 The Brain’s Learning Framework
- The brain's learning efficiency may stem from its use of complex, evolutionarily encoded loss functions rather than simple architectures.
- Adam Marblestone suggests that evolution has created a curriculum of loss functions that activate at different developmental stages, enabling the brain to learn efficiently (sketched in code after this list).
- The brain's cortex might function as a general prediction engine, capable of omnidirectional inference (filling in any missing variable from the others), unlike large language models (LLMs), which are limited to predicting the next token.
- The steering subsystem in the brain integrates innate responses and learned behaviors, enabling humans to adapt to novel situations and encode desires for things evolution has never encountered.
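To make the "curriculum of loss functions" idea concrete, here is a minimal hypothetical PyTorch sketch (nothing here is from the episode; the objectives and schedule are invented for illustration): a single network whose training objective switches at a fixed "developmental stage," moving from reconstructing its input to predicting what comes next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: one network trained under a schedule of loss
# functions that switch on at different "developmental stages" -- a loose
# software analogy for evolution encoding a curriculum of objectives.

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# (stage_start_step, objective_name): which objective is active when.
curriculum = [(0, "reconstruct"), (500, "predict_next")]

for step in range(1000):
    x = torch.randn(8, 16)              # stand-in for raw sensory input
    x_next = x.roll(shifts=1, dims=0)   # stand-in for the "next" observation

    # Pick the most recently activated objective for this step.
    active = [name for start, name in curriculum if step >= start][-1]
    target = x if active == "reconstruct" else x_next

    loss = F.mse_loss(net(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```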
🔄 Amortized Inference and Evolution’s Role
- The brain may use a form of amortized inference, where it approximates Bayesian reasoning by learning to predict causes from observations (see the sketch after this list).
- Evolution has likely optimized the brain's architecture and reward functions, allowing it to generalize efficiently from limited data.
- The genome compactly encodes the brain's architecture, learning rules, and reward functions; those reward functions are compact yet efficient enough for complex behaviors to emerge from them through learning.
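To illustrate amortized inference, here is a toy, hypothetical PyTorch sketch (the generative model is invented): instead of solving a fresh inference problem for every observation, a recognition network is trained once to map observations directly to estimates of their latent causes, so run-time inference is a single cheap forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def generate(n: int):
    """Toy generative model: latent cause z produces noisy observation x."""
    z = torch.randn(n, 1)                        # prior: z ~ N(0, 1)
    x = torch.tanh(z) + 0.1 * torch.randn(n, 1)  # likelihood: x given z
    return x, z

# Recognition network: learns the inverse mapping x -> E[z | x].
recognizer = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(recognizer.parameters(), lr=1e-3)

# Training amortizes the cost of inference across many samples.
for step in range(2000):
    x, z = generate(256)
    loss = F.mse_loss(recognizer(x), z)  # squared error -> posterior mean
    opt.zero_grad()
    loss.backward()
    opt.step()

# At run time, "inference" is just one forward pass per observation.
x_new, z_true = generate(5)
print(recognizer(x_new).squeeze(), z_true.squeeze())
```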
🧬 Mapping the Brain for AI Insights
- Mapping the brain's connectome (its neural wiring) could reveal the architectures, learning rules, and reward functions that underpin human intelligence.
- Adam Marblestone argues that understanding the brain's steering subsystem could inform AI alignment and safety.
- Current efforts, like E11 Bio, aim to reduce the cost of mapping a mouse brain connectome to tens of millions of dollars, with the ultimate goal of mapping human brains.
- The connectome could serve as a constraint for refining AI models, helping to answer questions about energy efficiency, learning algorithms, and inference mechanisms.
📐 Automating Mathematics and Formal Proofs
- Formal mathematics tools like Lean make proofs machine-checkable, which could accelerate mathematical discovery and software verification (a small example follows this list).
- AI could automate the mechanical aspects of math, allowing mathematicians to focus on conceptual breakthroughs.
- Applying RLHF (reinforcement learning from human feedback) to proof generation could lead to provably secure software and hardware, with applications in cybersecurity and AI safety.
- Adam Marblestone highlights the potential for AI to democratize math, enabling outsiders to contribute to fields like quantum gravity by automating the technical groundwork.
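To ground what makes these proofs "verifiable": in Lean, a proof is an artifact the kernel checks mechanically, so acceptance requires no human review. A minimal Lean 4 example using only the core `Nat` API (the theorem names are my own):

```lean
-- Two machine-checkable facts about natural numbers. If these compile,
-- the kernel has verified the proofs; no human review is needed.

-- Reusing a library lemma as the proof term.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A direct proof by induction, written as a tactic script.
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

One plausible training setup, as a gloss on the reinforcement-learning point above (not necessarily the episode's framing): a model proposes proof terms or tactic scripts like these, and the kernel's accept/reject verdict serves as an automatic reward signal.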
⚙️ Biological Hardware vs. Digital Systems
- The brain’s biological hardware has trade-offs: it’s energy-efficient but slow and lacks the copyability of digital systems.
- Co-locating memory and compute, as the brain does, could inspire future AI hardware designs.
- The stochastic nature of neurons might naturally align with probabilistic inference, a feature that could be leveraged in AI systems (see the sketch after this list).
- Despite its limitations, the brain's architecture and learning mechanisms offer valuable lessons for AI development.
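As a hypothetical sketch of the stochastic-neuron point (the neuron model here is invented for illustration): if a unit fires randomly with probability sigmoid(u), its noise is not a defect to engineer away; its time-averaged spike rate is exactly the probability it encodes, so sampling doubles as inference.

```python
import torch

def stochastic_neuron(u: torch.Tensor) -> torch.Tensor:
    """Fire (1) with probability sigmoid(u), else stay silent (0)."""
    return torch.bernoulli(torch.sigmoid(u))

# The noise does useful work: averaging spikes over time recovers the
# probability the neuron encodes, i.e. sampling implements inference.
u = torch.tensor(0.5)                  # input drive to the neuron
spikes = torch.stack([stochastic_neuron(u) for _ in range(10_000)])
print(spikes.mean().item(), torch.sigmoid(u).item())  # approximately equal
```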
AI-generated content may not be accurate or complete and should not be relied upon as a sole source of truth.
📋 Episode Description
Adam Marblestone has worked on brain-computer interfaces, quantum computing, formal mathematics, nanotech, and AI research. And he thinks AI is missing something fundamental about the brain.
Why are humans so much more sample efficient than AIs? How is the brain able to encode desires for things evolution has never seen before (and therefore could not have hard-wired into the genome)? What do human loss functions actually look like?
Adam walks me through some potential answers to these questions as we discuss what human learning can tell us about the future of AI.
Watch on YouTube; read the transcript.
Sponsors
* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today at gemini.google.com
* Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture
(00:22:20) – Amortized inference and what the genome actually stores
(00:42:42) – Model-based vs model-free RL in the brain
(00:50:31) – Is biological hardware a limitation or an advantage?
(01:03:59) – Why a map of the human brain is important
(01:23:28) – What value will automating math have?
(01:38:18) – Architecture of the brain
Further reading
Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs. steering subsystems; referenced throughout the episode.
A Brief History of Intelligence - Great book by Max Bennett on the connections between neuroscience and AI.
Adam’s blog, and Convergent Research’s blog on essential technologies.