The AI Model Built for What LLMs Can't Do

April 15, 2026 · 53 min

🤖 AI Summary

Overview

This episode explores the limitations of large language models (LLMs) and introduces energy-based models (EBMs) as a promising alternative for mission-critical tasks. Eve Bodnia, founder and CEO of Logical Intelligence, explains how EBMs leverage the physics principle of energy minimization to map and navigate data landscapes, offering deterministic, verifiable AI solutions that overcome the inefficiencies and unpredictability of LLMs.

Notable Quotes

- "Imagine you're in a plane run by an LLM, and someone tells you 20% of the time it might hallucinate and go down. How would you feel about that?" (Eve Bodnia, on the importance of deterministic AI)

- "EBMs have a bird's-eye view of the entire map, while LLMs are stuck guessing one step at a time. That’s why they’re fundamentally different." (Eve Bodnia, explaining the architectural advantage of EBMs)

- "We’re dreaming of generating formally verified code in natural English—no more C++ or Python." (Eve Bodnia, on the future of coding with EBMs)

🧠 Why Correctness and Verifiability Matter in AI

- Eve Bodnia highlights the risks of relying on LLMs for critical systems, such as autonomous vehicles or planes, due to their tendency to hallucinate and lack of internal verifiability.

- Deterministic AI, like EBMs, ensures predictable and constrained behavior, making it suitable for high-stakes applications.

- EBMs offer internal and external verification mechanisms, unlike LLMs, which operate as black boxes until processing is complete.

⚡ What Energy-Based Models Are and How They Work

- EBMs are rooted in the physics principle of energy minimization, mapping data into energy landscapes where probable states settle into valleys and improbable ones rise to peaks.

- Unlike LLMs, EBMs are token-free and non-autoregressive, allowing real-time inspection and adjustment during training.

- EBMs enable double verification—internal alignment during processing and external checks using mathematical frameworks.
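The energy-minimization idea described above can be sketched in a few lines. This is a toy illustration only, not Logical Intelligence's actual model: it assumes a hypothetical one-dimensional "double-well" energy function, where the valleys represent probable states and the peak between them represents an improbable one. Following the energy downhill by gradient descent shows how a state settles into a valley.

```python
# Toy sketch of energy minimization (assumption: a made-up 1-D
# double-well energy, not any real EBM architecture).

def energy(x):
    """Double-well energy: valleys (probable states) at x = -1 and x = +1,
    with a peak (improbable state) at x = 0."""
    return (x * x - 1.0) ** 2

def grad_energy(x):
    """Analytic derivative of the double-well energy."""
    return 4.0 * x * (x * x - 1.0)

def minimize(x0, lr=0.01, steps=2000):
    """Follow the energy downhill until the state settles in a valley."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_energy(x)
    return x

# Starting partway up the central peak at x = 0.5,
# the state rolls down into the valley at x = +1.
settled = minimize(0.5)
print(round(settled, 3))  # settles near 1.0
```

Because the whole landscape is defined up front, any proposed state can be checked by evaluating its energy directly, which is the "bird's-eye view" contrast with token-by-token generation drawn in the episode.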

🌍 Why Modeling Intelligence Through Language Alone Is Flawed

- Eve Bodnia argues that LLMs’ reliance on language-based reasoning is inefficient for tasks like spatial navigation or engineering, which require abstract, non-linguistic intelligence.

- Mapping non-verbal data (e.g., visual or spatial reasoning) into language tokens introduces unnecessary complexity and cost.

- EBMs bypass language dependency, directly analyzing data structures for faster and more accurate results.

🛠️ Solving the Vibe Coding Problem with EBMs

- EBMs aim to replace vibe coding with formally verified code generation in natural language, eliminating the need for traditional programming languages.

- They address inefficiencies in LLM-generated code, which often results in patchwork solutions rather than unified systems.

- EBMs can constrain AI behavior to ensure compliance with human-defined rules, critical for applications like autopilot systems or sensitive language tasks.

📉 Why LLM Progress Is Plateauing and How EBMs Fill the Gap

- Despite incremental improvements, Eve Bodnia believes LLMs have reached a complexity ceiling, limiting their ability to handle applied engineering or data analysis tasks.

- Mission-critical industries, such as energy grids and drug discovery, have yet to adopt LLMs due to privacy concerns and the need for deterministic AI.

- EBMs offer a scalable, cost-effective alternative that integrates seamlessly with existing LLM ecosystems while addressing their shortcomings.

AI-generated content may not be accurate or complete and should not be relied upon as a sole source of truth.

📋 Episode Description

Most AI companies are racing to build bigger LLMs. Eve Bodnia thinks that's the wrong approach.

Eve is the founder and CEO of Logical Intelligence, which is developing an alternative to the transformer-based models dominating the industry. Her argument: LLMs’ architecture makes them fundamentally unsuited for some mission-critical tasks. A system that generates output one token at a time, with no ability to inspect its own reasoning mid-process or guarantee its results, shouldn't be trusted to design chips, analyze financial data, or even fly a plane. Her alternative is the energy-based model (EBM), a form of AI rooted in the physics principle of energy minimization, not language prediction. Rather than guessing the next probable word, an EBM maps every possible outcome across a mathematical landscape, where likely states settle into valleys and improbable ones sit on peaks.


Dan Shipper talked with Bodnia for AI & I about why she believes LLM progress is plateauing, what it means for AI to actually understand data rather than just pattern-match across it, and how her team is building toward formally verified code generated in plain English—no C++ required.


If you found this episode interesting, please like, subscribe, comment, and share!


Head to http://granola.ai/every and get 3 months free with the code EVERY


To hear more from Dan Shipper:

Subscribe to Every: https://every.to/subscribe

Follow him on X: https://twitter.com/danshipper


Timestamps:

00:00:51 - Introduction

00:02:09 - Why correctness and verifiability matter in AI

00:09:33 - What an energy-based model is

00:14:21 - How EBMs construct energy landscapes to understand data

00:19:00 - Why modeling intelligence through language alone is a flawed approach

00:26:54 - What it means for a model to "understand" data

00:37:21 - How EBMs solve the vibe coding problem and enable formally verified code

00:43:21 - Why LLM progress is plateauing

00:49:54 - Mission-critical industries haven't adopted LLMs, and how EBMs could fill that gap