Intentional Tech: Designing AI for Human Flourishing | Alex Komoroske, Cofounder and CEO of Common Tools

Intentional Tech: Designing AI for Human Flourishing | Alex Komoroske, Cofounder and CEO of Common Tools

July 09, 2025 1 hr 11 min
🎧 Listen Now

🤖 AI Summary

Overview

This episode explores the transformative potential of AI, particularly large language models (LLMs), and the critical decisions shaping their future. Alex Komoroske, co-founder of Common Tools and former Google and Stripe leader, discusses how AI can either amplify human flourishing or exacerbate engagement-driven pitfalls. The conversation delves into the architecture, ethics, and societal impact of AI, emphasizing the need for intentional design to align technology with human values.

Notable Quotes

- We were in the very first inning. We're like rubbing sticks together. We still think that chatbots are the main thing.Alex Komoroske, on the untapped potential of LLMs.

- It is insane to allow a single company to decide what things you may run in [the most important computing device on earth].Alex Komoroske, on the closed ecosystem of smartphones.

- If you aren't paying for your compute and it's not working for you, it's working for somebody else.Alex Komoroske, on the importance of user-funded AI.

🧠 Why Chatbots Are a Feature, Not a Paradigm

- Alex argues that chatbots, while useful, are merely an entry point for LLMs and not the ultimate application.

- Chatbots lack the structure needed for long-term tasks, leading to inefficiencies and overwhelming context accumulation.

- The future lies in systems that integrate LLMs into workflows, enabling proactive, co-active collaboration rather than simple back-and-forth exchanges.

🌍 Intentional Technology and Human Flourishing

- Alex introduces the concept of intentional tech, which prioritizes alignment with user intentions over engagement maximization.

- Four pillars of meaningful computing:

- Human-centered: Focused on user needs, not corporate agendas.

- Private by design: Data is user-controlled and secure.

- Pro-social: Encourages societal integration, not isolation.

- Open-ended: Allows for creativity and user-driven innovation.

- He warns against the default path of engagement-maximizing AI, which risks creating a passive, less agentic society.

🔓 Breaking Free from Data Silos

- The same-origin paradigm, a browser security model, inadvertently centralized data and created tech monopolies.

- Alex advocates for contextual flow control, a new architecture enabling data portability and integration across platforms without compromising security.

- He likens ChatGPT to AOL—an early, closed system that must evolve into an open, web-like ecosystem to unlock AI's full potential.

⚠️ The Looming Threat of Prompt Injection

- Prompt injection, where malicious inputs manipulate LLMs, is a critical but under-discussed vulnerability.

- Current architectures, like OpenAI's plugin system, are built on insecure foundations, making them susceptible to exploitation.

- Alex calls for a rethinking of security models to prevent irreversible side effects, such as unauthorized data leaks or harmful actions.

📚 Systems Thinking and the Future of AI

- Alex shares his journey into systems thinking, emphasizing the importance of understanding emergent dynamics over individual decisions.

- He highlights the potential of LLMs to transfer tacit knowledge, enabling richer collaboration and creativity.

- However, he cautions that AI's impact on coordination and organizational dynamics will take years to fully understand, as new systems and norms emerge.

AI-generated content may not be accurate or complete and should not be relied upon as a sole source of truth.

📋 Episode Description

The smallest technical decisions become humanity's biggest pivots:


The same-origin policy—a well-intentioned browser security rule from the 1990s—accidentally created Facebook, Google, and every data monopoly since. It locks your data in silos—and you stayed where your stuff already is. This dynamic created aggregators.


Alex Komoroske—who led Chrome's web platform team at Google and ran corporate strategy at Stripe—saw this pattern play out firsthand. And he's obsessed with the tiny decisions that will shape AI's next 30 years:

- Whether AI keeps memory centrally or user-controlled?

- Is AI free/ad-supported or user-paid?

- Should AI be engagement-maximizing or intention-aligned?

- How should we handle prompt injection in MCP and agentic systems?

- Should AI be built with AOL-style aggregation or web-style openness?


This is a much-watch if you care about the future of AI and humanity.


If you found this episode interesting, please like, subscribe, comment, and share! 


Want even more?

Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It’s usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:

Sponsors:

Google Gemini: Experience high quality AI video generation with Google's most capable video model: Veo 3. Try it in the Gemini app at gemini.google with a Google AI Pro plan or get the highest access with the Ultra plan.


Attio: Go to⁠⁠⁠⁠ https://attio.com/every⁠⁠⁠⁠⁠ and get 15% off your first year on your AI-powered CRM.


Timestamps:

  1. Introduction: 00:01:45

  2. Why chatbots are a feature not a paradigm: 00:04:25

  3. Toward AI that’s aligned with our intentions: 00:06:50

  4. The four pillars of “intentional technology”: 00:11:54

  5. The type of structures in which intentional technology can thrive: 00:14:16

  6. Why ChatGPT is the AOL of the AI era: 00:18:26

  7. Why AI needs to break out of the silos of the early internet: 00:25:55

  8. Alex’s personal journey into systems-thinking: 00:41:53

  9. How LLMs can encode what we know but can’t explain: 00:48:15

  10. Can LLMs solve the coordination problem ins