
Making Sense #469: w/ Tristan Harris on Escaping an Anti-Human Future (Transcript)

Editor’s Notes: In this episode of the Making Sense podcast, Sam Harris welcomes back Tristan Harris, co-founder of the Center for Humane Technology and a driving force behind The Social Dilemma. Together, they explore the existential “devil’s bargain” of artificial intelligence, shifting the conversation from the familiar harms of social media to the rapidly accelerating risks of AI development. Drawing parallels to the cultural shift caused by the Cold War-era film The Day After, they discuss the urgent need for global guardrails to prevent a catastrophic, “anti-human” future before the window for political action closes. (April 10, 2026)

TRANSCRIPT:

Introduction

SAM HARRIS: I’m here with Tristan Harris. Tristan, it’s great to see you again.

TRISTAN HARRIS: Sam, it’s great to be back with you.

SAM HARRIS: So you’ve been busy. You’ve been busy worrying about social media for years. And you created this— in part created this documentary, The Social Dilemma, which it seems half of humanity saw.

TRISTAN HARRIS: Yeah.

SAM HARRIS: We still have a problem with social media, I’ll point out, but you, as much as anyone, alerted us to the nature of the problem and are continuing on that front. But now you have added to your portfolio concerns about AI, and there’s this new documentary, The AI Doc, which I just saw, which is very watchable and entertaining in its own way, but also very worrying. And we’ll talk about the reasons to be worried here and maybe some of the reasons to be optimistic, or at least cognizant of the upside should things go well. But there’s a lot to fear on the front of things not going well. So let’s just take it from the top. When did you start worrying about AI?

From Social Media to AI

TRISTAN HARRIS: Well, first, it’s just good to be back with you, Sam, because you really in a way helped launch my ability to speak on these topics with the 60 Minutes interview that I did in 2017. And then I remember recording in that same hotel our first podcast, which actually really got a lot of attention back in the day about persuasive technology.

SAM HARRIS: Yeah, yeah.

TRISTAN HARRIS: And in a way about the baby AI that was social media that was just pointed at your kid’s brain trying to figure out which photo, video, or tweet to put in front of your nervous system. And as we know, that little baby AI was enough to create the most anxious and depressed generation in our lifetimes, was enough to break down shared reality, polarize political parties much further, change the incentives of the entire media environment, basically colonize the entire world from that baby AI.

But to get to your question, so how did we get into AI? First of all, I wasn’t wanting to switch into it. It was that I got calls from people inside the AI labs in January of 2023. This is like a month after, month and a half after ChatGPT had launched, I think. And these were friends I knew in the tech industry who were now at AI labs. And they basically said, “Tristan, there’s a huge step function in AI capabilities that’s coming. The world is not ready. Institutions are not ready. The government is not ready. The arms race dynamic between the companies is out of control. And we want your help to help raise awareness about this.”

And so my first reaction was, aren’t there 1,000 people who’ve been working in AI safety and AI governance for a decade? And the challenge was that all the PDFs people had produced about policy and governance were just kind of not— it’s not like that was turning into actual action or policy. There’s a kind of material— you have to, what does Eric Weinstein call it? Confrontation with the unforgiving. You have to be affecting the actual incentives and institutions in the world.

So basically, my co-founder Aza Raskin and I interviewed the top 100 people in AI at that time. This is in January 2023. We turned that into a presentation.

SAM HARRIS: Co-founder of the Center for Humane Technology.

TRISTAN HARRIS: I co-founded the Center for Humane Technology, which is the nonprofit vehicle that’s been housing our work for the last decade, basically. And we ran off to New York, DC, and San Francisco, and we basically gave this presentation called The AI Dilemma that tried to show that we could predict the future that we were going to get with AI.

Predicting the Future: Incentives and Outcomes

If you look at the incentives, I think a huge problem that both the film, the AI doc, and our AI Dilemma presentation were trying to tackle is this myth that you can’t know which way the future is going to go. The future is uncertain. A million things can happen. These are just unintended consequences from technology. The best route is just to accelerate as fast as possible. And that is not true.

And just to repeat a quote that’s heard in every one of my interviews, because it’s so accurate. Charlie Munger, Warren Buffett’s business partner, said, “If you show me the incentives, I’ll show you the outcome.” And with the incentives of social media being the race to maximize eyeballs and engagement, that would obviously produce the race to the bottom of the brainstem: shortening attention spans, bite-sized video, more extreme and outrageous content, sexualization of young people, the whole nine yards.

SAM HARRIS: Hyper-partisanship.

TRISTAN HARRIS: Hyper-partisanship. And all of it happened. There’s just a moment here to sort of soak that in. Literally everything that we said was going to happen happened. And it’s not like we could predict all of it, but directionally you could know the contours of where we were going.

And part of this relates to, I think, the mistake we make in technology. We get obsessed and seduced by the possible of a new technology, but we don’t look at the probable of the incentives and what’s likely to happen.