Transcript: Why AI CEOs Are Building Bunkers – Tristan Harris

Editor’s Notes: In this episode, Chris Williamson sits down with Tristan Harris, co-founder of the Center for Humane Technology and star of The Social Dilemma, to discuss the existential risks posed by the rapid advancement of Artificial Intelligence. Harris explains why high-level AI CEOs are reportedly building bunkers, arguing that the industry’s “race to the bottom” is creating powerful technologies that outpace our human ability to govern them. The conversation explores the “representational rot” caused by training AI on social media data and the urgent need for “self-improving governance” to ensure these god-like powers are guided by wisdom and prudence. (April 2, 2026)

TRANSCRIPT:

Tristan Harris’s Background: From Google to AI Ethics

CHRIS WILLIAMSON: What is the journey of how you arrived thinking about the problems of AI?

TRISTAN HARRIS: Well, most people know me or our work through the film The Social Dilemma, and I used to be a design ethicist at Google in 2012, 2013. That role basically meant asking: how do you ethically design technology that is going to reshape the attention and information environment of humanity?

So there I was at Google in 2012, 2013, in the heat of the social media boom. I think Instagram had just been bought by Facebook. My friends in college started Instagram. So I was part of this cohort and milieu of people who built the technology that the rest of the world just thought was natural. Like, this is just drinking water. I just drink Instagram. I just live in this environment.

And so while I saw billions of people enter this psychological habitat, I knew the handful of five or six people who were designing and tweaking it and making it work a certain way. And that’s a fundamental thing I want people to get: you think of technology like it just lands, it’s inevitable, there’s nothing we can do, it just comes from above. And in fact there are human beings making choices.

And, you know, as someone who grew up in the era of the Macintosh — so I have a nonprofit called the Center for Humane Technology, and my co-founder is Aza Raskin. His father, Jef Raskin, started the Macintosh project before Steve Jobs took it over. So the original Macintosh — the thing that became the MacBook, the iMac, the MacBook Pro — all of that started with his father and the idea of creating humane technology: technology choicefully designed to be easy to use, to be accessible, to be an empowering extension of our humanity. Like a cello, like a piano, like a creative tool. Like, if you’re a video person, you can make films and videos.

And just so people understand, because we’re probably going to be talking about some darker things on this podcast, the premise of all this is not to be a speaker of doom or something like that. It’s to say: I want to live in a world where technology is in service of people and connection and all the things that matter to us as humans, and where technology wraps ergonomically around us to create that.

So that was kind of a side journey. There I was at Google in 2012, 2013, and I saw how there was essentially an arms race for human attention, and whichever company was willing to go lower on the brainstem to manipulate human psychology would win. This is exploiting a backdoor in the human mind. Think of it just like software: software has backdoors and zero-day vulnerabilities, and you can hack software. The human mind has vulnerabilities too.

And as a magician as a kid, I understood some of those. And studying at a lab at Stanford called the Persuasive Technology Lab, where one of the Instagram co-founders had studied, I understood the dynamics of psychological influence. So it wasn’t just that we were making technology in this beautiful, empowering, Macintosh kind of way. It’s that more and more of my friends were being sucked into developing technology to hack human psychology.

And so I saw that problem, I became concerned about it, and I made a presentation at Google saying: never before in history have 50 designers in San Francisco, through their choices, rewired the entire psychological habitat of humanity. We need to get this right. We have a moral responsibility to get this right.

I sent that presentation to 50 people at Google, and when I opened it the next day — the top right of Google Slides shows you the number of simultaneous viewers, you know how that works — it had like 150 simultaneous viewers, and then 500 simultaneous viewers. So it was like, oh, this is spreading throughout the whole company. And that’s what led to me becoming a design ethicist, where I had to research and ask: what does it mean to design and persuade ethically, given people’s psychological vulnerabilities? You can’t not make choices about the psychological habitat. You have to make a choice about whether you’re going to do infinite scroll or not, autoplay or not, notifications or not, “these 10 people followed you” or not. What does it mean to ethically make those choices?

Technology and Human Flourishing

CHRIS WILLIAMSON: That’s you being concerned about some of the ways technology can be misaligned with what human flourishing might look like.

TRISTAN HARRIS: Yeah, and how society — I think people are afraid to say it, but when you build a bridge, there’s a physics to whether that bridge will hold up or fall apart, right? And it’s not magic. We don’t say, oh, who would have known that bridge would fall apart? We have a science of bridges, of mechanical engineering and civil engineering.

And with technology and human psychology, there is a science to the dopamine system.