
ChatGPT Could Be The Start Of The End!: Sam Harris (Transcript)

Read the full transcript of The Diary Of A CEO Podcast episode titled “ChatGPT Could Be The Start Of The End!” with philosopher, neuroscientist, podcast host and author Sam Harris.


TRANSCRIPT:

The Dangers of Artificial General Intelligence

STEVEN BARTLETT: Six years ago, you did a TED Talk. I watched that TED Talk a few times over the last week, and the TED Talk was called, “Can We Build AI Without Losing Control Over It?” In that TED Talk, you really discussed the idea of whether AI, when it gets to a certain point of sentience and intelligence, will wreak havoc on humanity. Six years later, where do you stand on it today? Are you optimistic about our chances of survival?

SAM HARRIS: Yeah, I mean, I can’t say I’m optimistic. I am worried about two species of problem here that are related. There’s sort of the near-term problem of just what humans do with increasingly powerful AI and how it amplifies the problem of misinformation and disinformation and just makes it harder and harder to make sense of reality together.

And then there’s just the longer-term concern about what’s called alignment with artificial general intelligence, where we build AI that is truly general and, by definition, superhuman in its competence and power. And then the question is, have we built it in such a way that is aligned in a durable way with our interests?

I mean, there’s some people who just don’t see this problem. They’re kind of blind to it. When I’m in the presence of someone who doesn’t share this intuition, they don’t resonate to it, I just don’t understand what they’re doing or not doing with their minds in that moment.

Let’s say I’m wrong about that. Well, then the other person’s right, and we just have fundamentally different intuitions about this particular point.

The Inevitability of Superhuman AI

And the point is this: we’re imagining building true artificial general intelligence that is superhuman, and that is what everyone, whatever their intuitions, purports to be imagining here. I mean, there are people on both sides of the alignment debate, people who think alignment is a real problem and people who think it’s a total fiction. But virtually everyone who’s party to this conversation agrees that we will ultimately build artificial general intelligence that will be superhuman in its capacities.

And there’s very little you have to assume to be confident that we’re going to do that. It’s really just two assumptions. One is that intelligence is substrate independent, right? It doesn’t have to be made of meat; it can be made in silico. And we’ve already proven that with narrow AI. We obviously have intelligent machines, and, you know, the calculator in your phone is better than you are at arithmetic.

That’s just some very narrow band of intelligence. So as we keep building intelligent machines, on the assumption that there’s nothing magical about having a computer made of meat, the only other thing you have to assume is that we will keep doing this. We will keep making progress. And eventually we will be in the presence of something more intelligent than we are.

And that’s not assuming Moore’s Law; it’s not assuming exponential progress. We just have to keep going, right? And when you look at the reasons why we wouldn’t keep going, those are all just terrifying, because intelligence is so valuable, and we’re so incentivized to have more of it. Every increment of it is valuable. It’s not as though it only becomes valuable once you double it or 10x it. No, if you just get three more percent, that pays for itself.

So we’re going to keep doing this. A failure to do it would suggest that something terrible had happened in the meantime: we’ve had a world war, we’ve had a global pandemic far worse than COVID, we got hit by an asteroid; something happened that prevented us as a species from continuing to make progress in building intelligent machines.

The Inherent Danger of Superior Intelligence

So absent that, we’re going to keep going, and we will eventually be in the presence of something smarter than we are. And this is where intuitions divide. My intuition, shared by many people, I’m sure, including at least one person you’ve spoken to, is that there is something inherently dangerous for the dumber party in that relationship.

There’s something inherently dangerous about the dumber species being in the presence of the smarter species. And we have seen this in our entanglement with all other species, which are dumber than we are, or certainly less competent. So, reasoning by analogy, the same would be true of something smarter than we are. People imagine that because we have built these machines, that is no longer true, right?

And here’s where my intuition goes from there: that imagination is born of not taking intelligence seriously. Because what a mismatch in intelligence entails, in particular, is a fundamental lack of insight, on the part of the dumber party, into what the smarter party is doing, why it’s doing it, and what it will do next.

The Dog Analogy

So you can reason by analogy here: just imagine that the dogs had invented us as their superintelligent AIs, for the purpose of making their lives better, you know, securing resources for them, securing comfort for them, getting them medical attention. It’s been working out pretty well for the dogs for about 10,000 years, right? I mean, there are some exceptions; we mistreat certain dogs. But generally speaking, for most dogs, most of the time, humans have been a great invention.

Now, it’s true that the mismatch in our intelligence dictates a fundamental blindness with respect to what we’ve become in the meantime, right?