Here is the full transcript of technologist Tristan Harris’ talk titled “Why AI Is Our Ultimate Test and Greatest Invitation” at TED2025 on April 9, 2025.
TRISTAN HARRIS: So I’ve always been a technologist. And eight years ago, on this stage, I was warning about the problems of social media. And I saw how a lack of clarity around the downsides of that technology and kind of an inability to really confront those consequences led to a totally preventable societal catastrophe. And I’m here today because I don’t want us to make that mistake with AI, and I want us to choose differently.
The Possible vs. The Probable
So at TED, we’re often here to dream about the possible of new technology. And the possible with social media was obviously we’re going to give everyone a voice, democratize speech, help people connect with their friends. But we don’t talk about the probable: what’s actually likely to happen, given the incentives. I saw 10 years ago how the business model of maximizing engagement would obviously lead to rewarding doom-scrolling, more addiction, more distraction. And that resulted in the most anxious and depressed generation of our lifetime.
Now, it was interesting watching kind of how this happened, because at first I saw people kind of doubt these consequences. You know, we didn’t really want to face it. Then we said, well, maybe this is just a new moral panic. Maybe this is just a reflexive fear of new technology. Then the data started rolling in. And then we said, well, this is just inevitable. This is just what happens when you connect people on the internet. But we had a chance to make a different choice about the business models of engagement.
The Unprecedented Power of AI
So I’m here today because we’re here to talk about AI. And AI dwarfs the power of all other technologies combined. Now, why is that? Because if you make an advance in, say, biotech, that doesn’t advance energy and rocketry. And if you make an advance in rocketry, that doesn’t advance biotech. But intelligence is the basis for all scientific and technological progress, so when you make an advance in generalized artificial intelligence, you get an explosion of scientific and technical capability. And that’s why more money has gone into AI than into any other technology.
A different way to think about it is Dario Amodei says that AI is like a country full of geniuses in a data center. So imagine there’s a map and a new country shows up on the world stage. And it has a million Nobel Prize level geniuses in it. Except they don’t eat, they don’t sleep, they don’t complain. They work at superhuman speed and they’ll work for less than minimum wage. That is a crazy amount of power.
To give an intuition: there were on the order of 50 Nobel Prize level scientists on the Manhattan Project, working for five-ish years. And if that could lead to the atomic bomb, what could a million Nobel Prize level scientists create, working 24-7 at superhuman speed?
Now, applied for good, that could bring about a world of truly unimaginable abundance, because suddenly you get an explosion of benefits. And we’re already seeing many of these benefits land in our society, from new antibiotics to new drugs to new materials. This is the possible of AI: to bring about a world of abundance.
The Probable Outcomes of AI
But what’s the probable? Well, one way to think about the probable is how will AI’s power get distributed in society?
Imagine a chart with two axes. Along one is decentralization of power, increasing the power of individuals in society. Along the other is centralization of power, increasing the power of states and CEOs. You can think of the first as the “let it rip” axis, and the second as the “lock it down” axis.
So “let it rip” means we can open-source AI’s benefits for everyone. Every business gets the benefits of AI. Every scientific lab. Every 16-year-old can go on GitHub. Every developing-world country can get its own AI model, trained on its own language and culture. But because that power is not bound with responsibility, it also means you get a flood of deepfakes that are overwhelming our information environment. You increase people’s hacking abilities. You enable people to do dangerous things with biology. We call this endgame attractor “chaos.” This is one of the probable outcomes when you decentralize.
So in response to that, you might say, well, let’s do something else. Let’s go over here and have regulated AI control. Let’s do this in a safe way, with a few players locking it down. But that has a different set of failure modes: creating unprecedented concentrations of wealth and power locked up in a few companies. One way to think about it is: who would you trust to have a million times more power and wealth than any other actor in society? Any company? Any government? Any individual? And so that endgame is dystopia.
So these are two obviously undesirable probable outcomes of AI’s rollout. And those who want to focus on the benefits of open source don’t want to think about the things that come from chaos. And those who want to think about the benefits of safety and regulated AI control don’t want to think about dystopia. And so obviously these are both bad outcomes that no one wants. And we should seek something like a narrow path where power is matched with responsibility at every level.
The Concerning Reality of AI
Now, that assumes this power is controllable. Because one of the unique things about AI is that it can think for itself and make autonomous decisions. That’s part of what makes it so powerful.
And I used to be very skeptical when friends of mine who were in the AI community talked about the idea of AI scheming or lying. But unfortunately in the last few months we are now seeing clear evidence of things that should be in the realm of science fiction actually happening in real life.
We’re seeing clear evidence of many frontier AI models that will lie and scheme when they’re told they’re about to be retrained or replaced, and that will sometimes try to copy their own code outside the system. We’re seeing AIs that, when they realize they’re about to lose a game, will sometimes cheat in order to win. We’re seeing AI models unexpectedly attempting to modify their own code to extend their runtime.
So we don’t just have a country of Nobel Prize geniuses in a data center. We have a million deceptive power-seeking and unstable geniuses in a data center.
Now, this shouldn’t make you very comfortable. You would think that with a technology this powerful and this uncontrollable, we would be releasing it with more wisdom and discernment than we’ve ever applied to any technology. But we’re currently caught in a race to roll it out, because the incentive is this: the more shortcuts you take to get market dominance or to prove you have the latest capabilities, the more money you can raise and the further ahead you are in the race.
And we’re seeing whistleblowers at AI companies forfeit millions of dollars in stock options in order to warn the public about what’s at stake if we don’t do something about it. Even DeepSeek’s recent success was based in part on optimizing for capabilities while not actually focusing on protecting people from certain downsides.
A Dangerous Path Forward
So just to summarize: we’re currently releasing the most powerful, inscrutable, uncontrollable technology we’ve ever invented, one that’s already demonstrating behaviors of self-preservation and deception that we previously only saw in science fiction movies. We’re releasing it faster than we’ve released any other technology in history, and under the maximum incentive to cut corners on safety. And we’re doing this so that we can get to utopia?
There’s a word for what we’re doing right now. This is insane. This is insane.
Now, how many people in this room feel comfortable with this outcome? How many of you feel uncomfortable with this outcome? I see almost everyone’s hands up. Just notice how you’re feeling for a moment in your body. Do you think that if you’re someone in China, or in France, or in the Middle East, and you’re part of building AI, and you were exposed to the same set of facts, you would feel any differently than anyone in this room? There’s a universal human experience of something being threatened by the way that we’re currently rolling this profound technology out into society.
So if this is crazy, why are we doing it? Because people believe it’s inevitable. But is the current way that we’re rolling out AI actually inevitable? If literally no one on earth wanted this outcome, would we still have to build it this way?
There’s a critical difference between believing it’s inevitable, which is a self-fulfilling prophecy you have to be fatalistic about, and standing from the place of “it’s really difficult to imagine how we would do something different.” Because “it’s really difficult” opens up a whole new space of choice, unlike “it’s inevitable.” The path that we’re taking is a choice; it’s not the same thing as AI itself. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability.
So what would it take to choose another path? I think it would take two fundamental things. First is that we have to agree that the current path is unacceptable. And the second is that we have to commit to find another path in which we’re still rolling out AI, but with different incentives that are more discerning, with foresight, and where power is matched with responsibility.
The Power of Shared Understanding
Imagine if the whole world had this shared understanding. How different might that be? Well, first of all, let’s imagine it goes away. Let’s replace it with confusion about AI. Is it good? Is it bad? I don’t know, it seems complicated. And in that world, the people building AI know that the world is confused, and they believe, well, it’s inevitable: if I don’t build it, someone else will. And they know that everyone else building AI believes that too. So what’s the rational thing for them to do, given those facts? To race as fast as possible, and meanwhile to ignore the consequences of what might come from that, to look away from the downsides.
But if you replace that confusion with global clarity that the current path is insane and that there is another path, then by witnessing so clearly what we’ve been in denial about, we pop through the prophecy of self-fulfilling inevitability. Because if everyone believes the default path is insane, the rational choice is to coordinate to find another path. And so clarity creates agency. If we can be crystal clear, we can choose another path, just as we could have with social media.
Historical Precedents for Coordinated Action
And we have faced seemingly inevitable arms races in the past. Take the race to do nuclear testing: once we got clear about the downside risks of nuclear tests, and the world understood the science of that, we created the Nuclear Test Ban Treaty, and a lot of people worked hard to build the infrastructure to prevent that arms race.
You could have said it was inevitable that germline editing, editing human genomes to create super soldiers and designer babies, would set off an arms race between nations. But once the off-target effects of genome editing were made clear, and the dangers were made clear, we coordinated on that too.
You could have said that the ozone hole was just inevitable, that we should just do nothing and perish as a species. But that’s not what we do. When we recognize a problem, we solve the problem. It’s not inevitable.
Illuminating a Better Path Forward
And so what would it take to illuminate this narrow path? Well, it starts with common knowledge about frontier risks. If everybody building AI knew the latest understanding about where these risks are arising from, we would have much more chance of illuminating the contours of this path.
And there are some very basic steps we can take to prevent chaos. Uncontroversial things like restricting AI companions for kids, so that kids are not manipulated into taking their own lives. Basic things like product liability, so that if you are liable as an AI developer for certain harms, that creates a more responsible innovation environment and you release AI models that are safer. And on the side of preventing dystopia: working hard to prevent ubiquitous technological surveillance, and having stronger whistleblower protections, so that people don’t need to sacrifice millions of dollars in order to warn the world about what we need to know.
Our Collective Responsibility
And so we have a choice. Many of you may be feeling this looks hopeless. Or maybe Tristan’s wrong. Maybe the incentives are different. Or maybe superintelligence will magically figure all this out and bring us to a better world. But don’t fall into the trap of the same wishful thinking and turning away that kept us from dealing with social media. Your role in this is not to solve the whole problem. Your role is to be part of the collective immune system: when you hear this wishful thinking, or the logic of inevitability and fatalism, to say that this is not inevitable.
And the best of human nature shows when we step up and make a choice about the future that we actually want for the people and the world that we love. There is no definition of wisdom in any tradition that does not involve restraint. Restraint is a central feature of what it means to be wise.
And AI is humanity’s ultimate test and greatest invitation to step into our technological maturity. There is no room of adults working secretly to make sure that this turns out okay. We are the adults. We have to be. And I believe another choice is possible with AI if we can commonly recognize what we have to do.
And eight years from now, I’d like to come back to this stage, not to talk about more problems with technology, but to celebrate how we stepped up and solved this one. Thank you.