Read the full transcript of Sir Stephen Fry in conversation with Professor Yuval Noah Harari on “AI: How Can We Control An Alien Intelligence?” at the @OctopusEnergy Energy Tech Summit, London, June 2025.
The Power of Human Storytelling
SIR STEPHEN FRY: Goodness me, I wasn’t sure anyone would turn up. Isn’t this rather exciting? Well, it’s a great thrill for me to be with this hero of mine, really: Yuval Noah Harari. I’m sure many of you will have read the book that catapulted his name to world fame, “Sapiens: A Brief History of Humankind,” which was full of extraordinary insights and had a kind of thrilling narrative of its own.
But one of the things, Yuval, that I think many of us were impressed by, was the way you showed that perhaps “Sapiens” was almost the wrong title. It wasn’t that we were the wise humanity, but that we were storytelling humanity. That what separated us from the Neanderthals and from other primates and set us on our course was the fact that we told stories about ourselves and about the world as we apprehended it. Is that a fair summation?
YUVAL NOAH HARARI: Yeah, absolutely.
SIR STEPHEN FRY: And would you say that this is both what gets us into trouble, because a story is also another word for a lie, and also what propels us into a stable future?
YUVAL NOAH HARARI: Yeah, I think it’s a double-edged sword in this regard, which becomes particularly important in the age of AI because for the first time in history we encounter a better storyteller than we are. We took over the planet not because we are more intelligent than the other animals, but because we can cooperate better. And we cooperate through storytelling.
You have the obvious examples like religions, but my favorite example is money, which is probably the greatest story ever invented, ever told, because it’s the only one everybody believes, or almost everybody. No other animal on the planet knows that money even exists, because it only exists in our imagination. But now there is another thing on the planet that knows that money exists and that can maybe invent new kinds of money and new kinds of other stories.
And what will happen to us when we have to deal not just with one better storyteller, but with billions of better storytellers than us on this planet? That’s, I think, one of the biggest questions of our age.
The Emergence of AI Deception
SIR STEPHEN FRY: Yes, and you’re talking, of course, about AI. And it’s recently come to light that one of the large language model companies, Anthropic, has revealed that its latest version of Claude, in a closed test, was seen to blackmail people in order to achieve a goal.
And when we think about AI, we can get very dismissive of it: it’s merely a parrot repeating probable instances of human communication, a “stochastic parrot,” as the computational linguist Emily Bender famously called it. Every time it gives you a sentence, no matter how intelligent it appears, it is merely following a probabilistic route.
YUVAL NOAH HARARI: The question is, is it the same with us? When I look at the way sentences are formed in my own mind, like now when I’m talking with you, I don’t know how the sentence will end. I start saying something and the words just keep bubbling up in the mind. And as a public speaker, that’s terrifying, because I don’t know if I will be able to complete the sentence. I’m not sure what the next word will be.
SIR STEPHEN FRY: That’s right. You’re talking and you’re trying to make sense, and suddenly you’ve come up with lettuce. Oh, where did that come from? It’s an unrolling carpet. Syntagmatic, I think, is the technical word.
YUVAL NOAH HARARI: And the amazing thing with AI is that you can now ask the AI to show you exactly how it thinks. You can actually watch how the logic unfolds in the AI, in a way which is very difficult for us to do with our own minds but which we can now do. I mean, the AIs don’t have minds, at least as far as we know. But we can see how the sentences, the stories, are being formed in their, whatever we call it, if we ask them to.
The Black Box Problem
SIR STEPHEN FRY: But the fact is, I’ve spoken to people like Geoffrey Hinton, who is called the godfather of AI, and they will say that the most disturbing fact about these current models is that nobody knows what’s going on under the hood. They don’t actually know how it’s doing what it’s doing.
YUVAL NOAH HARARI: If we could know everything that’s going on there and predict it, it wouldn’t be AI.
SIR STEPHEN FRY: That’s the point.
YUVAL NOAH HARARI: I mean, if you have something that you can predict how it will behave, what decisions it will make, what ideas it will invent, by definition, this is not AI. This is just an automatic machine.
SIR STEPHEN FRY: The way to make that clearer, perhaps, is to recognize that AI, just on your phone, can now beat any chess player in the world. And you obviously can’t predict the moves it’s going to make, because if you could predict the moves it was going to make, you would beat it, or it would be a draw every time.
And you expand that to the whole of AI. You don’t know what it’s going to say or do, which is why it’s a useful tool. If it could only match what we did, it would not be a tool. It’d be like a digger, a JCB, that could only dig as much as a human with a spade. The point is we have these machines to outmatch what we can do.
YUVAL NOAH HARARI: I think that’s the main point.
From Artificial to Alien Intelligence
YUVAL NOAH HARARI: The main promise of AI is to be better than us.
Better decision-making in many fields, the ability to invent things that we cannot think of ourselves. But that is also the main threat: we cannot predict how it will behave, and we cannot control it in advance, no matter how much we try to make it safe, to align it with our aims, with our goals.
I think this really goes to the heart of what AI is. Usually we think of AI as an acronym for Artificial Intelligence, but it’s no longer artificial, if you think about what an artifact is. An artifact is something that we create and we control. With each passing day, AI is becoming less and less artificial, which is why I prefer to think of AI as an acronym for Alien Intelligence.
In a way, a true intelligence is never an artifact, because an artifact is something that you create and control, whereas intelligence is characterized by the ability to create new things.
The Alignment Problem
SIR STEPHEN FRY: Yeah, you used a verb there very quickly, but it’s worth expanding on it: “align.” For those of us who are concerned, shall we say, about how AI is going, one of the main problems is what is known as the alignment problem.
There’s a simple way of looking at it, which is an old philosophical thought experiment called the genie. Imagine a genie who grants you a wish. You love life, and you’re an empathetic person, so you say to the genie, “Can you really make any wish come true?” And it says, “Yes.” You say, “In that case, could you end all suffering?” And instantly all life on the planet is extinguished.
Because the genie looks at the world and sees that all suffering comes in the form of life. All forms of life suffer, from the smallest animals to us; that’s the only place where suffering exists. Stones, as far as the genie is concerned, don’t suffer. Maybe trees do; it’s not sure. But if it gets rid of all life, it has solved our problem. And it bows and says, “There you are, master.”
Now, that’s a very broad sense of the alignment problem. We asked a very stupid question; we didn’t think it through. But the fact is, yes, we have to understand AI, the alien, but we really have to understand ourselves, because we don’t have shared ethical frameworks around the world. How do you encode dignity, love, sympathy, passion, joy, equality?
We’ve only recently learned, if you like, that women’s and men’s lives and dignities are equal and that people of different races are equal. We’ve only just discovered this. And there are many other things we aren’t sure about. How, therefore, can we expect the machines to be sure? We ask them a question and they might do the equivalent of what the genie does. How do you see a solution to that?
AI as Humanity’s Child
YUVAL NOAH HARARI: One of the other key things about AI is that it can learn and change by itself, so whatever we teach it, we cannot be certain that it will always comply with our instructions. Again, if it only does what we tell it to do, it’s not really an AI.
So when we think about the alignment problem, or how to educate AI to be benevolent and not harmful, there is a very problematic and imperfect analogy that I think is still useful: to think about AI as a child, the child of humanity. And what we know about educating children is that they never do what you tell them to do. They do what they see you do.
If you tell a child, “Don’t lie,” and then the child observes you lying and cheating other people, it will copy your behavior, not follow your instructions. Now, if we have the kind of people who are leading the AI revolution telling the AI “Don’t lie,” but the AI observes them, observes the world and sees them lying and cheating and manipulating, it will do the same.
If AI is developed not through a cooperative effort of humans who trust each other, but through an arms race, a competition, then you can tell the AI as much as you like to be compassionate and benevolent. But AI learns from observation. If it observes the world, it observes how humans behave towards each other; it observes how its own creators behave. And if they are ruthless, power-hungry competitors, it will also be ruthless and power-hungry. You cannot create a compassionate and trustworthy AI through an arms race. It just won’t happen.
The Current Arms Race Reality
SIR STEPHEN FRY: Well, yes, Yuval, you’ve said this. If you had said this in 1980 during what was known as the AI winter, we’d say, “Yes, we must plan and make sure this isn’t happening.” But it’s already happening. There is an arms race.
The very people who are spending the billions and billions are the same people, for example, Mark Zuckerberg and Meta, who gave us the disaster of Facebook and social media and what it has done to the polity of the world and to the poverty of the world.
Essentially, in this country, as we know, the rivers are polluted and contaminated. You wouldn’t swim in any river in Britain, because there’s raw sewage being poured into them. Well, our children are breathing a cultural river which is similarly polluted and contaminated. And we all know this, and Facebook knows this, and Twitter and X know this, but they do nothing about it.
And if you even mention guardrails and regulation, they scream communism. Trump has just announced that he will ban individual states in America from regulating AI. There is an arms race. So everything you said is happening. How can we come together to stop AI falling into the hands of corporate greed and national greed? In this arms race, whether it’s China against America or one company against another, it’s going in the wrong direction.
The Challenge of Rebuilding Trust
YUVAL NOAH HARARI: Yeah, it’s moving very fast. Partly because there is also enormous positive potential, of course, in everything from health care to tackling the climate emergency. So we have to acknowledge there is also this enormous positive potential. The question is not how to stop the development of AI, it’s how to make sure that it is used for good.
And here I think that the main problem is simply an issue of priority, of what comes first. Humanity now faces two major challenges. On the one hand, we have the challenge of developing a super intelligent AI. On the other hand, we have the challenge of how to rebuild trust between humans. Because trust all over the world, both between countries and also within countries, is collapsing.
Nobody is absolutely certain why it’s happening, but everybody is able to observe it. Maybe the last thing that Republicans and Democrats in the US can agree on is that trust is collapsing. They don’t trust each other. They don’t agree on any fact except that they don’t trust each other.
SIR STEPHEN FRY: So this is the worst possible time for us to come together. We no longer believe in global institutions like the United Nations or even the WHO.
YUVAL NOAH HARARI: National institutions.
SIR STEPHEN FRY: Yes.
The Erosion of Human Trust
YUVAL NOAH HARARI: One of the explanations for the collapse of trust is this. Over thousands of years, humans have been amazing at building trust, despite all the conflicts and tensions and so forth. 100,000 years ago, humans lived in tiny bands of hunter-gatherers and could not trust anybody outside their band of 50 or 100 individuals. Now we have nations of hundreds of millions of people. We have a global trade network, a global scientific network with billions of people. So we are obviously quite good at building trust.
But over thousands of years, we have built trust through human communication. And now within almost every relationship, there is a machine, an algorithm and AI in between. And we see a collapse of trust in humans, whereas there is a rise of trust in algorithms and AIs.
Again, I mentioned money as the greatest story ever told. You see that people are losing trust in human-made money like euros and dollars and pounds, but they shift…
SIR STEPHEN FRY: The trust from fiat to cryptocurrencies.
The Trust Problem and AI Development
YUVAL NOAH HARARI: And to an algorithm based money. So we have these two problems of the developing AI and rebuilding trust between humans. And the question is which one we solve first, which is the priority.
Now, unfortunately, you hear some of the smartest people in the world say, “First we solve the AI problem, and then with the help of AI, we’ll solve the trust problem.” And I think this is a very bad idea. If we develop AI through an arms race between humans who can’t trust each other, there is absolutely no reason to expect that we’ll be able to trust the AIs.
I mean, the big paradox is that when you talk with people like Mark Zuckerberg, like Elon Musk, they often say openly that they are also afraid of the dangerous potential of AI. They are not blind to it, they are not oblivious to it. But they say that they are caught in this arms race, that if I slow down, my competitors will not slow down, I can’t trust them, so I must move faster.
But then you ask them, “Okay, so you can’t trust your human competitors. Do you think you’ll be able to trust this superintelligent alien intelligence you’re developing?” And the same people who just told you they can’t trust other humans tell you, “Oh, but I think we’ll be able to trust the alien AIs,” which is almost insane.
So the right order of doing it is first solve the human trust problem. Then together in a cooperative way, we can develop and educate trustworthy AIs. But unfortunately, we are doing the exact opposite.
The Probability of Doom
SIR STEPHEN FRY: Yes. And, as you know, there’s been for decades the Doomsday Clock, on which the atomic scientists set midnight as Armageddon, the end of everything. It’s stood at roughly 89 seconds to midnight for the last few years, and it’s crept closer over recent days, for obvious reasons.
But there’s another metric that I’ve been studying recently called P(doom): the letter P for probability, with “doom” in brackets. It’s one used by people in the business, the scientists in AI.
So, for example, Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute in California, sets it at 90, that’s to say a 90% chance of human extinction through AI. Yann LeCun, the chief scientist for Meta, sets it at zero. But then he is the chief scientist for Meta. So that’s like a tobacco executive saying, “Cancer? No chance. What are you talking about? Can’t possibly happen.”
So I’ve worked out that the lowest median is roughly between 7.5 and 10%: the chance of a human catastrophe of an extinction order through AI, if things are not controlled in the way you say they should be.
Now, the chance of winning the lottery in this country is about 0.0000022%. So what you’re saying here is that the chance of human extinction, at 7.5%, which is really the lowest amongst the current important scientists, Nobel Prize winners like Hinton and Hassabis, is roughly 3.4 million times greater than 0.0000022%.
So if I were to give you a lottery ticket and said this is a valid lottery ticket, the only difference being that you are 3.4 million times more likely to win, you would take it. And those are the odds we’re playing with, at the low estimate.
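[Editor’s note: Fry’s back-of-the-envelope comparison can be checked directly. A minimal sketch, assuming UK National Lottery jackpot odds of roughly 1 in 45 million (about 0.0000022%) and taking the 7.5% low-end P(doom) figure as given:]

```python
# Sanity check of the transcript's "3.4 million times" claim.
# Assumptions: ~1-in-45-million UK jackpot odds, 7.5% low-end P(doom).
p_doom_low = 0.075            # 7.5% low-end extinction estimate
p_lottery = 1 / 45_000_000    # ~2.2e-8, i.e. ~0.0000022%

ratio = p_doom_low / p_lottery
print(f"P(doom) is about {ratio / 1e6:.1f} million times the jackpot odds")
# prints: P(doom) is about 3.4 million times the jackpot odds
```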
So let’s look at the bad side of things. As we’ve said, we’re going about it in the wrong order, as you’ve put it. Most people who understand the science say there is a very severe chance that humanity will be extinguished by this. A greater chance than by nuclear Armageddon, in fact, or indeed climate change.
And humans are not in a position at the moment to trust each other, to establish guardrails, to agree on how we should go forward. So do you have a solution for us? Yuval, I’m almost on my knees begging you at this point. I don’t have children, so I can almost say I don’t care. But I have lots of godchildren and lots of great-nieces and great-nephews. So I do care about what happens to our planet, and I’m sure you do too.
Human Agency in AI Development
YUVAL NOAH HARARI: Again, I think that it’s very dangerous to think too much about doomsday and extinction scenarios. They cause people to despair. We do have what it takes to manage this revolution because we are creating it.
This is not dinosaurs being extinguished by an asteroid coming from outer space that they have no way of even understanding, let alone controlling. This is a process which, at least for now, is under human control.
In five years, in 10 years, we will have millions and even billions of AI agents taking more and more decisions, inventing more and more ideas. It will be a hybrid society, so it will become more difficult. But we need to start with the realization that this is a completely human made danger. We have everything we need in order to manage it.
The main missing ingredient is trust between human beings. And again, over tens of thousands of years we have demonstrated that we are capable of building trust even on a global level. So it’s not beyond our capabilities.
And it then goes back to the old questions of politics and ethics, of how do you build trust between human beings? And we need to think also about more concrete and immediate questions. One of the biggest questions we will be facing in the next few years is how to deal with these new AI agents.
SIR STEPHEN FRY: Yes.
AI Personhood and Legal Rights
YUVAL NOAH HARARI: Do we consider them persons? More and more people are entering personal relationships with AIs. More and more corporations and armies are giving AIs agency over important decisions. This is not some kind of philosophical question.
SIR STEPHEN FRY: No, it’s real.
YUVAL NOAH HARARI: It’s completely practical. Should AIs have the ability to open a bank account and manage a bank account? That’s a very practical question. We are very close to the point when you can tell an AI, “Go out there, make money. Your goal is to make a billion dollars.”
In computer science, very often the difficult question is how do you define the goal?
SIR STEPHEN FRY: Yeah.
YUVAL NOAH HARARI: And one of the things about money is that it’s a very easy goal to define. Now, an AI can make money in many ways. It can hire out its services to people, to write their essays or books or whatever; there are many things it can do. It can earn money. It can then invest this money in the stock exchange.
So do we as a society want to allow AI agents to open bank accounts and manage them in any way they see fit? This is not a question for 50 years; this is a question for five years, maybe one year.
SIR STEPHEN FRY: Yeah.
YUVAL NOAH HARARI: In the US, the legal system basically never thought about it. I’m not sure about this in the UK, but in the US there is an open legal path for AIs to be recognized as persons with rights.
SIR STEPHEN FRY: Yes. Peter Singer has written about this and others.
YUVAL NOAH HARARI: According to U.S. law, corporations are persons and they have rights like freedom of speech.
SIR STEPHEN FRY: Yes.
YUVAL NOAH HARARI: So in the US you can incorporate an AI. Previously, when you incorporated a company like Google or General Motors or whatever, this was a fiction, because all the decisions of Google were made by human beings. Okay, legally Google is a person with rights, but every decision of Google needs a human executive, accountant, engineer, or lawyer to make it.
Now this is no longer the case. You can incorporate an AI and the AI can make the decisions by itself. And one of the reasons the U.S. Supreme Court recognizes corporations as persons is to make it possible for corporations to donate money to politicians. This was the Citizens United Supreme Court decision.
So imagine an AI that makes billions of dollars. Maybe the richest person in the US in a few years will not be Elon Musk or Jeff Bezos or Mark Zuckerberg; it will be an AI. And this AI has freedom of speech, which includes the right to donate money to politicians, maybe on condition that they further advance rights for AI.
SIR STEPHEN FRY: Reduce guardrails and so forth. Yes.
YUVAL NOAH HARARI: Now this is a completely realistic scenario. This is not like science fiction.
SIR STEPHEN FRY: Absolutely.
YUVAL NOAH HARARI: And this is a question that we as a society need to decide: do we acknowledge AIs as persons with rights? There are people who, because they interact with AIs, are already convinced that AIs have consciousness, have feelings. So maybe in a few years the world will be divided between countries that recognize AI rights and countries that don’t.
The Future of Human Work
SIR STEPHEN FRY: Absolutely. And I suppose for everyone in the room there’s a consideration. I remember I gave a talk on AI back in 2015 and said, the way things are going, there are certain jobs that people might not have. I said, for example, if you have a child who’s studying to be a doctor, that’s fine, but maybe not a radiologist.
And the woman put her hand up very crossly and said, “My daughter’s studying radiology.” And I said, “Well, imagine that every mammogram ever taken is available to an AI and it can examine thousands in a second and make a judgment on it. A radiologist is going to go out of work.”
Now, who’s going to go out of work here? I don’t know if Octopus’s chief financial officer is present, but I have been talking to some people who say that the first really high-level job to be replaced completely by an AI will be a CFO. Everything they do, it’s already happening.
YUVAL NOAH HARARI: Or news editors. One of the most important jobs in the 20th century, and even before, was news editor: the editors of the newspapers, of the television. They were extremely powerful people who controlled the public conversation.
Human Exceptionalism in the Age of AI
SIR STEPHEN FRY: But I think what I would end by saying to everybody is: maybe don’t concentrate on how efficient you are, on how brilliantly you complete a task, because that’s what AI can do, especially the agentic AI that Yuval’s been talking about.
Concentrate on what a wonderful human being you are: how kind you are, how courteous, how considerate, how you improve the life of those around you, which is very often the opposite of what efficient people do. And maybe that, for the time being at least, is the secret of human exceptionalism: how good we are as people, how much we make a room light up when we walk into it, how much pleasure we spread, not how quickly we can complete a task, because we’ll never match the AI.
So on that note, at least, I think we should end, and I hope we haven’t given too much terror to everybody. I will always say that the thing about this technology is that it’s simultaneously thrilling and chilling, and the thrilling parts may well cause us all to live much longer, happier lives. We can hope so. But Yuval, as always, an absolute joy speaking to you, and thank you.
YUVAL NOAH HARARI: Thank you so much.
SIR STEPHEN FRY: Thank you, thank you everybody.