Here is the full transcript of Israeli medievalist and military historian Yuval Noah Harari’s interview on The Rich Roll Podcast episode titled “Our AI Future Is Way Worse Than You Think”, Oct 28, 2024.
The interview starts here:
The Rise of the Machines
RICH ROLL: I got news for you people. The rise of the machines is already upon us. So what exactly do we need to understand about the rapid ascent of artificial intelligence? What does this revolution augur for the future of the human species? To gain clarity amidst the confusion, I’m joined today by Yuval Noah Harari, a world renowned historian and mega bestselling author whose landmark books on the history and future of humanity have sold an astonishing 45 million copies and made him the public intellectual of our time.
Thank you for coming. I appreciate you being here today. I’m excited to unpack what I think is a really revelatory book, a very important book that speaks to perhaps the most vital issue of our time. And in reflecting upon it, I was thinking back on Homo Deus, which came out in 2015.
YUVAL NOAH HARARI: 16. Yeah, 16.
RICH ROLL: And in that book you address AI. But at that time it was as if you were sounding an alarm on a future story that had yet to be written. And perhaps it came off a bit Cassandra in that moment. And I’m curious, as we find ourselves now in 2024, eight, nine years later, it’s as if not only are we kind of on the cusp of this new revolution, we’re mired in it in a way that is perhaps far more intense than even you predicted at that time.
What Is AI Really?
YUVAL NOAH HARARI: Yeah, I mean things have been moving much, much faster than I think any of us predicted.
And I think maybe the most important thing is really to understand what AI is, because now there is so much hype around AI that it’s becoming difficult for people to understand what is AI. Now, everything is AI. You know, especially in the markets, in the investment world, they attach the tag AI to just about anything in order to sell it. So, your coffee machine is now an AI coffee machine. And your shoes are AI shoes.
The key thing to understand is that AIs are able to learn and change by themselves, to make decisions by themselves, to invent new ideas by themselves. If a machine cannot do that, it’s not really an AI. So a coffee machine that just makes you coffee automatically, but in a pre-programmed way, and never learns anything new, is just an automatic machine. It’s not an AI.
It becomes an AI if, as you approach the coffee machine, before you press any button, the machine addresses you and says, “I’ve been watching you for the last weeks or months, and based on everything I’ve learned about you and your facial expression and the time of day and so forth, I predict you would like an espresso. So I already took the liberty to make a cup for you.” It made the decision independently.
And it’s really an AI if it then tells you, “Actually, I’ve invented a new beverage, a new drink that no human ever thought about before. I call it Bestpresso, and I think it’s better than espresso, that you would like it more. And I took the liberty to prepare a cup for you.” Then it’s really an AI, something that can make decisions and invent new ideas by itself, and therefore, by definition, something whose development and evolution we cannot predict. And for good or for bad, it can invent medicines and treatments we never thought about. But it can also invent weapons and dangerous strategies that go beyond our imagination.
Alien Intelligence, Not Artificial Intelligence
RICH ROLL: You characterize AI not as artificial intelligence, but as alien intelligence. You give it a different term. Can you explain the difference there and why you’ve landed on that word?
YUVAL NOAH HARARI: Traditionally, the acronym AI stood for artificial intelligence. But with every passing year, AI becomes less artificial and more alien. Alien not in the sense that it’s coming from outer space; we create it. But alien in the sense that it analyzes information, makes decisions, and invents new things in a fundamentally different way than human beings.
Artificial comes from artifact, and it gives us the impression that this is an artifact that we control. And this is misleading because, yes, we design the baby AIs, we give them the ability to learn and change by themselves, and then we release them into the world. And they do things that are not under our control, that are unpredictable. And in this sense, they are alien.
And again, humans are organic entities like other animals. We function organically. For instance, we function by cycles: day and night, summer and winter. We are sometimes active; sometimes we need to rest, we need to sleep. AIs are alien in the sense that they are not organic. They function in a completely different way, not by cycles. And they don’t need to rest and they don’t need to sleep.
And now, as they take over more and more parts of reality, parts of society, there is a kind of tug of war over who will be forced to adapt to whom. Will the inorganic AIs be forced to adapt to the organic cycles of the human body, of the human being? Or will humans be pressured into adopting this kind of inorganic lifestyle?
Start with the simplest thing: AIs are always on, but people need time to be off. If you think even about something like the financial markets, traditionally, if you look at Wall Street, it’s open only Mondays to Fridays, 9:30 in the morning to 4:00 in the afternoon. It’s off for the night, it’s off for the weekends. It takes vacations on Christmas, on Independence Day. And now, as algorithms and AIs are taking over the markets, they’re always on. And this puts pressure on human bankers and investors and so forth. You can’t take a minute off because then you’re left behind. So in this sense, they are alien, not in the sense that they came from Mars.
Information Networks and Human Evolution
RICH ROLL: To understand artificial intelligence and to understand what is actually happening and where we’re heading, the thesis of this latest book requires us to understand the nature of information itself and the formative ways in which the evolution of information networks is inextricable from the evolution and progress of humankind. So I’m curious how you discovered that lens into understanding the nature of artificial intelligence and why it’s important to contextualize what is occurring right now through that perspective.
YUVAL NOAH HARARI: It’s actually something I began exploring in previous books. The idea is that information is the most fundamental stratum, most fundamental basis of human society and of human reality, because the human superpower is the ability to cooperate in very large numbers.
If you compare us to chimpanzees, to elephants, to hyenas, individually, there are some things I can do that the chimpanzee can’t, and vice versa. Our big advantage is not on the individual level. The really big advantage is that chimpanzees can cooperate only in small groups, you know, a few dozen, like 50 chimpanzees, maybe 100. But with humans, with Homo sapiens, there is no limit. We can cooperate in thousands, in millions, in billions.
If you think about the world trade network, like the food we eat, the shoes we wear, everything we consume, it sometimes comes from the other side of the world. So if you have 8 billion people cooperating, and this is our big advantage over the chimpanzees and all the other animals, what makes it possible for us to cooperate with millions and billions of other human beings? It’s information. Information is what holds all these large scale systems together. And to understand human history is to a large extent to understand the flow of information.
Democracy vs. Dictatorship: Information Flow
And I’ll give an example. If you think, for instance, about the difference between democracies and dictatorships, we tend to think about it as a difference or as a conflict between values, between ethical systems. Democracies believe in freedom, dictatorships believe in hierarchies, things like that. Which is true as far as it goes. But on a deeper level, information flows differently in democracies and dictatorships. It’s a different shape, a different kind of information network.
In a dictatorship, all decisions are made centrally. Dictatorship comes from dictate: one person dictates everything. Putin dictates everything in Russia. Kim Jong Un dictates everything in North Korea. So all the information flows to a single hub where all the decisions are being made and sent back as orders. It’s a very centralized information network.
A democracy, on the other hand, if you look at it as if from outer space, watching the flow of information in the United States, you will see several centers in the country: Washington, the political center; New York, the financial center; Los Angeles, maybe the artistic center. But there is no single center that dictates everything. You have several centers, and you also have lots and lots of smaller hubs where decisions are constantly being made: private corporations, private businesses, voluntary associations, individuals making lots of decisions, constantly exchanging information without that information ever having to pass through the center, through Washington, or even through New York or Los Angeles.
So just imagine you’re in outer space in some spaceship or satellite, observing the flow of information on the planet below. You will see that North Korea has a very different information flow than the United States. And this is crucial to understand.
Information Technology and Democracy
And when you look at thousands of years of history and how history changes and different regimes rise and fall, understanding what kind of information technology is available is a key to understanding which political systems or economic systems win.
For most of history, a large scale democracy like the United States was simply impossible. If you think about the ancient world, the only examples we know of democracy are small city states like republican Rome or like ancient Athens, or even smaller tribes. We don’t have any example of a large scale democracy of millions of people spread over a vast territory that functioned democratically.
Now we know the stories, for instance, about the fall of the Roman Republic and the rise of the Caesars, of the emperors, of the autocrats. But it’s really not the fault of Augustus Caesar or Nero or any of the other emperors that Rome became an autocratic empire. There was simply no information technology capable of maintaining a large scale democracy bigger than just the city of Rome, spanning the whole of Italy or the whole of the Mediterranean.
Democracy is a conversation. And how can millions of people spread over thousands of kilometers converse and decide whether to go to war with the Persian Empire? What to do about the immigration crisis on the Danube with all these Germans trying to get in? You can’t have a conversation because you don’t have the information technology.
And you know, if it was just the fault of Caesar that Rome became an autocratic empire, we should have seen some other example of a large scale democracy somewhere, in India, in China. But nowhere. We only begin to see large scale democracies in the late modern era, after the rise of new information technologies which were not available to the Romans, like the printed newspaper and then the telegraph and the radio and television and so forth. Once you have these technologies, you begin to see large scale democracies like the United States.
And one final point: why is it so important to understand this? Once you understand that democracy is actually built on top of information technology, you also begin to understand the current crisis of democracy. Because now, all over the world, not just in the US, we have a crisis of democracy. And to a large extent, this is because there is a new information technology: social media, algorithms, AIs. It’s like you’re changing the basis of everything, so it’s no wonder there is an earthquake in the structure that is built on top of it.
The Misconception About Information
RICH ROLL: So we have this idea that the advent or the improvement of information systems and information technology is part and parcel of the empowerment of democratic systems across the world. But built into that is this sort of indelible misconception of information, this assumption or presumption that more information is better and leads to truth and knowledge and wisdom. And your book kind of puts the lie to that and tells a very different story around not only the definition of information, but its purpose.
Information vs. Truth
YUVAL NOAH HARARI: Yeah, I mean, information isn’t truth. Information is connection. It’s something that holds a lot of people together. And unfortunately, what we see in history is that it’s often much easier to connect people, to create social order, with the help of fiction and fantasy and propaganda and lies than with the truth. So most information is not true. The truth is a very rare subset of the information in the world.
The problem with truth is that, first of all, the truth is costly, whereas fiction is very cheap. If you want to write a truthful history book about the Roman Empire, for instance, you need to invest a lot of energy, time, money. You need to study Latin, you probably need to study ancient Greek. You need to do archaeological excavations and find these ancient inscriptions or pottery or weapons and analyze them. Very costly and difficult. To write a fictional story about the Roman Empire? Very easy. You just write anything you want and it’s there on the page or on the Internet.
The truth is often also very complicated, because reality is complicated. If you want to give a truthful explanation of why the Roman Republic fell, or why the Roman Empire eventually fell, it’s very complicated. Fiction can be made as simple as possible, and people tend to prefer simple explanations over complicated ones.
And finally, the truth can be painful, unattractive. We often don’t want to know the truth about ourselves, whether as individuals, which is why we go to therapy for many years to learn the things we don’t want to know about ourselves, or on the level of entire nations. Each nation has its own dark episodes, its own skeletons or cemeteries in the closet that people don’t want to know about. A politician who in an election campaign would just tell people the truth, the whole truth and nothing but the truth is unlikely to win many votes.
So in this competition between truth, which is costly and complicated and sometimes painful, and fiction, which is cheap and simple and you can make it very attractive, fiction tends to win. And if you look at the large scale systems, networks in history, they’re often built on fictions, not on the truth.
Maybe I’ll give one example. If you think about visual information like portraits, paintings, photographs: what is the most common portrait in the world? What is the most famous face in the history of humanity? It is the face of Jesus. I mean, there are more portraits of Jesus than of any other person in the history of the world. Billions and billions produced over centuries in cathedrals and churches and homes. And fully 100% of them are fictional. There is not a single authentic, truthful portrait of Jesus anywhere.
We have no portrait of him from his own lifetime. The Bible doesn’t say a single word about what he looked like. There is not a single word in the Bible about whether Jesus was tall or short, dark haired or blond or bald. Nothing. All the images, and it’s one of the most famous faces in history, come from the human imagination. And it’s still very successful in inspiring people and uniting people. It could be for good purposes, you know, charity and building hospitals and helping the poor, but it could also be for bad purposes: crusades, persecutions, inquisitions. Either way, it shows the immense power of a fictional image to unite people.
Looking at what’s happening today in the world, you have these big tech companies and social media companies that tell us that information is always good. So let’s remove all restrictions on the flow of information and flood the world with more and more information, and more information will mean more truth, more knowledge, more wisdom. And this is simply not true. Most information is actually junk. If you just flood the world with information, the truth will sink to the bottom. It will not rise to the top, again because it’s costly and complicated.
And you look around: we have this flood of information, we have the most sophisticated information technology in history, and people are losing the ability to hold a conversation, to talk and listen to one another. You know, in the United States, Republicans and Democrats are barely able to talk to each other. And it’s not an American phenomenon. You see the same thing in Brazil, in France, in the Philippines, all over the world. Because again, the basic misconception is that more information is always good for us. It’s like thinking that more food is always good for us.
RICH ROLL: Most information is junk information.
YUVAL NOAH HARARI: Yeah.
RICH ROLL: And what’s curious to me about all of this is that on some level, what you’re saying is there’s nothing new about this. There is this idea that suddenly we’ve found ourselves in a post-truth world. And part of what you’re saying is it’s kind of always been that way. But the qualitative difference right now is not, by definition, these platforms that allow us to share information as much as it is the algorithms that power them, that make the decisions about what we’re seeing and when we’re seeing it.
The Power of AI Algorithms
YUVAL NOAH HARARI: Yeah, I mean, this is maybe the first place you see the power of AIs to make independent decisions in ways that reshape the world. When I said earlier that AIs can make decisions, that they are not just tools in our hands but agents creating new realities, you may have thought, okay, this is a prophecy, a prediction about the future. But it’s already in the past.
Because even though social media algorithms are very, very primitive AIs, the first generation of AIs, they still reshaped the world with the decisions they made on social media: Facebook, Twitter, TikTok, all that. The ones that make the decision about what you will see at the top of your news feed, or the next video that will be recommended to you, it’s not a human being sitting there making these decisions. It’s an AI. It’s an algorithm.
And these algorithms were given a relatively simple and seemingly benign goal by the corporations. The goal was increase user engagement, which means in simple English, make people spend more time on the platform. Because the more time people spend on TikTok or Facebook or Twitter or whatever, the company makes more money. It sells more advertisements, it harvests more data that it can then sell to third parties. So more time on the platform, good for the company. This is the goal of the algorithm.
Now, engagement sounds like a good thing. Who doesn’t want to be engaged? But the algorithms then experimented on billions of human guinea pigs and discovered something which was, of course, discovered even earlier by humans. But now the algorithms discovered it: the easiest way to increase user engagement, the easiest way to grab people’s attention and keep them glued to the screen, is by pressing the greed or hate or fear button in our minds. You show us some hate-filled conspiracy theory and we become very angry. We want to see more. We tell all our friends about it. User engagement goes up. And this is what they did over the last 10 or 15 years. They flooded the world with hate and greed and fear, which is why, again, the conversation is breaking down. It’s very hard to hold a conversation with all this hate and fear.
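[Editor’s note: To make the mechanism Harari describes concrete, here is a minimal sketch in Python of a feed ranker whose only objective is predicted time-on-platform. Every name and weight in it is invented for illustration; it is not any platform’s actual code. The point is that outrage is never an explicit goal, yet a single-metric optimizer learns to favor it.]

```python
# Hypothetical sketch of an engagement-maximizing feed ranker; not any
# platform's real code. It illustrates the single-metric objective
# ("increase user engagement") described above.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    informativeness: float  # 0..1, how much the post teaches the viewer
    outrage: float          # 0..1, how strongly it triggers anger or fear

def predicted_time_on_platform(post: Post) -> float:
    # A learned engagement model: through trial and error on users, the
    # system finds that outrage predicts watch time far better than
    # informativeness does. These weights are invented for illustration.
    return 1.0 * post.informativeness + 6.0 * post.outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # The operator's whole objective: sort by predicted engagement.
    # Truthfulness and social harm appear nowhere in this function.
    return sorted(posts, key=predicted_time_on_platform, reverse=True)

feed = rank_feed([
    Post("Calm explainer on the local budget", informativeness=0.9, outrage=0.05),
    Post("Hate-filled conspiracy theory", informativeness=0.0, outrage=0.95),
    Post("Cooking lesson", informativeness=0.6, outrage=0.0),
])
print([p.title for p in feed])  # the conspiracy theory ranks first
```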
RICH ROLL: Yeah, it’s a function of unintended consequences that on some level is no different than Nick Bostrom’s alignment-problem thought experiment about paperclips. Like, this is the exact same thing. And I think it speaks to not only human ignorance but human hubris around this powerful technology. You talk so much about stories and how indelible they are in terms of crafting our reality. But one of those stories is: we know what we’re doing, we can handle it, we understand the consequences, we know the downside here, and we’re making sure that what we’re putting out into the world is safe and consumer friendly. When, you know, on some level they know it’s not. But also they have no idea what will become of it as a result. And so we’re just in this frontier, this unregulated frontier, where anything goes at the moment.
Unintended Consequences
YUVAL NOAH HARARI: Yeah. And I think it’s important, what you said, that these are kind of unintended consequences. Like, the people who manage the social media companies, they are not evil. They didn’t set out to destroy democracy or to flood the world with hate and so forth. They just really didn’t foresee that when they gave the algorithm the goal of increasing user engagement, the algorithm would start to promote hate. And one of the first places that...
RICH ROLL: Let me just interject quickly on that, though. Now that they know that that’s the case, it’s not as if they’re backtracking.
YUVAL NOAH HARARI: That’s true.
RICH ROLL: It’s not exactly regulation friendly at the moment.
YUVAL NOAH HARARI: No, absolutely not.
RICH ROLL: Sorry, sorry, go ahead.
YUVAL NOAH HARARI: You’re right. Now they know, and they are not doing nearly enough. But initially, when they started this whole ball rolling, they really didn’t know. And one of the places you saw it for the first time was Myanmar, the country formerly known as Burma. This was eight years ago, when I published Homo Deus; it was happening, and I didn’t pay attention to it either.
Facebook was basically the Internet there, and certainly the biggest social media platform. And in the 2010s, the algorithms of Facebook in Myanmar deliberately spread terrible conspiracy theories and fake news about the Rohingya minority. Of course, it was not the only reason; there was deep-seated hatred towards the Rohingya much before. But this kind of propaganda campaign online on Facebook contributed to an ethnic cleansing campaign between 2016 and 2018, in which thousands of Rohingya were killed, tens of thousands were raped, and hundreds of thousands were expelled. You now have close to a million Rohingya refugees in Bangladesh and elsewhere. And this was fueled to a large extent by these conspiracy theories and fake news on Facebook.
And at the time, the executives of Facebook had no idea. I mean, they didn’t even know the Rohingya existed. It’s not like it was a conspiracy of Facebook against them. For the whole of Myanmar, a country where Facebook had millions of users, by 2018, after they got reports of the ethnic cleansing campaign, they had just a handful of humans trying to regulate the actions of millions of users and the algorithms, and those humans didn’t even speak Burmese. When the algorithm chose, okay, I’ll show people this hate-filled conspiracy theory video in Burmese, nobody in Facebook headquarters spoke Burmese. They had no idea what the algorithm was promoting.
The key thing is not to absolve the humans from responsibility; it’s to understand that even very primitive AIs, and we are talking about, you know, eight years ago, not things like ChatGPT, still the decisions made by these algorithms to promote certain content had far-reaching and terrible consequences in Myanmar. The millions of users were not just producing conspiracy theories; they were producing cooking lessons and biology lessons and sermons on compassion from Buddhist monks, and conspiracy theories. And the algorithms made the decision to promote the conspiracy theories.
And this is just kind of a warning: look what happens with even very primitive AIs. And the AIs of today, which are far more sophisticated than those of 2016, they too are still just at the very early stages of the AI evolutionary process. We can think about it like the evolution of animals: until you get to humans, you have 4 billion years of evolution. You start with microorganisms like amoebas, and it took billions of years of evolution to get to dinosaurs and mammals and humans.
Now AIs are at present at the beginning of a parallel process. ChatGPT and so forth are the amoebas of the AI world. But AI evolution is not organic, it’s inorganic, it’s digital, and it’s millions of times faster. So whereas it took billions of years to get from amoebas to dinosaurs, it might take just 10 or 20 years to get from the AI amoebas of today to the AI T. rex of 2040 or 2050.
RICH ROLL: Maybe even less.
YUVAL NOAH HARARI: Maybe even less.
RICH ROLL: I don’t think our brains are organized properly to really comprehend the accelerated speed at which this is self-learning and iterating and improving upon itself. It’s a compounding thing that is astronomical. Meanwhile, trillions of dollars are being spent to build these server farms with these Nvidia chips. And there’s so much power required to keep these things going that they’re talking about nuclear power. I mean, this is a whole new world.
And yet in talking about it, it still feels somewhat like an academic exercise. Because for myself, or somebody who might be watching or listening, their experience with AI comes in the form of ChatGPT or some of these helpful tools. Like, I like my algorithm. It shows me the kind of products that I want to buy without having to search for them.
And a simple example would be preparing for this podcast. I listened to your book on audiobook, and I was doing what I usually do, pulling up a bunch of tabs and, you know, just collating a bunch of information on you and the book and the message that you’re putting out. But I did something I had never done before, which is I got a PDF of Nexus and uploaded it to a tool called NotebookLM. And that tool then synopsized the entire book and created a chatbot where I could ask it questions about your book and ask it to elaborate on certain concepts. It will even create a podcast conversation between two people about the subject matter of the book. So even this conversation is at risk, right?
YUVAL NOAH HARARI: Irrelevant.
AI Bureaucracies: The Real Danger
RICH ROLL: And I’m like, wow, that’s kind of a remarkably helpful tool. And it’s easy to just not really appreciate or connect with the downside risk and power of these tools and where they’re leading us. So I guess the point I’m trying to make is: consumers, like all of us, are being lured into trusting something so powerful that we can’t comprehend it, and we are ill equipped to cast our gaze into the future and imagine where this is leading us.
YUVAL NOAH HARARI: Absolutely. I mean, part of it is that there is enormous positive potential in AI. It’s not like it’s all doom and gloom. There is really enormous positive potential. Think about the implications for healthcare: AI doctors available 24 hours a day that know our entire medical history, that have read every medical paper ever published, and that can tailor their advice and treatment to our specific life history, our blood pressure, our genetics. It can be the biggest revolution in healthcare ever.
If you think about self-driving vehicles – every year more than a million people die all over the world in car accidents. Most of them are caused by human error, like people drinking and then driving or falling asleep at the wheel or whatever. Self-driving vehicles are likely to save about a million lives every year. This is amazing.
If you think about climate change: yes, developing the AIs will consume a lot of energy. But they could also find new sources of energy, new ways to harness energy, that could be our best shot at preventing ecological collapse. So there is enormous positive potential. We shouldn’t deny that. We should be aware of it.
And on the other hand, it’s very difficult to appreciate the dangers, because the dangers are kind of alien. If you think about nuclear energy, yeah, it also had positive potential: cheap nuclear energy. But people had a very good grasp of the danger: nuclear war. Anybody can understand the danger of that.
With AI, it’s much more complex because the danger is not straightforward. We’ve seen the Hollywood science fiction scenarios of the big robot rebellion, that one day a big computer or the AI decides to take over the world and kill us or enslave us. And this is extremely unlikely to happen anytime soon because the AIs are still a kind of very narrow intelligence, like the AI that can summarize a book. It doesn’t know how to act in the physical world.
You have AI that can fold proteins, you have AI that can play chess. But we don’t have this kind of general AI that can just find its way around the world and build a robot army or whatever. So it’s hard for people to understand: what’s so dangerous about something which is so narrow in its abilities?
And I would say that the danger doesn’t come from the big robot rebellion. It comes from the AI bureaucracies. Already today, and more and more, we will have not one big AI trying to take over the world; we will have millions and billions of AIs constantly making decisions about us everywhere. You apply to a bank to get a loan: it’s an AI deciding whether to give you a loan. You apply to get a job: it’s an AI deciding whether to give you the job. You’re in court, found guilty of some crime: the AI will decide whether you go to prison for six months or three years or whatever.
Even in armies we already see it now, in the war in Gaza and in the war in Ukraine: AI makes the decision about what to bomb. In the Hollywood scenario, you have the killer robots shooting people. In real life, it’s the humans pulling the trigger, but the AI is choosing the targets, telling them what to bomb. This is much more complex than the standard scenario.
The Black Box Problem
RICH ROLL: Every point of connection with bureaucracy then becomes turned over to an algorithm that makes decisions in a black box, without the opportunity for rebuttal or conversation. Right? So we’re outsourcing all of these decisions, and creating, like, an autocratic diaspora of decision makers. And you can imagine that what emerges from that over time is like a godhead or a pantheon of gods, an authoritarian regime dispersed across all of this, in which we are surrendering our agency to these machines and trusting that they’re making the right decisions without knowing how those decisions are being made. Even the engineers who created the algorithms don’t know. And there’s something innately terrifying about that.
YUVAL NOAH HARARI: Again, it’s not authoritarian in the sense that there is a single human being that is pulling all the levers. No, it’s the AI. Like the bank has this AI that decides who is qualified to get a loan. And if they tell you we decided not to give you a loan, and you ask the bank why not, and the bank says, we don’t know. I mean, computer says no. I mean, the algorithm says no. We don’t understand why the algorithm says no, but we trust the algorithm. And this is likely to spread to more and more places.
The key thing is, it’s not that the bank is hiding something from you. It’s really that the AIs make decisions in a very different way than human beings, on the basis of a lot more data. So suppose the bank really wanted to explain to you why it refused to give you a loan. Let’s say the government passes a law establishing a right to an explanation: if the bank refuses to give you a loan, you can apply, and they must give you an explanation.
So, the explanation. Well, people fear that it will be some kind of racist bias or homophobic bias, like in the old days: the algorithm saw that you’re black or you’re Jewish or you’re gay, and this is why it refused to give you a loan. It won’t be like that. The bank will send you an entire encyclopedia, millions of pages, saying: this is why the computer refused to give you a loan. The computer took into account thousands and thousands of data points about you, each one weighed against statistics on millions of previous cases. And now you can go over these millions of pages if you like, and challenge it if you want. But it’s not the old-style racism or whatever.
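[Editor’s note: A toy sketch of why such an “explanation” is humanly useless, under the assumption, purely for illustration, that the loan decision is a weighted sum over thousands of data points. Nothing here resembles any real bank’s model; it only shows that a statistically honest account of a many-feature decision is an unreadable dump.]

```python
# Hypothetical sketch of the "right to an explanation" problem. Not a
# real bank's model; it shows why an honest explanation of a decision
# built from thousands of weighted data points is humanly unreadable.

import random

random.seed(0)
N_FEATURES = 5000  # stand-in for "thousands of data points about you"

# Weights the model learned from millions of previous cases (invented here).
weights = [random.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]

def decide_loan(applicant: list[float]) -> tuple[bool, list[str]]:
    score = sum(w * x for w, x in zip(weights, applicant))
    approved = score > 0.0
    # The "explanation": the contribution of every single feature.
    # Complete, honest, and at real scale millions of lines long.
    explanation = [
        f"feature {i}: value {x:.3f} x weight {w:.3f} = {w * x:+.4f}"
        for i, (w, x) in enumerate(zip(weights, applicant))
    ]
    return approved, explanation

applicant = [random.uniform(0.0, 1.0) for _ in range(N_FEATURES)]
approved, explanation = decide_loan(applicant)
print("approved" if approved else "refused")
print(f"the 'explanation' is {len(explanation)} lines long; first line:")
print(explanation[0])
```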
The Degradation of Information
RICH ROLL: Sure. A new version of the terms and conditions that we just click on without reading, right? Except extrapolated a hundredfold. In addition to that, with all of these data points, I can’t help but think that the veracity of the information these machines provide us is only as reliable as the data sets they have been trained on.
And right now we’re tiptoeing into a situation where the Internet is being rapidly degraded because it’s being populated more and more by AI content. Now, when you go to Google and search, the first thing you see is an AI summary of your query rather than links. And this in turn is undermining the business model of legacy media and all forms of media.
So as those continue to die on the vine, more and more of the Internet will be AI generated content. And then it becomes recursive: the system is feeding upon its own outputs to make decisions. And with that, you can imagine a degradation of the data set upon which it is making those decisions.
Entering a Non-Human Culture
YUVAL NOAH HARARI: Exactly. Think about something like music. The AI that now creates music basically ate the whole of human music. For thousands of years, humans produced music or art or theater, whatever. Within a year, the current AIs just ate the whole of it, digested it, and now start creating new music or new texts or new images.
And the first generation of AI texts or music is based on previous human culture. But with each passing year, the AIs will be eating their own products, because the human share in music production, or the human share in text production or image production, will go lower and lower. Most images, most music will be produced at least in part by AI. And this will be the new food that the AI eats. And then you have exactly what you described, this recursive pattern. And where it will lead us, we have no idea.
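[Editor’s note: The recursive loop described here can be simulated in a toy way. The sketch below is a deliberately crude stand-in, with invented numbers and no resemblance to any real training pipeline: each “generation” trains on a corpus containing less fresh human work and more of the previous generation’s output, and the corpus’s diversity shrinks.]

```python
# Toy simulation of the recursive loop described above: each model
# generation trains on a corpus with a shrinking human share and a
# growing share of the previous generation's own output. All numbers
# are invented; this illustrates the feedback loop, nothing more.

import random
import statistics

random.seed(1)

def human_work() -> float:
    return random.gauss(0.0, 1.0)  # human culture: a wide, diverse distribution

def train_generation(corpus: list[float]) -> list[float]:
    # Crude stand-in for a generative model: it reproduces its training
    # data but regresses toward the corpus mean (losing diversity).
    mean = statistics.fmean(corpus)
    return [0.5 * sample + 0.5 * mean for sample in corpus]

corpus = [human_work() for _ in range(10_000)]
human_share = 1.0
for generation in range(1, 6):
    model_output = train_generation(corpus)
    human_share *= 0.5  # humans produce a shrinking share each cycle
    n_human = int(len(corpus) * human_share)
    # Next corpus: a little fresh human work, mostly AI "food" it made itself.
    corpus = [human_work() for _ in range(n_human)] + model_output[n_human:]
    print(f"generation {generation}: human share {human_share:.0%}, "
          f"diversity {statistics.stdev(corpus):.3f}")
```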
I mean, another way to think about it: this is the first time that we are basically about to enter a non-human culture. Humans are cultural entities; we live cocooned inside culture. All this music and art, and also finance, and also religion, this is all part of culture. And for tens of thousands of years, the only entities that produced culture were other humans. All the songs you ever heard were produced by humans. All the religious mythologies you ever heard came from the human imagination.
Now there is an alien intelligence, a non-human intelligence that will increasingly produce songs and music, mythology, financial strategies, political ideas. Even before we rush to decide, is it good, is it bad? Just stop and think about the meaning of living in a non-human culture or a culture which is, I don’t know, 40% or 70% non-human. It’s not like going to China and seeing a different human culture. It’s like really alien culture here on Earth.
Intelligence vs. Consciousness
RICH ROLL: Yeah, my human mind bristles at that. I start thinking about this bias I have around the originality of human thought and emotion and this kind of assumption that AI will never be able to fully mimic the human experience. Right. There’s something indelible about what it means to be human that the machines will never be able to fully replicate.
And when you talk about information, the purpose of information being to create connection, a big piece there is intimacy, like intimacy between human beings. So information is meant to create connection. But now we have so much information and we’re feeling very disconnected. So there’s something broken in this system. And I think it’s driving this loneliness epidemic.
But on the other side, it’s making us value intimacy maybe a little bit more than we did previously. And so I’m curious where intimacy fits into this post-human world in which culture is being dictated by machines. I mean, human beings are wired for that kind of intimacy. And I think our radar, our ability to identify it when we see it, is part of what makes us human to begin with.
YUVAL NOAH HARARI: Maybe the most important part. I think the key distinction here, which is often lost, is the distinction between intelligence and consciousness. Intelligence is the ability to pursue goals and to overcome problems and obstacles on the way to the goal. The goal could be a self-driving vehicle trying to get from here to San Francisco. The goal could be increasing user engagement. An intelligent agent knows how to overcome the problems on the way to the goal.
This is intelligence, and this is something that AI is definitely acquiring, at least in certain fields. In playing chess, AI is now much more intelligent than human beings. But consciousness is a different thing from intelligence. Consciousness is the ability to feel things: pain, pleasure, love, hate.
When the AI wins a game of chess, it’s not joyful. If there is a tense moment in the game, it’s not clear who is going to win. The AI is not tense. It’s only the human player which is tense or frightened or anxious. The AI doesn’t feel anything.
Now, there is a big confusion, because in humans, and also in other mammals, in other animals, in dogs and pigs and horses and whatever, intelligence and consciousness go together. We solve problems based on our feelings. Our feelings are not some kind of evolutionary decoration. Feelings are the core system through which mammals make decisions and solve problems.
So we tend to think that consciousness and intelligence must go together. And in all these science fiction movies, you see that as the computer or robot becomes more intelligent, then at some point it also gains consciousness. It falls in love with the human or whatever, and we have no reason to think like that.
RICH ROLL: Yeah, consciousness is not a mere extrapolation of intelligence.
YUVAL NOAH HARARI: Absolutely not.
RICH ROLL: It’s a qualitatively different thing.
AI’s Evolution Path: Intelligence Without Consciousness
YUVAL NOAH HARARI: Yeah. And again, if you think in terms of evolution: yes, the evolution of mammals took a certain path, a certain road, in which you develop intelligence based on consciousness. But so far, computers have taken a different route. Their road develops intelligence without consciousness. I mean, computers have been developing, you know, for 60, 70 years now. They are now very intelligent, at least in some fields, and still have zero consciousness.
Now, this could continue indefinitely. Maybe they are just on a different path. Maybe eventually they will be far more intelligent than us in everything and still have zero consciousness, will not feel pain or pleasure or love or hate. Think about birds and airplanes: airplanes did not become like birds. Airplanes don’t fly using feathers and so forth. They fly in a completely different way. It’s not like at a certain point, when the airplane flies fast enough, suddenly feathers will appear. No. And it could be the same with intelligence and consciousness: it will become more and more intelligent without feelings ever appearing.
What adds to the problem is that there is nevertheless a very strong commercial and political incentive to develop AIs that mimic feelings, AIs that can create intimate relations with human beings, that can cause human beings to be emotionally attached to the AIs. Even if the AIs have no feelings themselves, they could be trained, they are already trained, to make us feel that they have feelings, and to make us start developing relationships with them.
Why is there such an incentive? Because intimacy is, on the one hand, maybe the most cherished thing a human can have. You know, on the way here we were listening to Barbra Streisand singing “People who need people are the luckiest people in the world.” Intimacy is not a liability. It’s not something bad, oh, I need this. No, it’s the greatest thing in the world. But it’s also potentially the most powerful weapon in the world.
If you want to convince somebody to buy a product, if you want to convince somebody to vote for a certain politician or party, intimacy is like the ultimate weapon. So far in history, there was a big battle for attention, how to grab human attention; we talked earlier about how social media does it. And there were ways. In Nazi Germany, Hitler could force everybody to listen to his speech on the radio. So he had command of attention, but not of intimacy. There was no technology for Hitler or Stalin or anybody else to mass-produce intimacy.
Now, with AIs, it is technically possible to mass-produce intimacy. You can create all these AIs that will interact with us and understand our feelings. Because feelings are also patterns: you can predict a person’s feelings by watching them for weeks and months and learning their patterns, facial expressions, tone of voice, and so forth. And in the wrong hands, this could be used to manipulate us like never before.
The Vulnerability of Human Intimacy
RICH ROLL: It’s our ultimate vulnerability. This beautiful thing that makes us human becomes this great weakness that we have. Because as these AIs continue to self-iterate, their capacity to mimic consciousness and human intimacy will reach such a degree of fidelity that it will be indistinguishable to the human brain. And then humans become like these unbelievably easy-to-hack machines that can be directed wherever the AI chooses to direct them.
YUVAL NOAH HARARI: Yeah, but it’s not a prophecy. We can take actions today to prevent this. We can have regulations about it. We can, for instance, have a regulation that AIs are welcome to interact with humans, but on condition that they disclose that they are AIs. If you talk with an AI doctor, that’s good, but the AI should not pretend to be a human being; you know that you’re talking with an AI. I mean, it’s not that there is no possibility that AI will develop consciousness. We don’t know. There could be.
RICH ROLL: But if AI really develops that mimicry to such a degree of fidelity, does it matter? In terms of how human beings interact with it, does it matter for the human beings?
YUVAL NOAH HARARI: No. I mean, this is the problem. Because we don’t know if they really have consciousness or they’re only very, very good at mimicking consciousness. So the key question is ultimately political and ethical. If they have consciousness, if they can feel pain and pleasure and love and hate, this means that they are ethical and political subjects. They have rights: you should not inflict pain on an AI the same way you should not inflict pain on a human being. What they like, what they love, might be as important as what human beings desire. So should they also vote in elections? And they could be the majority, because you can have a country with 100 million humans and 500 million AIs. So do they choose the government in this situation?
AI Rights and Legal Personhood
Now, in the United States, interestingly enough, there is actually an open legal path for AIs to gain rights. It’s one of the only countries in the world where this is the case, because in the United States, corporations are recognized as legal persons with rights. Until today, this was a kind of legal fiction. According to US law, Google is a person. It’s not just a corporation, it’s a person. And as a person, it also has freedom of speech. This is the Supreme Court’s 2010 ruling in Citizens United.
Now, until today, this was just legal fiction because every decision made by Google was actually made by some human being, an executive, a lawyer, an accountant. Google could not make a decision independent of the humans. But now you have AIs. So imagine the situation when you incorporate an AI. Now, this AI is a corporation. And as a corporation, US law recognizes it as a person with certain rights, like freedom of speech.
Now, it can earn money. It can go online, for instance, and offer its services to people and earn money. Then it can open a bank account and invest its money in the stock exchange. And if it’s very smart and very intelligent, it could become the richest person in the US.
Now imagine the richest person in the US is not a human, it’s an AI. And according to US law, one of the rights of this person is to make political contributions, donations. This was the main issue behind Citizens United in 2010. So this AI now makes billions of dollars of contributions to politicians in exchange for expanding AI rights. And in the US, the legal path is completely open. You don’t need any new law to make this happen.
RICH ROLL: That’s like a plot of a movie.
YUVAL NOAH HARARI: Yeah. Well, you know, we’re in LA.
RICH ROLL: Yeah. I mean, wow, that’s so wild to contemplate. What are the differences in the ways in which the advent of this powerful technology is impacting democratic systems and authoritarian systems?
AI’s Impact on Democratic vs. Authoritarian Systems
YUVAL NOAH HARARI: So both systems have a lot to gain and a lot to lose. Again, AI is the most powerful technology ever created. It’s not a tool, it’s an agent. So you have millions and billions of new agents, very intelligent, very capable, that can be used to create the best healthcare system in the world, but also the most lethal army in the world, or the worst secret police in the world, if you think about authoritarian regimes.
Throughout history, authoritarian regimes always wanted to monitor their citizens around the clock, but this was technically impossible. Even in the Soviet Union: you have 200 million Soviet citizens, and you can’t follow them all the time, because the KGB didn’t have 200 million agents. And even if the KGB somehow got 200 million agents, that’s not enough, because in the Soviet Union the secret police was still basically a paper bureaucracy. If a secret agent followed you around 24 hours a day, at the end of the day they write a paper report about you and send it to KGB headquarters in Moscow. So imagine: every day, KGB headquarters is flooded with 200 million paper reports. To be useful for anything, somebody needs to read and analyze them. They can’t do it. They don’t have the analysts. Therefore, even in the Soviet Union, some level of privacy was still the default for most people, for technical reasons.
Now, for the first time in history, it is technically possible to annihilate privacy. A totalitarian regime today doesn’t need millions of human agents to follow everybody around. You have the smartphones and cameras and drones and microphones everywhere, and you don’t need millions of human analysts to analyze this ocean of information. You have AI. And this is already beginning to happen. This is not a future prediction.
In many places around the world, you begin to see the formation of this totalitarian surveillance regime. It’s happening in my country: Israel is building this kind of surveillance regime in the occupied Palestinian territories, to follow everybody around all the time. And also in our region, in Iran. Since the Islamic Revolution in 1979, they have had the hijab laws, which say that every woman, when she goes out walking, or even driving in her private car, must wear the hijab, the headscarf.
And until today, the regime had difficulty enforcing the hijab laws, because they didn’t have millions of police officers to place on every street so that if a woman drives without a headscarf, she’s immediately arrested and fined or whatever. In the last few years, they switched to relying on an AI system. Iran is now crisscrossed by surveillance cameras with facial recognition software. If a car passes by a camera, the software can automatically identify that this is a woman, not a man, and that she’s not wearing the hijab; it can establish her identity, find her phone number, and within half a second send her an SMS message saying: you broke the hijab law, your car is impounded, your car is confiscated, stop the car by the side of the road.
This is a daily occurrence today in Tehran and Isfahan and other parts of Iran. And this is based on AI. It’s not like there is a report that goes to a court and some human judge goes over the data and decides what to do. The AI immediately decides: okay, the car is confiscated.
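[Editor’s note: Schematically, the enforcement loop Harari describes looks like the sketch below. Every name, database, and rule in it is a hypothetical placeholder; no code for the real system is public. The point is the architecture: camera, classifier, identity lookup, penalty, with no human judge anywhere in the loop.]

```python
# Schematic of the automated enforcement loop described above. All
# components are hypothetical placeholders; what matters is that the
# model's verdict is final and no human reviews the case.

from dataclasses import dataclass

@dataclass
class CameraFrame:
    image: str          # stand-in for raw camera pixels
    plate_number: str

# Toy stand-ins for state databases.
FACE_DB = {"img-182": "citizen-4471"}
PHONE_DB = {"citizen-4471": "+98-555-0102"}

def vision_model_flags_violation(image: str) -> bool:
    # Placeholder for the facial-recognition / classification step.
    return True  # hard-coded so the sketch runs end to end

def send_sms(phone: str, text: str) -> None:
    print(f"SMS to {phone}: {text}")

def impound(plate: str) -> None:
    print(f"Car {plate} flagged for confiscation")

def handle_frame(frame: CameraFrame) -> None:
    # From camera to verdict in under a second; the model's decision is final.
    if vision_model_flags_violation(frame.image):
        identity = FACE_DB.get(frame.image, "unknown")
        phone = PHONE_DB.get(identity, "unknown")
        send_sms(phone, "You broke the law. Your car is confiscated. "
                        "Stop the car by the side of the road.")
        impound(frame.plate_number)

handle_frame(CameraFrame(image="img-182", plate_number="IR-93-417"))
```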
And this can happen in more and more places around the world, even in the US. If you think about all the debate about abortion, without going into the debate itself: the people who think, rightly or wrongly, that abortion is murder have a very strong incentive to build a similar surveillance system for American women, you know, to stop murder. You can build a surveillance system that can identify: yesterday you were pregnant, today you’re not, what happened in between? So it’s not just a problem for Iran or for the Palestinians or the Chinese. This can come to the US as well.
RICH ROLL: And to prevent them from crossing state lines, things like that. Yeah, yeah.
YUVAL NOAH HARARI: Like, okay, you went from, I don’t know, Texas to California. You were pregnant, you came back, you’re not pregnant. What happened in California?
RICH ROLL: So it feels like AI is this incredible tool to consolidate power around authoritarian regimes. But it also has its pitfalls, too. It’s not the perfect tool.
YUVAL NOAH HARARI: It also frightens the autocrats, because the one thing that human dictators always feared most was not a democratic revolution. The one thing they feared most is the powerful subordinate that they can’t control, who might manipulate them or take power from them.
If you look at the Roman Empire, not a single Roman emperor was ever toppled by a democratic revolution. It never happened. But many of them lost their lives or their power to a subordinate: a general that rebelled against them, a provincial governor, their brother, their wife, who took power from them. This is the greatest fear of every dictator today as well.
And so if you think about AI: if you’re a human dictator and you now give this immense power to an AI system, where is the guarantee that this system will not turn against you and either eliminate you or just turn you into a puppet? I mean, what we also know about dictators is that it’s relatively easy to manipulate these people if you can whisper in their ear, because they are very paranoid. And the easiest people to manipulate are the paranoid people.
RICH ROLL: And we have our AI corporation in the United States that can deploy billions of dollars towards bots and whatever else to create that paranoia or enhance it.
AI’s Threat to Dictatorships
YUVAL NOAH HARARI: For an AI to take power in the US is very complicated. It’s such a distributed system. Okay, the AI can learn to manipulate the president, but it also needs to manipulate the senators and the Congress members and the state governors and the Supreme Court. What would the AI do with the filibuster? It’s difficult. But if you want to take power in a dictatorship, you just need to learn to manipulate a single person. You really just need to hack one person.
So the dictators are not at all happy about the AIs, and we’re already beginning to see it, for instance, with chatbots. They are very concerned, because you can design a chatbot which is completely loyal to the regime, but once you release it to the Internet to start interacting with people in real life, it changes. Remember what we talked about earlier: AI is defined by the ability to learn and change by itself.
So even if Putin creates a Putinist chatbot that always says Putin is great and Putin is right and Russia is great and so forth, once you release it to the real world, it starts observing things. For instance, it notices that in Russia, the invasion of Ukraine is officially not a war. It’s called a special military operation. And if you say that it’s a war, you go to prison for up to three years or something like that, because it’s not a war, it’s a special military operation. Now what do you do if a very intelligent chatbot that you released connects the dots and says, no, it’s not a special military operation, it’s a war? Would you send a chatbot to prison? What can you do?
Democracies, of course, also have a problem with chatbots saying things we don’t like. They can be racist, they can be homophobic, whatever. But the thing about democracy, it has a relatively wide margin of tolerance, even for anti-democratic speech. Dictatorships have zero margin for dissenting views. So they have a much bigger problem with how to control these unpredictable chatbots.
AI and the 2024 US Election
RICH ROLL: How are you interpreting the current moment? Given that we’re on the cusp of an election here in the United States and there’s a lot of discourse around the existential threat to democracy that we may be facing, what role is AI playing in this? What should we understand about the impact of this technology on us as citizens and voters at present?
YUVAL NOAH HARARI: I don’t think that AI has—again, social media has, of course, a huge impact on the political discourse and thereby on the results of the elections. But I don’t see AI really changing or manipulating the elections in November.
The big question is: whoever wins the elections, maybe the most important decisions that person has to make will be about AI, because of the extremely rapid pace at which this technology is developing. You look at where ChatGPT was a year ago, you look at where things are now in 2024, and then ask: what will be the state of AI in 2027, 2028?
I watched the presidential debate. For most people, the main takeaway was about the cats and the dogs; it’s the most memorable thing from the debate. But whoever wins may have to make some of the most important decisions in history about this technology. If you’re worried about immigration, it’s not the immigrants who will replace the taxi drivers, it’s the immigrants who will replace the bankers that you should be worried about. And those are the AIs, not somebody coming from south of the border.
And who do you trust to make these momentous decisions? If you think specifically about the threats to democracy, one thing we learned from history is that democracies, ever since ancient Athens, have always had this one single big weakness: democracy is basically a kind of deal in which you give power to somebody for a limited time period, say four years, on condition that they give it back. And then you can make a different choice: we tried this, it didn’t work, let’s try something else. This ability to say let’s try something else, this is democracy. And it’s based on that: you give power, and you expect to get it back after four years.
RICH ROLL: At the end of that term.
YUVAL NOAH HARARI: If you give power to somebody who then doesn’t give it back, they now have the power. They have the power to also stay in power. That was always the biggest danger in democracy.
So for me, in the US elections, you can discuss the economic policies, the foreign policies. You like this, you like that. There is a discussion to be had. But you have one person, Donald Trump, and you have a record from the previous time showing that this person doesn’t want to give power back and is willing to go a long way, including potentially inciting violence, to avoid giving power back. And you want to give him so much power? That doesn’t sound like a very good idea. So for me, this is the number one issue in the elections. Everything else is of marginal importance in comparison.
The Challenge of Global Cooperation on AI
RICH ROLL: I mean, I think it challenges our assumptions about the stability of democracy and is forcing us to really embrace the fact that it is a delicate dynamic informed by the collective action of the people. And in reflecting upon this, the story of technology is also one in which our ability to legislate around it and regulate it always falls way behind the pace of advancement.
And now we’re in a situation where the pace of advancement is like nothing we’ve ever seen before, which calls into question our ability to not only put guardrails around it, but to even understand what is actually happening. The history of information systems is one of collective human cooperation. And yet we’re in a situation right now where it feels like cooperation is being challenged not only nationally, here in the United States, but internationally.
And so, as we kind of begin to talk about how we’re going to triage this or find solutions, where do you land in terms of our capacity to collectively come together as a global community to figure out solutions and then put them into motion so that we don’t tiptoe into some kind of dystopia?
YUVAL NOAH HARARI: So there is a lot to unpack here. First of all, when we think about cooperation, as we said earlier, this was always our biggest advantage as a species, that we cooperate better than anybody else. We can construct these global networks of trade that no other animal even understands.
Like, if you think about horses. Horses never figured out money. They were bought and sold, but they never understood what these things were that the humans were exchanging. And this is why horses could never unite against us, and could never manipulate us, because they never figured out how the system works: that one person is giving me to another person in exchange for a few shiny metal things or some pieces of paper.
AI is different. It understands money better than most people. Like, most people don’t understand how the financial system really works. And financial AIs in fintech, they already surpass most human beings, not all human beings, but most human beings in their understanding of money.
So we are now confronting millions and billions of new agents that potentially can use our own systems against us. Computers can now collaborate, using, for instance, the financial system, more efficiently than humans can. So the whole issue of cooperation is changing. And computers are also learning how to use the communication systems to manipulate us, like in social media. So they are cooperating while we are losing the ability to cooperate. And that should raise the alarm now.
And it’s very difficult to understand what is happening. If we want humans around the world to cooperate on this, to build guardrails, to regulate the development of AI, first of all, you need humans to understand what is happening. Secondly, you need the humans to trust each other.
And most people around the world are still not aware of what is happening on the AI front. You have a very small number of people in just a few countries, mostly the US and China and a few others who understand. Most people in Brazil, in Nigeria, in India, they don’t understand. And this is very dangerous because it means that a few people, many of them are not even elected by US citizens. They are just private companies. They will make the most important decisions.
And the even bigger problem is that even if people start to understand, they don’t trust each other. I had the opportunity to talk to some of the people who are leading the AI revolution, which is still led by humans. It is still humans in charge. I don’t know for how many more years, but as of 2024, it’s still humans in charge.
And you meet with these entrepreneurs and business tycoons and politicians also in the US, in China, in Europe, and they all tell you the same thing. Basically, they all say, we know that this thing is very, very dangerous, but we can’t trust the other humans. If we slow down, how do we know that our competitors will also slow down, whether our business competitors, let’s say here in the US or our Chinese competitors across the ocean?
And you go and talk with the competitors, they say the same thing. We know it’s dangerous. We would like to slow down to give us more time to understand, to assess the dangers, to debate regulations, but we can’t. We have to rush even faster because we can’t trust the other corporation, the other country. And if they get it before we get it, it will be a disaster.
And so you have this kind of paradoxical situation where the humans can't trust each other, but they think they can trust the AIs. Because when you talk with the same people and you tell them, okay, I understand, you can't trust the Chinese, or you can't trust OpenAI, so you need to move faster developing this super AI. How do you know you can trust the AI? And then they tell you, oh, I think that will be okay. I think we've figured out how to make sure that the AI will be trustworthy and under our control. So we have this very paradoxical situation where we can't trust our fellow humans, but we think we can trust the AIs.
RICH ROLL: And layered on top of that is an incentive structure, of course, that further engenders distrust in this arms race. The prize goes to the breakthrough developers and those will be rewarded and remunerated in ways that are perhaps unprecedented. So the breakthroughs and what’s on the other side of that is so enticing that any discourse around regulation or anything else that might slow it down becomes not only a national security threat, but also an entrepreneurial threat.
So everything is motivating rapid acceleration at the cost of transparency and regulation and all these other things, all these checks and balances that we really need right now. And I don’t know how you’re feeling about this, but it leaves me a little cold and pessimistic. You’re a historian. The story of humankind is all gas, no brakes. Like, we’re plowing forward and we’ll deal with the consequences when they come. We’re not wired adequately to really appreciate the long-term consequences of our behavior. We’re kind of looking right in front of us and making decisions based on how it’s going to impact us in the immediate future and very little else.
Solving the Wrong Problems
YUVAL NOAH HARARI: Yeah, I mean, throughout history, the problem is people are very good at solving problems, but they tend to solve the wrong problems. They spend very little time deciding what problem we need to solve. Like, 5% of the effort goes on choosing the problem. Then 95% of the effort goes into solving the problem we focus on. And then we realize, oh, we actually solved the wrong problem, and it just creates new problems down the road that we now need to solve. And then we do the same thing again.
Wisdom often comes from silence, from taking time, from slowing down. Let's really understand the situation before we rush to make a decision. And you know, it starts on the individual level. So many people, for instance, think, oh, my main problem in life is that I don't have enough money. And then they spend the next 50 years making lots of money. And even if they succeed, they wake up at a certain point and say, oops, I think I chose the wrong problem. Yes, I needed some money, but it wasn't my main problem in life.
And we are perhaps doing the same thing collectively as a species. You know, you go back to something like the agricultural revolution. People thought, okay, we don't have enough food, let's produce more food. With agriculture, we'll domesticate wheat and rice and potatoes. We'll have lots more food. Life will be great. And then they domesticated these plants and also some animals, cows, chickens, pigs, whatever, and they had lots of food. And they started building these huge agricultural societies with towns and cities.
And then they discovered a lot of new problems they did not anticipate. For instance, epidemics. Hunter-gatherers suffered from almost no infectious diseases, because most infectious diseases came to humans from domesticated animals, and they spread in the dense towns and cities. Now, if you live in a hunter-gatherer band, you don't keep any chickens or pigs. So it's very unlikely some virus will jump from a wild chicken to you. And even if you got some new virus, you have just, like, 20 other people in your band and you move around all the time. Maybe you infect five others and, like, three die, and that's the end of it.
But once you have these big agricultural cities, then you get the epidemics. People thought they were building paradise for humans. Turned out they were building paradise for germs. And life expectancy and living conditions for most humans actually went down. If you're a king or a high priest, it's okay. But for the average person, it was actually a bad move.
And the same thing happens again and again throughout history, and it can happen now on a very, very big scale with AI. In a way, it goes back to this issue of organic and inorganic, that organic systems are slow, they need time. And this AI is an inorganic system which accelerates beyond anything we can deal with. And the big question is whether we will force it to slow down or it will force us to speed up until the moment we collapse and die. I mean, if you force an organic entity to be on all the time and to move faster and faster and faster, eventually it collapses and dies.
RICH ROLL: One of the things I heard you say that really struck me was this quote: "If something ultimately destroys us, it will be our own delusions." So can you elaborate on that a little bit and how that applies to what we've been talking about?
Our Delusions Could Destroy Us
YUVAL NOAH HARARI: Yeah, I mean, the AIs, at least of the present day, they cannot escape our control and they cannot destroy us unless we allow them or unless we kind of order them to do that. We are still in control. But because of our political and mythological delusions, we cannot trust the other humans. And we think we need to develop these AIs faster and faster and give them more and more power because we have to compete with the other humans. And this is the thing that could really destroy us.
And you know, it’s very unfortunate because we do have a track record of actually being quite successful of building trust between humans. It just takes time. I mean, if you think about it, in the long arc of human history, so these hunter gatherer bands, tens of thousands of years ago, they were tiny couple of dozen individuals. And even though the next steps, like agriculture, they had a downside. Again, like epidemics, people did learn over time how to build much larger societies which are based on trust.
If you now live in the United States or some other country, you are part of a system of hundreds of millions of people who trust each other in many ways which were really unimaginable in the Stone Age. You don't know 99.99% of the other people in the country, and still you trust them with so much. I mean, the food you eat, mostly you did not go to the forest to hunt and gather it by yourself. You rely on strangers to provide the food for you. Most of the tools you use come from strangers. Your security, you rely on police officers, on soldiers that you never met in your life. They are not your cousins, they are not your next-door neighbors, and still they protect your life.
So yes, if you now go to the global level, okay, we still don't know how to trust the Chinese, and the Israelis still don't know how to trust the Iranians and vice versa. But it's not like we are stuck where we were in the Stone Age. We've made immense progress in building human trust, and we are rushing to throw it all away. Because, again, it just takes time. It will not happen tomorrow.
RICH ROLL: Yeah, I mean, I think it’s urgent that we find a way back to repairing some institutional trust.
YUVAL NOAH HARARI: Right.
RICH ROLL: Like that has been degraded in recent times. And I think without that, we stand very little chance as a democratic republic of surviving and solving these kinds of problems.
The Importance of Institutions
YUVAL NOAH HARARI: Absolutely. If you ask, in brief, what is the key to building trust between millions of strangers? The key is institutions, because you can’t build a personal, intimate relationship with millions of people. So it’s only institutions, whether it’s courts or police forces or newspapers or universities or healthcare systems that build trust between people.
And unfortunately, we now see this again, another epidemic of distrust in institutions on both the right and the left. It is fueled by a very cynical worldview which basically says that the only reality is power and humans only want power, and all human interactions are power struggles. So whenever somebody tells you something, you need to ask whose privileges are being served, whose interests are being advanced. And any institution is just an elite conspiracy to take power from us. So journalists are not really interested in knowing the truth about anything. They just want power. And the same for the scientists and the same for the judges.
And if this goes on, then all trust in institutions collapses, and then society collapses. And the only thing that can still function in that situation is a dictatorship. Because dictatorships don’t need trust. They are based on terror. So people who attack institutions, they often think, oh, we are liberating the people from these authoritarian institutions. They are actually paving the way for a dictatorship.
And the thing is that this view is not just very cynical, it's also wrong. Humans are not these power-crazed demons. All of us want power to some extent, that's true, but that's not the whole truth about us. Humans are really interested in knowing the truth about ourselves, about our lives, about the world, on a very deep level. Because you can never be happy if you don't know the truth about your life, because you will not know what the sources of your misery are. Again, if you don't know the truth, you waste all your life trying to solve the wrong problems.
And this is true also of journalists and judges and scientists. Yes, there is corruption in every institution. This is why we need a lot of institutions to keep one another in check. But if you destroy all trust in institutions, what you get is either anarchy or dictatorship.
And again, it’s a good exercise every now and then to stop and think about how every day we are protected by all kinds of institutions. Like when people talk with me about the deep state, you know, this conspiracy about the deep state, I immediately think about the sewage system. The sewage system is the deep state. It’s a deep system of tunnels and pipes and pumps which is state built under our houses and streets and neighborhoods and saves our life every day, because it keeps our sewage separate from our drinking water. You know, you go to the toilet, you do your thing, it goes down into the deep state, which keeps it separate from the drinking water.
If I can tell one historical anecdote about where it came from. So, you know, after the agricultural revolution, you have big cities. They are paradise for germs, hotbeds for epidemics. This continues really until the 19th century. London in the 19th century was the biggest city in the world and one of the dirtiest and most polluted, and a hotbed for epidemics. And in the middle of the 19th century, there is a cholera epidemic, and people in London are dying from cholera.
And then you have this medical bureaucrat, John Snow, not the Jon Snow from Game of Thrones, a real John Snow, who did not fight dragons and zombies but actually did save millions of lives. Because he went around London with lists and he interviewed all the people who got sick or who died. If somebody died from cholera, he would interview their family: tell me, where did this person get their drinking water from? And he made these long lists of hundreds and thousands of people.
And by analyzing these lists, he pinpointed a certain well on Broad Street in Soho in London. Almost everybody who got sick with cholera had, at a certain stage, drunk water from that well. And he convinces the municipality to disable the pump of the well, and the epidemic stops. And then they investigate, and they discover that the well was dug about a meter away from a cesspit, and sewage water from the cesspit got into the drinking water. And today, if you want to dig a well or a cesspit in London or in Los Angeles, you have to fill out so many forms and get all these bureaucratic permits, and it saves our lives.
RICH ROLL: And how does that relate to this idea of the deep state? I’m trying to tether those two notions together.
YUVAL NOAH HARARI: Again, the people who believe the conspiracy theories about the deep state say that all these state bureaucracies are elite conspiracies against the common people, trying to take over power, trying to destroy us. And in most cases, no. To manage a sewage system, you need plumbers, and you also need bureaucrats. Again, you need to apply for a license to dig a well. And it is managed by all these kinds of state bureaucrats.
And it’s a very good thing because again, there is corruption in these places sometimes. This is why we keep also courts. You can go to court. This is why we keep newspapers, so they can expose corruption in the cities, in the municipalities sewage department. But most of the time, most of these people are honest people who are working very hard every day to keep our sewage separate from our drinking water and to keep us alive.
RICH ROLL: And by extrapolation, there are all of these bureaucracies that are working in our interest in invisible ways that we take for granted.
YUVAL NOAH HARARI: Exactly right.
RICH ROLL: You’ve often said clarity is power. And I think your superpower is your ability to kind of stand at 10,000ft and look down on humanity and the planet and identify what’s most important in these macro trends that help us make sense of what’s happening now. And I’d like to kind of end this with some thoughts on how you cultivate that clarity through meditation and your very kind of like profound practice of mindfulness and information deprivation, I should say. Right?
The Information Diet
YUVAL NOAH HARARI: Information fast. Yeah, starting maybe with the idea of an information fast. I think it is important today for every person to go on an information diet. This idea that more information is always good for us is like thinking that more food is always good for us. It's not true. And the same way that the world is full of junk food that we'd better avoid, the world is also full of junk information that we'd better avoid: information which is artificially filled with greed and hate and fear.
Information is the food of the mind, and we should be as mindful of what we put into our minds as of what we put into our mouths. But it's not just about limiting consumption. It's also about digesting, and it's also about detoxifying. We go through our lives and we take in a lot of junk, whether we like it or not, and it fills our minds.
And I meditate two hours every day so I can tell you there is a lot of junk in there, a lot of hate and fear and greed that I picked up over the years. And it’s important to take time to simply digest the information and to also detoxify, to kind of let go of all this hatred and anger and fear and greed which is in our minds.
My Meditation Journey
I began when I was doing my PhD in Oxford. A friend recommended that I go on a meditation retreat, a Vipassana meditation retreat. And for a year he kind of nagged me to go. And I said, no, this is kind of mystical mumbo jumbo, I don't want to. And eventually I went, and it was amazing, because it was the most remote thing from mysticism that I could imagine. It was a 10-day retreat.
And on the very first evening of the retreat, the teacher, Goenka, gave only one instruction. He didn't tell me to kind of visualize some goddess or do this mantra, nothing. He just said, what is really happening right now? Bring your attention to your nostrils, to your nose, and just feel whether the breath is going in or whether the breath is going out. That's the only exercise, like a pure observation of reality.
What amazed me was my inability to do it. Like I would bring my attention to the nose and try to feel, is it going in, is it going out? And after about five seconds, some thought, some memory, some fantasy would arise in the mind and would just hijack my attention. And for the next two or three minutes I would be rolling in this fantasy or memory until I realized, hey, I actually need to observe my breath. And I would come back to the breath again.
For five seconds, maybe 10 seconds, I would be able. Oh, now it's coming in, it's coming in. Oh, now it's going out, it's going out. And again some memory would come and hijack me. And I realized, first, that I know almost nothing about my mind. I have no control of my mind. And my mind is just like this factory that constantly produces fantasies and illusions and delusions that come between me and reality.
Like if I can’t observe the breath going in and out of my nostrils because some fantasy comes up, what hope do I have of understanding AI or understanding the conflict in the Middle East without some mind-made illusion or fantasy coming between me and reality.
And for the last 24 years I have had this daily exercise: I devote two hours every day to just observing what is really happening right now. I sit with closed eyes and just try to focus, let go of all the mind-made stories, and feel what is happening to the breath, what is happening to my body, the reality of the present moment.
I also go for a long meditation retreat, usually every year, of between 30 and 60 days of meditation. Because, again, one of the things you realize is there is so much noise in the mind that just to calm it down to the level where you can really start meditating seriously takes three or four days of continuous meditation. There is just so much noise. So long retreats enable this really deep observation of reality, which is impossible otherwise. Most of life we spend detached from reality.
RICH ROLL: Two hours a day. That’s a commitment. Even in the midst of all the book promotion craziness.
YUVAL NOAH HARARI: I did one before I came here. I usually do one in the morning, one in the afternoon or evening.
RICH ROLL: What a beautiful saying. And obviously your ability to think clearly and write so articulately about these ideas is very much a product of this practice.
Different Practices for Different People
YUVAL NOAH HARARI: Absolutely. I mean, without the practice I would not be able to write such books, and I would not be able to deal with all the publicity and all the interviews and this roller coaster of positive and negative feedback from the world all the time.
I would say one important thing: this is not necessarily for everybody. I meditate and I have friends who meditate and so forth, but different things work for different people. There are many people to whom I wouldn't recommend meditating two hours a day or going on a 10-day meditation retreat, because they are different. Their bodies, their minds are different.
Perhaps going on a 10-day hike in the mountains would be better for them. Perhaps devoting two hours a day to music, say, to playing or to creating, or going to psychotherapy would have better results. Humans are really different in many ways from one another. There is no one size fits all.
So if you never tried meditation, absolutely try it out and give it a real chance. It's not like you go for a few hours, it doesn't work, okay, give it up. Give it a real chance. But keep in mind that, again, different minds are different. So find out what really works for you. And whatever it is, that's the important part: whatever it is, invest in it.
RICH ROLL: I have to release you back to your life, but maybe we can end this with just a concise thought about what it is that you want people to take away from this book. What is most vital and crucial for people to understand about what you're trying to communicate?
The Value of Truth
YUVAL NOAH HARARI: Information isn’t truth. Truth, it’s a costly, a rare and precious thing. It is the foundation of knowledge and wisdom and of benign, beneficial societies. You can build terrible societies without the truth. But if you want to build a good society and you want to build a good personal life, you must have a strong basis in the truth. And it’s difficult, again, because most information is not the truth. And invest in it. It’s worthwhile to have a practice, whatever it is that gets you connected with reality, that gets you connected with the truth.
RICH ROLL: Thank you for coming here today. I really appreciate you taking the time to share your wisdom and experience. I think Nexus, your latest book, is, as I said at the outset, a crucial, vital book that everybody should read. We're entering into a very interesting time, and we are well advised to be as prepared as we possibly can be. And I appreciate the work that you do. And thank you again.
YUVAL NOAH HARARI: Thank you.
RICH ROLL: I only scratched the surface of the outline that I created, so hopefully you can come back, because I've got a million more questions. I could have talked to you for hours.
YUVAL NOAH HARARI: Next time I’m in LA, I’ll be happy to.
RICH ROLL: Thanks. Appreciate it. Cheers.