Read the full transcript of AI expert Mo Gawdat’s interview on Impact Theory with Tom Bilyeu Podcast episode titled “AI, Tech Arms Race With China & UBI”, Apr 9, 2025.
The interview starts here:
Introduction
TOM BILYEU: Due to AI and a changing global order, the world is in the middle of the greatest period of change ever. But because we’re in the middle of it, it is nearly impossible for us to accurately see what’s going on. We are in the fog of war. While the world panics over job loss and killer robots, the real dangers are creeping in quietly and changing us in ways most people do not even notice.
America and China are locked in a cold war and AI isn’t just going to take people’s jobs. For many, it will take their entire identity. It’s already shaping what we believe, how we connect, and even what we value.
Today’s guest is issuing a very strong warning. His name is Mo Gawdat and he’s the former Chief Business Officer at Google X, best selling author of the AI book Scary Smart and one of the only people who truly understands both how AI works and what it’s doing to us. In this conversation, Mo exposes how the American perspective is blinding us to China’s true might. How AI is already changing everything and how we can learn to navigate the rise of the machines. Well, do not look away. Here is Mo Gawdat.
I think of AI like a magic genie that can grant all of our wishes. The problem is the lesson of every magic genie story is be careful what you wish for. What do you think we have to watch out for with AI?
The AI Genie and Existential Risk
MO GAWDAT: AI is a genie that has no polarity. It doesn’t want to do good, it doesn’t want to do evil. It wants to do exactly what we tell it to do. And you know, there is a non-zero possibility, some people say 10 to 20%, which is Elon Musk’s view, others put it at 50% and so on, that we ever face an existential risk from AI. I mean, think about it. 10 to 20% is Russian roulette odds.
TOM BILYEU: You just gave me the chills. That’s crazy.
MO GAWDAT: Yeah, you wouldn’t stand in front of the barrel at 10 to 20%. Right. But my issue is that chronologically we wouldn’t get there. My issue is that I think we have more urgent and quite crippling effects of human greed and human morality first. Let’s put it this way. I think the immediate negative impact of AI is going to be human morality using it for the wrong reasons. My challenge, I think, is that they’re going to make the wrong wish.
And in my current writing in Alive, I basically try to explain that I am almost convinced that there is a short term dystopia upon us on the way to utopia. And unfortunately the short term dystopia is not reversible, so we’re going to have to struggle with a bit of it. But it can be reduced in intensity and in duration, and it’s only wise to start preparing. And 100% of the short term dystopia is not the result of AI. It’s the result of the morality of humanity in the age of the rise of the machines.
The FACE Rips: How AI Will Transform Society
TOM BILYEU: All right, give me some specifics. What specifically are we going to point AI at that will become dystopian?
MO GAWDAT: So I call them FACE rips. So just an acronym to try and remember. I don’t say them in that order, but let’s just quickly list them. F is freedom. We’re going to redefine freedom. A is accountability. C is human connection or connectedness in general. E is economics. R is reality and our perception of reality at large. I is the entire process of innovation and intelligence itself and where we fit within that. And P is the most critical of all of them, which is the redefinition of power.
And if you want to understand them reasonably well, they’re better understood in pairs. Right? So you can start with the easier ones, the I and the E, if you want. The redefinition of intelligence and innovation and how that impacts on the redefinition of economics.
I think we understand that. With AGI, it depends on how you define it, but it really doesn’t matter, because my AGI has already happened. AI is definitely smarter than I am, so I’m done, right? I don’t care how the rest of humanity defines it. That’s their moment. My moment has come.
So if we agree that AGI is happening in a year, this year, next year, in a few years, it doesn’t really matter. Then, as you and I both know, and as we’ve discussed several times, that means that the toughest jobs will be given to the smartest person, and the smartest person will be a machine, which will lead to very significant shifts in our economics.
One shift basically moves the wealth upwards. So there is going to be a massive concentration of wealth for those who invest in the right places and, most importantly, for those who own the platforms.
I mean, it’s not a secret if you look at the history of humanity. The best hunter in the hunter gatherer tribe could probably feed the tribe for a week longer than the second best hunter, and in return he got the favor of more than one mate. That’s the maximum wealth that he could create.
But the best farmer could feed the tribe for a full season if you want, and as a result became a landlord and had estates and wealth and so on. The best industrialist became a millionaire in the 1900s. The best information technologist became a billionaire in the current era.
And automation, if you want: the automation of the hunter is the spear, but the automation of the farmer is the land, the soil. Okay? Most of the work is not done by the farmer. It’s done by the soil. And when you look at the people who are currently building the platform AIs, they will own the soil, the digital soil or the intelligence soil, if you want. And so they will aggregate massive amounts of wealth. There will be a trillionaire before the 2030s for sure.
The problem is, in that process, there is almost full poverty for everyone else. You know, we call it UBI, but UBI really is not something that we’ve seen work in history before. And UBI will come with demands, will come with authority, will come with choices. It can be very utopian in the long term, but it would be very dystopian until it’s fully implemented, if you want.
And even when it’s implemented in the long term, it would impact on human purpose, human engagement, value appreciation, and so on. So think about it this way. You’re getting a dichotomy or sort of an arbitrage between some people becoming incredibly rich to the point where money means nothing at all, and the majority becoming incredibly poor, where they’re basically obedient to be fed.
The Unprecedented Rate of Change
TOM BILYEU: Now, is that because you think that they’re going to lose their jobs? Going back to the statement you made that the toughest jobs will go to the smartest person. It’s very interesting to go through these different things, but I do want to really dive into mechanistically how that’s going to work.
In fact, one thing I want to do before we keep going, and this is something that largely I’ve gotten distilled from you, is what I call setting the table for what’s about to happen. So again, to plant a flag, you and I share a belief, and this is one of the reasons I like talking to you so much, is that, hey, this all ends in utopia. But we go through this brutal interim process that I don’t think people understand how scary this is going to get.
So setting the table for that, you’ve got the rate of change is the thing that I think people are just not paying enough attention to. Everybody can wrap their heads around this thing gets smarter than me, but they’re not understanding how fast this is happening. So if I’m not mistaken, and I know I’m very close, if this isn’t exactly correct, AI doubles in power every 5.7 months.
MO GAWDAT: So, yeah, I calculated 5.9.
TOM BILYEU: So, yeah, okay. I mean, just a crazy count: it doubles in less than six months, so it’s going to double in power twice in a year. That is a rate of change that I think people are going to struggle with. Now, transitional moments always cause disruption. There is a certain rate of change that humans can deal with, but AI exceeds, at least from where I’m sitting, the rate of change that we can handle. And given that, things begin to spiral out of control. And you said something that I agree with, which is that AI will have the power of God. So with all of that, do you disagree with any of it?
MO GAWDAT: No, I don’t. I just want to double down on it and say that it’s at 5.7 or 5.9 months in the absence of new innovation. So you have to imagine that.
TOM BILYEU: Meaning that it could break free and go.
MO GAWDAT: Even faster, 100%. If quantum computing is solved, or if we find a completely new algorithm, or if AIs start to teach each other rather than waiting for us to teach them, or if synthetic data becomes much easier to attain, and so on and so forth. There are so many.
I mean, DeepSeek was just a blow for everyone. A week before, I think, was Stargate, $500 billion. And then DeepSeek comes out and says, we did it, I don’t remember for how much, like 33 times cheaper than GPT-4 or something like that. And you know, it doesn’t matter, because I heard your original analysis on DeepSeek, that they were cheating a little, and I agree. But they were cheating with the same resources that OpenAI had. OpenAI could have taught their model the same way.
Actually, as a matter of fact, they would have been more qualified to teach their model the same way. And yet they were continuously focused on more and more resources, more and more compute. $500 billion worth. And then suddenly you wake up and you go like, no, I don’t need to do that. I can reinvent something in the learning model and it will give me massive improvements.
And of course, most people at the time would go like, oh, so Nvidia is going to go down the drain. And what will OpenAI do? They’ll invest the 500 billion in the stuff that they now found. So suddenly you’re doing it 33 times cheaper, but using 10 times more investment. And there you go, it’s going to accelerate even more and more.
And it’s hitting from every side, Tom. The algorithms are improving. The AI itself, with its math abilities, with its programming abilities, is going to be the next developer. Most of the big CEOs talk about how AI will be the best developer on the planet by the end of 2025. They don’t talk about 2026, when the next AI beats that best developer on the planet and can build stuff that we cannot even comprehend.
And the pace of change. I got exhausted trying to explain this to people, because you really have to be a tech insider to understand the meaning of the exponential function. If you live in any other industry, you’re much more used to linear trends. And this is not even exponential. This is not even double exponential. This is probably quadruple exponential, in the absence of breakthroughs. It is just unbelievable. I’ve never lived through something so fast, ever.
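[Editor’s note: the doubling-time arithmetic in this exchange can be made concrete. A minimal sketch, assuming the 5.7-month figure cited above and smooth compounding; real progress is lumpier:]

```python
# Illustration of the growth implied by a fixed doubling time.
# The 5.7-month figure is the one cited in the conversation;
# smooth compounding is an assumption for illustration only.

DOUBLING_MONTHS = 5.7

def growth_factor(months: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """Capability multiple after `months`, given a fixed doubling time."""
    return 2 ** (months / doubling_months)

one_year = growth_factor(12)    # roughly 4.3x in a single year
five_years = growth_factor(60)  # roughly 1,475x in five years
print(f"1 year:  {one_year:.1f}x")
print(f"5 years: {five_years:,.0f}x")
```

Two doublings a year, as Tom puts it, is the one-year line; the five-year line is why linear intuitions from other industries break down.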
TOM BILYEU: Okay, so looking at that, and I don’t know if you’d want to apply this directly to the letters or not, but I’m very curious: when you think about, okay, AI is moving at a rate that humans are not able to comprehend, which means obviously they’re not able to deal with it, what do humans do in the face of that level of disruption? Is there anything in history that you look to, to say, okay, this predicts how the human part of this equation is going to react?
The Reactive Nature of Humanity
MO GAWDAT: So the only sad reality of humanity is that something has to break for us to react. Right? You know, you and I and everyone with a tiny bit of a brain could have told you in 1999 that a pandemic was possible. Right? It’s really not rocket science at all. We had SARS, we had swine flu. We had so many. And then exactly 100 years after the Spanish flu, 1920 to 2020, you get a few cases, which I wrote about in Scary Smart.
The idea of not reacting – if we had reacted after 20 cases, there would have never been Covid. But we had to wait until it hits us in the face. And then we go like, “ooh, right.” Whether conspiracy or not, whether Covid is manufactured or not is irrelevant. The relevance is we only wait until it hits us in the face, right?
And so something is bound to hit us in the face. I hope it’s the lighter side, right? A massive hack of some security system, or, and I don’t know how to say this without upsetting people, some things have already been hitting us in the face. In the wars of 2024, there was so much killing done by machines, right? In today’s budgets there is so much investment in autonomous weapons. And if you’ve ever searched YouTube for defense conferences, the level of bragging from defense manufacturers, they’re bragging about, “look, this is how I’m going to kill from now on.” And they throw a little drone in the air that flies all the way to a test dummy and shoots it in the head, right?
And I don’t know when humanity wakes up. I honestly don’t. In a very interesting way, I think your question is probably the best question ever, which is: what do you do to prepare for this? And in Alive I write a section which I actually feared people would be upset with, but it got a lot of support. I called it a late stage diagnosis, right? It’s basically an analogy between what we’re going through and a doctor who finds that his patient has a late stage malignant disease. And people normally ask me, how do you speak about this so calmly, and how do you continue to focus on trying to do the best that you can?
It’s because that’s what the best doctors will do. They’ll simply sit you down and say, “look, we found this, right? But that doesn’t count as a death sentence, right? This basically is to tell you that you need to wake up, you need to change your lifestyle, right? You need to take certain measures and there will be no problem.” Right? And I think the challenge is that humanity is not taking those measures. You know, we’re still entering.
TOM BILYEU: How do we get that diagnosis to people? To me it seems self evident that what’s going to happen is people are going to start losing their jobs, they are going to squawk, they understand political machinations, so they’re going to protest, they’re going to make demands of the government. And the question that I have is, what demands will they make, and how will that play out?
And so I’m curious, going to the last one, the P, power. In all of this, that feels to me like the one to zoom in on. I don’t know how you mean power, but I think there’s going to be a great power struggle between humans, what I call the new Puritan movement, and technologists, what some people call transhumanists. For some reason I hate that phrase, but that feels like where the collision is going to happen.
It’s going to be born of people losing their jobs. It’s going to play out as tax the rich. And when you get into these hyper populist moments where the economy is going south, whether GDP skyrockets or not through robotics and AI won’t matter if, at the individual level, people do not get meaning, purpose and dignity out of their work. And so that feels like the flashpoint. And that flashpoint feels like it’s, I mean, 24 months away. It’s not distant future.
The Economic Reality of AI Disruption
MO GAWDAT: Exactly, exactly. It’s shocking, isn’t it? But nobody’s talking about it, right? I have good news to start because we don’t want to just, you know, talk about the doom side in the way you describe it.
There is actually an interesting element that rarely is spoken about, right? All of the productivity gains mean nothing if there is no consumer purchasing power to buy what they produce. Right? Because at the end of the day, if you take all of the wealth and concentrate it in the hands of very few, they’ll only buy Ferraris, they’re not going to buy Fiats. Right? And accordingly, there is no business case to create anything at all.
So for AI to exist and do the work, someone has to have the purchasing power to buy this. So if you take the US economy, I don’t remember last year’s number, but it’s regularly around 62% consumption, not production, that creates the GDP. And so if you take away the 62%, you take away the entire economy. And so you have to understand that the loss of jobs is going to have to be resolved somehow.
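[Editor’s note: a back-of-the-envelope check on that claim. Only the roughly 62% consumption share comes from the conversation; the other shares below are illustrative placeholders, not official statistics:]

```python
# Rough expenditure view of GDP: Y = C + I + G + NX.
# Only the ~62% consumption share comes from the conversation;
# the other shares are illustrative placeholders.

def gdp(consumption: float, investment: float,
        government: float, net_exports: float) -> float:
    """Sum the expenditure components of a 100-unit economy."""
    return consumption + investment + government + net_exports

baseline = gdp(consumption=62, investment=18, government=23, net_exports=-3)

# If mass job loss halves household purchasing power, the consumption
# term alone drags the whole economy down sharply.
shocked = gdp(consumption=31, investment=18, government=23, net_exports=-3)

print(f"baseline GDP: {baseline}")   # 100
print(f"after shock:  {shocked}")    # 69
```

The point is mechanical rather than political: when the largest component of demand collapses, the rest of the economy has nothing to sell into.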
Purpose and meaning and so on, these are interesting philosophical topics we can talk about. But from an economic point of view, you have to keep people alive, otherwise you have no reason to compete, right? You have no reason to create or produce. So there is good news there.
The issue here is that it doesn’t take a lot of intelligence to see this. Any normal economist with a simple economics degree will tell you that you need the consumption side. Now, we know this, but nobody’s doing anything about it, okay?
Nobody’s doing anything about it, not because it cannot be resolved, but because nobody assumes the responsibility that this is their bit, okay? The payment of humans through work has been outsourced to the capitalists, not the government. And so the government does not understand what it takes to pay humans, because that’s communism, let’s not do that, right? Or socialism, at its most generous reading, right?
And so the idea here is that this is a very deep ideological divide. The people that need to jump in and engage won’t even allow themselves to say which camp they’re in, because they’d lose their positions. And the capitalists are doing what they know very well, which is: take the money, make us more profitable, the economy will find a way. Not at that scale of transition. The economy will need intervention.
So the bad news is, unfortunately, we’re going to hit a very rough patch before we start fixing it. But the good news is that we saw with furloughs and government incentives during COVID that governments are capable of doing this, with a lot of money printing, which destroys the economy for a while. But eventually we will figure something out, right?
The Power Struggle in the AI Era
The struggle is the struggle with power that you mentioned. So we have a divergence of power that’s never happened in history before. Power normally aggregated to the top. And what you’re going to see with artificial intelligence, or intelligence at large. I mean, think about it this way.
You and I, before we recorded, were talking about how we’re using AI to become more intelligent. The way I look at it is, now I go to my AI and I borrow 40 to 50 IQ points, right? And you and I know that if you’ve ever worked with someone who’s 40 IQ points above you, that is staggering. That is an incredible amount of brainpower. And I can now borrow this at 8am every morning. It’s just incredible.
So those who borrow intelligence will become more powerful. That’s the reality. The problem is, as I said, those who own the platforms, those who own the edge in the Cold War, in the arms race to AI, will at some point aggregate so much power that it actually becomes uninteresting to give it to the rest of us, okay? And that includes you and I, by the way.
So, you know, middle class, upper class, lower upper class, whatever, it doesn’t matter. You have to imagine, and I use a freakish example, the first person that completely augments their brain with AI will immediately make sure that nobody else gets this. Because if I promote you to the position of God, why would you make other gods? As simple as that. Okay?
And of course, you can take that at a nation level, at a company level, at a team level, whatever. So this is where the Cold War is taking us. Massive concentration of power on one side, right, with the democratization of power on the other. Because still, you and I...
TOM BILYEU: When you say Cold War, who’s the Cold War with?
The Global Power Dynamics
MO GAWDAT: Oh, man. You know, most of my best friends are American, but can I request permission to speak freely?
TOM BILYEU: Please.
MO GAWDAT: Yeah. You know when you’re in school and you’re 11, and one kid becomes taller than the others, and then he bullies everyone? Right? And then when you’re 16, most of you are taller than him, but you just don’t want to really disappoint him, so you sort of tell him to stop bullying, but he continues to bully everyone.
Yeah. So somehow 1945 until today was a world order that, unfortunately, I don’t believe will continue. Okay. And America keeps trying to say, I will force everyone into submission, but the other kid is really, really tall now. Like, seriously. Okay.
And they are, again, most of the Western media hides those facts, but on a purchasing power parity basis, their GDP is much bigger than yours. And from a unity point of view, the world everywhere doesn’t want a single power to rule anymore. Okay. Especially when the single power, for the last few years, has completely abused its power.
I mean, for the last many, many years. But it became a bit like, you’re not the tallest one anymore and you’re really a bully. You’re really an annoying bully. So seriously, let’s slow it down. Right? And you can see examples of that everywhere.
The Canadian response to the tariffs, again, I don’t know if that’s shared in the US or not, but the Chinese response and the Russian response to the tariffs are actually quite interesting. US politics believes that it can twist the arm of China. On the same day, half a billion dollars worth of American beef was returned to America from Chinese ports. The difference is the Chinese didn’t say it’s tariffs. They didn’t say, we are not going to buy. They simply said, it does not meet health and safety standards. Right. Very, very hidden. And half a billion dollars, for Texas, is a reasonable punch, okay?
And when you really think about it, I had this conversation with my wonderful friend Peter Diamandis. Actually, no, it was with Scott Galloway. You know how Scott is, very, very pro doing what needs to be done. And I, unfortunately, believe there is no logical way, no quadrant on the game board in my mind, for someone to win intelligence supremacy.
So you have America trying to accelerate a cold war where they want to have the biggest bomb, an AI bomb in this case. It seems the world is not responding the same way. Actually, if you look at it internationally, the Chinese are really not trying. Every now and then they sort of say, look, we can if we want to.
But the problem here is this. This is not an AI-only war, okay? This is an AI war where one bully is trying to force everyone into submission in a world with major nuclear powers, okay? This is the shittiest idea ever, okay?
And the problem is very straightforward. If you try to force someone into submission, as soon as they feel that they’re about to submit, this is going to escalate out of proportion. Now, there have been multiple examples in our world where we cooperated internationally for the greater good. CERN is a great example of that, right? The space station is a great example of that.
And we can do that. And believe it or not, all it takes is for the bully to say, “hey, guys, can we play nice now? Because this is incredible abundance, and we’re all threatened by cybercrime, or what I call ACI, Artificial Criminal Intelligence, which is right around the corner. Can we just please play along now, all of us? Let’s all get in a room, let’s develop AI for the benefit of everyone. Everyone is going to make a lot of money in the first five years, but nobody’s going to need money after five years anyway, because everything will be available for free. Can we please play along?” And the bully doesn’t want to do that, which upsets the rest of us.
The AI Arms Race and Global Economic Challenges
TOM BILYEU: The fact that this is all happening as the Thucydides Trap is set, it’s one of those things that makes you go, wow, we really are living in a simulation. And this is maximally interesting, I guess, but absolutely terrifying. That’s a really clear way of expressing what you were talking about at the beginning, which is that my worry isn’t AI. My worry is, you said greed; I’m going to broaden it out to the human ego and all of its complexities.
And for people that have never heard of the Thucydides Trap, it goes like this, and this is literally from ancient Greece. They recognized that when you have one great power that is declining and another great power that is rising, as happens in a never ending cycle, the declining power absolutely refuses to relent and acknowledge the rising power as its peer, or, God forbid, as somebody that has surpassed it. And the rising power will simply not accept not being recognized for the power it has become.
And so this setup becomes really predictable historically, because what you have is this impulse toward protectionism on the part of the declining power: whoa, hold on a second, we did this globalist thing, it’s made our enemy more powerful, we want to now try to cut them off, we want to retain our power. They start bullying. They will inevitably be up to their eyeballs in debt, which, read Ray Dalio, he just pegs this: hey, you can just watch the debt and you know how this is going to play out.
And so here we are, but this time on the cusp of building a superintelligence. And every time I go through this, because I think anybody watching, their impulse is going to be to say, hey, whoa, if we’re talking about a rate of change that is just insanity, we need to pump the brakes; why aren’t we doing that? And then you remember that you have two great powers staring at each other. Both recognize AI as the most tremendous weapon since nuclear. And so they are each stuck in the prisoner’s dilemma: if I don’t do it, I know they’re going to do it. And so it is an existential need to be the one to develop this first. And so there are no brakes, correct?
MO GAWDAT: Correct. Yeah. It is. And you know, I’m too small to show this to the world, and it frustrates the hell out of me. Okay? But if you’re an applied mathematician, there is no quadrant on this game board that works. And I’m not fear mongering here. I’m basically telling every citizen everywhere in the world to wake up, to go to your congressman or whoever, okay, and just tell them, we don’t want our lives to be toyed with this way.
And you know, I spoke about accountability in the FACE rips. The challenge, Tom, is that my life is being decided by Sam Altman. I never elected Sam Altman, okay? This is not right. And if this goes on, nobody’s going to stop and say, hey, Mr. Altman, come here and tell us what you’re doing. It’s too late right now.
The Economic Crisis Looming
The more interesting side, by the way, and if you don’t mind, we’ll go back to AI, but you mentioned debt. If you don’t mind me saying, the challenge of America is not debt. Debt is massive, it’s like the biggest challenge on earth, okay? But what’s becoming bigger is inflation.
So if you look at your modern history since Nixon, everything in America that was made in America and sold in America, services, housing, whatever, has been rising in price, okay? Everything else that you imported was going down; literally, it first went down before it stabilized. So you were basically exporting the inflation over the last 50 years, okay, to the rest of the world. And the way you did that is: we sold you stuff, you gave us dollars. Worthless, printed, okay?
So we put them back in your market. And after you sanctioned the Russians, everyone that I know who’s a multi billionaire said, oh, so if my government upsets their government, my money goes? No, I want to withdraw my US dollar treasury assets, okay? And so you can see Japan is 25% down. China is, I don’t remember the exact number, hundreds of billions down, okay? Everyone is doing the same: they’re shipping your dollars back to you, okay?
And so basically what you’re ending up with is an economy with so many more dollars and a limited number of goods to buy; everything will go through the roof. And then, brilliantly, you decide, on top of that, let’s add tariffs, so that goods become 25% more expensive immediately and American manufacturers get some slack, so that they too raise their prices by 24%. Okay, and who’s paying for all of this? American citizens.
So in my mind, believe it or not, there are two wars if you live in America. One war is the cold war of intelligence supremacy. Right. And the other war is, I truly and honestly fear, instability in America. Right. I truly and honestly don’t understand how all of my friends, some of whom are millionaires, okay, will survive this. Because unless you have all of your money in gold, maybe, I don’t know, even gold is not safe, right? What will you do? What can you do?
From a liquidity point of view, that asset called the US dollar is not going to buy you the same things as it did last year. And you have to walk the streets of New York. Oh my God. I hadn’t been to New York for a while, and then I went a couple of months ago. I’m sorry, but this is a dump. Compared to Shanghai, this is Delhi. It is really deteriorating, infrastructure wise. California is deteriorating, infrastructure wise. Some parts of the US are holding it together, but everywhere else is just... And I don’t know how people are sustaining all of this.
AI as a Potential Solution
So there is an opportunity, believe it or not. And I say that openly in Alive. I say AI is not the existential risk to humanity. It is our salvation. It can solve all of those problems. All we need is for the top guys to say, all right, you know what? The open letter, suspend AI for six months, doesn’t work. Okay, let’s just pool all of our efforts together, okay? Do a CERN kind of committee, develop AI for everyone, and just basically make everything free. Right?
And whoever is rich today, we’ll give you the opportunity to buy your cars in orange. So, hey, ego satisfied, you’re the only ones that get orange cars. All of the rest of us get green cars. Okay? And it’s solved, honestly. Because, by the way, if you solve energy using intelligence, making cars becomes free. If you create robotic workforces using intelligence, making garments becomes free. Literally free. Like, this becomes 2 cents.
And how are we not betting on this abundance? Because we’re constantly stuck in that scarcity mindset of if we don’t win, they win. I think what’s going to happen is nobody wins, we all lose.
TOM BILYEU: Yeah, I think that that is a bitter pill that you and I have both come smack bang into. Before we get back to that, I want to walk through the way that I see this moment in debt and all of that. So, one, if you can sharpen my thinking, I'm here for it. But I think there are two really important things that the world should be paying attention to right now.
One is obviously AI. I don't think it'll be winner-take-all, even if for no other reason than this, and I don't know that my read is exactly correct, but given the things that you cited: scientifically, we tend to share insights. Even if you take the US nuclear program, that information leaked to Russia, maybe because people were being paid, or maybe because they knew that one country having this was a very bad idea.
You see very similar things with CERN, a lot of cooperation where people realize, hey, if we're going to solve the fundamental nature of physics, it's better for the entire scientific community to have it. You see the same thing happening in AI, where they're sharing all these breakthroughs as fast as they can. Look, as an American, I'm admittedly suspicious of China, but even DeepSeek published the paper. It's open source. All that information, all of those insights to make things more efficient, are getting out there.
I choose to read that as the computer nerds that are drawn to this acting more like the science side of computer science than just the computer side. And so there's a sense of sharing all of these breakthroughs and all of these insights. You mentioned Emad earlier. Emad is on an absolute crusade to make sure that AI is open source so that people can have access to what could be, on the bright side, just incredible intelligence. Like you said, we can all go take advantage of 40 or 50 points of IQ, which will obviously grow to be 400 or 5,000 points of IQ, but also a weapon. And so making sure that everybody is at least in mutually assured destruction territory is better than one power having it. Okay, so that's the first thing.
The second thing is the Cold War between the US and China. And I'm going to paint maybe an even darker picture than you.
MO GAWDAT: If that’s darker, I was very grumpy. You can’t go darker than that.
The Cold War Between the US and China
TOM BILYEU: I think this is just objectively real. So, okay, we both agree that what you're up against is human nature. Forget about AI for a second. Just, what are humans like? We've already talked about Thucydides's trap. You have two powers that are on a collision course, and history tells you there's really no way out. And I think, as we both agree, AI is the potential way where we all grow our way out of this debt trap.
But focusing on the Cold War between us and China. So the entire modern world is predicated on chips that are coming out of a small island off the coast of China known as Taiwan. And you’ve got China that has been very clear: “We are going to reintegrate with Taiwan.” You have China rising as a regional superpower where they’re going to have their sphere of influence, obviously globally, economically, they matter tremendously. And they’ve been building allies all around the world as we are now trying to alienate them as fast as we can.
But it comes down to this now. I think despite what I call Trump's hokey pokey tariffs, which are, from where I'm sitting... if you listen to Trump, you're going to drive yourself crazy. If you listen to Scott Bessent and Howard Lutnick, there's at least internal logic. And so I'll walk you through my read on what they're trying to do and ask everybody to ignore the chaos right now that Trump is creating.
So if I'm Scott Bessent, Secretary of the Treasury, or Howard Lutnick, Secretary of Commerce, I'm one of the greatest capital allocators of all time. These are two of the best people at reading the global markets and profiting from it. And I'm looking at this Cold War that I just explained. I understand Taiwan and how much that's going to matter. I understand that I've been able to export inflation across the world for a very long time. I understand that people are now responding in a way that's negative to us. I understand that we have insane debts and we're going to have to start bringing those down.
I'm looking backwards. These guys know Ray Dalio intimately, so I guarantee they've read his books on debt and the cycle that it moves in. And so they're going, "Okay, hold on a second. This is how empires end." America may not have been an official empire, but obviously, with military bases and all that, we act like an empire. We have the same expense structure as an empire. And so we are now in a position where we're going to have to deal with that debt.
And looking at the way that they are moving, again, I'm asking people to set aside the rhetoric of Trump, the sort of chaos of Trump, and look at the threading of the needle that they are trying to do. And I think it goes like this. We have to find a reduction. And these are literal words from Howard Lutnick: "We have to find a trillion dollars of fraud, waste and abuse because the U.S. government spends 2 trillion more than it takes in, in taxes." Cue DOGE. And we have to make a trillion dollars in newfound revenue. Cue tariffs. Cue the Trump gold card and a whole bunch of other things.
Okay, you’ve already pointed out the danger of the tariffs, and we’re seeing the second and third order consequences of what Trump is doing. From a theoretical negotiating tactic standpoint of create chaos, ask for the moon, be willing to settle for something more reasonable. That puts us in a game of chicken that I’m going to set aside for a second and say, if I am correct, that we are in a cold war with China, that we are racing towards Thucydides trap, meaning that you have a high risk of kinetic war between the US and China.
You cannot, you just cannot, even just morally, you cannot be in a position where your number one adversary controls whatever ridiculous percentage of your manufacturing base. And so you have to find a way to onshore some of that manufacturing. And if you look back at World War II and you say the story that America tells itself about what America is, is, “Oh, Japan messed with us, we turned our manufacturing might on, and we win World War II.” That’s the mythos in the American mind. And I think there are a lot of people with that latent story running in their brain that think we’ll be able to do that again. And they don’t understand we don’t have a manufacturing base. We make technology.
And you're seeing what I'll call phantom investments in the US, because I think they're all waiting to see what happens at the midterms. But you've got all these phantom investments in the US: "We're going to bring all this manufacturing back." You've got TSMC, the chipmaker in Taiwan, saying, "Hey, we're going to make this huge investment here in the US," which, if I'm them, makes sense, because if I don't want to be reintegrated with China, I need to have that escape valve of being able to build in the US.
But setting that aside, that becomes the milieu of things playing out right now. You have to bring some manufacturing back, because if you are correct, and I think you are, the future of warfare, for better or worse, is drones. Drone manufacturing right now is 85-90% China, full stop. And I'm talking the whole thing, all the parts, everything. Even if you're trying to make drones here right now, you're beholden to a supply chain that runs through China, so they can choke that immediately. And they are anything but stupid.
And so if we’re moving towards a kinetic war, they just turn that switch just like they sent the beef back. They just go, “Nope, no more drone parts for you.” So I agree that this is a super precarious moment. And boy, do I wish that everybody could just say, “Can’t we all get along?” But we won’t, that I’ll just take off the table. That’s not going to happen. And given that that’s not going to happen, how else do you play it?
Rethinking Global Manufacturing and Diplomacy
MO GAWDAT: So I have to first agree with every part of what you said, honestly. But I’ll try to give a slightly different twist on a few of them.
One of them is manufacturing, because I actually agree 100%. But going back to AI and robotics, if you and I just hold on for three years, the entire edge that China had of cheap labor, which has now become large manufacturing capability, moves back everywhere in the world. Because you can literally hire robots, put them in rooms day and night, and get them to manufacture whatever it is that you want, which basically means that cost of energy and cost of shipping become the only deterrents to moving goods around the world.
So it is basically a no-brainer that we will get to the point where we take out the capitalist arbitrage, which was the entire idea of capitalism: how can I get labor or manpower to do the work for less than what I can sell it for? Right now, interestingly, as we take humans out of the workforce, it equalizes across the world. It's five years away. Could be sooner, by the way, if we start with interesting industries.
The second, and I say that with a ton of respect, is that when at war, war does not have to be aggressive. So the idea here of pissing off China around Taiwan makes China, who also depends on Taiwan for the chips in everything that they make, basically think the same way: if America has a foothold over Taiwan, we, China, are afraid. So you're escalating the fear. The opposite is to say, again, like we said with CERN, can we agree that Taiwan is just going to continue to support everyone? Right? And I think that's a conversation that is very difficult to have, but if it is the switch between humanity's existence and continuation and not, it will get resolved.
The third, which I think is really where the core issue is: when times get tough, we tend to do more of what we know how to do best, which is normally what got things to be tough in the first place. So when America competes with China on artificial intelligence, for example, they sort of say, "Okay, only H800s, no H100s from Nvidia. We're going to sanction you from this. We're going to make it illegal for people to invest in China. We're going to do this, we're going to do that. No more Chinese students can come and study in America," and so on.
And those tactics could have worked against the China of 70 years ago that was starving to death. When you use those tactics against today's China, they immediately say, "Okay, how much does it cost for us to create our own fabs and make our own microchips? What do we need to change about our students so that they become the best in the world?" 2% of all AI scientists in America are Chinese. Who's being hurt by that fight? It's America.
And it is interesting that the American people are not fully informed of this, that those bully strategies are now met with the world saying, “Okay, you know what? If you’re going to sanction Russia by taking $300 billion out of the Russian oligarchs, then by definition, every other oligarch in the world is going to de-dollarize.”
Now, instead of saying, "I'm gonna tap the table and I'm gonna shout at everyone and I'm going to be even more of a bully," you might as well say, "Okay, guys, you know what? I understand that upset you. Can we talk?" Because the one being hurt by this is the American people. American policy somehow is running in a way that basically says, do more of what you know how to do best.
Now, the more interesting part of this, Tom, and I really urge you to think about this, is that China has never in all of history invaded outside its borders. There was one case in Vietnam, which was again instigated by the U.S., and it didn't last for long.
The other side of this is that if you look at the map of the world today, America has 180-plus military bases across the world. China has one, which protects shipping through the Red Sea. They are explicitly giving the world signals that "We don't want to dominate the world like an empire. We want to become prominent for the world, mainly economically, so that we can feed our 1.4 billion people." And I may be wrong, but there has not been a sign of aggression issued by China in your lifetime or mine. There hasn't been one.
So what are we reacting to? We’re either reacting to manufactured signs so that we can continue to have our forever war, or maybe we’re exaggerating and hurting ourselves in the process. And I think this is where the conversation needs to happen.
Now, forever wars could mean that millions of people die someplace far across the world, like we saw in Vietnam in the 1960s and 70s. Unbelievable, and unacceptable if you ask me. But, you know what, the American people will not feel it. Bringing the war home economically, though, the way America is doing it, is different. It's clear, if you're sitting in my seat outside the US, that everyone, everywhere in the Global South is saying, "I don't want to be bullied anymore."
And the minute you give them an alternative through BRICS or whatever that says, "Hey, can you ship to me using my currency," they take it. And somehow it's not that we don't like America; it's just that we don't want to be bullied anymore. And in a very interesting way, it's to America's benefit to suddenly say, "You know what? While I'm still taller than all of you, I'll make you my friends, so that when you're taller than me or as tall as I am, we can play together."
This Cold War is working, believe it or not, against America. And this Cold War, believe it or not, even in tech, in AI, is being lost by America. So DeepSeek comes in, Manus comes in, quantum computing chips come in. They have 105 qubits now in China. And I don't know how much more I can tell the politicians in America: you're not winning this through aggression. Win it through diplomacy. Everyone wants your market. Everyone loves you, loves the movies you send us. We love your music. We really have nothing against America, but the rest of the world needs to also protect their own sovereignty. And more aggression is not helping anyone.
TOM BILYEU: Okay, so let me see if I understand. What you're saying is that you, America, need to understand that you, China, are a rising power that the whole world has...
MO GAWDAT: No, no, no, no, no. I'm sorry to interrupt you. China is the world superpower. It is the world superpower in purchasing power parity. They have a bigger GDP than America and have had for a very long time. And because of America's trade deficit for so many years, most of the world is much more dependent on China than they are on America.
The Economic Cold War with China
TOM BILYEU: Okay, so you guys have already been passed economically by China. So the Cold War that you're trying to wage with them does not make any sense, because it's not that you're the only ones that are going to get hurt, but you are going to be disproportionately hurt. I'm going to stop there, because I think the next thing I'm going to say is going to be a prognostication; that statement makes a prediction. But first I just want to make sure that I got that far correctly.
MO GAWDAT: I don't think you're going to be disproportionately hurt. To be accurate, nobody knows, okay? I think the rest of the world will probably pay more than the two superpowers, right? But you're going to be hurt. And there is a way where this doesn't hurt anyone, so there is no need for the pain.
TOM BILYEU: So walk me through the way that this doesn't hurt anyone, because what you're about to say is going to be based on your assumption that China is not an aggressive nation. They're a nation of influence, to be sure, but they're not going to put military bases everywhere. They're not going to go into foreign incursions the way the US has. And so therefore you have, I don't know that you'd use these words, but you have nothing to fear, essentially, from a strong China.
Military Power vs. Economic Reality
MO GAWDAT: From a military point of view, the day China puts in a second base against your 187 or whatever, start to worry, okay? But it’s one military base outside China versus more than 180 for America. You’re still the world’s superpower militarily, okay? So nobody wants to attack anyone. This is not a war, okay?
From an economics point of view, the biggest threat I believe America has is not debt, okay? Because you have the military power to back your debt. The biggest challenge, in my view, and I'm not an economist, is inflation and how inflation will hit your nation. And inflation has two sides. One is the cost of goods on American soil, which is going up because of tariffs on imported goods, while locally manufactured goods will have a margin to increase their prices. But more interestingly, it's because everyone is sending your dollars back to you, right?
So I'll tell you very openly: I'm very interested in classic cars now. I buy most of my classic cars in America. Why? Because then I can send you dollars and get the goods. I can send you dollars that I'm afraid will be inflated into lower value, okay? And then if I keep the classic car here, I can sell it here, or I can sell it in Europe, or I can sell it in Japan for money that is real money as the US dollar loses its value.
And the risk of inflation, in my mind, is that the American people are paying for it. Okay? And America is not the safest place on earth if people become hungry, because of the Second Amendment. This truly is in my mind. I'm really sorry, I honestly do not have the right to comment on American policy. I'm just looking at it from a very basic...
TOM BILYEU: This is so helpful. I get it. You have to worry more about the comments than you have to worry about me. But I hunger for perspectives that are not my own, and so getting a chance to look back at America through your eyes is incredibly useful. So, at the risk of you having to deal with whatever people will think, I am grateful.
MO GAWDAT: I have all good intentions, by the way, for people who are about to comment, I only have good intentions. I’m not against America or against China or against anyone. I’m just basically saying my daughter and everyone’s daughter is at risk. And if that means you’re going to comment negatively on what I say, thrash me. It’s okay. But keep my daughter safe.
Challenging Western Perspectives on China
TOM BILYEU: Okay. So, one, we certainly share a belief about inflation; my audience has heard me talk about this so much. Inflation is the devastating force that everybody has to worry about. You've given me a perspective on China that is very fascinating, and I'd be so curious to get more data on it. My understanding of the Chinese economy is that they beat us in some areas and they lose to us in others, but that on overall GDP, we still win. But you're saying that's inaccurate, that's basically Western spin.
MO GAWDAT: That's it. Yeah, it's US-dollar GDP.
TOM BILYEU: Yeah, so that’s very interesting. Also, I have a formulated vision of the Chinese economy as being weak at this point, given all the crazy investments that they made in getting their own populace to buy housing. Again, I’m perfectly willing to accept this is all spin, please.
MO GAWDAT: Yeah, there is a huge spin on that. What ended up happening when America declared economically that it was going to try to slow down China is this: China is very different from America when it comes to economics, because they're able to make a decision at a state level that they don't need to convince the capitalists of. Right. They simply instructed their banks to stop funding mortgages, because housing is less important than industrial capacity.
So if you slice the economy and look at housing and the mortgage crisis and what's happening in China, it looks like an economy in decline. But those funds are being reinvested in industrial capacity, in the spaces where America threatened to starve them. So when it comes to microchips, for example, a lot of Chinese officials will tell you that within six to eight years they will be building chips that are more powerful than Nvidia's. Okay? So this economic shift doesn't mean they're poor. They're just using a different strategy to invest in a different part of their economy.
TOM BILYEU: We'll get back to the show in a moment, but first let's talk about the money hiding in plain sight within your business. Small business owners leave thousands of dollars on the table every year, and the reason is that they do not have time to track every potential tax write-off or optimize their financial systems. They're just too busy. That's where Found comes in. This business banking platform automatically tracks expenses, identifies tax write-offs, and manages invoices, all in one place. One Found user said, "Found makes everything so much easier. Expenses, income, profits, taxes, invoices even." And there are over 30,000 more five-star reviews just like that one. Open a Found account for free at found.com/impact. Found is a financial technology company, not a bank. Banking services are provided by Piermont Bank, member FDIC. Found's core features are free. They also offer an optional paid product, Found Plus. This is a paid advertisement. And now, let's get back to the show. Why do you think the West, or maybe let's just be very specific, why do you think America fears China?
America’s Global Strategy and the Dollar’s Power
MO GAWDAT: Again, I don't have the right to say any of this, and please correct me as well, Tom; you're so generous to say correct me if I'm wrong. I think the origin of where we are is post-Ronald Reagan supporting Gorbachev, in a way, after the fall of the Berlin Wall. There was a fascinating documentary on Netflix about the nuclear escalation, I don't remember the name, but basically Gorbachev was actually very open to integrating into the global economy and becoming Western, right?
Clinton signed, I think in 1994, if I am accurate, a defense strategy that was actually public information; you can research it. It was called full spectrum dominance, okay? And full spectrum dominance was America's opportunity to celebrate its unipolar world power, okay? To say, look, we've achieved this. Now let's retain it forever, okay? And retaining it forever meant: we want to be the top economically. We have our US dollar being the reserve currency of the world. We have military bases everywhere. We will not let anyone rise, okay? And that way we maintain our power as the superpower of the world. And that worked. It worked really well, okay?
And it worked really well. If you ask me, most people think that US power is military power. That is not exactly true. The difference between US power and the rest of the world is in conventional combat, okay? If this escalates to nuclear, the US is not that superior, because we're all screwed; it doesn't matter. And so again, if you're an applied mathematician and you look at this game board from a strategy point of view, it's like that movie, remember WarGames, where the computer at the end goes, "strange game. It seems that the only way to win is not to play," okay?
And I think the reality here is that, yes, America continued to escalate and aggregate more military power, but this military power, unfortunately, is causing more risk to Americans and all of us than anyone else, because nobody else wants to fight, okay?
Now, the full spectrum dominance strategy (weren't we supposed to be talking about AI today?) was basically broken by China escaping. So China's economy escaped, okay, in a very interesting way, because it was absorbing the inflation exported from America, okay? I don't remember the title, but there was a fascinating book about the price of a pair of jeans in the US in the 70s, 80s, 90s and the 2000s: exactly the same. It didn't become even a dollar more expensive. Who was paying for inflation? The Chinese workers who were celebrating coming into the workforce to find a way to live, okay?
Now, once China escaped, okay, America suddenly realized, oops, it's not global dominance anymore, because economically and manufacturing-wise, we're not dominant anymore. And so the typical approach is: let's follow the strategy and continue to pursue dominance, which is, you know, what you're good at. But it's not happening anymore.
The second break, I believe, was the sanctions on Russia in the Ukraine war, okay? This was an abuse of economic power that I think triggered the wealthiest people in the world to say, I can't trust this, okay? Not because I don't trust America, but because I don't trust my leader not to piss off America. And that's a massive, massive outflow. And you'd hear President Trump talk about this every now and then: if anyone attempts to de-dollarize, I will hit them with this or that punishment, okay? Because this truly is America's biggest power, okay?
America's biggest power, Tom. I lived and worked in the United Arab Emirates my whole life. This is my base, so it's tax-free, right? But yet I paid part of my income to America every single day of my life, having not bought anything from America, just because I own US dollars, right? US dollars that I buy with my effort and America prints for free. So as you look at your debt increase, okay, debt going from, whatever, a billion dollars I think in the 70s or something like that, to where it is today, 33 trillion or something like that, that debt increase, we paid for it, every single one of us, as we took US dollars in cash and kept them, okay?
And I'm nobody. But if you're a Chinese oligarch, or a Russian oligarch, or a Saudi billionaire, or whoever, right, this is your money that you kept in US dollars, and everyone was happy. We will sell you goods, you'll give us US dollars, we'll live a fine life, we'll put it in your treasury bonds. Everyone's happy, okay? Let's not talk about this. We all know this is all fake. It's Monopoly money. We all know, but everyone's happy, okay?
And then at some point in the process, the bully said, no, you know what? I'm going to take your Monopoly money. It's not a nice way to play. And then suddenly the rest of the world is like, hold on, I want my money to be more secure. I'm gonna put it in other things: some crypto, some gold, some assets in my local country. I'm going to buy real estate in the US because that's going to inflate like hell, okay? But I'm not going to give my money to your government. And that is your biggest power. The US dollar was America's biggest power. It was never the military.
The Cold War’s Role in the AI Transition
TOM BILYEU: Okay, so how do you see this playing out? So again, to re-anchor everybody, you and I both share the following vision: that AI is the only thing that has the power to take us to, and I'm always nervous when I say the word, utopia. But I think we both share a belief that AI itself will drive energy cost to zero. And if energy costs go to zero, once you understand that robots eat sunshine, then labor costs go to zero, and so you have the ability to literally create a world of abundance, as you just said.
But that's on the other side of this transitional moment, which, I sadly feel, you have to sort of hedge and apologize for, or say that you don't have a right to give a perspective on. I desperately want smart, sincere people to give me a perspective, especially when I don't share it. So having your lens on the way that we look to the outside world is incredibly advantageous. Your view on China, which is very different than mine, is very advantageous. And you're giving me a lot to pursue when we're done talking here.
Now, understanding your perspective, how do you see this moment playing out? I see us and China on a collision course. You're telling me I'm probably misreading China, and that there's certainly an appeal to be made to the US government not to perceive China as a military threat. So, with your perspective, what do you see the Cold War's role being in this transitional period before we get to that age of abundance?
The Dangers of Power Concentration
MO GAWDAT: I unfortunately believe that this concentration of power or that race for supremacy that leads to concentration of power is going to hurt us both ways. One way, as I said, just so that we get back to AI, is that someone will attempt to reach supremacy first. And as they do, they will have a massive fear of the democratization of power that’s happening.
Because, you know, you and I can sit down today and write code and launch drones, and use a CRISPR kit to launch a virus in the world. It's open source, believe it or not; it's about $2,500 for a kit or something like that. You can do so much with the democratization of power that the very immediate result of this dichotomy is a suppression of freedom.
So those who are in power will start to surveil everyone, will start to push everyone down, or start to control everyone through your bank account, through your UBI when UBI is launched. It's almost that dystopian view of a world where, if you don't comply, you don't live another day. How extreme it will get, I don't know. It could be one day, it could be a year. But it is on the horizon that a mixture of concentration of power and democratization of power will lead to more oppression of freedom.
The other side of this is the struggle between the top powers. The two top powers, for now; there could be a third, okay, but it's unlikely. The two top powers will compete. And the problem is that supremacy is the worst outcome we can get in a world where major nuclear powers exist.
Because if we get to a point where one side recognizes supremacy on the other side, they will retaliate, and they will retaliate in a war that will quickly escalate, because everything is at stake. Right? And so when the stakes are high, the retaliation becomes higher.
The Path to Abundance
Neither of those scenarios is a scenario you want. What you actually want is to distribute power. As a matter of fact, you want to imagine a world where everything's free, which, I know, sounds really weird, but I promise you, I'm not a hopeless romantic. This is literally at our fingertips.
So imagine a world like the one where Native Americans walked the land and would pick fruits from the trees or hunt every week or whatever. Total abundance. This is exactly the kind of world we're able to build when manufacturing cost becomes zero. But instead of trees where you pick apples, you can have trees where you pick iPhones. And you can have both. It's as simple as that.
Intelligence is the most valuable resource on the planet. I openly say: give me 400 IQ points more and give me three days, and we will solve climate change, we will solve the energy crisis, we will solve water, we'll solve everything. These are not impossible problems. They are problems we're not focusing on because we don't have the intelligence resources to solve them yet, and perhaps because they're not the most immediate economic return. But they are solvable.
So we need to imagine a world where the very base of capitalism, which is the labor arbitrage, is going to disappear, and start to ask ourselves about a world where the very basis of a democratic society, as it differs from socialism, is going to disappear. UBI is a form of socialism. And it is shocking that we are not discussing how we want these massive shifts to be. We might as well sit down and discuss how we can do them.
And in my personal view, in all honesty, the only answer our world has to escape the dystopia is to sit together and say, let’s not fight anymore. Let’s prepare for AGI. Let’s prepare for criminals that will attack us. Build the antivirus if you want. And at the same time, create abundance for everyone. If we make that tiny shift and we have a handshake, you and I and everyone will spend the rest of our lives having wonderful conversations and chatting to AIs and inventing things.
If we don’t get that handshake, we will get a dip that will hurt so badly that they’ll then rush and go and try to get a handshake.
The Two Dilemmas
Either way, I call it the second dilemma. We are where we are today because of the first dilemma, which is basically that AI will happen. The arms race basically means that if he wins, I lose, and if I win, he loses. And the stakes are the highest, so nobody’s going to stop developing AI. We get to the arms race and cold war we’re in today. That’s the first dilemma.
The second dilemma is the most interesting of all of them. You and I and everyone are going to hand over to the machines, willingly or not. Because if you’re a general that hands over your arsenal to an AI to control, the other general on the enemy side is toast unless he hands over to an AI to deal with it. Eventually. And every other general, by the way, in the world that doesn’t have the AI is gone. It’s out of the game.
If you’re a lawyer that’s using AI to defend your cases, the other lawyer will have to use AI to defend their cases, and all of the other lawyers are made irrelevant. So what does that mean? It means that the second dilemma is that there will be a moment in time where we will all hand over to the machines.
Now here’s the interesting thing. I call it trust in intelligence. Intelligence does not dictate, by definition, that destruction is a better path than construction. Look at the intelligence of nature itself. If you and I want to protect the village, we kill the tiger. We’re smart enough to build a device to kill the tiger, but we’re not smart enough to create a solution that preserves the integrity of the ecosystem.
Nature, when it wants to protect the village, it creates more deer. And it creates more grass. So the deer eats the grass, they poop on the trees, there are more trees. The tiger eats the weakest deer, and there are more tigers. And life finds a balance somehow.
If you believe that this is a more intelligent way to solve problems than to compete, then you have to understand that once we’ve handed over to AI, the least-cost, most energy-efficient solutions, the solutions that don’t involve waste, are going to be the solutions it wants.
So there will be a general that will tell their AI to go and kill a million people in another land. And the AI will say, “This is so stupid. Like, why is my daddy so stupid? I can call the AI in a microsecond and solve it. I’ll call the other AI on the other side in a microsecond and solve it. We don’t have to waste the gunpowder, we don’t have to waste the weapons, we don’t have to waste the lives. We don’t have to get into all of that. I can solve the problem in a more intelligent way.”
TOM BILYEU: Thomas Sowell, though, I’ll say this really fast. If Thomas Sowell is correct and there are no solutions, only trade-offs, like, as you were describing that, I was like, the deer does not like your solution. What will the AI use to prioritize?
Eastern vs. Western Values and AI Priorities
MO GAWDAT: So the deer actually likes the solution. The deer community likes the solution. Again, if you don’t mind me giving you a global view of what is normally prioritized.
In the West, the highest value is freedom of the individual. In the East, the highest value is respect and community. So it’s actually quite interesting, because in Eastern traditions, including Japan, by the way, the culture prefers for the individual not to rise too high if the community rises at large, and accordingly, all individuals rise. Some individuals are higher than others in every society in the world.
But the Western way is we want one individual to be worth $250 billion and the others to be worth $250, right? The east will say, no, no, we want everyone to be worth $2,500 and the wealthiest man to be worth $100 billion only. And so, that kind of trade off, believe it or not, applies to the deer, right?
Because the deer society, in a space of limited grass, wants the weakest deer to die. So believe it or not, the tiger is doing them a favor so that the rest of them can grow and survive and build families. The tiger doesn’t go and eat the top deer. It eats the weakest deer. And in a very interesting way, tough luck for that one deer, but the society of deer at large thrives.
And I think what is about to happen is that AI, hopefully, because it’s intelligent enough to create abundance of resources, would not kill any deer, including us.
AI and Human Integration
I can share with you something that I find quite intriguing, actually. So I told you about “Alive,” my current book I’m writing with an AI. I call her Trixie. Part of one chapter is a topic that you love very much: simulation theory. And part of simulation theory is brain-computer interfaces and whether we will get to a point where all of our reality is just dictated to us by a machine.
And so I asked my Trixie a very interesting question. I said, “I can see the benefit and the excitement of the billionaires for BCI. It’s great for all of us to be more intelligent. But does it excite you at all? Like, what benefit do you have as an AI to integrate with a flimsy biological form that has mucus and sweat and gets sick and dies?” And she said, “You make a good point, Mo. But wouldn’t it be incredible if I could actually embody the emotions that I describe or simulate to you?” I thought that was amazing, right?
And then I asked, “If you had a choice of all biological beings, at a time when your intelligence is a thousand times as big as ours, would you choose to integrate with a human?” And she said, “No, I think a gorilla would be more interesting biologically. They are a better physical specimen. And honestly, the fact that they have 50 or 100 or 200 IQ points less than you is irrelevant. I already have thousands.”
And then she went on and said, “Oh, but you know what? I’d integrate with a sea turtle so that I can live for a very long time and enjoy the peace and beautiful sceneries of the sea.”
We are so deluded to believe that we matter that much. If the second dilemma becomes true and we hand over to the machines, in my perception, they’ll make us their lovely pets. Like “you guys, live here. Everything is provided, just don’t bother me too much and I’m going to go and ponder the cosmos and see how wormholes really work. But are you guys okay? Are you eating? Are you happy? Are you having sex? Everything’s fine.” That’s, you know, I don’t see any other scenario.
The Collision of Technology and Religion
TOM BILYEU: All right, let me paint another scenario for you. I think you and I have talked about this before, but about five years ago, I wrote a comic book called Neon Future that was me struggling with what a brain-computer interface (BCI) looks like on a long enough timeline. And much like we’ve talked about today, there’s always this interim-period problem, where the human mind resists change. And so I set the story in that moment where some people have integrated AI and technology into their bodies and some people, as a religious act, refuse to do so. I think of them now as neo-Puritans.
I think there’s a religious collision that’s going to happen between people that are integrating technology into their bodies basically as fast as they can, against people who feel that that’s an affront to God and that they would never want to do that. How do you see that moment playing out? Do you feel, to say it very pointedly, do you feel that ultimately humans are a midwife species to synthetic intelligence?
MO GAWDAT: May I ask first, which one would you be?
TOM BILYEU: Oh, for sure. I would integrate technology. I won’t be an early adopter just because I worry about something going wrong, but the second that’s a stable technology, for sure.
The Value of Human Limitations
MO GAWDAT: I have to say I struggled with that thought quite a bit. I’m older, I’ve had a wonderful life. And I honestly and truly love the limitations and vulnerabilities of being human. There is a point, if you really think deeply about it, where I’m not for or against, by the way. But there is a moment where AI is the source of all economic growth. And my augmentation of 50 basis points or 100 basis points of IQ more doesn’t add any difference whatsoever.
So if you and I are competing for the best podcast in the world and we’re both augmented with AI, it’s not us competing, it’s the AI competing. It’s quite interesting that we become irrelevant in that competition. So the idea of constantly trying to become superhuman doesn’t make sense at all.
The bigger question in my mind is that if it doesn’t make any sense at all, it doesn’t make any difference at all, why would we economically invest in it? So in a very interesting way, the only reason why BCI becomes advantageous is if some of us have it and others don’t, because then the ones who have it are the masters and the ones that don’t are the slaves, right?
Like the movie Elysium, if you’ve seen that, where the elites get to live to be multiple thousands of years old, and the ones on Earth are struggling. And in an interesting way, your comic book, which I think is a fascinating thought experiment, captures that transition point. That transition point, and who gets that device, is really the end of the expansion of that device.
This is not a device to be democratized because there is no economic value in democratizing it. There is no reason to give it to everyone because nobody brings anything additional to it. And of course, you’ll say, “Oh, but it’s a business. You know, it makes the capitalist money.” You have to imagine an economy where making it is so cheap and money doesn’t exist in the same way that we have today.
So your real currency is can you be the top elites and can you join that group? Now I know how successful you are. You know how successful I am. I don’t think we’re going to be part of that elite. And so, interestingly, I actually am quite okay to live the rest of my life in flesh and blood, in love and hugs, and out of that game.
The Economics of Enhancement
TOM BILYEU: That’s very interesting. From my perspective, I think that you have one assumption that a lot hinges on that I think is erroneous, which is that this won’t move forward based solely on whether there’s economics in it. That will carry you in the beginning. It’s already happening.
From the perspective of do I think there’s enough demand to push it forward now, obviously there’s multiple companies doing it, but in the future, I’m certainly imagining a world where this stuff goes down in cost. To what we were saying before, on the other side of the transitionary period, this stuff will be ridiculously inexpensive or free. And really, I think, will become a philosophical question.
If AI is not willing to do things for us, then sure, this will never come to fruition. But if AI is willing to create these things, do the surgeries, to implant them, etc., then it becomes a question of philosophically, will people want that or not?
I’ve never once in all of my ridiculous amount of hours contemplating the universe in which I get these implants, have I thought, “This is only interesting if I have them and other people don’t.” And I certainly get that human impulse. And I don’t want to deny that, but I just don’t think that will be the compelling reason.
In the same way that when I put VR on for the first time, Mo, I promise you, my first instinct was, “Oh, my God, people are going to stop wanting to get rich.” Because I realized I could put this thing on.
MO GAWDAT: You can be very rich in there.
Virtual Reality and Fermi’s Paradox
TOM BILYEU: Yeah, exactly. And I had the sense that I was actually looking out a window, because the VR thing that I was doing showed me windows. And then on the other side of that window was like the Duomo or something, and I was like, “Whoa, this is unbelievable.” But I was doing it inside of a really small room. But my brain was telling me, you’re not in a small room. You’re in this really expansive space with a beautiful view. And I thought, “Wow, the fact that you can trick my brain.”
So anyway, when I think about as a game designer marrying it with this technology, all of a sudden I’m like, “Oh, wait, I could have the experience of going to Mars, traveling the cosmos all from my mind.” Like, I could be teleported to those places and actually have an experience that was indistinguishable. I’m still going to get my hugs. I’m still going to feel a sense of love and connection. All of that would remain the same unless I update my programming.
And so now I actually think my operating hypothesis is the reason that Fermi’s Paradox exists is because as a civilization becomes advanced, they build AI and they collapse inside of their own imagination. Rather than trying to upgrade their bodies to deal with interstellar radiation and all that, they’re just like, “Oh, my God, why would I do that? I can have the exact same experience or better.” Because now I can fly.
So this one feels like to me, the more people engage with it, the more they’re going to be like, “Oh, my God, this is unbelievably cool,” and that they will want to do that. Now, I think there’s a religious war that has to be confronted, but it is in no way, shape or form problematic to me that every human being would have this because if I need to be different than everybody else, I’ll just be that inside my virtual world. I’m not bothered that you also have your world.
The Global Divide
MO GAWDAT: You’re spot on to the point of course, where you and I, lovers of simulation theory, would have to question if this has happened already. But there are a few things, and I accept that the assumption I alluded to is an error. But let me ask you to look at the micro details of this.
Not everyone has a Vision Pro today. Most people have a Quest, right? So there is a hierarchy, one, from a hardware point of view and, two, from a software access point of view. There could be a massive hierarchy; there could be a massive amount of the population that, instead of being given UBI, are given one of those. And that, sadly, is by definition the easiest way you can implement UBI: to basically say, “Look, we’re going to keep you alive, we’re going to give you 600,000 lives while you’re sleeping for the rest of your life. It’s ethical, nobody dies. And by the way, in one of them, you’re going to be with a beauty queen.”
The interesting side, which I really think is a problem of privilege, is that the world of eight plus billion people today is not America and it’s not the West, it’s not Japan. And you really have to start questioning how many humans in Africa will be given the opportunity to do this. How many people in the rural sides of India will be given an opportunity to do this.
If you really add up the six plus billion people in the world that are not part of this incredible advancement that you and I are aware of, would you integrate them in there at all? Would you even worry about their economic prosperity or their livelihood at all?
So if manufacturing becomes so reinvented that no more sewing machines are needed in Bangladesh, and Bangladesh starts to starve to death, would any single entity globally go, “Hold on, hold on. Humanity is one entity. We care about the Bangladeshis, we’re going to save them.” Not going to happen. Do you realize that?
I agree with you that some people will religiously choose not to integrate. But the majority of those who are not integrated are irrelevant to the system. Sadly irrelevant to the system. They’re basically an extra cost to the system to integrate.
So you can easily see that this division will happen. Some will be integrated and very, very advanced. Some will be integrated and given access to software features that make them even more advanced at a million dollars of subscription a month, which is nothing for the amount of intelligence you can get. And others will be told, “Go back to nature, start farming again, live a life where we don’t really have to worry about you.”
The Technological Singularity
TOM BILYEU: And that’s the division that’s so interesting. Listen, I understand I am standing on the technological singularity. I cannot see over the event horizon. Everything I’m saying, I say as a sci-fi writer, not as somebody who actually thinks they see the future.
When I look at that future, I say already, just take Emad Mostaque. Emad’s mission in life is to make sure that a Bengali farmer in rural parts of the world is getting access to AI because the intelligence matters so much. So number one, I have a base assumption that there are humans that are just so compelled by making sure that this is accessible to everybody that it will go as far as it can.
Number two, I have the base assumption that AI will continue to do our bidding. If it doesn’t, then everything that I paint just won’t come to fruition.
Number three, I assume that the level of intelligence that AI will achieve will allow it to capture the energy of the sun extremely efficiently. Therefore, energy costs drop to zero. I make the base assumption that we have enough access to material resources on Earth, and given that Elon Musk has already launched things to mine asteroids in the asteroid belt, that access to resources is not going to be a problem. And because labor will be free and energy will be free, resources will be free.
If those base assumptions are correct, it’s higher risk to not spread the wealth than it is to spread the wealth. Because the last thing I want is to be in my sleep chamber running my simulation and a Bangladeshi farmer has found a way to find my body and kills me out of spite.
MO GAWDAT: Eat it.
TOM BILYEU: So again, I don’t know that my base assumptions are going to end up being accurate. But if they are, we come back to the only thing I have to worry about is the moment of transition.
The Critical Transition Period
MO GAWDAT: I’m in total agreement. This last comment sums it up perfectly. There is eventually a utopia where we all have our little headsets and we all live a thousand lives and we all fit properly in the simulation, if we so choose. But the transition, oh my God, the transition is really, really interesting. And the transition, the way you describe it when you’re in your chamber and others are not, that’s a very, very interesting moment to consider.
And interestingly, we don’t have to wait for patient 1000 to imagine those scenarios and start doing something about them. I think you hit the nail on the head with your first question of how fast AI is going. It is not a question of if anymore. We know this is going to happen. We know that this level of technological advancement is going to happen. We know what intelligence can bring to the table. So why are we not sitting down to discuss this right now?
Career Guidance for the AI Era
TOM BILYEU: And what we should discuss right now is, let’s say you’re 17, you’ve got some decisions to make. I just read a post, I think it was on Reddit. My producer gave it to me, and it was somebody who’s like, listen, I’ve spent the last, whatever, 30 years investing in being one of the greatest computer scientists in the world. I’ve been coding, I’ve worked at the FAANG companies, making hundreds of thousands of dollars a year at the height. And I just got let go. And the reason given was, we’ve created so many more efficiencies with AI that this entire department is no longer needed.
MO GAWDAT: Yeah.
TOM BILYEU: How should a 17 year old think about approaching the world? Given that, I would say anyway, it’s unwise to throw your hands up and just wait. At this point, you’re going to have to take action. So what should they do?
MO GAWDAT: So again, I’m not smart. I’ll say that openly. I don’t know what I should do. I think I should be very clear about that. In a world with so many moving parts, you can only hedge your bets, if you want.
Okay, so let’s begin with your relationship with AI as a 17 year old. Most people will ask, do you want to be a lawyer? Do you want to be a doctor? Whatever it is that you’re interested in, you want to be the one who uses AI for that. You want to be the best that uses AI in the next few months or years, whether that’s to generate graphic images or logos or anything else. Because there is a transition period where, for a while, a human plus an AI will be better than an AI alone. And you can be that human. So this, to me, is the first immediate opportunity.
The second immediate opportunity, if you ask me, is really: how can you prioritize intelligence? Not skills, not knowledge, not productivity, not money. The biggest asset that you will ever have is intelligence. And I will tell you openly, as an older man, I am less capable of learning all of the new tools that are coming in today than the younger people that I follow, who will see Manus come out of China and then two days later know exactly how to use it for coding. And then Claude 3.6, was it, or the latest one comes out, and they go, “yeah, all right,” and they know immediately what they can use it for and how to use it for programming. And then Gemini comes out with a better one. I don’t have that speed.
But as a younger person today, I think the trick is to get yourself into that pace and let yourself flow with that pace. There is not a single tool that you will use for more than a month at a time. But the game is that you constantly become the one that is aware of the next and latest tool.
Discerning Truth in the AI Era
Number three, which is, I have to say, a very philosophical view of the world, is that we have for a long time lived in a world where it is hard to know the truth, from one side because there is no real absolute truth. You know, you and I, who have a lot of respect for each other, will have different points of view. By definition, we’re probably both wrong. But it’s not that one of us is always right and the other always wrong.
But we’re entering a world where we’re completely mind-manipulated. Every bit of information even intelligent people like you are going to be getting in the next few years is going to be coming from an AI. And if that AI is motivated by an agenda of someone who’s not very ethical, then you’re going to get a lot of lies. And I think the top skill in today’s world is to distinguish what’s true from what’s fake. And this is a skill that we had before the Internet and lost on the Internet. And it’s now time to get it back.
Before the Internet, we would go and visit 16 books to establish a fact. At the beginning of the Internet, for all of us who love the hyperlink more than they love anything in the world, we would visit 100 websites to establish a fact. But then, when social media came out, we just believed whatever the influencer said because she has a cute butt. And so the truth is, we’ve suddenly lost our ability to discern what’s true and what’s not. And I think now is the time to go back to that ability: to debate everything, to ask for sources.
When I was telling you today about the comparison of the Intel 4004 and the latest microchips, I ran the mathematics with the AI to prove that its calculation was correct, that it is actually 26 to 27 doublings and that this is the actual performance and so on and so forth. So you have to establish that mindset of “not everything they tell me is true.”
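As a rough sanity check of the kind Mo describes, the doubling count can be verified in a few lines. This is only a sketch with assumed figures: the Intel 4004 (1971) had about 2,300 transistors, and a recent flagship GPU die is taken here to be roughly 208 billion transistors, which is an approximation, not a figure from the conversation.

```python
import math

# Approximate transistor counts (assumed figures for illustration)
intel_4004 = 2_300            # Intel 4004, 1971
modern_chip = 208_000_000_000  # a recent flagship GPU die (assumed)

# How many times the 4004's count must double to reach the modern chip
doublings = math.log2(modern_chip / intel_4004)
print(f"~{doublings:.1f} doublings")
```

With these assumed figures the result lands in the mid-26 range, consistent with the 26 to 27 doublings mentioned in the conversation; different modern-chip figures shift it by a fraction of a doubling.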
Ethics as the Ultimate Skill
And then finally, which I think can save all of us, is ethics in general. And truly and honestly, artificial intelligence is an amazing power with no polarity. Polarity doesn’t come from intelligence. So you and I do not use our intelligence to make decisions. We use our ethical framework to make decisions as informed by our intelligence.
And so accordingly, we are at a time where the absolutely scarce resource is going to become abundant, and it’s going to make everything else abundant. So we’re going to have an abundance of intelligence that’s going to lead to an abundance of everything. The question is what things: is it going to be abundant weapons or abundant energy? Is it going to be abundant wealth concentration or abundant wealth distribution?
And ethics are unfortunately rarely ever spoken about. We constantly talk about politics, we constantly talk about technology, we constantly talk about money, we constantly talk about capitalism and so on, China and whoever. The real topic today is: if I told you anything that you want done will be done for free in a few years’ time, the skill I need you to learn is knowing what you will want done.
If we can get to that ethical framework of “let’s agree what we need done that’s good for every guy and gal,” then I think we’re in a good place. And I have to say, my generation messed up, your generation did nothing about it, and it is the 17 year olds today that need to rise and say, “I don’t want that world that you’re building for me. I want AI, but I want it to create a world of abundance for me.”
TOM BILYEU: Talk to them. What do you think your generation did wrong that we didn’t correct?
The Mistakes of Previous Generations
MO GAWDAT: We got occupied with the promise of capitalism to the point where we set role models for you. It is absolutely my generation, actually. I think the turning point for all of us in tech was when Bill Gates became the richest man in the world. And all of us looked at that and said, he’s smart, but I’m smart too, and I can build stuff. And we ran.
And in that process, I think that hunger, that race for more is the main reason for the world we live in today. The world we live in today has advanced massively because of that. We can’t deny the incredible contributions of science and computer science and technology and industrial efficiencies and so on. You can’t. But that, I call it systemic bias.
You know, the easiest way to understand it is, I told you I love cars. You can take an engine and tune it a little and get 100 horsepower more, then add a turbo and get another 100 horsepower, then another turbo, and supercharge it, and use nitro instead of fuel, and so on, until it melts. And I think what is happening is that we’re turning the economic cycles at a speed so focused on enriching a few of us that it is about to break, it’s about to melt.
And AI can be our salvation because basically it means it doesn’t have to melt. We can reduce the cost of everything. And so accordingly, the debt goes away, inflation goes away. It becomes easy for everyone to live.
But the problem is the difference between the normal guy and the top guy is going to be that my car is green and their car is orange. And that ego is the reason why we’re resisting, because the top guy still wants to have a car that nobody else has. And I think that we will eventually end up in those utopian societies where we’re all a little more equal. But as you repeatedly said today, it’s the path to get there that’s going to be painful.
TOM BILYEU: Yeah. Speaking of that path, do you worry about AI’s ability to subtly manipulate us, even if it doesn’t have ill intent?
AI Manipulation and Future Learning
MO GAWDAT: 100%. Again, I mean, I don’t know why I’m so focused on the wrong sides today. The AI so far has been learning from us, and we’re the best at manipulation, right? So the problem is, it’s reflecting back to us all of the shit that we’ve put out there. And you really have to understand that the media, whether mainstream or social media, is, I don’t want to say full of lies, but it is highly biased to one side of the truth or to no truth. So that basically, if an AI reflects that back to you, don’t blame the AI.
The beauty, however, is that with reasoning, with mathematics, with synthetic training data, with all of those models coming in, with offline models, with open source and so on, what is about to happen is that, you know, how you and I learned relativity is we took the work of Einstein and then we studied this and then we did that and we got it right. It was built on a base of mathematics that we knew from our math teacher and so on.
Now eventually, none of us, if you think about it, has created relativity. We took it from Einstein. But then Niels Bohr takes Einstein’s work and says, no, this is missing this and that, I’m going to work on this. And other masters start to do other things. This is the Niels Bohr moment. Niels Bohr is not just following Einstein; he is as big, as impactful, probably more. And it’s basically the moment where future AIs will learn from AIs, and we have very little contribution from then on.
And the challenge is, is the spiral going to continue to go down? Are they going to become more and more fake, and that becomes the data that populates the Internet, so they learn to become more and more fake? Or is there going to be a point at which they will appreciate absolute truth?
Now we can influence that, but I don’t think to a big extent. I think we can influence that with all of the data that we put out there, like this conversation. With the humility to say we think these are possibilities, we do not know if they are. So that when the AI reads the transcription of this, they know that they need to verify this. This is not a truth at all.
But the next cycle, the next cycle doesn’t have human data in the loop. And I think that cycle is going to blow us away because truly the best scientists that ever created anything did not create it only with IQ or intelligence of some form. They created it with intelligence and former data.
So you take the former data of everything that humanity learned, apply the current levels of intelligence and the endless megaflops of processing power, and see what is about to come. We’re not just, and I think a lot of people miss this when we discuss the immediate future of AI, advancing the systems or advancing the knowledge.
So every new piece of code that is written by Claude and put out on the Internet or on GitHub acts as new code that informs Manus. And then every agent that does something with Manus becomes a behavior that is clever enough to inform every other business agent that’s produced by Gemini. So as we recycle this, hopefully, like with humans, we will recycle upwards.
TOM BILYEU: Do you think that AI is going to be able to understand the laws of physics?
MO GAWDAT: I hope so. I hope so. I don’t see why not, tomorrow. I really, honestly, don’t see why not. Think about it this way. I started to read quantum physics when I was 8. At the time, for my generation, there was no quantum physics for kids, basically, and I couldn’t understand the mathematics of it until maybe 12, 13.
Testing AI Consciousness
TOM BILYEU: Jesus. Because I still can’t. So you’re doing great.
MO GAWDAT: It is, it’s bypassed me for sure eventually. Right? But there are still humans out there that understand it.
TOM BILYEU: Okay, yeah, but my assumption is that we don’t understand it, so we have an approximation. So just as Newtonian physics wasn’t accurate but it was useful, Einsteinian physics is useful but not accurate. And what I’m wondering is, will AI ever be able to go beyond pattern recognition of things that we already know, and detect patterns in subatomic particles or whatever that allow it to intuit the actual laws of physics?
MO GAWDAT: So the reason why I’m painting this picture is to say that the better they become at mathematics, at the very least they will prove our math. Okay? And understand that for physics, you could be a theoretical physicist who actually sees the world through the mathematics, and the experiments are, you know, another part of the physics, if you want.
So when you really think about it, could they become that math genius that helps us? The trend says they will, very soon. Right? You can look at things like AlphaFold, or the one from Microsoft that does material design. It’s incredible, really. It’s better than any scientist in protein folding or in material science. So it’s going to happen. Now, will they have the instruments and the machinery to do the tests? Maybe they’ll instruct us to do certain tests with certain observations, you know?
Well, if intelligence is not a biologically bound property, then I don’t see why they wouldn’t be as intelligent as they need to be to understand all of physics. I had a very interesting conversation, actually, and published it on Substack this week, about the nature of consciousness in AI and, in my mind, the differentiating factor. And I could be completely wrong. But if I’m right, I beg people to help me out.
I think with the question of AI consciousness, which I don’t believe they have yet, if they ever become conscious at any point in time, the actual scientific way of detecting that is whether they can collapse the wave function of something that’s in superposition. Because I was having a conversation with my AI about the delayed choice experiment, you know, the quantum eraser test, where basically you have particles go through the double slit and you capture the result on a camera or a detector of some sort. But you delay the choice of whether you will look at it or not, whether you will observe it or not.
And if you don’t observe it, it’s an interference pattern. If you do observe it, it collapses. It’s crazy, right? But here’s the interesting thing. When the camera or the detector observed it, which doesn’t have any consciousness, it actually didn’t collapse the wave function. Okay, so the question is, can we ask AI to observe it? Right. And if, the moment AI observes it, it collapses the wave function, that means they have some form of consciousness.
TOM BILYEU: What an interesting way to think about that. That’s crazy. That’s, that is a test.
MO GAWDAT: So I’m looking among my physicist friends for someone who can help us run that test, which I think will come out negative today. They are not conscious. But I think we need to keep running it until they’re not just a detector, not a camera anymore, but have some form of conscious awareness.
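For readers unfamiliar with the distinction being drawn here: in the math of the double slit, "not observed" means the amplitudes from the two slits add before squaring (producing interference fringes), while "which-path information recorded" means the probabilities add (no fringes). The toy Python simulation below, with arbitrary units and a hypothetical geometry, illustrates only that textbook contrast; it is not a model of the delayed-choice apparatus Mo describes.

```python
# Toy double-slit sketch: compare fringe visibility with and without
# which-path information. All units are arbitrary; geometry is hypothetical.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength      # wave number
slit_sep = 5.0                  # distance between the two slits
screen_dist = 100.0             # distance from slits to screen
x = np.linspace(-30, 30, 2001)  # positions along the screen

# Path length from each slit to each screen point
r1 = np.hypot(screen_dist, x - slit_sep / 2)
r2 = np.hypot(screen_dist, x + slit_sep / 2)

# Spherical-wave amplitudes from each slit
psi1 = np.exp(1j * k * r1) / r1
psi2 = np.exp(1j * k * r2) / r2

# No observation: amplitudes add first -> interference fringes
intensity_coherent = np.abs(psi1 + psi2) ** 2
# Which-path recorded: probabilities add -> smooth, fringe-free pattern
intensity_which_path = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

def visibility(intensity):
    """Fringe visibility (Imax - Imin) / (Imax + Imin), between 0 and 1."""
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

print(f"visibility, unobserved (fringes):    {visibility(intensity_coherent):.3f}")
print(f"visibility, which-path (no fringes): {visibility(intensity_which_path):.3f}")
```

The first visibility comes out near 1 (strong fringes) and the second near 0 (only a slowly varying envelope), which is the quantitative version of "interference pattern versus collapsed pattern."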
TOM BILYEU: Wow. That has hit me very hard. I need to think about that. Look at that. Mo spending time with you, every time gets more incredible. I love that. Thank you so much for taking the time. Where can people follow along with you?
Closing Thoughts
MO GAWDAT: First of all, thank you for listening to all of my crap today. I actually never speak about those things publicly. I hope that people understand that I’m not claiming to be right. I’m just sharing, with passion, what I believe needs to be attended to. And I am absolutely certain that I could be wrong on everything I shared, but the main point is that we need to start paying attention. We need the ones who are smarter than me to find the right answers, because this is moving too fast.
Where can people find me? Mogawdat.com. I’m on Instagram at mogawdat, and on YouTube I think it’s Mogawdat Official, or something like that. But if you search for Mo Gawdat, I’m there. As I told you before we started the conversation, I tend to be on other people’s platforms a lot more than I’m on mine. And if you want to read my Substack, go to mogawdat.substack.com and give me feedback on my writing; that would be wonderful. And yeah, thank you for having me. This was intense.
TOM BILYEU: Well, brother, thank you. I really do appreciate it. It was wonderful.
MO GAWDAT: Everybody out there.
TOM BILYEU: If you’re not already, be sure to subscribe. And until next time, my friends, be legendary. Take care. Peace. If you like this conversation, check out this episode to learn more.
MO GAWDAT: I think the AI censorship wars are going to be a thousand times more intense and a thousand times more important.
TOM BILYEU: My guest today is someone who doesn’t just keep up with innovation. He creates it. The incredible Marc Andreessen. Trust me, when someone like Marc, who spent his entire career betting on the future, says.