
Transcript of Mo Gawdat on Impact Theory with Tom Bilyeu Podcast

Read the full transcript of AI expert Mo Gawdat’s interview on Impact Theory with Tom Bilyeu Podcast episode titled “AI, Tech Arms Race With China & UBI”, Apr 9, 2025.

The interview starts here:

Introduction

TOM BILYEU: Due to AI and a changing global order, the world is in the middle of the greatest period of change ever. But because we’re in the middle of it, it is nearly impossible for us to accurately see what’s going on. We are in the fog of war. While the world panics over job loss and killer robots, the real dangers are creeping in quietly and changing us in ways most people do not even notice.

America and China are locked in a cold war and AI isn’t just going to take people’s jobs. For many, it will take their entire identity. It’s already shaping what we believe, how we connect, and even what we value.

Today’s guest is issuing a very strong warning. His name is Mo Gawdat and he’s the former Chief Business Officer at Google X, best selling author of the AI book Scary Smart and one of the only people who truly understands both how AI works and what it’s doing to us. In this conversation, Mo exposes how the American perspective is blinding us to China’s true might. How AI is already changing everything and how we can learn to navigate the rise of the machines. Well, do not look away. Here is Mo Gawdat.

I think of AI like a magic genie that can grant all of our wishes. The problem is the lesson of every magic genie story is be careful what you wish for. What do you think we have to watch out for with AI?

The AI Genie and Existential Risk

MO GAWDAT: AI is a genie that has no polarity. It doesn’t want to do good, it doesn’t want to do evil. It wants to do exactly what we tell it to do. And you know, there is a non-zero possibility that we face an existential risk from AI. Some people say 10 to 20%, that’s Elon Musk’s view; others put it at 50%, and so on. I mean, think about it. 10 to 20% is Russian roulette odds.

TOM BILYEU: You just gave me the chills. That’s crazy.

MO GAWDAT: Yeah, you wouldn’t stand in front of the barrel at 10 to 20%. Right? But my issue is that, chronologically, we wouldn’t get there first. My issue is that I think we face more urgent and quite crippling effects of human greed, of human morality. Let’s put it this way: I think the immediate negative impact of AI is going to come from human morality, from people using it for the wrong reasons. So my challenge, I think, is that they’re going to make the wrong wish.

And in my current writing, in “Alive,” I basically try to explain that I’m almost convinced there is a short-term dystopia upon us on the way to utopia. And unfortunately the short-term dystopia is not reversible, so we’re going to have to struggle with a bit of it. But it can be reduced in intensity and in duration, so it’s only wise to start preparing. And 100% of the short-term dystopia is not the result of AI. It’s the result of the morality of humanity in the age of the rise of the machines.

The FACE Rips: How AI Will Transform Society

TOM BILYEU: All right, give me some specifics. What specifically are we going to point AI at that will become dystopian?

MO GAWDAT: So I call them the FACE RIPs, just an acronym to help remember them. I don’t say them in that order, but let’s just quickly list them. F is freedom. We’re going to redefine freedom. A is accountability. C is human connection, or connectedness in general. E is economics. R is reality and our perception of reality at large. I is the entire process of innovation and intelligence itself and where we fit within that. And P is the most critical of all of them, which is the redefinition of power.

And if you want to understand them reasonably well, they’re better understood in pairs. Right? So you can start with the easier ones, the I and the E, if you want: the redefinition of intelligence and innovation, and how that impacts the redefinition of economics.

I think we understand that. With AGI, it depends on how you define it, but it really doesn’t matter, because my AGI has already happened. AI is definitely smarter than I am, so I’m done, right? I don’t care how the rest of humanity defines it; that’s their moment. My moment has come.

So if we agree that AGI is happening in a year, this year, next year, in a few years, it doesn’t really matter. Then, as you and I both know, and as we’ve discussed several times, that means the toughest jobs will be given to the smartest person, and the smartest person will be a machine, which will lead to very significant shifts in our economics.

One shift basically moves the wealth upwards. So there is going to be a massive concentration of wealth for those who invest in the right places and, most importantly, for those who own the platforms.

I mean, it’s not a secret if you look at the history of humanity. The best hunter in the hunter-gatherer tribe could probably feed the tribe for a week longer than the second-best hunter. And in return, you know, he got the favor of more than one mate. That’s the maximum wealth he could create.

But the best farmer could feed the tribe for a full season, if you want, and as a result became a landlord and had estates and wealth and so on. The best industrialist became a millionaire in the 1900s. The best information technologist became a billionaire in the current era.