
Mo Gawdat: How to Stay Human in the Age of AI @ Dragonfly Summit (Transcript)

Read the full transcript of former Chief Business Officer of Google Mo Gawdat’s speech on “How to Stay Human in the Age of AI”, Dragonfly Summit, Bangkok, October 25, 2025.

Introduction

MO GAWDAT: It’s very nice to meet you. I had not been sleeping well for a while, and I’m also getting old, so I asked for a chair, and they got me flowers, and it’s really lovely. And I actually decided to change everything I was going to talk about literally 20 minutes ago, so we will see how that goes. You guys are nice, so you’re going to be forgiving, I think.

First of all, what a lovely place, what a lovely, lovely, lovely organizing team. You guys are amazing. Thank you so much. And this has been a journey. I probably speak maybe 100 times a year, and there are very few events that I actually am looking forward to go to between you and I. Don’t say that in public. But we started talking like nine months ago, and such a wonderful team, so I’m absolutely certain that you’re enjoying this very much. I hope you are as much as I am.

Why Did I Want to Change?

I was going to talk to you about technology and AI and where the world is going, and I have to say, it seems to me you may not feel it as much in Thailand, but in the rest of the world, especially places like the Middle East, Russia and Ukraine, the US, Europe, and so on, the world has become very uncertain recently. There's a lot that's unfamiliar, let's put it this way, between the geopolitics of our world, where now we have quite a few dishonest leaders deciding, without accountability, if you want, the fate of humanity at large. So this is one area where things are becoming unpredictable.

This is mainly, if you ask me, because of the economics of the world. We’ve had a specific economic order since 1945, I’d probably say, that is sort of breaking down a little bit. And as it’s breaking down, it’s getting that typical resistance of trying to keep things as they are, while they may not have been working as well for a while. So between geopolitics and economics, you get lots of headline news that if you have a heart, you feel empathy for a lot of people suffering around the world.

The Speed of AI Development

At the same time, my area of work with artificial intelligence is becoming quite shocking in terms of its speed, its evolution, and its impact on the way the world is going to look. Some people say within the next 10 years; I would say three years at most. And the reason I say that is because I'm an insider on artificial intelligence, and you know, I'm going to be speaking to you in English, but don't be fooled, I am a total geek. So everything in my head is numbers and code, seriously, okay? If you want to see the way I write, or I communicate, I turn things into algorithms in my head first, and then I explain them in English.

And if you look at the curve of the development of artificial intelligence over the last 50 years, it looked like this: we started in 1956. Most people don't know that it was at zero all the way to the year 2000. Right. And then around the year 2000, we figured something out called deep learning. And since then, it started to increase until now it is literally going vertical. And so the moment at which artificial general intelligence arrives, basically, machines becoming smarter than humans at everything humans can do, is, in my mind, a matter of months away. So 2026, it could be 2027. But who cares, right?

Intelligence as a Force Without Polarity

Now, there’s absolutely nothing wrong with that. Believe it or not, intelligence is a force with no polarity. So there is nothing inherently good or evil about abundant intelligence. As a matter of fact, abundant intelligence can solve every problem known to humankind, right? You know, if you apply intelligence to good, you can create a utopia. And that’s what we should do. If you apply it to evil, you can create a dystopia, which is unfortunately what I believe will happen in the short term. And then, you know, if you really, really think deeply about it, it’s a question of choice.

But the choice is not being made in the direction that humanity should be going, simply because we haven’t ever seen anything like this before. So what does that mean? It means that ahead of us, whether you feel it or not yet, you’re going to be affected within the next two to three years by a world that’s very unfamiliar. Unfamiliar is a very interesting word, okay?

Because unfamiliar doesn’t necessarily mean it’s bad, it doesn’t necessarily mean it’s good, it doesn’t necessarily mean it’s manageable or not. It just means you’re not used to it. And when you’re not used to it, we humans, we feel very anxious, we feel very stressed, we feel very unsafe, if you want.

Which is the reason why I wanted to change my talk today to talk to you a little bit about that state of uncertainty, and how to come out of it on one side successful, but on the other side happy, because believe it or not, that actually really matters.

So since 2000, I’ve been writing one book about AI and the future, then one book about well-being, then one book about AI and the future again. And so in a way, I sort of scare people, then make them feel okay, and then scare them a little more than they deserve, which is really strategic, if you think about it. I mean it that way.

Writing Unstressable

But one of my favorite books is a book called Unstressable.