Read the full transcript of futurist Gerd Leonhard’s talk titled “The World by 2030 – On AI & Work” at GLMC 2025 on January 30, 2025.
TRANSCRIPT:
GERD LEONHARD: Great to see you, dignitaries, excellencies, highnesses. Great to be here. I have a great topic for you today: al-Mustaqbal, the future. This is an important word, because the future is not like the present. The future is not an extension of the present. The future is not a version of the past. The future is entirely different. And I want to start off by saying I think the future is better than we think. There's so much talk today about the future being bad, especially in Europe, where I live, in Switzerland. People in Europe are saying, well, I probably won't have children because the future is bad.
It’s different here, I know. But there’s a perception that everything is going down, right? You can feel the perception of this. So I’m going to speak about this and what happens.
Major Challenges
There are two major things happening today. One is, of course, climate change and renewable energy. And this is bigger than we all think: it's the biggest change in industry and production, across the board in all industries, in the next two decades. For this country, of course, a very big topic. The other big topic, which of course also impacts the labor markets, is the convergence of humans and machines: intelligent systems, machines that can pretend to be human, so-called artificial intelligence. What does that do to us? And what's our future place?
I mean, my kids are 30 and 35. They're going to live in a world where we have machines with more computing capacity than all of humanity combined.
The Pace of Technological Change
When you think about what that means for our future, which way we want to go, and what our decisions are as we go into that future, it's really quite clear. The numbers are mind-boggling. Look at this chart: it shows that the time to edit, the TTE, of a machine translation is now approaching that of a human translator, which is about one second.
So, for the translator here in the back: that's how much time you have left before the computer's time to edit matches yours. We can also safely say that technology is getting faster and cheaper all the time. You heard the news about the new AI engine from China, which is actually much faster at lower cost. So everything is getting faster, more powerful, and more intense, and the pace is increasing.
It's quite clear we're heading to the point in time where humans are, in a sense, copied by machines. Where we have digital entities. And what does that mean for us, for our happiness, for our social structure, as we go into the kind of future described in Blade Runner (1982)?
Intelligent Assistants vs. AI
We have to distinguish between AI and intelligent assistants, IA. That's very important, because most of today's applications are really intelligent assistants. They're smart software systems. I always say that computers are no longer stupid. That's kind of the main story. They can learn, they can understand, they can predict.
That does not make them intelligent like you and me. It just means they’re no longer really stupid. But that’s a big start already.
So if we have intelligent systems, environmental systems, control systems, air traffic control, you name it, we're going to save time and money and pollution and waste. Straightforward. AI is the next level. AI is defined by Demis Hassabis of DeepMind as computer systems that turn information and data into knowledge.
What business are you in? You're in the knowledge business. We are knowledge workers; we don't work in factories. I'm a knowledge worker. AI turns data and information into knowledge. What about my knowledge? Is that different from the computer's knowledge? And how do I have to develop in order to compete?
The Challenge of AGI
I'm not going to compete with computer knowledge. That is pretty clear. And it gets worse when we think about artificial general intelligence. The definition here is by Sam Altman of OpenAI, who's gearing up to be the master of humanity, to define our future entirely by himself. It's an interesting story, because he says the coming change will center around the most impressive of our capabilities: the ability to think, to create, to understand, and to reason. Now let me ask you: do we really want a machine that can understand, create, and reason to be exactly like us, or, let's say, a cheap copy of us?
Do we want AGI? A few days ago, the new American administration, in its wisdom, announced a $500 billion project on AGI, on building the infrastructure for it. Are we thinking about what this AGI will actually do?
Because if it does that, if it's more than all of us combined, then quite obviously we're going to reach the point where machines surpass human capabilities in the majority of all applicable tasks. All applicable tasks. Not just knowledge work: production work, thinking work, creation work, patents, inventions.
I mean, if Einstein were speaking to you today, with his alleged IQ of 152, if Einstein spoke to you about his theory, you would not understand a single word he was saying. If we're going to have a computer with an IQ of 500, it would be so far removed from anything we can understand that there would be no hope of controlling it.
The Future of Work
We have to think about this. The scenario for general intelligence is like this: mind-boggling output, productivity, efficiency, and no work left. That is the scenario of general intelligence. That's why I'm saying we should pursue intelligent assistants and AI as tools, not as replacements for ourselves. The mission to build something that replaces us is kind of a suicide mission. In Silicon Valley, they're saying humanity's knowledge is limited and our creations will depend on AI to create things like medications, solutions, and science. That AI will make the science.
You heard the rumors about the World Health Organization: people saying, well, we shouldn't have this anymore because we can replace it with AI and make those decisions based on a machine. I think that's very questionable. Then there are the savants, that is, specialized assistants that will work with you in whatever you do. An artist savant, a music savant, a physics savant, and so forth. Those savants will work with you to do research, drug discovery, problem solving; they can do many things. The claim is that we'll have them within five years. Why five years? Because today we have all the components necessary.
Eric Schmidt, the former CEO of Google, said right here in Riyadh, just a few months ago at another conference, that we're going to generate, to build, to create savants, which is a weird term, right? Basically wise people, polymaths, people who understand everything. We're going to create AI that understands everything. Does that strike you as a good idea? What does that mean for the labor market? It means the end of the labor market. The end of the labor market, and the beginning of a new one, of course, right?
If we got a new one, that would be great. But you know what Gramsci says about change? He says the old world is dying and the new one isn't here yet. And we're here to build the new one. Because one thing that could happen is that maybe we're going to work a lot less. Maybe, because of AI, we'll end up in a place where we work 20 hours a week for the same money. But for that to happen, we have to change capitalism, right?
That’s definitely not capitalism. If we’re going to spread the money like this, that is a completely different cup of tea. That’s not the current economic logic. The logic is you work a lot, you get paid a lot. That won’t work here, right? We have to think about how we can change that and what the alternatives are.
Power Tools and Rules
As we go into this future, which is our future, we're going to have power tools. A person with a power tool beats the person without a power tool. End of story. It's always been true. Now, with AI, I, as a futurist, researcher, writer, and filmmaker, have power tools.
I'll show you some in a minute. I'm very happy with the power tools, but I don't sit down and adore the tool. The tool is not the purpose of my life. The tool is like a hammer: I can go and kill the neighbor with it, or I can build a house. It's the same with AI. We're going to need some rules around the power tools, right?
Because imagine you have a power tool like a nail gun, the kind that shoots nails in construction. You don't give that to your six-year-old son to shoot some nails for fun. You have some rules; you put some safeguards around the power tool. The main thing is that AI will kill our routine: the stupid work, the monkey work, the commodity work. All of us do commodity work. File checking, fact checking, making sure data is synchronized in databases, writing simple code. Commodity work.
What happens if your job is 90% commodity work? The AI will take your job. Which jobs are 90%? Not many. Most of us do work that's maybe 30 to 40% commodity, like being a lawyer; there are probably some lawyers around here. You do commodity work, but not that much of it, and the AI can do it. The bottom line is this: if you work like a robot, a robot will take your job. Second bottom line: if you study like a robot, in school or at university, you will work for the robot.
Because you will be like the robot. You just download information for later use.
AI Tools and Examples
So here's a tool called NotebookLM. It's a fantastic example for routine work. NotebookLM is available from Google around the world, and it's the most amazing AI tool; I use it every day. I can upload any information, a thousand-page PDF about Bitcoin or something, and then I can ask questions of the document. It's like my own learning aid. I can make all these folders. I can upload YouTube videos and question the video. Think about that for a second. I don't think it works very well in Arabic quite yet, but in English, whatever you upload, you can just query.
Fantastic tool. Great example for a useful tool. Will I believe everything it says? No. It’s like Google Maps. I don’t believe everything that Google Maps says. I try to be critical and think about that.
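For anyone curious about the mechanics behind this kind of tool, here is a toy sketch of the general ask-your-document pattern. It assumes nothing about NotebookLM's actual internals: the retriever is a deliberately simple keyword-overlap match, and the document text is a placeholder.

```python
# A toy sketch of the "upload a document, then ask it questions" pattern that
# tools like NotebookLM popularized. Nothing here reflects NotebookLM's actual
# internals; it is a plain keyword-overlap retriever, for illustration only.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a long document into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def best_chunk(question: str, chunks: list[str]) -> str:
    """Return the chunk that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

document = "..."  # imagine the extracted text of a thousand-page PDF here
context = best_chunk("How are transactions verified?", chunk(document))
# A real system would hand this retrieved context to a language model,
# which then answers the question grounded in the document.
print(context)
```

A production system would swap the keyword match for semantic search and pass the retrieved passages to a language model, but the shape of the loop is the same: retrieve from your own sources, then answer from what was retrieved.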
Namaskar. Namaskar. Namaskar. [An AI-dubbed video of Gerd speaking Hindi plays.] I speak Hindi. I didn't know I spoke Hindi, but I made this using an AI tool called Rask. And my YouTube channel has dubs in 14 languages; Spanish and Portuguese are very popular. The Arabic doesn't sound good, and that's why I didn't show you Arabic. It sounds really terrible, and I didn't want to insult you. But it works great in German and in French and so on.
It works great. I can synchronize the video of me speaking with my own voice in Hindi. And if it's like 5% wrong, that's not a big problem. We're not talking about air traffic control here; we're talking about simple stuff. So that's a great example of how I use it.
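For the technically minded, AI dubbing of this kind generally chains three steps: transcribe the original speech, translate it, and synthesize it in a cloned voice. The sketch below illustrates that flow; the stub functions and file names are illustrative assumptions, not the API of Rask or any other real product.

```python
# A minimal sketch of a generic AI-dubbing pipeline: transcribe, translate,
# then synthesize speech in a cloned voice. Every function below is a
# hypothetical stub, not any vendor's actual API.

def transcribe(audio_path: str) -> str:
    """Speech-to-text on the original audio (stubbed for illustration)."""
    return "The future is better than we think."

def translate(text: str, target_lang: str) -> str:
    """Machine translation into the target language (stubbed)."""
    return f"[{target_lang}] {text}"

def synthesize(text: str, voice_sample_path: str) -> bytes:
    """Text-to-speech in a voice cloned from the sample (stubbed)."""
    return text.encode("utf-8")

def dub(audio_path: str, voice_sample_path: str, target_lang: str) -> bytes:
    """Chain the three steps; a real pipeline would also realign timing to the video."""
    transcript = transcribe(audio_path)
    translated = translate(transcript, target_lang)
    return synthesize(translated, voice_sample_path)

# Hypothetical file names; "hi" is the ISO 639-1 code for Hindi.
dubbed_audio = dub("keynote.wav", "gerd_voice.wav", "hi")
```

The 5% error tolerance he mentions lives mostly in the translate step: for low-stakes content like a keynote dub, occasional mistranslations are acceptable in a way they would not be for safety-critical systems.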
AI Agents and Human Control
And in the end, we're going to have AI agents that we speak to. We already kind of have that. We tell the agent what we want: we want a trip to Santorini. And rather than going on the web, doing our own searching, and putting it all together, we tell the agent to go out, find the best deal, and match my profile, and the agent puts the trip together in my folder. Expedia already does this to some degree. Imagine you're in supply chain management. You can sit down and say: I have the following problem, I need this stuff to be in Turkey next week, but I can't figure out how. An agent can solve that for you and actually get the job done.
Salesforce is one of the leaders in AI agents, and it's really mind-boggling, all the tools it offers and all the things we can do to build an AI organization. Those are all great tools. But here's one thing.
We need to keep the human in the loop. We need to keep the human in charge. I mean, imagine I book a trip and the system, for some reason, thinks I'm traveling with my five-year-old son or something, and I end up with a completely different booking. It was just an assumption. That's called misalignment: the AI acting on what it assumes is normal, when it isn't. We don't want that, so we have to keep an eye on this.
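As a concrete illustration of keeping the human in charge, here is a toy confirmation gate for an agent action, in the spirit of the travel-booking example above. The booking fields and the confirm step are illustrative assumptions, not any real agent framework's API.

```python
# A toy human-in-the-loop gate: the agent drafts a plan, exposes its
# assumptions, and must get an explicit "yes" before acting. Illustrative
# only; not the API of Salesforce or any other agent platform.

def propose_booking(request: str) -> dict:
    """The agent drafts a plan, including every assumption it made."""
    return {
        "destination": "Santorini",
        "travellers": 1,  # an assumption the human must verify
        "assumptions": ["traveling alone", "economy class"],
    }

def confirm(plan: dict) -> bool:
    """Keep the human in charge: show the plan and ask before acting."""
    print("Agent plan:", plan)
    return input("Book this trip? (y/n) ").strip().lower() == "y"

plan = propose_booking("Trip to Santorini next month")
if confirm(plan):
    print("Booking executed.")
else:
    print("Plan rejected; no action taken.")  # misalignment caught before it costs money
```

The design point is simply that the agent surfaces its assumptions and waits for explicit approval before it spends money or commits you to anything.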
We have to make sure that it works for us. We have to make sure that this is going to go right. Basically, we’re going from the part of humans doing routines to machines doing routines. That is inevitable. And that is probably a good thing. You know, many of us love our routines. We love to drive. We love to research. We love to fiddle with forms maybe. But, you know, at a certain point, we can let go of the routine.
I think that's okay. But if you work in a call center, where 90% is routine, you're not going to be very happy. And that's a fact, right? Twenty million people. That's a change that's coming. The machine can learn how to do this work. It cannot be empathetic; it can pretend. And it can mess up, and all those things. But these things are coming.
The Future of Skills and Education
So as we go here, we're going to a future where technological skills matter, of course, but social and emotional skills are where the action is. This is why women are going to gain in the future, right? Because allegedly they already have more EQ, emotional intelligence, than men. So this is what's happening here in terms of education.
We need to focus on this. Look at the bottom line down there: you can see that basic cognitive skills are shrinking. We need to think about a future where this all comes together, a future that holds things for us beyond this simple pyramid of the past. This is our educational pyramid, showing how we go from data and information up to logic and wisdom. And now, with AI, we're shifting.
Data and information are handled by AI. That's the perfect job for tech: fill me with information and data, and I do the rest. But we're not prepared for this.
Our schools teach strictly the lower part of this pyramid: download information for later. That will not work. We need to think about how to change our educational system to be ready for this future, where all of a sudden what I call the androrithms, the human things, are growing.
That's why it's good if my son, or my kids in general, learn how to program. That's always good. But they should also build a sandcastle, or have a fight and negotiate. This is the important thing about being human: emotion, imagination, creativity, design, all of those things. Because we're moving into a future where knowledge work, explicit knowledge, is done by machines. Explicit means hard-coded rules: objective, logical, codified. Machines are learning that. We can sit here and lament it, but this is a fact.
If you're a doctor, you know this is a fact, because the machine is learning the facts of how to act in medical situations. It does not know how to be sympathetic, or how to translate those facts for the patient. So our future is on the flip side of this.
Tacit Knowledge and Human Intelligence
It's the tacit knowledge: the things that can't be codified, the difficult-to-transfer things, the knowledge that can't actually be explained. The Polanyi paradox: we know more than we can tell. This is important; keep that in mind. As humans, we know more than we can tell. That's not true for AI: it knows less than it tells. That's really what AI is all about; it always tells more than it knows. That's just the nature of computing. AI is learning that part, and as we become digital, people are starting to say, wow, now we have IQ tests for o1 and for ChatGPT and for, you know, what's the other one called, DeepSeek, something or other, right?
You know, the new Chinese AI. It doesn't really matter how intelligent the system is, because here's the bottom line: intelligence alone is not enough. It takes a lot more to be a human than intelligence. You know, as humans we have eight different kinds of intelligence: social, cultural, emotional, musical, bodily, and so on, right?
Does a machine have bodily intelligence, kinesthetic intelligence? No. A machine does not know real life. We know real life, and it should stay that way. So we have to keep in mind, as we go into that future, that our education has to move from the knowledge economy to the post-knowledge economy: the economy of understanding.
“Knowledge is limited. Imagination is limitless.” Einstein. This is where we are going. This is what we have to teach our kids and ourselves, and this is where the future work will come from in this regard.
Yeah, let's hear this guy. Can you get some audio? [A video of his AI avatar plays:] "Really, what's happening is that knowledge is becoming something that computers can have, and we have to go above the knowledge, which is tacit knowledge, quiet knowledge, understanding, intuition, imagination." Thank you, Gerd bot. We'll skip that part, because you've seen me in real life, so I don't need the bot. I just thought it was cool, so I showed it to you.
The Limitations of Machines
But anyway, as we go here, quite clearly, we have to look at the bottom line. Machines don't think. They don't have hunches. They don't understand. They don't imagine. They certainly don't care, and I don't want them to. I want them to get the job done, to be competent. I want the machine to be competent, to be my slave, right, to get work done for me. I want it to support my thinking and get me going; I don't want it to do my work for me. That's why I don't want AGI, general intelligence, because it would do exactly that, and that worries me greatly as we go into the future. I think it's a false promise.
To promise you a computer that can be like you, not just simulate you but actually be like you, that's a very big difference. It's a false promise, and it's also a promise we don't need, because it would be good enough to have a machine that can just get the job done, which we currently don't have. Let's just get the job done and do the rest the way we want. Let's get the job done this way: cognification, augmentation, virtualization, automation. I call this CAVA. This is really what AI does: it makes things smart, augments humans, and automates some things. That is how we're going to build a smart economy. We're not going to build a smart economy by getting rid of people.
We're not going to build a smart economy by handing humanity's future over to OpenAI, by saying, okay, we'll give you more money to figure out how to make us superfluous. Why would we do that? Well, the answer is: because we can make more money. But this doesn't make any sense.
Beyond Intelligence: Happiness and Global Challenges
I mean, beyond intelligence, we have happiness. I know it's hard to explain what that is, right? It's not enough to just be intelligent; we have to take the next step. Look at this chart here. It shows all the current troubles we have in the world, and they're all increasing: climate emergency, population, AI, misinformation, geopolitical tension, all going up. Who is going to solve these issues, right? Is it these guys? Are these guys going to sit down and say, we have the tools, we're going to solve these problems?
No. Tools make problems more pronounced. Technology makes things more efficient, including the problems. Now you can build weapons with AI.
What we need to think about, when we think about the future of work, is how we build the telos, the Greek word for purpose, for the end goal. That's the most important thing. The future we need is one where we figure out how to add that value, how to add the telos on top of the tools. Of course we have to understand the tools; if you don't know how to use a tool, you're in deep trouble. I mean, 50 years ago, when I had my first car, I had to know how to fix the car, otherwise I couldn't drive. Now you have to know how to use AI.
Big deal. Yeah, we can do that, but we still need to figure out what we are, where we want to take this, and what the goal is as we go into this future.
The Ethical Questions of Technology
This is why the why takes center stage in the future, the good future: thinking about why we want things. The question is no longer if and how we can do something (I guess that's a good thing), because the answer is: yes, we can. Can we build a machine that's like a human? Yes. Can we upload our brain to the internet? Yes. Can we work on the human genome to change what we are? Yes. That is not the question. The question is this: why, and who? And how do we collaborate?
Imagine if we start an arms race in artificial intelligence, like we did with the Manhattan Project for nuclear weapons, which is, again, what the American administration wants to do: to be number one in AGI. I think that goal is utterly flawed. We don't want an arms race to AI. We want a race to make it efficient and powerful and usable, not to take over. It's very important, as we go into this future, to figure this out.
Balancing Technology and Humanity
So I'll wrap up soon. Basically, there's one thing. We don't want the curve where technology brings up efficiency and productivity but reduces humanity, because basically it would tell us what to do. We want the other curve. We want both, of course; that's why we're human. We want a curve where productivity goes up and humanity is boosted: happiness is boosted, self-realization is boosted, democracy is boosted, everything like this. Not one or the other.
This is why we need the tech companies to become responsible, because they are the people who make the tools. I mean, we have responsibility rules for telecom companies, for all companies, but for tech? Where is that?
Again, the American administration says: we don't need this, let's cut all these clauses from the AI Act and let them go free. We need to have a technocratic oath that says: I will do this only if human flourishing can be achieved. Everything else will be an arms race toward zero, and also, of course, zero jobs. Collaboration.
Our future is not in shutting things down, like they're proposing in Germany, where I'm from originally: let's get rid of all the foreigners and everything will be fine. No, that is just not true. And of course we know that immigration, for the most part, grows things. It's complicated, yes. But what is the alternative?
Conclusion
Finally, the question. Ask yourself this simple question: what future do you want for your children, if you have children or plan to have them? That is the key question. When I look at my kids and ask what future I want them to have, I make better decisions about how I go into the future, because the future is about more than just one thing.
So, I want to say shukran for your great attention, and hopefully we'll have the slides and the video available later.
Thank you very much. Have a good day. Thank you.