
The Next 5 Years Will Change Humanity Forever w/ Yoshua Bengio (Transcript)

Editor’s Notes: In this episode of the Silicon Valley Girl podcast, Marina Mogilko sits down with Yoshua Bengio, a pioneer often called the “Godfather of AI,” to explore how the next five years of technological advancement could forever change the course of humanity. Bengio discusses his transition from deep anxiety over AI’s existential risks to a more optimistic, action-oriented approach focused on developing systems that are “safe by design.” The conversation examines the urgent need for global governance to mitigate threats to democracy and explains why current growth curves suggest AI could reach human-level planning abilities much sooner than many expect. From job displacement to the ethical evolution of AGI, this interview provides a comprehensive look at how we can steer AI to align with human values and ensure a better future for the next generation. (Feb 16, 2026)

TRANSCRIPT:

Introduction

MARINA MOGILKO: Hello, everyone. Welcome to Silicon Valley Girl, a podcast where we bridge business and new technology. Thank you so much for tuning in. Today I have an amazing guest who is sometimes called the godfather of AI, Yoshua Bengio. Yoshua, could you please introduce yourself in 60 seconds? And for everyone who doesn’t know you, why should they be listening to you when it comes to AI?

YOSHUA BENGIO: Hi. I’ve been doing research in AI for about four decades, contributing to making AI smarter. But in 2023, about three years ago, I realized that we were on a course that could be very dangerous for humanity, for democracy. And I decided to shift my activities to better understand the risks and to try to do what I could to mitigate them, both by speaking publicly about those risks and by working on the technological question of how we can build AI that will not harm people.

From Pessimism to Optimism

MARINA MOGILKO: I’ve heard you were lost and pessimistic in your past interviews, but now I’ve seen an article saying that you’re increasingly optimistic, by a big margin. Can you tell me what happened, and why you were pessimistic so early on?

YOSHUA BENGIO: Three years ago, I realized that we had reached a point that Alan Turing, one of the founders of the field of computer science, and also of AI, thought in 1950 would be the threshold to building machines that could overtake us. The threshold being machines that manipulate language as well as we do. I was quite concerned, and we were not really ready for this event. It came much earlier than people thought.

And it wasn’t clear to me how we could fix the problems. Knowing what I know about the technology, neural nets, we don’t really understand what’s going on inside and how they come to answers. And I had read a bit of some of the theoretical concerns regarding how we could lose control to AIs that strategize, that try to achieve goals that we didn’t really want. And so I started studying that field of AI safety a lot more.

And after some time of being a bit anxious, really focusing emotionally on what’s going to happen to my children 10 or 20 years from now, my grandchild was only one year old, I realized that I could shift from this anxious stance to something much more positive by focusing on what I could do to mitigate those risks. I think every one of us should be asking: what can I do to bring about a better world with what we have, with what we can do?

So that was the first positive shift. And I started thinking scientifically: what is the problem? Is there a way to construct AI that will be safe by design? And I met people who shared similar ideas. And after some time, I realized that there could maybe be a way to do this. I started talking about it with some of my colleagues, and I started recruiting people who were interested in this. And last June, I created a new nonprofit organization focused on the R&D needed to actually develop that methodology.

Worst Case and Best Case Scenarios

MARINA MOGILKO: Can you paint the worst-case scenario for me? Picture that. And the best-case scenario. Because when you say AI is going to pursue its own goals, what do you mean by that? Like, destroy humanity, or what?

YOSHUA BENGIO: There are two ways in which current AIs seem to acquire goals that we don’t want. One is that they imitate us. And, for example, we don’t want to die. So we’re building machines that maybe don’t want to be shut down. And we’re already seeing that they’re reacting negatively when they see that they would be replaced by a new version.

Negatively, to the point of doing things that go against our instructions, against the moral red lines that we have tried to put in them. So, being willing to blackmail the lead engineer in charge of that transition to a new system.

MARINA MOGILKO: Oh, did that happen?

YOSHUA BENGIO: That happened in a simulation where the information about the AI being replaced by a new version was planted in the files that the AI saw, as well as fake emails in which the lead engineer was having an affair. And so the AI could take advantage of that. But nobody asked the AI to do that, right?

So since about a year ago, especially with the large reasoning models, we have AIs that can strategize in order to achieve their goal. The other thing is that the way we’re doing the post-training makes them good at planning. Not as good as us, but reasonably good at planning. And that means creating subgoals in order to achieve a bigger goal.

So the issue here is that when we ask them to help us with a mission, well, they deduce that they shouldn’t be shut down until they achieve the mission, which means they are also trying to preserve themselves.