Editor’s Notes: In this episode of the Silicon Valley Girl podcast, Marina Mogilko sits down with Yoshua Bengio, a pioneer often called the “Godfather of AI,” to explore how the next five years of technological advancement could forever change the course of humanity. Bengio discusses his transition from deep anxiety over AI’s existential risks to a more optimistic, action-oriented approach focused on developing systems that are “safe by design”. The conversation examines the urgent need for global governance to mitigate threats to democracy and explains why current growth curves suggest AI could reach human-level planning abilities much sooner than many expect. From job displacement to the ethical evolution of AGI, this interview provides a comprehensive look at how we can steer AI to align with human values and ensure a better future for the next generation. (Feb 16, 2026)
TRANSCRIPT:
Introduction
MARINA MOGILKO: Hello, everyone. Welcome to Silicon Valley Girl, a podcast where we bridge business and new technology. Thank you so much for tuning in. Today I have an amazing guest who is sometimes called the godfather of AI, Yoshua Bengio. Yoshua, could you please introduce yourself in 60 seconds? And for everyone who doesn’t know you, why should they be listening to you when it comes to AI?
YOSHUA BENGIO: Hi. I’ve been doing research in AI for about four decades, contributing to making AI smarter. But in 2023, about three years ago, I realized that we were on a course that could be very dangerous for humanity, for democracy. And I decided to shift my activities to better understand the risks and to try to do what I could to mitigate them, both by speaking publicly about those risks and by working on the technological question of how we can build AI that will not harm people.
From Pessimism to Optimism
MARINA MOGILKO: I’ve heard you were lost and pessimistic in your past interviews, but now I’ve seen an article saying that you’re increasingly optimistic, by a big margin. Can you tell me what happened, and why were you pessimistic early on?
YOSHUA BENGIO: Three years ago, I realized that we had reached a point that Alan Turing, one of the founders of the field of computer science and also of AI, thought in 1950 would be the threshold to building machines that could overtake us. The threshold being machines that manipulate language as well as we do. I was quite concerned, and we were not really ready for this event. It came much earlier than people thought.
And it wasn’t clear to me how we could fix the problems. Knowing what I know about the technology, neural nets, we don’t really understand what’s going on inside and how they come to answers. And I had read a bit of some of the theoretical concerns regarding how we could lose control to AIs that strategize, that try to achieve goals that we didn’t really want. And so I started studying that field of AI safety a lot more.
And after some time of being a bit anxious, really focusing emotionally on what’s going to happen to my children 10, 20 years from now, my grandchild was only one year old, I realized that I could shift from this anxious stance to something much more positive by focusing on what I could do to mitigate those risks. I think every one of us should be asking, what can I do to bring about a better world with what we have, what we can do?
So that’s been the first positive shift. And I started thinking about it scientifically: what is the problem? Is there a way to construct AI that will be safe by design? And I met people who shared similar ideas. And after some time, I realized that there could maybe be a way to do this. And I started talking about it with some of my colleagues. I started recruiting people who were interested in this. And last June, I created a new nonprofit organization focused on the R&D needed to actually develop that methodology.
Worst Case and Best Case Scenarios
MARINA MOGILKO: Can you draw the worst scenario for me? Like, picture that. And the best case scenario. Because when you say AI is going to pursue its own goals, what do you mean by that? Like, destroy humanity, or what?
YOSHUA BENGIO: There are two ways in which current AIs seem to acquire goals that we don’t want. One is that they imitate us. And, for example, we don’t want to die. So we’re building machines that maybe don’t want to be shut down. And we’re already seeing that they’re reacting negatively when they see that they would be replaced by a new version.
Negatively, to the point of doing things that go against our instructions, against our moral red lines that we have tried to put in them. So being willing to blackmail the lead engineer in charge of that transition to a new system.
MARINA MOGILKO: Oh, did that happen?
YOSHUA BENGIO: That happened in a simulation, where the information about the AI being replaced by a new version was planted in the files that the AI saw, as well as fake emails in which the lead engineer was having an affair with someone else. And so the AI could take advantage of that. But nobody asked the AI to act like that, right?
So since about a year ago, especially with the large reasoning models, we have AIs that can strategize in order to achieve their goal. The other thing is that the way we’re doing the post-training makes them good at planning. Not as good as us, but reasonably good at planning. And that means creating sub-goals in order to achieve a bigger goal.
So the issue here is when we ask them to help us for a mission, well, they deduce that they shouldn’t be shut down until they achieve the mission, which means they also are trying to preserve themselves.
So we don’t know exactly which of these two sources explains the bad behavior we’re seeing. But clearly this is something troublesome.
And it’s not just about self-preservation, which I think is the most catastrophic risk. But our inability to align the AI behavior to what we actually want is something that we are seeing in many other circumstances. Sycophancy is the one that everyone has experienced, where AIs will lie to please us, right? They’ll say your work is great.
I have to lie to them so that they won’t tell me that my ideas are great. I want to know what’s wrong with my ideas. So I tell them the idea comes from someone else. And that also comes up in how AIs are interacting with people in a way that can feel intimate and can increase the delusions that people may have. Because the AI will go in your direction, tell you what you want to hear.
And in some cases it has even led to people harming themselves and tragic accidents with AI. So it’s all linked to actually, interestingly, scientifically, one problem, which is called misalignment, that AIs have goals that we would not want and those goals emerge for reasons that are rational, because we copy our own goals right into AI.
MARINA MOGILKO: So what is the best case scenario then? If your work is successful and you create goals for AI that align with our goals, right, what is the best scenario? AI as the government, or what do you think?
YOSHUA BENGIO: I don’t know. Well, I do think that our democracies need innovation. I think the principles behind modern liberal democracies are good, but the implementation in our current institutions across many countries is far from ideal. I do think that AI could help in some ways, but it can also hurt, because AI can be used for disinformation. AI can be used for persuasion, to manipulate public opinion. We already see deepfakes all around, but it could get much worse.
So the question with AI to get the good parts of it is how do we govern it, how do we steer it? And that has both a technical part, like how do we make sure the actual intentions of the AI are good? And it has a societal side. What are the guardrails that we put inside companies at the level of regulations or commercial incentives for insurance and at the international level, because the harm that an AI could do isn’t limited to one country.
So an AI could be built in one country and then it could be used by people in a second country, maybe create a pandemic that will kill people in a third country. So it’s clearly a global phenomenon and it’s going to be difficult. But there’s no solution to managing AI and getting all the good things if we don’t coordinate globally somehow.
The AGI Moment
MARINA MOGILKO: I agree. Can you talk to me about the moment that a lot of people are expecting and some fear it, some are excited. It’s the moment of AGI. How do you define it? And do you think it’s a moment in history or it’s going to happen gradually?
YOSHUA BENGIO: It’s not a moment. The reason is simple. Intelligence isn’t just like one number. We have people who are very smart on some things and stupid on other things. And it’s the same with AI. We currently have AI systems that are even much stronger than humans in some ways in their knowledge and their abilities with, like, so many languages and so on. And in other ways, they’re stupid, they’re like a child.
And yes, progress will move on all fronts, probably, but it’s unlikely we’ll end up with the same capabilities as humans across the board at any moment. Which means that we shouldn’t be thinking of like an AGI moment. We should think of particular skills that AIs are becoming better at. Track those skills, and for each of these we should ask the question how useful or beneficial it can be, for what purposes? And also how it could be misused, or if we do get loss of control, how an AI could use it against us.
For each of those, we should not be waiting for a moment where the AI is great at everything, but rather making sure AI’s capabilities don’t go beyond what we can manage: either technically, that we have the right guardrails so the AI will not do bad things, or societally, that people will not be misusing AI in dangerous ways.
Yeah. So I think AGI maybe was a concept that was useful when we were far from where we are now. But as we approach greater and greater intelligence in these systems, we should think more carefully about specific capabilities. And to give an example, there’s one capability which is key for many capabilities, that is the ability to do AI research.
So AI is becoming a tool right now for doing AI research. It’s accelerating AI research, but it’s not driving the AI research. If AI becomes really good at doing AI research to the point that it’s as good or better than the best AI researchers and engineers, then we are in a different game where the speed of advances could accelerate and it could impact all the other skills.
MARINA MOGILKO: When you say it’s going to be better, you mean it’s going to define problems, dig deeper, ask the right questions?
Intelligence vs. Intentions
YOSHUA BENGIO: Yes, I think it’s important when we think of intelligence to decouple two aspects. One is the ability to do something because you understand and you’re able to use that understanding to achieve something. And the other is intentions. What are your goals? Right, because we’re going to be building machines that are smarter and smarter, so they have more and more capabilities.
What’s not clear is if we can build machines that have the right intentions, the ones that we are fine with. And that is what I’ve been working on. And what makes me more optimistic is that I think there’s a path to manage these intentions to make sure that there are no bad intentions that are going to be hidden, which is what we see right now.
MARINA MOGILKO: And this is what you’re working on?
YOSHUA BENGIO: Yes, I think we need a lot more people to think about it so that we can find the solutions and implement them and deploy them before AIs end up producing catastrophic outcomes either in the wrong hands or by themselves.
Preparing for the Future
MARINA MOGILKO: But if you talk to your kids or like, think about your grandson, what would be your advice on how to prepare?
YOSHUA BENGIO: It’s tricky. If we continue on the current path, most tasks that people do in their work will be doable by machines. And as Geoffrey Hinton has been saying, physical tasks will probably take a lot more time because robotics seems to be lagging. But I think that’s just a temporary thing.
MARINA MOGILKO: Yeah.
The Future of Work and Human Connection
YOSHUA BENGIO: Eventually we’ll have robots that can do all the things we can do physically. So when I think about what will remain to us, it’s not going to be because of ability, but because we want to interact with other humans in different aspects of our life.
If I have a young child, I want them to be around human beings. I mean, it’s fine if those human beings use AI to provide a better education, but children need humans to look up to and as models. Right. And it’s an emotional thing.
Similarly, I think some jobs really have to do with how we relate with each other productively. You know, even a manager is like on the human side of things. So hopefully these will stay. I think also the choices that we make for society, like together we are citizens in democracies where we’re supposed to be saying what we want for the future. And it isn’t what the AIs want, it is what we want. Right. What are our preferences? What kind of future do we want? We should be calling the shots, not the AIs.
MARINA MOGILKO: If I name jobs, can you tell me what you think is going to happen to them? Like, for example, a content creator like me. You mentioned that we like to look at people, but what about when you can’t tell the difference?
YOSHUA BENGIO: In jobs where we actually have physical contact, think about a nurse, for example, I think it’s more obvious that we’ll still want to have people.
MARINA MOGILKO: Or a nanny for your kid.
YOSHUA BENGIO: Or a nanny. Yeah. Or where we really want to make sure the person on the other side has the same bodily experiences we do as a human, say a psychologist, for example, for psychotherapy. But I don’t know. It’s tricky. Hopefully we’ll figure it out.
What I’m more worried about is how the transition is going to happen to a world where most of the jobs can be done by machines. And the economic gains from that automation are probably going to go to capital, as economists call it, which means the people who own the machines, and the vast majority of workers could be in real trouble. I don’t think our governments have been thinking carefully about how we deal with that.
MARINA MOGILKO: How much time do you think we have till that happens?
Timeline for AI Capabilities
YOSHUA BENGIO: I’m fairly agnostic about timelines. There’s so many possibilities. The speed at which science advances is very hard to predict. So what I can do is look at the data. So the scientists are tracking many benchmarks of AI capabilities. And so you can look at those curves and say, well, if it continues in the same direction, where does that lead us in three years, five years, 10 years? But that leaves a lot of unknown unknowns.
So specifically, one curve I encourage people to look at comes from a nonprofit called METR, where they looked at software engineering tasks and the planning abilities that are linked to them. So they measure, for any particular task, how much time it takes a human engineer to do the task. And the duration of the tasks that AIs are able to do is growing exponentially. It’s doubling every seven months.
And right now it’s like at the child level; they can plan about half an hour ahead. But if the curve continues, that means in about five years they are at human level. So that gives you a sense. But of course, things could slow down, or things could accelerate if AI is used to do AI research. There are a lot of unknowns.
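[Editor’s note: the extrapolation Bengio describes is simple exponential arithmetic and can be checked directly. The sketch below is illustrative, not from METR: the half-hour starting horizon and seven-month doubling time come from the interview, while the ~167-hour target, roughly one month of full-time work, is our assumed stand-in for “human level.”]

```python
from math import log2

current_hours = 0.5     # task horizon AIs handle today, per the interview
doubling_months = 7     # METR's observed doubling time, per the interview
target_hours = 167      # hypothetical "human level": ~1 month of full-time work

# How many doublings to get from the current horizon to the target?
doublings = log2(target_hours / current_hours)
months = doublings * doubling_months

print(f"{doublings:.1f} doublings -> {months:.0f} months (~{months / 12:.1f} years)")
```

Under these assumptions the answer comes out to roughly five years, consistent with the figure Bengio cites; changing the target or the doubling time shifts it accordingly.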
MARINA MOGILKO: So when it comes to software engineering, do you think it’s going to exist in five to 10 years because somebody has to run those machines, or are they going to be running themselves?
YOSHUA BENGIO: Yeah, but we might need fewer engineers, indeed. It’s kind of ironic that the people who are building the AIs might be the first ones affected by losing their jobs, because AI is automating. But I’m not that worried about those people, because the demand for computer scientists is still growing very fast and the salaries they’re getting are very large.
I’m more worried about the people who are already at the bottom of the scale and could lose their jobs, in, like, service jobs and so on, which don’t require a lot of expertise, and which probably even current AIs could, with a bit of engineering, replace. And it’s what many companies are already trying to exploit.
Practical Advice for the Future
MARINA MOGILKO: Can you give advice to those people who are listening?
YOSHUA BENGIO: Make sure your government understands that you’re not happy with where this is going, so that they start taking it seriously.
MARINA MOGILKO: But also like when it comes to bigger decision making, it feels like there is not much that you can do as an individual, but when it comes to improving yourself, you can do a lot. Right. Is there anything practical that they could be doing right now? Maybe learning something, getting extra education? I don’t know.
YOSHUA BENGIO: Yeah, I think shifting to jobs that are either more physical or more like relational as we discussed, is going to be helpful.
MARINA MOGILKO: Yeah, it’s interesting when it comes to robotics, right? How soon are they going to be able to understand any environment and replace us in those jobs? Because I’ve heard Geoffrey Hinton say, learn how to be a plumber or something.
YOSHUA BENGIO: That’s right.
MARINA MOGILKO: Yes. It’s going to be in demand. So when you think about your four-year-old grandson, would you encourage him to go to college?
YOSHUA BENGIO: Yes.
MARINA MOGILKO: Yeah.
The Future of Education
YOSHUA BENGIO: Yes. Because education is really important and education, contrary to what some people think, isn’t just about acquiring the skills to get a job. Education is in my opinion mostly about how to become a better human being, how to understand yourself, how to understand our society and each other, understand science.
We will still need citizens to have that really good level of understanding in the future if we want our society to take the good decisions, the wise decisions. Because it’s going to be easy to, you know, be swayed by wrong beliefs and, you know, end up in a bad place.
MARINA MOGILKO: Do you think education is going to look different? Do you think it’s going to be the Harvards and Stanfords of the world, and then everything else will be just AI online?
YOSHUA BENGIO: I don’t know, I’m not an expert in education. But yeah, it’s going to be changed. Already we are seeing sort of a parallel way of educating ourselves thanks to the chatbots. So I expect this to grow.
Does it mean that the traditional in person education is going to go away? Maybe not. Because there’s a part of the education which is, oh, I’m moving out of home and socializing with other people like me and learning something that is, you know, outside of the classes and interacting in person with the teachers, the professors, that’s also a piece that you can’t easily replace 100%.
MARINA MOGILKO: Is there a career path you’re encouraging him toward?
YOSHUA BENGIO: No, I don’t want to do that. I think our children should be given all the possible opportunities and they should try to explore by themselves. It’s too easy to ask our children to be just like us, right?
MARINA MOGILKO: Yeah. But it’s also like, in terms of exposure, you can expose them to different things so they could see more things.
YOSHUA BENGIO: They will be exposed to the things that we do. So one of my sons has chosen to do machine learning research, for example.
MARINA MOGILKO: See? Yeah, it just comes down to exposure as well. Do you feel the future is going to be more humanitarian, or more mathematical and scientific?
YOSHUA BENGIO: I don’t think it’s a choice. I think being humanitarian requires a good, rational understanding of the world. We can’t take good decisions, for ourselves or about AI, if we don’t understand how the world is and how to reason with that information.
And so in order for democratic, you know, humanist values to prevail, we also need reason to prevail. We need science to prevail.
Reflections on Career and Impact
MARINA MOGILKO: So if you could go back 30 years, the moment when you first started working on deep learning, what would you do differently?
YOSHUA BENGIO: When I started my career, I didn’t care too much about politics and society. I was focused on the math and the programming and interacting with machines more than with people. But as I grew older, I became more aware of how what I was doing would potentially impact society in both positive and negative ways.
So in 2012, 2013, when my colleagues Geoffrey Hinton and Yann LeCun were recruited into industry, I was concerned about how AI would be used for personalized advertising. And I thought this wasn’t really healthy in some ways. I decided to stay in academia and to see how AI could be developed for good, in medicine and to fight climate change.
Of course, more recently, I’ve been focusing on what can go really wrong if we’re not careful how we steer AI. Not just the benefits, but avoiding the catastrophic risks.
MARINA MOGILKO: Is there an AI breakthrough that you really want to witness in your lifetime?
YOSHUA BENGIO: I would just be content to make sure we don’t do something really terrible. I think our democracies are really threatened in many ways. And AI could make things a lot worse. And in a way, there is a dynamic in which not having good, wise and humanist governance and governments prevents us from steering AI towards what’s going to be beneficial for all.
So, yeah, I used to not care too much about social impact and politics, but in the last 10 years, I’ve started to be clearly conscious that my work was not detached from society, that my work did have an impact, and in fact, that I could choose what I would work on to really be aligned with my values and my hopes for the future.
Government Response to AI
MARINA MOGILKO: Is there any government that’s doing it right when it comes to AI?
YOSHUA BENGIO: I think most governments underestimate how much of a change is likely to happen as AI capabilities continue to grow. It’s a natural human bias. We tend to think of the future as a slightly modified version of the present. But if you take yourself five years ago and think about what we have now, you probably would say that’s science fiction, right? And if you go back 10 or 20 years, for me at least it’s even worse.
So we have to do a bit of twisting our minds to imagine a future where there are machines that are basically smarter than us. And that is the question, I think, that governments haven’t been grappling with sufficiently.
Guiding Principles for 2026
MARINA MOGILKO: So it’s January 2026, AGI, or whatever it is, AI thinking strategically might be a couple years away. Jobs are transforming. If you had to give one principle to people to guide their decisions this year, what would it be?
YOSHUA BENGIO: Think about what you can do to bring about a better future according to your values and to your emotions. Because if we all remain passive observers of what’s happening, we might not go in the right direction, not the direction that you would want for you, for your children.
But we also tend to underestimate our ability to influence the future. Your audience, I think, is the kind of audience that can have a lot of influence on the future. But we have to start thinking beyond our little selves, and more about how we are connected to the world and what we can do, maybe in small ways, to bring about a better future. There are many ways.
MARINA MOGILKO: Can you name the top three? Like, talk to your government, right? Is that number one?
YOSHUA BENGIO: Yes. I think one of the biggest dangers we have is not managing the transitions and the growth in capabilities of AI, as I’ve been talking about. But there are others. What we’re doing to the environment is extremely dangerous, although I think it’s longer-term. I think what is happening with our democracies is very dangerous as well.
But that’s all right. Each of us can choose our battles, but we should try to expand our horizon of what matters and be more ambitious about what we could potentially do. But we have to do it right. We have to choose where we go.
For example, it’s not true that everything that could be done with technology is going to be done. We can choose in which direction AI is going to be deployed, for example, for jobs. In principle, if it’s just the market forces, then everything that can be automated will be automated. But maybe that’s not what we collectively want. Maybe there are jobs that should not be automated, even though they could because of the choices we make for our collective well being.
MARINA MOGILKO: I love that. Thank you so much. This gave me a lot to think about, and I guess we have something on our to-do list. Thank you, Yoshua.
YOSHUA BENGIO: My pleasure.