The Minds of Modern AI: Jensen Huang, Geoffrey Hinton, Yann LeCun & the AI Vision of the Future (Transcript)

On November 6, 2025, Jensen Huang, Yoshua Bengio, Geoffrey Hinton, Fei-Fei Li, Yann LeCun, and Bill Dally spoke with the FT’s AI editor, Madhumita Murgia, at the FT Future of AI Summit in London. The following is the full transcript of the conversation.

Introduction

Madhumita Murgia: Hello, everybody. Good afternoon, good morning. And I am delighted to be the one chosen to introduce to you this really distinguished group of people that we’ve got here sitting around the table. Six, I think, of the most brilliant, most consequential people on the planet today. And I don’t think that’s an overstatement.

So these are the winners of the 2025 Queen Elizabeth Prize for Engineering, which honors the laureates we see here today for their singular impact on today’s artificial intelligence technology. Given your pioneering achievements in advanced machine learning and AI, and how the innovations that you’ve helped build are shaping our lives today, I think it’s clear to everyone why this is a really rare and exciting opportunity to have you together around the table.

For me personally, I’m really excited to hear you reflect on this present moment that we’re in, the one that everybody’s trying to get ahead of and understand. And your journey, the journey that brought you here today.

But also to understand how your work and you as individuals have influenced and impacted one another, and the companies and the technologies that you’ve built. And finally, I’d love to hear from you to look ahead and to help us all see a bit more clearly what is to come, which you are in the best position to do.

So I’m so pleased to have you all with us today and looking forward to the discussion.

Personal Moments of Awakening

Madhumita Murgia: So I’m going to start by going from the zoomed-out to the very personal. I want to hear from each of you about a personal moment in your career that you think has impacted the work that you’ve done, or was a turning point that brought you on the path to why you’re sitting here today. Whether it was early in your career and your research or much more recently, what was your personal moment of awakening that has shaped the technology? Should we start here with you, Yoshua?

Yoshua Bengio’s Two Defining Moments

Yoshua Bengio: Thank you. Yes, with pleasure. I would go to two moments.

One, when I was a grad student and I was looking for something interesting to research, and I read some of Geoff Hinton’s early papers, and I thought, “Wow, this is so exciting. Maybe there are a few simple principles, like the laws of physics, that could help us understand human intelligence and help us build intelligent machines.”

And then the second moment I want to talk about is two and a half years ago after ChatGPT came out, and I realized, “Uh-oh, what are we doing? What will happen if we build machines that understand language, have goals, and we don’t control those goals? What happens if they are smarter than us? What happens if people abuse that power?”

So that’s why I decided to completely shift my research agenda and my career to try to do whatever I could about it.

Madhumita Murgia: Those are two very divergent moments, very interesting.

Bill Dally on Building AI Infrastructure

Madhumita Murgia: Tell us about your moment of building the infrastructure that’s fueling what we have.

Bill Dally: I’ll give two moments as well. The first was in the late ’90s, when I was at Stanford trying to figure out how to overcome what was at the time called the memory wall: the fact that accessing data from memory is far more costly, in energy and time, than doing arithmetic on it.

And it struck me to organize computations into these kernels connected by streams, so you could do a lot of arithmetic without having to do very much memory access. That basically led the way to what became known as stream processing, and ultimately GPU computing. And we originally built that thinking we could apply GPUs not just to graphics, but to general scientific computations.

The second moment was when I was having breakfast with my colleague Andrew Ng at Stanford. At the time he was working at Google, finding cats on the internet using sixteen thousand CPUs with this technology called neural networks. And he basically convinced me this is a great technology.

So I, with Bryan Catanzaro, repeated the experiment on forty-eight GPUs at NVIDIA. And when I saw the results of that, I was absolutely convinced that this is what NVIDIA should be doing. We should be building our GPUs to do deep learning, because this has huge applications in all sorts of fields beyond finding cats. That was the moment we really started working hard on specializing the GPUs for deep learning and making them more effective.

Madhumita Murgia: And when was that, what year?

Bill Dally: The breakfast was in 2010, and I think we repeated the experiment in 2011.

Madhumita Murgia: Okay.

Geoffrey Hinton’s Early Language Model

Madhumita Murgia: Yeah. Geoff, tell us about your work.

Geoffrey Hinton: A very important moment was in about 1984, when I tried using backpropagation to learn to predict the next word in a sequence of words. So it was a tiny language model, and I discovered it would learn interesting features for the meaning of the words.

So just by giving it a string of symbols and having it try to predict the next word, it could learn to convert words into sets of features that capture the meaning of each word, and to have interactions between those features predict the features of the next word.

So that was actually a tiny language model from late 1984, which I think of as a precursor to these big language models. The basic principles were the same; it was just tiny.