
Transcript of Demis Hassabis + Google Co-Founder Sergey Brin: AGI by 2030?

The following is the full transcript of a live interview at Google’s IO developer conference featuring Demis Hassabis, CEO of Google DeepMind, and Sergey Brin, co-founder of Google, discussing the frontiers of AI research with host Alex Kantrowitz.

Introduction

ALEX KANTROWITZ: Alright, everybody, we have an amazing crowd here today. We’re going to be live streaming this, so let’s hear you. Make some noise. Everybody can hear that you’re here. Let’s go. Woo!

I’m Alex Kantrowitz. I’m the host of Big Technology Podcast. And I’m here to speak with you about the frontiers of AI with two amazing guests. Demis Hassabis, the CEO of Google DeepMind is here. Good to see you, Demis.

DEMIS HASSABIS: Good to see you, too.

ALEX KANTROWITZ: And we have a special guest. Sergey Brin, the co-founder of Google, is also here.

The Future of Frontier Models

ALEX KANTROWITZ: All right, so this is going to be fun. Let’s start with the frontier models. Demis, this is for you. With what we know today about frontier models, how much improvement is there left to be unlocked? And why do you think so many smart people are saying that the gains are about to level off?

DEMIS HASSABIS: I think we’re seeing incredible progress. You’ve all seen it today, all the amazing stuff we showed in the keynote. So I think we’re seeing incredible gains with the existing techniques, pushing them to the limit. But we’re also inventing new things all the time as well. And I think to get all the way to something like AGI may require one or two more new breakthroughs. And I think we have lots of promising ideas that we’re cooking up, and we hope to bring them into the main Gemini branch.

ALEX KANTROWITZ: All right, and so there’s been this discussion about scale. Does scale solve all problems, or does it not? So I want to ask you, in terms of the improvement that’s available today, is scale still the star, or is it a supporting actor?

DEMIS HASSABIS: I think I’ve always been of the opinion you need both. You need to scale to the maximum the techniques that you know about. You want to exploit them to the limit, whether that’s data or compute scale. And at the same time, you want to spend a bunch of effort on what’s coming next, maybe six months, a year down the line, so you have the next innovation that might do a 10x leap in some way to kind of intersect with the scale. So you want both, in my opinion. But I don’t know, Sergey, what do you think?

SERGEY BRIN: I mean, I agree. It takes both. You can have algorithmic improvements and simply compute improvements. Better chips, more chips, more power, bigger data centers. I think that historically, if you look at things like the n-body problem and simulating just gravitational bodies and things like that, as you plot it, the algorithmic advances have actually beaten out the computational advances, even with Moore’s law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances. But both of them are coming up now. So we’re kind of getting the benefits of both.

The Future of Data Centers

ALEX KANTROWITZ: And, Demis, do you think the majority of your improvement is coming from building bigger data centers and using more chips? Like, there’s talk about how the world will be just wallpapered with data centers. Is that your vision?

DEMIS HASSABIS: Well, no. Look, I mean, we’re definitely going to need a lot more data centers. It still amazes me, from a scientific point of view, that we turn sand into thinking machines. It’s pretty incredible. But actually, it’s not just for the training. Now we’ve got these models that everyone wants to use. And actually, we’re seeing incredible demand for 2.5 Pro. And I think Flash, we’re really excited about how performant that is for the incredibly low cost.

I think the whole world’s going to want to use these things. And so we’re going to need a lot of data centers for serving, and also for inference-time compute. You saw DeepThink today, 2.5 Pro DeepThink. The more time you give it, the better it will be. And certain tasks, very high-value, very difficult tasks, it will be worth letting it think for a very long time. And we’re thinking about how to push that even further. And again, that’s going to require a lot of chips at runtime.

The Power of AI Reasoning

ALEX KANTROWITZ: OK, so you brought up test time compute. We’ve been about a year into this reasoning paradigm. And you and I have spoken about it twice in the past, as something that you might be able to add on to traditional LLMs to get gains. So I think this is like a pretty good time for me to be like, what’s happening? Can you help us contextualize the magnitude of improvement we’re seeing from reasoning?

DEMIS HASSABIS: No, well, we’ve always been big believers in what we’re now calling this thinking paradigm. If you go back to our very early work on things like AlphaGo and AlphaZero, our agent work on playing games, they all had this type of attribute of a thinking system on top of a model.

And actually, you can quantify how much difference that makes if you look at a game like chess or Go. You know, we had versions of AlphaGo and AlphaZero with the thinking turned off, so it was just the model telling you its first idea. And you know, it’s not bad. It’s maybe like master level, something like that. But then if you turn the thinking on, it’s way beyond world-champion level. You know, it’s like a 600-plus Elo difference between the two versions. So you can see that in games, let alone in the real world, which is way more complicated.