Here is the full transcript of Google DeepMind CEO Demis Hassabis' lecture, "Accelerating Scientific Discovery with AI", given at Cambridge in March 2025.
Introduction by Alastair Beresford
ALASTAIR BERESFORD: Welcome, everybody. I’m Alastair Beresford, the current Head of the Department of Computer Science and Technology, also known as the Computer Laboratory. It’s my great pleasure this afternoon to welcome Demis back to Cambridge.
Demis studied computer science here in Cambridge in the 1990s, at the time when the lab was based just next to this lecture hall, and where Robin Walker, who I'm pleased to say is here today, was Demis' Director of Studies at Queens' College. I was discussing earlier with Demis, we think this is where he had his first Cambridge lecture, Maths at nine a.m. on the first Thursday of Michaelmas term. So this seems a fitting place for him to return to.
Demis had already made several incredible achievements by the time he arrived in Cambridge. He was a chess master and second highest rated 14-year-old player in the world. And after completing his schooling a year early, instead of backpacking around Europe, he took a job in the computer games industry, where he co-designed and was the lead programmer for the computer game Theme Park.
After graduating from Cambridge with a first class degree, Demis returned to the games industry, first working at Lionhead Studios and subsequently forming his own company. However, there was clearly a passion in him for fundamental scientific research. And so Demis returned to academia, this time to UCL, where he studied for a PhD in cognitive neuroscience, graduating in 2009. He stayed on at UCL until 2011 when he left to cofound DeepMind, an AI research lab, which was acquired by Google in 2014.
Demis and colleagues at Google DeepMind have gone on to make several seminal contributions to science.
Now alongside his incredible intellectual contributions over this period, he's also been a fantastic supporter of the university, including funding for academic positions and significant support for students from underrepresented groups, both in the Computer Laboratory and at Queens' College. And Demis' passion and support for the next generation of computer scientists is the motivation for our lecture today.
I’m sure he will not only help us understand how to accelerate scientific discovery with AI, but also inspire the next generation of students in the room to change the world, too. And with that, I would like to welcome Demis to the stage.
Early Inspirations and Cambridge Years
DEMIS HASSABIS: Thanks, Alastair, for that lovely introduction. And it’s so great to be back at Cambridge. I always have a warm feeling when I sort of make my homecoming back to Cambridge. And specifically, this lecture hall, as Alastair reminded me, I think it is the first lecture hall I was in. It’s always been my favorite lecture hall.
I remember telling—and I see a lot of my old friends here from my Cambridge days, Aaron, I think—that one day maybe I'd come back to give a lecture in here announcing AGI, and maybe a robot would walk on and astound everyone. I'm sorry to disappoint you, but I'm not going to do that today; maybe in a few years' time I'll come back and give that lecture. But it's an amazing place. It's such an inspiring place. And I'm going to talk a little bit about how Cambridge has inspired my whole career, and hopefully it's going to do the same for many of you and the students in the room.
For me, my journey on AI started with games and specifically chess, as Alastair mentioned. So I was playing chess from the age of four years old and very seriously for the England junior teams and things like that. And it got me thinking about thinking itself. How does our mind come up with these plans, with these ideas? How do we problem solve? And how can we improve? Obviously, when you’re playing chess at a young age and you’re trying to play competitively, you’re trying to improve that process. And it was fascinating to me, perhaps more fascinating than even the games I was playing was the actual mental processes behind it.
In fact, I came across computers and AI for the first time in the context of chess, trying to use very early chess computers like the one on the right here. I think this was my first ever chess computer. These were physical boards where you had to actually press the squares down to move the pieces. And of course, we were supposed to be using these chess computers to train opening theory and learn more about chess. But I remember being fascinated by the fact that someone had programmed this lump of inanimate plastic to play chess really well against you, and by how that was done, how someone could program something like that.
And I ended up experimenting myself in my earliest teenage years with an Amiga 500 computer, amazing home computer back in the late 80s and early 90s and building those kinds of AI programs myself to play games like Othello. And really that was my first taste of AI and I was hooked from then on. I decided from very early on that I would spend my entire career trying to push the frontiers of this technology.
So then that led me to Cambridge, and my three years here were incredibly formative for me. I went to a comprehensive school in North London where no one had gone to Oxbridge in living memory. The reason I wanted to come to Cambridge was all the inspiring stories I'd heard about what happened at Cambridge, all these amazing people whose biographies I used to read and whose work I admired, especially people like Crick and Watson in the top left there.
And I remember particularly a film, The Race for the Double Helix, an amazing film from the 80s if you haven't seen it, with Jeff Goldblum as Watson in one of his early roles, played with all the enthusiasm he brings to every part. They were having an amazing time roaming around Cambridge, working on things like DNA. And I thought, look, that's what I want, a piece of that. I wanted to feel what it's like to be at the frontier of discovery, and what could be more exhilarating? That film really brought to life what that might be like.
And then of course a lot of my scientific heroes had gone through Cambridge: people like Alan Turing and Charles Babbage, of course, in the lecture hall that we now sit in. Even places like the Eagle pub: if you start at Queens' College, one of the tours they give you on the first day shows you the table around which Crick and Watson discussed the DNA structure. You can't help but be inspired by that, walking down King's Parade. I almost felt like the intellectual giants of the past were speaking to you from the stones. That's how I felt going for a late-night burger at Gardenia's, inspired by all these amazing people who had walked those same steps over hundreds of years.
And that’s the history that is sort of unrivaled here at Cambridge that I think we can still draw on and take inspiration from today. And then there’s a picture of me and Aaron there, one of my best friends from Queen’s, obviously on the mathematical bridge there.
And then finally, as Alastair mentioned, there was obviously the Nobel Prize, and it was the honor of a lifetime to go and collect it in Stockholm in December, an amazing week of activities. But my favorite activity is when you get to sign the Nobel book at the Nobel Foundation. That's the book there, in one of my pictures of it. You sign your name and then you start leafing back through the book. You wonder, is Crick in there? And of course he is. And then you go back further and Einstein's signature is there, and it's just mind-blowing really. I ended up spending an hour just photographing every page of the book. So it's come full circle for me, from seeing that film in the late 1980s to that picture.
The Founding of DeepMind and AI Approach
So then in 2010 we started DeepMind in London. At the time we thought of it as a kind of Apollo program effort to build artificial general intelligence: AI that is truly general and could perform all the cognitive capabilities that humans are capable of. In fact, the idea really comes from Turing and Turing machines, something that's able to compute anything that is computable, as Turing showed. And really that's been the foundation for me; one of the main things I carried with me from the lectures here at Cambridge was the theoretical underpinnings of computer science and computation theory that people like Turing and Shannon famously developed in the 1940s and 1950s.
So we started in 2010, and it's amazing that that's fifteen years ago, which in some ways isn't that long. But when we started DeepMind, almost nobody was working on AI, which is hard to believe today, when almost everyone seems to be working on it. In just over a decade, things have accelerated incredibly, and obviously we've been part of that very exciting journey.
So our mission at DeepMind from the beginning was we talk about building AI responsibly to benefit humanity. But the way we used to articulate it when we started out was in a two step process. Step one, solve intelligence. Step two, use it to solve everything else.
It seemed very outlandish at the time in 2010. You can imagine trying to pitch a venture capitalist on the basis of that mission; it seemed pretty crazy. But I still fundamentally believe in it today. And I think more and more people are realizing that AI built in this general way could have this kind of profound and transformative impact on almost any field, which is obviously the second part of that mission statement. For me that involves accelerating scientific discovery itself and medicine, and advancing our understanding of the universe around us.
So back when we started out, and in fact when I was studying here in the 90s, there were broadly speaking two ways to build AI. There's the expert system way, where you preprogram the system directly with the solution. Things like Deep Blue, which famously beat Garry Kasparov at chess in the 1990s while I was actually studying here, would probably be the pinnacle example of an expert system.
But the problem with these expert systems, and why they never really scaled to full general intelligence, is that they can't deal with the unexpected. If something unexpected happens that you didn't already cater for, there's nothing in the system that allows it to cope. They were inspired by logic systems, and they were quite rigid, fragile and brittle because of that.
Whereas the modern-day approaches are built on learning systems. These are systems that are able to learn for themselves, directly from experience or data, from first principles, and they're really inspired more by ideas from neuroscience. And obviously the promise of these systems is that they can potentially go beyond the knowledge that we, the programmers or system designers, already have about how to solve the problem. And of course that's extremely valuable in areas like scientific discovery.
From Games to Scientific Breakthroughs
So we started in early 2010s with games, of course, and I’ve used games many times in my life. First of all, to train my own mind, then I used to build games and AI for computer games. And then finally in a third way to train up our AI systems. And games are the perfect proving ground for AI systems. You can start with very simple games like Atari games from the 1970s.
And really this system, DQN, was the first time anyone had built an end-to-end learning system that could learn directly from raw data, in this case the raw pixels on the screen. It's not told anything about the games or about what it's controlling; it's just told to maximize the score based on this pixel-stream input. So we were able to master all the different Atari games around 2013.
Then we took these systems and we scaled them up to really the, I would say the grand challenge of games AI, which is, can you create systems that can play the game of Go at world champion level or beyond? And Go of course is probably the most complex game that humans have ever invented. It’s thousands of years old. It’s also the oldest game and one of the most elegant games.
But one of the ways you can just see the complexity of Go is that there are 10 to the power of 170 possible positions in Go. So that’s way more than there are atoms in the observable universe. And the important point about that is that you cannot come up with a strategy in Go using brute force techniques. It would be impossible. It would be totally intractable. So you have to do something much smarter.
Famously in 2016 AlphaGo won a $1 million challenge match against 18-time world champion Lee Sedol, one of the legends of the game, a South Korean grandmaster, and was watched by 200 million people around the world. And not only did AlphaGo, our system, win that match, importantly it actually came up with new original Go strategies. Even though we’ve played Go for thousands of years and professionally for hundreds of years, it was still able to find never seen before strategies.
AlphaGo’s Revolutionary Move 37
Most famously, this move, Move 37, shown here in red during game two. If you watch the documentary on this, which is on YouTube, you'll see how surprised the best players in the world commentating on the game were by this move. It was a sort of unthinkable move. And yet it ended up deciding game two in AlphaGo's favor a hundred moves later. So again, that told me about the potential for these types of systems to invent and discover new knowledge.
So here of course we’re just talking about game knowledge, but obviously my dream was to generalize this to all areas of scientific discovery.
How AI Systems Learn Through Self-Play
How do these systems work? We basically train up these neural networks through a system of self-play. This applies to AlphaGo and also to subsequent systems like AlphaGo Zero and AlphaZero, which generalized what we did for Go to play any two-player game from scratch. You start off with a version one of the system that doesn't really know anything about the game except the rules, and it plays randomly.
And you play, say, 100,000 games of this system against itself. That creates a database of game positions from those 100,000 games. From that you train a second, slightly better version of the model, version two, trained to predict which moves are likely to be played in any given position, which side, black or white, is more likely to win from that position, and with what percentage chance. Then you play version two against version one in a 100-game match. If it wins by a significant margin, in this case a 55% win rate, you replace version one with version two, create a new database of slightly higher-quality games, and then learn a version three system.
If you repeat this around seventeen or eighteen times, you go from playing randomly in the morning to, twenty-four hours or less later, by version 17 or 18, being stronger than the world champion. So it's quite an incredible process to see this self-improvement play out in a very, very short amount of time.
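That loop can be sketched in miniature. This is a toy sketch, not the real algorithm: the "model" is a single scalar strength, "training" is a fixed nudge upward, and the evaluation match is simulated with a logistic win probability (a bit like an Elo model). Only the loop structure, self-play, train, 55% gatekeeper match, repeat, mirrors what's described above.

```python
import math
import random

def train_on_selfplay(strength):
    """Stand-in for: play ~100,000 games against yourself, then fit a new
    network on that data to predict moves and winners. Here 'training'
    simply nudges a scalar strength upward."""
    return strength + 0.5

def challenger_win_rate(champion, challenger, games=100):
    """Toy 100-game evaluation match: the stronger scalar wins each game
    with a logistic probability based on the strength gap."""
    p = 1.0 / (1.0 + math.exp(champion - challenger))  # challenger's win prob
    return sum(random.random() < p for _ in range(games)) / games

random.seed(0)
champion, generation = 0.0, 1      # version 1 plays essentially at random
for _ in range(17):                # roughly 17-18 iterations, as in the talk
    challenger = train_on_selfplay(champion)
    if challenger_win_rate(champion, challenger) >= 0.55:  # gatekeeper test
        champion, generation = challenger, generation + 1

print(generation)
```

The gatekeeper test is the important design choice: a new version only replaces the old one when it is measurably stronger, so the quality of the self-play data ratchets upward generation by generation.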
What these neural networks are doing is reducing this intractable search space of 10 to the power of 170 possibilities down to something much more tractable within a few minutes of compute time. They do this by using the neural network to efficiently guide the search mechanism. If you think of this tree of possibilities, where each node in the tree is a Go position, then instead of having to look at every possibility, you can use the neural network to guide you down just the most interesting and most useful lines to examine.
So in this case, the ones in blue. And then after you’ve run out of thinking time, you pick the best line, the most promising line that you’ve seen thus far. So in this case, this particular line in purple.
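The pruning idea can be sketched with a toy best-first search. Hedged: real AlphaGo uses Monte Carlo tree search with learned policy and value networks; here the "policy" is a hand-written heuristic over toy string positions, and only the shape of the idea is illustrated: expand, score, keep the promising "blue" lines, then return the best "purple" one.

```python
def children(position):
    """Expand a toy position into its three candidate moves."""
    return [position + move for move in "abc"]

def policy(position):
    """Stand-in for the policy network: a score saying how promising a
    position looks. Here it simply counts 'a' moves, purely for
    illustration."""
    return position.count("a")

def guided_search(root, depth, beam=2):
    """Keep only the `beam` most promising lines at each ply instead of
    expanding the full tree. With beam=2 this examines 6 nodes per ply
    rather than 3**depth in total; that pruning is what the network
    enables."""
    frontier = [root]
    for _ in range(depth):
        expanded = [c for p in frontier for c in children(p)]
        expanded.sort(key=policy, reverse=True)
        frontier = expanded[:beam]          # the 'blue' lines kept
    return max(frontier, key=policy)        # the best 'purple' line

best = guided_search("", depth=5)
print(best)
```

The exponential saving is the whole point: full enumeration of this toy tree at depth 5 would visit 243 leaf positions, while the beam visits only a handful, and a good policy keeps the best line inside the beam.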
Beyond Go: AlphaZero’s Chess Innovations
We then extended this beyond Go to any two-player perfect-information game. AlphaZero was even able to discover new strategies and new styles of playing chess, which is kind of extraordinary given that chess computers were already so strong.
Programs like Stockfish were already extremely powerful. AlphaZero was able to beat Stockfish at the time at chess, which was almost impossible to do. And not only did it beat Stockfish, but in this particular position, from one of the most famous games AlphaZero played, called the Immortal Zugzwang game, White, played by AlphaZero, is winning because it favors mobility over material.
Most chess computers favor material, and those of you who play chess will see that Black has more material but actually can't move any of its pieces; they're all stuck in the corner. AlphaZero sacrificed material for this mobility. For human grandmasters and top chess players, this is not only a very effective style, it's a very beautiful, aesthetic style to play chess in. It's kind of amazing that AlphaZero was able to discover this new, dynamic way of playing.
In fact, some of the top chess players in the world commented on this. Garry Kasparov, my all-time favorite chess player, said that "programs usually reflect priorities and prejudices of the programmer. But because AlphaZero learns for itself, I would say that its style reflects the truth." And Magnus Carlsen, the world champion at the time, studied these games and the books written about AlphaZero, and said, "I've been influenced by my heroes recently, one of which is AlphaZero."
He actually incorporated a lot of these ideas into his own game to dominate the chess scene for almost a decade now.
From Games to Real-World Problems
We did all these landmark breakthroughs in games AI over the first decade of DeepMind's existence. But of course, these were just the training ground, a means to an end rather than the end in itself, much as I love games. The aim was to create algorithms that could be generally useful for tackling real-world problems.
What we look for in real world problems, not only scientific problems, but actually industrial problems as well, are three different criteria that make the problem suitable to be tackled by these types of AI systems and ideas and algorithms that we developed for playing games:
1. We look for problems that can be described as massive combinatorial search spaces – usually far too complex, far too many combinations to brute force the solution. But maybe there’s some kind of structure that we can learn about with our neural networks that can guide that search very efficiently.
2. We look for problems that can be described with a clear objective function or some sort of metric that you can optimize against. In games, that’s very easy. It’s things like maximizing the score or winning the game. But actually there’s a lot of real world problems that you can boil down to a few metrics or a few objective functions that you’re trying to maximize.
3. Finally, of course you need quite a lot of data or experience to learn from and/or ideally an accurate and efficient simulator so you can generate more synthetic data to augment the real data that you have.
It turns out that there are a lot of problems that can be couched in these terms if you’re looking at the problem from this angle, including many important problems in science.
The Protein Folding Problem
The one that I always had in mind actually from my days of first seeing the problem here at Cambridge as an undergrad is the protein folding problem, which I’ll just quickly describe to you for those that don’t know about biology and proteins.
Proteins are incredibly important. They’re the building blocks of life. Pretty much every function in the living body depends on proteins from your neurons firing to your muscle fibers twitching. Really proteins are what makes life possible.
The protein folding problem is really easy to describe. A protein is defined by its gene sequence, its genetic sequence, which then specifies an amino acid sequence, which in nature then folds up spontaneously into usually a very beautiful protein structure. So you go from this genetic sequence to a protein structure. The reason the protein structure, the 3D structure is very important is it goes a long way to defining what function it has, what it does in the body.
So it doesn’t totally describe the function, but it has a big part to play in what it actually does in nature. The protein folding problem then is this problem of can you predict the protein structure directly from this one dimensional amino acid sequence? Can you predict that computationally, that incredible 3D structure from that sequence?
Why is this such a hard problem? Well, Levinthal, a famous protein researcher, described this in the 1960s in what's known as Levinthal's paradox: he calculated that there are roughly 10 to the 300 possible shapes that an average protein can take. And yet somehow in nature, in the body, these proteins fold up spontaneously in a matter of milliseconds.
So that’s the paradox. If there’s so many possibilities, how does nature do this? Basically how does physics achieve this? And that gives you hope that this must be tractable computationally in some reasonable amount of time because physics does solve this problem billions of times a second in the body.
AlphaFold and CASP Competition
Furthermore, what attracted me to this problem was that there was a biennial competition called CASP. You can think of it as the Olympics of protein folding. It happens every two years and is run by some amazing people led by Professor John Moult of the University of Maryland. It's been running since 1994.
It's a great competition because the organizers work with experimentalists who painstakingly find these structures using very exotic and expensive equipment like electron microscopes. They use newly discovered structures that haven't been published yet, so the organizers know the ground truth, but the hundreds of computational teams that enter each competition don't. Those teams try to predict the structures with their computational methods; there are usually around 100 proteins in the competition. At the end of the summer, the true structures are revealed, and you can compare the predictions against the real structures and measure the error.
We started the AlphaFold project in 2016, pretty much the day after we got back from the AlphaGo match in Seoul, and we entered AlphaFold 1 for the first time in 2018. We felt that the techniques were mature enough to be applied outside of games, to try and tackle really meaningful problems. We call them "root node problems": if they can be solved, they open up whole new branches and avenues of discovery that can be built on top. And protein folding was a prime example of that.
So we started working in 2016, AlphaFold 1 was ready after a couple of years, and we entered it into the CASP13 competition. For the decade prior, these bar charts show the score of the winning team in the hardest category, the hardest proteins being predicted. You can think of it as a percentage accuracy: how many of the amino acids have you placed in the right position, within a certain tolerance? You can see there was not much progress for a decade; the field was stuck at around the 60-point level. If you got to 90, you would be within the width of an atom.
So you’d be at atomic accuracy. And that’s what we were told by experimentalists was the accuracy you had to reach so that it was competitive with experimental methods. So that experimentalists could actually rely on these predictions rather than having to necessarily do the laborious painstaking work to find that structure. Just as a rule of thumb, my biologist friends would always tell me that it takes a PhD student their entire PhD, so four, five years to find the structure of just one protein. There are 200 million proteins known to science and 20,000 proteins in the human proteome.
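The accuracy score described above can be sketched as a toy function. Hedged: the real CASP metric (GDT_TS) averages over several distance cutoffs on 3-D atomic coordinates; this sketch uses a single cutoff and 1-D positions purely to show the idea of "percentage of residues placed within tolerance".

```python
def percent_within_tolerance(predicted, true, tolerance=1.0):
    """Percentage of residues whose predicted position lies within
    `tolerance` (think: angstroms) of the experimental position.
    Toy 1-D coordinates stand in for real 3-D atom positions."""
    hits = sum(abs(p - t) <= tolerance for p, t in zip(predicted, true))
    return 100 * hits / len(true)

true_positions      = [0.0, 1.0, 2.0, 3.0, 4.0]
predicted_positions = [0.2, 1.1, 2.9, 3.0, 6.0]   # last residue badly placed
score = percent_within_tolerance(predicted_positions, true_positions)
print(score)
```

On this toy example four of the five residues fall inside the tolerance, giving a score of 80; in CASP terms, the decade-long plateau was around 60 on the hardest targets, and roughly 90 corresponds to atomic accuracy.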
With AlphaFold 1, we were able to win this competition, almost 50% better than the next best system. AlphaFold 1 for the first time used machine learning techniques as the main component of the system. But it was not enough to reach atomic accuracy. We actually had to go back to the drawing board and re-architect AlphaFold 2 from scratch, using all the learnings from AlphaFold 1, to finally reach this atomic accuracy. And that led the organizers to declare that the problem had been solved at the end of 2020.
How AlphaFold Works
This is an example of how AlphaFold works visually. On the left hand side here is a very complex protein. The ground truth is in green. The predicted structure is in blue. And you can see how closely the blue overlaps the green.
On the right hand side, you can see how AlphaFold 2 works. It builds up that structure in an iterative process, recycling its own output over 192 steps. It starts as a scrunched ball of amino acids and builds out a more and more plausible structure, refining the last parts at the end until it has the finished prediction.
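The recycling idea, feeding the model's own output back in as input so the estimate improves pass by pass, can be mimicked with a trivial loop. This is illustrative only: the coordinates, step count, and update rule here are arbitrary stand-ins, not anything from AlphaFold itself.

```python
# Toy 'recycling' loop: an estimate is repeatedly refined toward a target,
# the way each AlphaFold pass starts from the previous pass's structure.
target = [1.0, 2.0, 3.0]             # stand-in for the true structure
estimate = [0.0, 0.0, 0.0]           # start as an uninformative 'scrunched ball'

for step in range(10):               # the real model runs many more steps
    # Each pass moves the current estimate halfway toward the target.
    estimate = [e + 0.5 * (t - e) for e, t in zip(estimate, target)]

error = max(abs(t - e) for t, e in zip(target, estimate))
print(round(error, 4))
```

The point of the shape is that each pass consumes the previous pass's output, so most of the improvement happens early and the final passes only refine the last details, matching the visual description above.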
The AlphaFold Database
We immediately realized that because AlphaFold is so accurate and extremely fast, predicting proteins in a matter of seconds, we could actually fold all 200 million proteins known to science. Over the course of a year, we used a lot of computers on the Google Cloud to fold all of them and then put them out freely on a database with our colleagues at EMBL-EBI just up the road at the Sanger Center just outside of Cambridge. We provided that for free unrestricted access to anyone in the world to use it.
That 200 million proteins, if you think about how long it takes to do that experimentally – four, five years per protein – it’s kind of like a billion years of PhD time done in one year. It’s amazing to think about how much science could be accelerated.
AlphaFold’s Impact on Scientific Research
And it opened up whole new avenues of exploration because many of these structures, especially for the less well studied organisms like certain types of plants, are very important for science and agricultural research. Almost none of those structures would have been found and available otherwise. Now those are all accessible. And with 200 million structures, you can look at them at an aggregate level, examining structures across species and meta-structures to see the commonalities through evolution. There are really interesting new branches of structural biology now being explored because of this work.
We thought about safety from the beginning and take our responsibility very seriously as leaders at the forefront of AI. In this case, we consulted with over 30 biosecurity and bioethics experts to ensure that what we were putting out into the world had benefits that far outweighed any associated risks. I’m very proud to say that over 2 million researchers are using it from pretty much every country in the world. It’s been cited over 30,000 times and has become a standard tool in biology research. Many of you in the audience who are PhD students are hopefully using it and making use of it. It’s just part of the standard canon now used for biology research.
Real-World Applications
It's been amazing to see what other researchers have done with all of this technology and these structures. I've just called out six of my favorite examples. People at the University of Portsmouth are using it to tackle plastic pollution in the environment, trying to design new enzymes, which are types of proteins, that can digest plastic. We're working with the Fleming Centre on antibiotic resistance, and on neglected diseases, tropical diseases that affect the poorer parts of the world, with the Drugs for Neglected Diseases initiative.
Here's a good example of where we can accelerate research in areas like malaria, leishmaniasis, and Zika virus, where many structures were not known. Now researchers can go straight to drug discovery because they have much of the structural information for those pathogens. There's been a lot of fundamental research too, such as finding the structure of the nuclear pore complex, a very important structure that lets nutrients in and out of the nucleus of the cell. There's amazing work at the Broad Institute on drug delivery, designing molecular syringes, redesigned proteins that can deliver drugs targeted to a particular part of the body. It's even being used in examining mechanisms of fertility.
The system is being used in almost every area of biology and medical research now.
AlphaFold Evolution
We've continued in the last few years to develop further improvements to these systems. We released AlphaFold 3 last year for academics to use, and we've extended it to deal with interactions. You can think of AlphaFold 2 as giving you a picture of the static protein structure, but really biology is a dynamic process, so you need to understand how different biological elements interact with each other.
This includes proteins with other proteins, but also proteins with other molecules important to life, things like DNA and RNA, and also ligands. These are small molecules, which include drug compounds: how does the protein bind with that compound? And then we have a separate line of work, AlphaProteo, which does the reverse of AlphaFold while still making use of the AlphaFold techniques. If you want to design a novel protein, maybe one that doesn't exist in nature, for a particular job or function, what is the amino acid sequence, and the genetic sequence, that will give you that structure? It's like running AlphaFold in reverse, designing new structures that will do novel things. Again, this could be extremely useful for designing drugs, antibiotics, and antibodies.
Making Complex Search Tractable
Taking a step back then, looking at all the work we’ve done in the last fifteen years, what are the implications for science and machine learning? If you think about what we’ve done with our games work and now with the scientific work that we’ve been pursuing, of which AlphaFold is our best example, it’s all about making this search tractable. You have this incredibly complex problem with many, many possible solutions, and you’ve got to find the optimal solution—a needle in the haystack of that enormous combinatorial search space. And you can’t do it by brute force.
So you have to learn this neural network model that learns about the topology of the problem so that you can efficiently guide the search to reach your goal—to maximize or find the optimal solution to the objective that you have in mind. I think this is an incredibly general solution and approach to a whole myriad of problems. Going back to the Go example: we were trying to use these systems to find the best Go move, but you could also change those nodes to be chemical compounds.
Now you’re trying to find the best molecule in chemistry space, in chemical space—the best molecule that will bind specifically to the target you’re interested in, but nothing else. This reduces the side effects and the toxicity of that compound. We’re using very similar techniques to design these molecules now as we move more and more into drug discovery.
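The idea described here, a learned model making an otherwise intractable combinatorial search efficient, can be sketched in a few lines. This is an illustrative toy, not DeepMind’s actual algorithm: the `value_model` function is a hand-written stand-in for a trained neural network, and the tuple “states” stand in for Go positions or candidate molecules.

```python
import heapq

# Toy sketch of model-guided search: a learned value function (here a
# hand-written stand-in for a trained neural network) scores partial
# solutions, so best-first search reaches the goal while expanding only
# a tiny fraction of the 3**8 = 6561 possible sequences.

def neighbors(state):
    # Branching factor 3: each step extends the sequence with a choice.
    return [state + (c,) for c in range(3)]

def value_model(state, goal):
    # Stand-in for a learned model: how promising is this partial solution?
    # (A tiny length penalty breaks ties in favor of shorter prefixes.)
    return sum(1 for a, b in zip(state, goal) if a == b) - 0.01 * len(state)

def guided_search(goal, depth):
    frontier = [(-value_model((), goal), ())]  # max-heap via negated scores
    expanded = 0
    while frontier:
        _, state = heapq.heappop(frontier)
        expanded += 1
        if len(state) == depth:
            if state == goal:
                return state, expanded
            continue
        for nxt in neighbors(state):
            heapq.heappush(frontier, (-value_model(nxt, goal), nxt))
    return None, expanded

goal = (2, 0, 1, 2, 1, 0, 2, 0)
found, expanded = guided_search(goal, len(goal))
print(found == goal, expanded)  # True, with far fewer than 6561 expansions
```

With a brute-force enumeration every one of the 6561 leaf sequences would be checked; here the score function steers the search almost straight to the goal.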
The Era of Digital Biology
I think in biology at least, I feel like we’re entering a new era now of what I like to call digital biology. I think of biology at its most fundamental level as an information processing system that’s trying to resist entropy around it.
I think that’s basically what life is. Of course, it’s a phenomenally complex and emergent information processing system. And I think that’s where AI comes in. Just like mathematics was the perfect description language for physics and physical phenomena, I think that AI is the perfect description language for biology.
It’s perfect for dealing with the complexities of the emergent behaviors and interactions that you get in a dynamic system like biology. And I think AlphaFold is a proof point of that. I hope when we look back in ten years’ time it won’t be an isolated breakthrough but will have ushered in this new golden era of digital biology.
We’re trying to progress that ourselves. We started a new spin-out company, Isomorphic Labs, to build on our AlphaFold technology and move more into the chemistry space that I was just talking about and actually try to reimagine drug discovery from first principles with AI.
Right now it takes an average of ten years for a drug to be developed and it’s extraordinarily expensive. It costs billions and billions of dollars. I’m thinking: why can’t we use these techniques to reduce that down from years to months, maybe even one day weeks, just like we reduced the discovery of protein structures from potentially years down to now minutes and seconds? We think of this as doing science at digital speed—trying to bring the best of what we do in the technology area to the natural sciences.
My dream one day is to be able to create a kind of virtual cell, a computational cell, perhaps of something very simple like a yeast cell, that you can actually run experiments on in silico. The predictions that you get out of the virtual cell will actually inform your real-world experiments in the lab. You can reduce much of the search that’s done in the wet lab and actually use the wet lab more for validation steps rather than the very expensive and slow search process.
AI’s Broader Scientific Impact
Of course, we’ve been using AI not just in biology but across science, mathematics, and medicine more generally. We’ve had a whole range of breakthroughs not just in the biological sciences but in areas like health—identifying eye disease from retinal scans, discovering new materials, helping with plasma containment in fusion reactors, and finding faster algorithms.
AI is discovering better algorithms for itself, like faster matrix multiplication; doing weather prediction; and even helping with quantum computing and error correction. And that’s just a small sample of the work we’ve been doing in the last two or three years. I think AI will be applicable to pretty much every field. I always encourage universities to start thinking very seriously about multidisciplinary work, where you apply AI to the right questions in a particular specialist field. I think there are many, many advances to be made over the next five to ten years by doing that.
The Path to AGI
I’ll just end with a more general view about not just AI for science, but the path to AGI and how close we are to that and our more general work on the original mission of AGI. We’ve been making a lot of advances in all areas of general understanding of the world. We sometimes call them world models.
We’re particularly proud of our new video model called Veo 2, which was just released at the end of last year. It’s state-of-the-art video generation and it’s able to generate videos just from a text description or a single static image.
Although some of these videos may not seem that impressive, if you think about this “chopping the tomato” one, this is like the Turing test for video models because usually you get the tomato magically coming back together, or you’re chopping through the fingers, or the knife moves off somewhere. If you think about what the systems had to do to really understand the physics of the world—or the bubbles around this blueberry here, just generating that from text, “blueberries dropping into a glass of water”—it’s doing all the physics correctly, or the motion of these little cartoon characters or the bee. It’s kind of mind-blowing really.
Even if you told me five years ago that this would be possible without building in some special understanding of physics, I would have told you that seems unlikely. But somehow these learning systems are able to learn about real-world physics just from watching many, many YouTube videos. It’s pretty remarkable that’s possible.
We’ve gone a step further with Genie 2, bringing my games background back in. This is taking those video models a step further. Now with a text instruction, you can generate a whole game.
At the bottom here we said “generate a playable world as a robot in a futuristic city” and it just comes up with this, and you can control it with QWE keys and the arrow keys. At the moment it’s only consistent for a few seconds, but we’re working to extend that so that the consistency of the game world lasts for many minutes. Then you’ve really got what I would call a world model—a real understanding of the world and how interactions in that real world work and the physics of the real world.
Safety and Responsibility
Of course, we’ve been working very hard on the safety aspects of this. From the very beginning in 2010, we were working on planning for success even though almost nobody was working on AI back then.
We imagined that it would be a twenty-year mission and amazingly we’re sort of on track fifteen years in. We were planning for success—if we were to build these kinds of transformative systems and technologies, it would come with a lot of responsibility to make sure they get deployed in a safe and responsible way.
One of the systems we built is called SynthID, which invisibly watermarks content using an adversarial AI system that slightly adjusts the pixels, text, or audio, imperceptibly to the human eye or ear. A detection system can then tell that the content was synthetically generated, whether it’s audio, image, or video. As these technologies become widely deployed, it’s going to become increasingly important that we are able to easily distinguish between synthetically generated content and real content.
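For intuition, here is a toy spread-spectrum watermark in the same spirit: perturb pixel values imperceptibly with a keyed pseudorandom pattern, then detect the mark by correlation. This is a classic textbook sketch, not Google’s actual SynthID algorithm; all names and numbers are made up.

```python
import random

# Toy spread-spectrum watermark for intuition only -- NOT Google's actual
# SynthID algorithm. Embed a keyed pseudorandom +/-1 pattern into pixel
# values at imperceptible strength, then detect it by correlating the
# image with the same keyed pattern.

KEY = 1234  # shared secret between embedder and detector

def pattern(n, key):
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels, key, strength=4):
    # Perturb each 0-255 pixel by +/-strength: invisible to the eye.
    pat = pattern(len(pixels), key)
    return [min(255, max(0, p + strength * s)) for p, s in zip(pixels, pat)]

def detect(pixels, key, threshold=2.0):
    # Mean-centred correlation: a watermarked image scores near +strength,
    # an unmarked image scores near zero.
    pat = pattern(len(pixels), key)
    mu = sum(pixels) / len(pixels)
    score = sum((p - mu) * s for p, s in zip(pixels, pat)) / len(pixels)
    return score > threshold

rng = random.Random(0)
image = [rng.randrange(40, 215) for _ in range(20000)]  # margin avoids clipping
marked = embed(image, KEY)
print(detect(marked, KEY), detect(image, KEY))
```

Without the key, the pattern looks like noise, which is why a dedicated detector is needed to separate synthetic content from real content.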
AI has incredible potential to help with our greatest challenges from climate to health. But obviously this is going to affect everyone. I think it’s really important that we engage not just the technologists in deciding this, but that we engage with a wide range of stakeholders from society.
I’ve been really pleased in the last couple of years that one of the consequences of AI becoming mainstream is that many governments have become interested in it and all parts of society. It’s been great to see these international summits. The UK hosted the first one in Bletchley Park a couple of years ago, bringing together heads of government with academia and civil society to discuss these technologies, how to put the right guardrails on it, how to make sure we embrace the opportunities but mitigate the risks that are coming down the line. I think that’s going to become increasingly important given the exponential improvement that we’re seeing with these technologies.
My shorthand for this is to say that while Silicon Valley’s mantra is “move fast and break things”—and of course that’s created a lot of advances and many of the technologies we all use every day—I think it’s not appropriate in my opinion for this type of transformative technology.
I think instead we should be trying to use the scientific method and approach it with a kind of humility and respect that this kind of technology deserves. And we don’t know a lot of things. There are a lot of unknowns around how this technology is going to develop. It’s so new. And I think with exceptional sort of care and foresight, we can get all the benefits and minimize the downsides of this.
But I think that will happen only if we start the research and the debate about it now. So, just to end: we’re now building our own big multimodal models that try to take the best of all of these different models I’ve shown you and put it into one system. We call it the Gemini series. Our latest one is Gemini 2.0, which some of you may have tried, and which is state of the art across many leading benchmarks. Building on that, I’m very excited about the next generation of assistants.
I call it a universal assistant. We call the project Astra: you have it on your phone or some other device, maybe glasses, and it starts as an assistant that you can take around with you in the real world. It helps you in everyday life, to enrich your life or to make you more productive. The next step in AI is combining that with what I’ve shown you in AlphaGo: agent-based models that are able to efficiently search through and find a good solution to a problem in a limited domain, in that case games. We want to build those types of search and planning systems on top of much more general models like Gemini, world models that understand how the real world works and can then plan and achieve things in the real world.
And of course, that’s key to things like robotics working, which I think is going to be a huge area with massive advances over the next two or three years. I’ll finish with a slight conjecture about what all this means if we think back to Turing and the work he did to lay the foundations of computer science. Given the work we’ve done, I see myself as a kind of Turing’s champion, in a way: how far can Turing machines and this idea of classical computing go? One of my favorite things to think about, from one of the lectures I took in this room, is the P versus NP problem, the famous question in computer science of what sorts of problems are tractable on classical systems.
There’s obviously a lot of great work going on in quantum computing, much of it here in Cambridge, and at Google we also have one of the top quantum computing groups in the world. There are a lot of things that are thought to require quantum computers to solve, a lot of real-world systems that we would like to understand and model. My conjecture is that classical Turing machines, the classical machines that these types of AI systems are built on, can actually do a lot more than we previously gave them credit for. If you think about AlphaFold and protein folding, proteins are quantum systems.
They operate at the atomic scale, and one might think you need quantum simulation to find the structures of proteins. And yet we were able to approximate those solutions with our neural networks. So one potential idea here is that any pattern that can be generated or found in nature, i.e. one that has some real physical structure, can be efficiently discovered and modeled by one of these classical learning algorithms like AlphaFold.
If that turns out to be true, I think it has all sorts of implications for quantum mechanics and for fundamental physics, which is something that I and many of my colleagues hope to explore, maybe with the help of these classical systems, to help us uncover what the true nature of reality might be. And that leads me back to the whole reason I started on my path in AI many, many years ago: I always believed that AGI built in this way could be the ultimate general-purpose tool for understanding the universe around us and our place in it. Thank you.
Q&A Session
MODERATOR: Great. We have time for some questions, if people have questions.
The first hand shot up just here.
AUDIENCE MEMBER: Thank you for the great talk. You have a background in neuroscience and you really like to think in terms of root-node problems. Was there ever a root-node problem you came across in neuroscience that you thought was worth tackling, and is still worth tackling, to understand biological and artificial intelligence?
DEMIS HASSABIS: Yes, there are many. That’s what I studied for my PhD, actually: memory, but also imagination, future thinking, planning. I really wanted to understand how the brain did that, and it turns out the hippocampus is involved in both, so we could maybe mimic that with some of these algorithms. And of course there are all the big questions around creativity, dreaming, and consciousness. I think building AI and then comparing it to the human mind is one of the best ways we’ll make progress on those sorts of root-node problems: what is the nature of consciousness? And is there something special about its instantiation on the substrate of the brain versus algorithmically mimicking it in silicon?
MODERATOR: Great. A question just here.
AUDIENCE MEMBER: Hi. I have two questions, actually. First, since DeepMind was founded before the deep learning revolution, what was your state of mind then? How did you expect to make progress had deep learning not taken off? And second, since you’ve had intimate experience with such challenging, high-dimensional problems, and we know that gradient descent and its variants converge only to locally optimal solutions, were you surprised that anything worked at all in these systems? And do you think that much of nature is similarly suboptimal, so that we could potentially build something more optimal than nature?
DEMIS HASSABIS: Yes, look, I think they’re both great questions. On the first one: that’s partly why we called it DeepMind, because the “deep” refers to deep learning. The early parts of the field weren’t called deep learning then, but it was just becoming common. There were these Boltzmann machines and hierarchical neural networks that Geoffrey Hinton had invented just a few years before, around 2005 and 2006.
It seemed like a super promising idea even back then to those of us who had come across it in academia. The other thing we bet on was reinforcement learning, and the combination of the two, which again is coming back into vogue, was also what we needed to solve something like AlphaGo. You need both parts: the deep learning to model the environment and the world, and the reinforcement learning to make plans and take action in the world. And there were two reasons we bet on that even when it was just the beginning. One is that we knew the classical methods, these expert systems, would not scale.
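The reinforcement-learning half of that bet can be sketched with tabular Q-learning on a toy corridor task. In DeepMind’s systems the Q-table is replaced by a deep neural network (as in DQN), but the update rule below is the same; the environment, constants, and names here are illustrative, not DeepMind’s actual code.

```python
import random

# Tabular Q-learning on a six-state corridor: the agent learns from trial
# and error that stepping right reaches a reward at the far end.

N = 6                # corridor states 0..5, reward on reaching state 5
ACTIONS = (1, -1)    # step right / step left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = random.Random(0)

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit current value estimates.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(N - 1, max(0, s + a))
        reward = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state value.
        best_next = 0.0 if s2 == N - 1 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy walks straight to the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1, 1]
```

The deep-learning half of the bet replaces the `Q` dictionary with a neural network that generalizes across states too numerous to tabulate, which is what made Atari and Go feasible.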
Actually, that’s one of the things I learned here and during my postdoc at MIT, which were the kind of churches, if you like, of the classical methods, these expert systems. That’s something else you can learn in your university courses: not just what to do, but also what not to do and why. I thought about it and felt those methods would never scale to the kinds of problems I wanted to solve with AI, whereas the learning systems seemed to have unlimited potential, even though at the beginning they were a lot harder to get to do anything significant, because they weren’t scaled up enough. The other reason we started DeepMind in 2010 is that we could see the computing paradigm shifting on the hardware side, with GPUs and other things which were, of course, also invented for gaming.
And it turns out everything is a matrix multiplication, right? Intelligence, gaming, and computer graphics. So all of those different influences came together, and the understanding of neuroscience, with fMRI machines, had advanced a lot in the previous ten years too. I felt it was the perfect time to bring all of that together, back in 2010. And we were betting on that not necessarily because we knew it would work, but because we were pretty confident the other methods would not.
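As a minimal illustration of the “everything is a matrix multiplication” point: a neural-network layer really is just a matrix product followed by a nonlinearity, the same primitive GPUs were built to accelerate for graphics. The weights and inputs below are arbitrary numbers chosen for illustration.

```python
# A neural-network layer is y = activation(W @ x + b): one matrix
# multiplication plus an elementwise nonlinearity. Pure-Python sketch.

def matmul(W, x):
    # Multiply matrix W (list of rows) by vector x.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, u) for u in v]

def layer(W, b, x):
    return relu([s + bi for s, bi in zip(matmul(W, x), b)])

W = [[1.0, -2.0], [0.5, 1.0]]  # arbitrary illustrative weights
b = [0.0, -1.0]
x = [3.0, 1.0]
print(layer(W, b, x))  # [1.0, 1.5]
```

Because graphics pipelines and neural networks share this one primitive, hardware built for games turned out to be exactly what large-scale learning needed.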
That’s what the AI winters were basically about: people trying to push those expert systems. On the second question: first of all, it is surprising that some of these things converge, and actually we weren’t sure they would. Take the Atari work I showed you. For the first couple of years, nothing worked. We couldn’t even get one point on Pong, if some of you remember it, one of the first computer games, that sort of tennis, bat-and-ball game, the simplest game you could imagine. We weren’t able to get a single point.
So we were wondering whether we were ten or twenty years too early, like Babbage turned out to be with his Difference Engine, right? Amazing ideas, but in the end he was fifty or a hundred years too early. You want to be about five years ahead of your time, not fifty. Otherwise, you’ll be in for a lot of pain, like Babbage was.
So we were worried about that, but then it did converge, and that gave us the confidence to tackle the harder problems. And on the last part of your question, about the things in nature: I don’t think they’re suboptimal. They’re probably pretty optimal, actually, because they’ve gone through an evolutionary process, not just life and biology but geological and physical processes too.
Asteroids and physical phenomena combine together, and things survive some amount of time because they’re stable over time, right? And if they’re stable over time, then there’s probably some structure there that’s learnable. That would be my conjecture.
MODERATOR: Great. So question down here.
AUDIENCE MEMBER: What do you think about building high bandwidth brain machine interfaces and implantable memory and reasoning modules so that humans can be further empowered to make discoveries autonomously as opposed to only talking to AI in the cloud?
DEMIS HASSABIS: Yes, I love that area. I’ve followed it carefully and helped people building things like EEG caps. Of course, the problem is the resolution of these devices for getting readouts from the brain, and ideally you’d want both read and write. But I’m very fascinated by projects like Neuralink and chips in the brain.
Obviously, right now that’s for veterans and people trying to get function back in their bodies. I think there are going to be amazing advances there: people will be able to walk again after breaking their backs, and things like that. There are going to be some incredible advances in the medical sciences. But beyond that, maybe once it becomes routine and the surgery is safe and there are safe ways of doing this, I could imagine it would be one way for us to keep up with the technology. And in some senses, it’s no different from what we already have today, with our technologies all around us.
We have our phones with us 24/7, and computers and other things, so we’re already almost symbiotic with our technology. It would be one step further, of course, to have it attached to you. But maybe it’s for the philosophers in the room to say whether there’s a hard boundary between technology that’s attached to you and something you just carry around with you all the time.
MODERATOR: Great. Some questions over here. There’s one just…
AUDIENCE MEMBER: Hello. What do you think about the speed at which artificial intelligence is developing and its effects on economic development? There are a lot of people out there deciding on careers right now, and the rapid change in the landscape makes it really difficult for them to predict what they should go into.
DEMIS HASSABIS: Yes. It’s a very complicated one because, as you say, things are changing at lightning speed. We were discussing this with Alastair earlier: even designing a three-year computer science course is quite difficult when the underlying material changes in less than three years.
I think the only thing we can say for sure is that there’s going to be a lot of change, but that brings with it both disruption and opportunity. I’ll give you an example on coding. I would still recommend that you get good at coding and maths, because you’ll be able to use these new tools in a much deeper way if you understand how they’re built. On the other hand, I think coding is going to become available to many, many more types of people, because with AI you’ll probably be able to program in natural language rather than in a quite complicated computer language. That will open up these fields to creative people, to build games, make films, and make applications where the balance is more on the creative side than on the engineering side. But I also think it will enhance engineers, letting them do 10x what they can do today.
Adapting to New Tools
So I think it’s difficult to know. But what I would say is: focus on embracing those tools in your spare time, and train yourself to be really good at picking up new information extremely quickly, because I think that’s basically what’s going to be needed over the next ten years.
MODERATOR: Okay. We’ve got one question just in the here, the yellow and black top.
AUDIENCE MEMBER: Do you think that there are any biological processes, behaviors, or patterns which can’t be modeled with existing deep learning techniques? I’m not talking about just throwing more compute at it and making a bigger and bigger model. Do you think there are some processes which physically cannot be modeled with the architectures we have?
DEMIS HASSABIS: There are certainly lots of processes that can’t be modeled today, but this goes back to what I said at the end of the talk: I’m not sure that there are any in principle. I think in the end, if physics can solve it and there’s some structure there to be learned, then probably, with enough examples, one could reverse-engineer a model of it.
And then I don’t see any theoretical reason why a classical system, albeit a very complex one, could not make predictions or simulations of that biological system. So I don’t really see any limit in the long term.
I mean, there are lots of abstract things, like factorizing large numbers in cryptography, which are human-made systems where there may not be any structure. There may be structure in the natural numbers; lots of people conjecture there is. If there is, then that will also be learnable. If there isn’t, and it’s a kind of uniform distribution, then you would need a quantum computer to crack cryptography and things like that. So those are open conjectures.
But I think most things in nature have evolved over geological or biological physics time. And so that suggests to me there is some structure to learn. So that makes the search or the prediction potentially tractable.
MODERATOR: Okay. And last question then. We’ll have the person in the pink shirt.
AUDIENCE MEMBER: This question comes on behalf of the Cambridge University Game Development Society. You mentioned the Genie 2 model and how it’s currently stable for a few seconds of consistency, and you hope eventually to reach a few minutes. But the games we actually play have consistency that’s indefinite. When you’re playing Minecraft, you expect to turn around and the village is still there, right? So do you see your current model being integrated into a workflow? How do you see AI, and the models you’re working on, integrating into game development in the coming decades?
DEMIS HASSABIS: Yes. Well, look, I think there are many ways AI is going to come in. One is tools to build the assets you need for games: 3D models, animations. I think that’s all going to arrive in the next couple of years.
You can also think of AI for game balancing. Imagine you design a game and overnight it plays a million playthroughs of that game. Then in the morning, as a game designer, you get a report: these things are unbalanced, reduce the power of this unit, or whatever it is.
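The overnight-balancing idea can be sketched as a Monte Carlo simulation: automated playthroughs of a toy duel, aggregated into a win-rate report that flags anything far from an even matchup. The units, stats, and thresholds below are all made up for illustration.

```python
import random

# Sketch of overnight game balancing: run many automated playthroughs of
# a toy two-unit duel and produce a morning report of win rates, flagging
# any unit whose win rate drifts far from 50%.

UNITS = {"knight": {"hp": 30, "dmg": 6}, "archer": {"hp": 20, "dmg": 9}}

def duel(a, b, rng):
    # Units alternate attacks; damage has a small random bonus.
    ha, hb = UNITS[a]["hp"], UNITS[b]["hp"]
    while True:
        hb -= UNITS[a]["dmg"] + rng.randint(0, 3)
        if hb <= 0:
            return a
        ha -= UNITS[b]["dmg"] + rng.randint(0, 3)
        if ha <= 0:
            return b

def balance_report(n=10000, seed=0):
    rng = random.Random(seed)
    wins = {"knight": 0, "archer": 0}
    for _ in range(n):
        wins[duel("knight", "archer", rng)] += 1
    # Flag any unit more than 5 points away from an even matchup.
    return {unit: (w / n, "rebalance" if abs(w / n - 0.5) > 0.05 else "ok")
            for unit, w in wins.items()}

print(balance_report())  # the knight wins far more than half the duels
```

A real pipeline would use learned agents rather than dice rolls, but the shape is the same: thousands of cheap simulated playthroughs condensed into a short report a designer can act on in the morning.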
I also think about bug testing for open-world games. I used to make simulation games, open-world games, and they’re a nightmare to bug test, because the whole point of them is that the player can do almost anything and the game will react. So how do you test ten million people each having their own unique journey through your game? Having AI players play it before you release it could help you catch a lot of those bugs.
And then, excitingly, there are AI characters that are much more lifelike and that move the storyline on. I used to dream about massive multiplayer worlds where the AI characters were actually intelligent and updated their beliefs and their storylines based on what the players were doing, so it felt like a much more living, realistic world. I think we’re on the cusp of building those types of games.
And finally, the world model we’re building: that’s more about general AI. It’s an expression of being able to understand the world. Does your model understand the world? Well, if it can generate the world consistently for some amount of time, then it must in some sense understand something about the underlying physics; that’s empirical evidence of understanding. So that’s more for general intelligence. Maybe one day we’ll have this holodeck thing, where you can just imagine something and it’s all there around you. We’ll probably be able to have that once we have AGI, but I think that’s still a ways off.
MODERATOR: Great. Well, it seems like a nice place to finish, a question on games, returning back to games. But thank you all so much for coming. And a particular special thank you to Demis for coming in and talking to us today. So thank you.