
TRANSCRIPT: How AI Is Saving Billions of Years of Human Research Time: Max Jaderberg

Read the full transcript of research scientist Max Jaderberg’s talk, “How AI Is Saving Billions of Years of Human Research Time,” delivered at the TED conference in 2024.


TRANSCRIPT:

The Difficulty of Research and the Impact of AlphaFold

MAX JADERBERG: So a while ago now, I did a PhD, and I actually thought it would be quite easy to do research. Turns out it was really hard. My PhD was spent coding up neural network layers and writing CUDA kernels, very much computer-based science. And at that time, I had a friend who worked in a lab doing real messy science. He was trying to work out the structure of proteins experimentally. And this is a really difficult thing to do. It can take a whole PhD’s worth of work just to work out the structure of a single new protein system.

And then 10 years later, the field that I was in, machine learning, revolutionized his world of protein structure. A neural network called AlphaFold, created by DeepMind, can very accurately predict the structure of proteins, solving this 50-year challenge of protein folding. And just two weeks ago, this won the Nobel Prize in Chemistry. And it’s estimated that since the release of this model, we’ve saved over a billion years of research time. A whole PhD’s worth of work is now approximated by a couple of seconds of neural network time.

And to my friend, this might sound a bit depressing, and I’m sorry about that, but to me, this is just really an incredible thing. The sheer scale of new knowledge about our protein universe that we now have access to, due to an AI model that’s able to replace the need for real-world experimental lab work. And that frees up our precious human time to begin probing the next frontiers of science.

Continued Breakthroughs with AI in Science

Now, some people say that this is a one-time-only event, that we can’t expect these sorts of AI-driven breakthroughs in science to be repeated. And I disagree. We will continue to see breakthroughs in understanding our real, messy world with AI. Why? Because we now have the neural network architectures that can eat up any data modality you throw at them. And we have tried-and-tested recipes for incorporating any possible signal in the world into these learning algorithms.

And then we have the engineering and infrastructure to scale these models to whatever size is needed to take advantage of the massive amounts of compute power that we can create. And finally, we’re always creating new ways to record and measure every detail of our real messy world that then creates even bigger data sets that help us train even richer models.

A New Paradigm: AI Analogs of the Real World

And so this is a new paradigm in front of us: creating AI analogs of our real, messy world. This new AI paradigm takes our real, messy, natural world and learns to recreate elements of it with neural networks. And the reason these AI analogs are so powerful is that it’s not just about understanding, approximating or simulating the world for its own sake; it actually gives us a little virtual world that we can experiment in at scale to ultimately create new knowledge.

And you can imagine that this experimentation against our AI analogs can also happen in silico, in a computer, with other agents, in a loop of open-ended discovery, ultimately creating new knowledge that we can take back out to change the world around us. And this isn’t science fiction. Right now, we have thousands of graphics cards burning, training foundational models of our own microbiological world, and then agents that are probing these AI analogs to design new molecules that could be potential new drugs.


AI-Driven Drug Design

And I want to show you exactly how this process works for us, because I believe it can serve as a blueprint for a whole new wave of AI-driven scientific and technological progress. Now, drug design is such an important area to focus on because it’s actually becoming harder and harder to design new drugs. This is a graph of the number of new drugs created per billion dollars of R&D spent over time. And what you can see is that the number of new drugs per dollar spent is exponentially decreasing. It’s becoming more and more expensive to create a new drug.

Now, during this same time period, we’ve had a huge amount of advancement in the capabilities of AI, driven by a whole host of algorithmic breakthroughs. But one of the secret sauces of this advancement in AI has also been Moore’s law: the amount of computing power has just been exponentially increasing over time. And these days, it perhaps isn’t Moore’s law that we should care about, but Jensen’s law, after Jensen Huang, the CEO of Nvidia, for the exponential increase in GPU FLOPS that are now powering our neural networks.

So really the question is, how do we bring this world of AI and machine learning to that of drug design? Can we use our AI analogs to reverse this curse of Eroom’s law (Moore’s law spelled backwards, describing the declining productivity of drug R&D) and jump on this exponential wave of GPU FLOPS powering our neural networks? Actually bringing these worlds together and driving this change is the day-to-day responsibility that I feel.

Modeling Biology with AI

So how can we go about modeling biology? Well, if we were in the world of physics, for example, modeling the universe, then we can actually write down a lot of the theory by hand with maths and very accurately predict, for example, the unfolding of the universe, even millions of light years away. But we can’t do that for the incredibly complex dynamics within ourselves.

We can’t just write down some equations for ourselves. We can perhaps write down the theory of how atoms interact.