
Here is the full transcript and summary of Ilya Sutskever’s talk, “The Exciting, Perilous Journey Toward AGI,” delivered at the TED conference.
In this TED talk, OpenAI’s cofounder and chief scientist Ilya Sutskever discusses the concept of digital brains and how they are the foundation of artificial intelligence. He explains his motivations for getting into AI, the potential impact of AGI, and the increasing popularity of the idea that computers will become truly intelligent and eventually surpass human intelligence. Sutskever believes that as AI continues to advance, people will start to act in unprecedented ways, leading to increased collaboration and overcoming the challenges posed by this technology.
TRANSCRIPT:
We’ve all experienced the progress of artificial intelligence. Many of you may have spoken with a computer, and the computer understood you and spoke back to you. With the rate of progress being what it is, it’s not difficult to imagine that at some point in the future, our intelligent computers will become as smart as or smarter than people.
And it’s also not difficult to imagine that when that happens, the impact of such artificial intelligence is going to be truly, truly vast. And you may wonder, is it going to be okay when technology is so impactful?
And here my goal is to point out the existence of a force that many of you may have not noticed that gives me hope that indeed we will be happy with the result.
So artificial intelligence, what is it and how does it work? Well it turns out that it’s very easy to explain how artificial intelligence works. Just one sentence. Artificial intelligence is nothing but digital brains inside large computers. That’s what artificial intelligence is.
Every single interesting AI that you’ve seen is based on this idea.
Now I find it interesting that the seat of intelligence in human beings is our biological brain. It is fitting that the seat of intelligence in artificial intelligence is an artificial brain.
Here I’d like to take a digression and tell you about how I got into AI. There were three forces that pulled me into it. The first one was that when I was a little child at around the age of five or six, I was very struck by my own conscious experience. By the fact that I am me and I am experiencing things. That when I look at things, I see them.
This feeling over time went away, though by simply mentioning it to you right now, it comes back. But this feeling of that I am me, that you are you, I found it very strange and very disturbing, almost. And so when I learned about artificial intelligence, I thought, wow, if we could build a computer that is intelligent, maybe we will learn something about ourselves, about our own consciousness. That was my first motivation that pulled me towards AI.
The second motivation was more pedestrian in a way. I was simply curious about how intelligence works. And when I was a teenager, an early teenager in the late 90s, the sense that I got is that science simply did not know how intelligence worked.
There was also a third reason, which is that it was clear to me back then that artificial intelligence, if it worked, would be incredibly impactful. Now, it wasn’t at all obvious that it would be possible to make progress in artificial intelligence. But if it were possible to make progress in artificial intelligence, that would be incredibly impactful.
So these were the three reasons that pulled me towards AI. That’s why I thought that’s a great area to spend all my efforts on.
So now let’s come back to our artificial intelligence, the digital brains. Today, these digital brains are far less smart than our biological brains. When you speak to an AI chatbot, you very quickly see that it’s not all there, that it’s, you know, it understands mostly, sort of, but you can clearly see that there are so many things it cannot do and that there are some strange gaps. But this situation, I claim, is temporary.
As researchers and engineers continue to work on AI, the day will come when the digital brains that live inside our computers will become as good as, and even better than, our own biological brains. Computers will become smarter than us. We call such an AI an AGI, artificial general intelligence: the point at which we can teach an AI to do anything that, for example, I can do, or that someone else can.
So although AGI does not exist today, we can still gain a little bit of an insight into the impact of AGI once it’s built. It is completely obvious that such an AGI will have a dramatic impact on every area of life, of human activity, and society. And I want to go over a quick case study. This is a narrow example of a very, very broad technology.
The example I want to present is healthcare. Many of you may have had the experience of trying to go to a doctor. You need to wait for many months, sometimes. And then when you do get to see a doctor, you get a small, very limited amount of time with the doctor. And furthermore, the doctor, being only human, can have only limited knowledge of all the medical knowledge that exists. And then by the end of it, you get a very large bill.
Well, if you have an intelligent computer, an AGI, that is built to be a doctor, it will have complete and exhaustive knowledge of all medical literature, and it will have billions of hours of clinical experience. It will be always available and extremely cheap. When this happens, we will look back at today’s healthcare similarly to how we look at 16th-century dentistry, when they would tie people down with belts and use this drill. That’s how today’s healthcare will look.
And again, to emphasize, this is just one example. This is just one example. AGI will have dramatic and incredible impact on every single area of human activity. But when you see impact this large, you may wonder, gosh, isn’t this technology too impactful? And indeed, for every positive application of AGI, there will be a negative application as well.
This technology is also going to be different from technology that we are used to, because it will have the ability to improve itself. It is possible to build an AGI that will work on the next generation of AGI. The closest analog we have to this kind of rapid technological improvement is the Industrial Revolution, where the material condition of human society had been very, very constant, and then there was a rapid increase, rapid growth.
With AGI, something like this could happen again, but on a shorter timescale. And then furthermore, there are concerns around if an AGI ever becomes very, very powerful, which is possible. Maybe it will want to go rogue, being that it is an agent. So this is a concern that exists with this unprecedented, not yet existing technology.
And indeed, when you look at all the positive potential of AGI, and all the concerning possibilities of AGI as well, you may say, gosh, where is this all headed? One of my motivations in creating OpenAI was, in addition to developing this technology, also to address the questions that are posed by AGI, the difficult questions, the concerns that we raised.
In addition to working with governments and helping them understand what is coming and prepare for it, we are also doing a lot of research on addressing the technological side of things, so that the AI will never want to go rogue. And this is something which I’m working on as well.
But here is the thing to note: AI and AGI is really the only area of the economy where there is a lot of excitement and a lot of investment, and everyone is working on it. There’s a huge number of labs in the world trying to build the same thing. Even if OpenAI takes the desirable steps that I mentioned, what about the rest of the companies and the rest of the world?
This is where I want to make my observation about the force that exists. And this observation is this. Consider the world one year ago, as recently as one year ago. People weren’t really talking about AI, not in the same way at all. What happened? We all experienced what it’s like to talk to a computer and to be understood.
The idea that computers will become really intelligent and eventually more intelligent than us is becoming widespread. It used to be a niche idea that only a few enthusiasts and hobbyists and people who were very into AI were thinking about. But now everyone is thinking about it.
And as AI continues to make progress, as technology continues to advance, as more and more people see what AI can do and where it is headed towards, then it will become clear just how dramatic, incredible, and almost fantastical AGI is going to be and how much trepidation is appropriate.
And what I claim will happen is that people will start to act in an unprecedentedly collaborative way out of their own self-interest. It’s already happening right now. You see the leading AGI companies starting to collaborate, for a specific example, through the Frontier Model Forum. And we can expect that companies that are competitors will share technical information to make their AI safe.
We may even see governments do this. For another example, at OpenAI we really believed in how dramatic AGI is going to be. So one of the ideas that we have been operating by, and it’s been written on our website for five years now, is that when technology gets to the point where we are very, very close to AGI, to computers smarter than humans, and if some other company is far ahead of us, then rather than compete with them, we will help them out, join them, in a sense.
And why do that? Because we feel, we appreciate how incredibly dramatic AGI is going to be. And my claim is that with each generation of capability advancements, as AI gets better, and as all of you experience what AI can do, as people who run AI efforts and AGI efforts and people who work on them will experience it as well, this will change the way we see AI and AGI, and it will change collective behavior.
And this is an important reason why I’m hopeful that despite the great challenges that are posed by this technology, we will overcome them.
Thank you.
SUMMARY OF THIS TALK:
Ilya Sutskever’s talk “The Exciting, Perilous Journey Toward AGI” delves into the evolution and future impact of artificial intelligence, particularly the advent of Artificial General Intelligence (AGI). Here are the key takeaways from his talk:
- Rapid Progress of AI: Sutskever emphasizes the swift advancement in AI, highlighting how computers are increasingly capable of understanding and responding to human speech. He suggests that it’s conceivable that AI could eventually match or surpass human intelligence.
- Impact of AGI: The potential impact of AGI on society and various sectors is projected to be vast and transformative. Sutskever stresses the need for caution and preparedness as we approach this new era.
- Nature of AI: He simplifies the definition of AI as digital brains within large computers, underscoring that all significant AI developments are based on this principle.
- Personal Motivation: Sutskever shares his journey into AI, driven by a fascination with consciousness, a curiosity about the workings of intelligence, and the recognition of AI’s potential transformative impact.
- Current State vs. Future of AI: He observes that today’s AI, although impressive, is still limited compared to human intelligence. However, he predicts this gap will close as research and engineering continue to advance.
- AGI’s Revolutionary Potential: The talk highlights how AGI could revolutionize various fields, using healthcare as an example where AGI could dramatically improve access, reduce costs, and enhance the quality of care.
- Risks and Ethical Concerns: Sutskever acknowledges the dual nature of AGI, where for every positive application, there could be a negative one. He also mentions the unique self-improving capability of AGI, which could lead to rapid technological advancements, but also poses significant risks.
- Addressing AGI Challenges: OpenAI, co-founded by Sutskever, aims to address the technological and ethical challenges posed by AGI, working towards ensuring it doesn’t “go rogue.”
- Changing Perceptions and Collaboration: He notes a shift in public and industry perception towards AI and AGI. As understanding of AI’s potential grows, there’s an increasing trend towards collaboration among companies and governments to ensure safe and beneficial development of AGI.
- Optimism for the Future: Despite the challenges, Sutskever remains hopeful that the collective efforts and evolving perspectives will lead to a future where AGI’s benefits are maximized while mitigating its risks.
In summary, Sutskever’s talk outlines the significant advancements in AI, the exciting and daunting prospect of AGI, and the need for responsible development and collaboration to harness its potential safely.