
Sam Altman: GPT-4, ChatGPT, and the Future of AI (Transcript)

Transcript of Lex Fridman Podcast with OpenAI CEO Sam Altman on GPT-4, ChatGPT, and the Future of AI

TRANSCRIPT

LEX FRIDMAN: The following is a conversation with Sam Altman, CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL·E, Codex, and many other AI technologies which both individually and together constitute some of the greatest breakthroughs in the history of artificial intelligence, computing, and humanity in general.

Please allow me to say a few words about the possibilities and the dangers of AI in this current moment in the history of human civilization. I believe it is a critical moment. We stand on the precipice of fundamental societal transformation where soon (nobody knows when, but many, including me, believe it is within our lifetime) the collective intelligence of the human species begins to pale in comparison, by many orders of magnitude, to the general superintelligence in the AI systems we build and deploy at scale.

This is both exciting and terrifying. It is exciting because of the innumerable applications we know and don’t yet know that will empower humans to create, to flourish, to escape the widespread poverty and suffering that exists in the world today, and to succeed in that old all-too-human pursuit of happiness.

It is terrifying because of the power that superintelligent AGI wields to destroy human civilization, intentionally or unintentionally. The power to suffocate the human spirit in the totalitarian way of George Orwell’s 1984 or the pleasure-fueled mass hysteria of Brave New World, where, as Huxley saw it, people come to love their oppression, to adore the technologies that undo their capacities to think. That is why these conversations with the leaders, engineers, and philosophers, both optimists and cynics, are important now.

These are not merely technical conversations about AI. These are conversations about power, about companies, institutions, and political systems that deploy, check, and balance this power, about distributed economic systems that incentivize the safety and human alignment of this power, about the psychology of the engineers and leaders that deploy AGI, and about the history of human nature, our capacity for good and evil at scale.

I’m deeply honored to have gotten to know and to have spoken with, on and off the mic, many folks who now work at OpenAI, including Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, Andrej Karpathy, Jakub Pachocki, and many others. It means the world that Sam has been totally open with me, willing to have multiple conversations, including challenging ones, on and off the mic. I will continue to have these conversations to both celebrate the incredible accomplishments of the AI community and to steel-man the critical perspective on major decisions various companies and leaders make, always with the goal of trying to help in my small way. If I fail, I will work hard to improve. I love you all. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description.

And now, dear friends, here’s Sam Altman.

LEX FRIDMAN: High level, what is GPT-4? How does it work? And what is most amazing about it?

SAM ALTMAN: It’s a system that we’ll look back at and say it was a very early AI. And it will, it’s slow, it’s buggy, it doesn’t do a lot of things very well. But neither did the very earliest computers. And they still pointed a path to something that was going to be really important in our lives, even though it took a few decades to evolve.

LEX FRIDMAN: Do you think this is a pivotal moment? Like out of all the versions of GPT, 50 years from now, when they look back at an early system that was really kind of a leap, you know, in a Wikipedia page about the history of artificial intelligence, which of the GPTs is the one they put?

SAM ALTMAN: That is a good question. I sort of think of progress as this continual exponential. It’s not like we could say here was the moment where AI went from not happening to happening. And I’d have a very hard time, like pinpointing a single thing. I think it’s this very continual curve. Will the history books write about GPT one or two or three or four or seven? That’s for them to decide. I don’t really know. I think if I had to pick some moment, from what we’ve seen so far, I’d sort of pick ChatGPT. It wasn’t the underlying model that mattered. It was the usability of it, both the RLHF and the interface to it.

LEX FRIDMAN: What is ChatGPT? What is RLHF? Reinforcement learning from human feedback? What was that little magic ingredient to the dish that made it so much more delicious?

SAM ALTMAN: So we train these models on a lot of text data. And in that process, they learn the underlying — something about the underlying representations of what’s in there. And they can do amazing things. But when you first play with that base model, as we call it after you finish training, it can do very well on evals. It can pass tests. It can do a lot of — you know, there’s knowledge in there, but it’s not very useful, or at least it’s not easy to use, let’s say.

And RLHF is how we take some human feedback. The simplest version of this is show two outputs, ask which one is better than the other, which one the human raters prefer, and then feed that back into the model with reinforcement learning. And that process works remarkably well with, in my opinion, remarkably little data to make the model more useful. So RLHF is how we align the model to what humans want it to do.
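The pairwise-comparison step Sam describes can be sketched in a toy form: fit a reward model so that the output human raters preferred scores higher than the one they rejected, using the logistic (Bradley-Terry) preference loss commonly associated with RLHF; that learned reward would then drive the reinforcement-learning update. Everything below is a hypothetical illustration (a linear reward model over made-up feature vectors with synthetic "preferences"), not OpenAI's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(w, x):
    """Toy linear reward model: scalar score for a feature vector x."""
    return w @ x

def train_reward_model(preferred, rejected, lr=0.1, steps=200):
    """Fit w by gradient descent on the pairwise logistic loss
    -log sigmoid(r(preferred) - r(rejected)), summed over comparison pairs."""
    w = np.zeros_like(preferred[0])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for a, b in zip(preferred, rejected):
            p = 1.0 / (1.0 + np.exp(-(reward(w, a) - reward(w, b))))
            grad += (p - 1.0) * (a - b)  # gradient of -log sigmoid(margin)
        w -= lr * grad / len(preferred)
    return w

# Synthetic comparisons: pretend raters prefer outputs whose first
# feature is larger (a stand-in for "more helpful").
preferred = [rng.normal(size=4) + np.array([1.0, 0, 0, 0]) for _ in range(50)]
rejected  = [rng.normal(size=4) - np.array([1.0, 0, 0, 0]) for _ in range(50)]

w = train_reward_model(preferred, rejected)

# The learned reward should rank most preferred outputs above rejected ones.
wins = sum(reward(w, a) > reward(w, b) for a, b in zip(preferred, rejected))
print(wins / len(preferred))
```

In a real pipeline the reward model is itself a large neural network, and its scores are used as the reinforcement signal (e.g. via PPO) to fine-tune the language model; the point of the sketch is only how remarkably little structure the pairwise "which is better?" signal needs to start shaping a reward.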

LEX FRIDMAN: So there’s a giant language model that’s trained on a giant data set to create this kind of background wisdom knowledge that’s contained within the internet. And then somehow adding a little bit of human guidance on top of it through this process makes it seem so much more awesome.

SAM ALTMAN: Maybe just because it’s much easier to use.