
Fed’s Michelle Bowman Interviews OpenAI CEO Sam Altman (Transcript)

Read the full transcript of OpenAI CEO Sam Altman in conversation with Federal Reserve Vice Chair Michelle Bowman at the Fed’s Integrated Review of the Capital Framework for Large Banks Conference in Washington, July 22, 2025.

Opening Remarks and Welcome

Michelle Bowman: We’ll let some more people trickle in and then we’ll get started. Well, good afternoon, everyone. Thank you so much for being with us for this day of our first-ever Capital Conference. We’re really looking forward to all of the conversations we’re having today, especially what we’re going to take away as we continue our work on capital, both within the interagency process and at the Federal Reserve, and as we look to the future of banking more generally.

I also want to take a moment to thank our panel participants before we get started with our fireside chat this afternoon. Thank you for taking time to be a part of this important conversation as we’re thinking about what regulation will look like in this space and in many others. We really look forward to engaging with you as we continue to have these discussions.

But today, in many ways, is about the future of banking, and with that in mind, we’d now like to turn to another influence that’s shaping innovation and finance. While innovation has always played a role in the evolution of the banking industry, it’s becoming clear that new technologies are not merely incremental improvements but potentially huge leaps that could fundamentally alter the structure and function of our financial system.

One of these technologies, of course, is artificial intelligence. And I can’t think of anyone who’s better situated or prepared to discuss AI and the role of innovation in transforming finance, and our economy more generally, than Sam Altman, the co-founder and CEO of OpenAI. Sam, thank you for making time to be with us today. I want to welcome you. Thank you for being here.

Sam Altman: Thank you very much for having me.

The Current AI Landscape

Michelle Bowman: It might be helpful just to sort of frame where we are with the landscape of AI and innovation more broadly. Would you just kind of set that framework for us?

Sam Altman: Sure. Only five years ago, AI was still thought of as something in the distant future, if it was going to happen at all. And even two and a half years ago, which was right around the time ChatGPT launched, it still hadn’t moved much past the nerds in Silicon Valley. ChatGPT launched on November 30th of 2022. That was even before GPT-4. And since then, the progress has been quite rapid. The adoption and the economic impact have also started to be quite rapid.

Just last week, we had a model that was able to achieve gold-level performance on the IMO, the International Mathematical Olympiad. If you had told most people in the field even a few years ago that this would happen, I think they would have said, “Absolutely not. That’s as good as our smartest humans who are true experts in their field.”

We’re now hearing from scientists who say they’re two or three times more productive. We’re hearing from computer programmers who say they’re 10 times more productive. That has completely changed what it means to write software. We already have systems that can perform at expert-level intelligence in many, many fields. Now, they can’t yet work on tasks over as long a time horizon as humans can, so there’s still a big limitation there. But even if progress were frozen right now, which of course it won’t be, I think we still have years ahead of us for society and the economy to really digest this technology and figure out what the impact is going to be.

Intelligence Too Cheap to Meter

Sam Altman: There was a saying for a long time that I thought was great, and that we should try to get to again: “electricity too cheap to meter.” We didn’t quite deliver on that as a society, although I think we still should. It does in fact look like we’re about to deliver on “intelligence too cheap to meter.” We’ve been able to drive down the cost of each unit of intelligence by more than a factor of 10 each year for the last five years. It looks like we’ll do that for the next five years too, maybe even more.

This weekend, I used one of our upcoming models to do a computer programming task that I had wanted to do. I’m sort of a home automation nerd, and I wanted the lights and music in my house to do this specific thing. I knew that before this technology, it would have taken me days to do that. I was hopeful that with this technology, given our recent progress, I’d be able to do it in hours. I was able to do it in five minutes. The AI did almost all of the work.

This is something that just a year ago you would have paid a very high-end programmer for 20 hours, 40 hours, something like that, to do, and an AI did it for probably less than a dollar’s worth of compute tokens. This is an amazing change, and the speed with which it has happened, and with which it will continue for the next few years, I think is still quite unappreciated.

We weren’t sure, even a year ago, how much further our current research roadmap would carry us, or whether we were going to hit some sort of limit. At this point, it looks like we’ve got many years of almost certain progress ahead of us.

AI’s Unprecedented Impact on Productivity

Michelle Bowman: Well, fantastic. That really helps us frame the next conversation. You’re talking to a room full of people in the financial industry and in banking, many of whom are already thinking about how they’re going to use AI or are already using it. How would you compare the potential productivity gains from AI with other technology gains we’ve seen in the past?