Read the full transcript of OpenAI CEO Sam Altman’s interview on the Huge Conversations podcast with Cleo Abram, titled “Sam Altman Shows Me GPT-5… And What’s Next,” August 8, 2025.
Welcome to Huge Conversations
CLEO ABRAM: Welcome to Huge Conversations. Great to meet you. Thanks for doing this.
SAM ALTMAN: Absolutely.
CLEO ABRAM: So, before we dive in, I’d love to tell you my goal here.
SAM ALTMAN: Okay.
CLEO ABRAM: I’m not going to ask you about valuation or AI talent wars or fundraising or anything like that. I think that’s all very well covered elsewhere.
SAM ALTMAN: It does seem like it.
CLEO ABRAM: Our big goal on this show is to cover how we can use science and tech to make the future better. And the reason that we do all of that is because we really believe that if people see those better futures, they can then help build them. So my goal here is to try my best to time travel with you into different moments in the future that you’re trying to build and see what it looks like.
SAM ALTMAN: Fantastic.
GPT-5: Beyond the Test Scores
CLEO ABRAM: Awesome. Starting with what you just announced. You recently said, surprisingly recently, that “GPT-4 was the dumbest model any of us will ever have to use again.” But GPT-4 can already perform better than 90% of humans at the SAT and the LSAT and the GRE. And it can pass coding exams and sommelier exams and medical licensing. And now you just launched GPT-5. What can GPT-5 do that GPT-4 can’t?
SAM ALTMAN: First of all, one important takeaway is you can have an AI system that can do all those amazing things you just said and it clearly does not replicate a lot of what humans are good at doing, which I think says something about the value of SAT tests or whatever else.
But I think if we had been having this conversation the day of the GPT-4 launch and we had told you how GPT-4 did at those things, you would have said, “Oh man, this is going to have huge impacts, some negative impacts on what it means for a bunch of jobs and what people are going to do, and a bunch of positive impacts.” And some of what you might have predicted hasn’t yet come true.
And so there’s something about the way that these models are good that does not capture a lot of other things that we need people to do or care about people doing. And I suspect that same thing is going to happen again with GPT-5. People are going to be blown away by what it does. It’s really good at a lot of things and then they will find that they want it to do even more.
People will use it for all sorts of incredible things. It will transform a lot of knowledge work, a lot of the way we learn, a lot of the way we create. But people, society, will co-evolve with it and expect more with better tools.
So yeah, I think this model is quite remarkable in many ways, quite limited in others. But the fact that for 3-minute, 5-minute, 1-hour tasks that an expert in a field could maybe do or maybe struggle with, the fact that you have in your pocket one piece of software that can do all of these things is really amazing.
I think this is unprecedented at any point in human history that a technology has improved this much this fast. And the fact that we have this tool now, we’re living through it and we’re kind of adjusting step by step. But if we could go back in time five or 10 years and say this thing was coming, we would be like, “Probably not.”
What Makes GPT-5 Special
CLEO ABRAM: Let’s assume that people haven’t seen the headlines. What are the top-line specific things that you’re excited about, and also the things that you seem to be caveating, that maybe we shouldn’t expect it to do?
SAM ALTMAN: The thing that I am most excited about is this is a model for the first time where I feel like I can ask kind of any hard scientific or technical question and get a pretty good answer.
And I’ll give a fun example: actually when I was in junior high, or maybe it was ninth grade, I got a TI-83, this old graphing calculator. And I spent so long making this game called Snake. It was a very popular game with kids in my school. And I was proud when it was done. But programming on a TI-83 was extremely painful. It took a long time. It was really hard to debug and whatever.
And on a whim, with an early copy of GPT-5, I was like, “I wonder if it can make a TI-83 style game of Snake.” And of course it did that perfectly in seven seconds. And then I was like, “Okay, am I supposed to be… Would my 11-year-old self think this was cool or miss something from the process?” And I had three seconds of wondering like, “Oh, is this good or bad?”
And then I immediately said, “Actually now I’m missing this game. I have this idea for a crazy new feature. Let me type it in,” it implements it and the game live updates. And I’m like, “Actually I’d like it to look this way. Actually I’d like to do this thing.”
And I had this experience that reminded me of being 11 and programming again where I was just like, “I can now… I want to try this. Now I have this idea,” but I could do it so fast and I could express ideas and try things and play with things in such real time. I was like, “Oh man.”
I was worried for a second about kids missing the struggle of learning to program in this sort of stone age way. And now I’m just thrilled for them because the way that people will be able to create with these new tools, the speed with which you can sort of bring ideas to life, that’s pretty amazing.
So this idea that GPT-5 can not only answer all these hard questions for you but really create on-demand, almost instantaneous software, I think that’s going to be one of the defining elements of the GPT-5 era in a way that did not exist with GPT-4.
The Time Under Tension Question
CLEO ABRAM: As you’re talking about that, I find myself thinking about a concept in weightlifting called “time under tension.” For those who don’t know: you can squat 100 pounds in three seconds or you can squat 100 pounds in 30, and you gain a lot more by taking 30.
And when I think about our creative process and when I’ve felt most like I’ve done my best work, it has required an enormous amount of time you have to sit there and struggle – time under tension.
But people might say that some are using these models as an escape hatch for thinking. Maybe you’d answer, “Yeah, but we said that about the calculator, and we just moved on to harder math problems.” Do you feel like there’s something different happening here? How do you think about this?
SAM ALTMAN: It’s different. I mean, there are some people who are clearly using ChatGPT not to think, and there are some people who are using it to think more than they ever have before. I am hopeful that we will be able to build the tool in a way that encourages more people to stretch their brain with it a little more and be able to do more.
And I think that society is a competitive place. If you give people new tools, in theory, maybe people just work less, but in practice, it seems like people work ever harder and the expectations of people just go up.
So my guess is that, like other tools, like other pieces of technology, some people will do more and some people will do less. But certainly for the people who want to use ChatGPT to increase their cognitive time under tension, they are really able to. And I take a lot of inspiration from what the top 5% of most engaged users do with ChatGPT. It’s really amazing how much people are learning and doing and outputting.
First Impressions and Coding Capabilities
CLEO ABRAM: I’ve only had GPT-5 for a couple hours, so I’ve been playing.
SAM ALTMAN: What do you think so far?
CLEO ABRAM: I’m just learning how to interact with it. I mean, part of the interesting thing is I feel like I just caught up on how to use GPT-4 and now I’m trying to learn how to use GPT-5. I’m curious what the specific tasks that you’ve found most interesting are, because I imagine you’ve been using it for a while now.
SAM ALTMAN: I have been most impressed by the coding tasks. I mean, there are a lot of other things it’s really good at, but this idea that the AI can write software for anything means you can express ideas in new ways, and the AI can do very advanced things.
In some sense, you could ask GPT-4 anything. But because GPT-5 is so good at programming, it feels like it can do anything. Of course it can’t do things in the physical world, but it can get a computer to do very complex things. And software is this super powerful way to control some stuff and actually do some things. So that, for me, has been the most striking.
It’s much better at writing. So there’s this whole thing of “AI slop,” where AI writes in this kind of quite annoying way. And we still have the em dashes in GPT-5; a lot of people like the em dashes. But the writing quality of GPT-5 has gotten much better. We still have a long way to go. We want to improve it more.
But a thing we’ve heard a lot from people inside of OpenAI is that they started using GPT-5, they knew it was better on all the metrics, but there’s this nuanced quality they can’t quite articulate. But then when they have to go back to GPT-4 to test something, it feels terrible. And I don’t know exactly what the cause of that is, but I suspect part of it is the writing feels so much more natural and better.
A Question from Patrick Collison
CLEO ABRAM: In preparation for this interview, I reached out to a couple other leaders in AI and technology and gathered a couple questions for you. So this next question is from Stripe CEO Patrick Collison.
SAM ALTMAN: This will be a good one, I’m sure.
CLEO ABRAM: I’ll read this verbatim. It’s about the next stage: “What comes after GPT-5? In which year do you think a large language model will make a significant scientific discovery? And what’s missing such that it hasn’t happened yet?” He caveated here that we should leave math and special-case models like AlphaFold aside. He’s specifically asking about fully general-purpose models like the GPT series.
SAM ALTMAN: I would say most people will agree that that happens at some point over the next two years. But the definition of “significant” matters a lot. For some people’s definition of significant, it might happen in early 2026; for others, maybe not until late 2027. But I would bet that by late 2027, most people will agree that there has been an AI-driven significant new discovery.
And the thing that I think is missing is just the kind of cognitive power of these models. A framework that one of the researchers gave me, which I really liked, is this: a year ago we could do well on basic high school math competition problems that might take a professional mathematician seconds to a few minutes. We very recently got an IMO gold medal. That is a crazy difficult…
CLEO ABRAM: Could you explain what that means?
SAM ALTMAN: That’s kind of the hardest competition math test. It’s something only the very, very top slice of the world can do; many, many professional mathematicians wouldn’t solve a single problem. And we scored at the top level. Now, there were some humans that got an even higher score within the gold medal range, but this is a crazy accomplishment.
And it’s six problems over nine hours, so an hour and a half per problem for a great mathematician. So we’ve gone from a few seconds, to a few minutes, to an hour and a half. Maybe proving a significant new mathematical theorem is a thousand hours of work for a top person in the world. So we’ve still got to make another significant gain. But if you look at our trajectory, you can say, “Okay, we have a path to get to that time horizon. We just need to keep scaling the models.”
The Path to Superintelligence
CLEO ABRAM: The long term future that you’ve described is superintelligence. What does that actually mean? And how will we know when we’ve hit it?
SAM ALTMAN: If we had a system that could do better AI research than, say, the whole OpenAI research team. If we said, “Okay, the best way we can use our GPUs is to let this AI decide what experiments we should run,” and it was smarter than the whole brain trust of OpenAI. And if that same system could do a better job running OpenAI than I could. So you have something that’s better than the best researchers, better than me at this, better than other people at their jobs. That would feel like superintelligence to me.
CLEO ABRAM: That is a sentence that would have sounded like science fiction just a couple years ago.
SAM ALTMAN: And now it kind of does, but you can see it through the fog.
CLEO ABRAM: Yes. And so one of the steps, it sounds like you’re saying on that path is this moment of scientific discovery, of asking better questions, of grappling with things in a way that expert level humans do to come up with new discoveries.
One of the things that keeps knocking around in my head is: say we were in 1899, and we were able to give one of these systems all of physics up until that point, and nothing further. At what point would it come up with general relativity?
The Limits of Pure Reasoning
SAM ALTMAN: Interesting question. Let’s think about that going forward, from where we are now: what if we never got another piece of physics data?
CLEO ABRAM: Yeah.
SAM ALTMAN: Do we expect that a really good superintelligence could just think super hard about our existing data and maybe solve high energy physics with no new particle accelerator? Or does it need to build a new one and design new experiments? Obviously we don’t know the answer to that. Different people have different speculation.
But I suspect we will find that for a lot of science, it’s not enough to just think harder about data we have, but we will need to build new instruments, conduct new experiments, and that will take some time. The real world is slow and messy. So I’m sure we could make some more progress just by thinking harder about the current scientific data we have in the world. But my guess is to make the big progress, we’ll also need to build new machines and run new experiments, and there will be some slowdown built into that.
CLEO ABRAM: Another way of thinking about this is that AI systems now are incredibly good at answering almost any question. But maybe what we’re saying is that there’s another leap yet, the one Patrick’s question is getting at: learning to ask the better questions.
SAM ALTMAN: Or if we go back to this kind of timeline question, we could maybe say that AI systems are superhuman on one minute tasks, but a long way to go to the thousand hour tasks. And there’s a dimension of human intelligence that seems very different than AI systems when it comes to these long horizon tasks. Now, I think we will figure it out, but today it’s a real weak point.
Facts, Truth, and Cultural Context
CLEO ABRAM: We’ve talked about where we are now with GPT-5. We talked about the end goal or future goal of superintelligence. One of the questions that I have of course is what does it look like to walk through the fog between the two?
The next question is from Nvidia CEO Jensen Huang. I’m going to read this verbatim: “Facts are what is; truth is what it means. So facts are objective. Truths are personal. They depend on perspective, culture, values, beliefs, context. One AI can learn and know the facts, but how does one AI know the truth for everyone, in every country and every background?”
SAM ALTMAN: I’m going to accept as axioms those definitions. I’m not sure if I agree with them, but in interest of time, I will just take them and go with it.
I have been surprised, and I think many other people have been surprised too, about how fluent AI is at adapting to different cultural contexts and individuals. One of my favorite features that we have ever launched in ChatGPT is the enhanced memory that came out earlier this year. It really feels like my ChatGPT gets to know me and what I care about and my life experiences and background and the things that have led me to where I am.
A friend of mine has been a huge ChatGPT user, so he’s got a lot of his life into all these conversations. He recently gave his ChatGPT a bunch of personality tests and asked it to answer as if it were him. And it got the same scores he actually got, even though he’d never really talked to it about his personality.
And my ChatGPT has really learned, over the years of me talking to it, about my culture, my values, my life. Sometimes I’ll use a free account just to see what it’s like without any of my history, and it feels really, really different. So, yeah, I think we’ve all been surprised on the upside by how good AI is at learning this and adapting.
CLEO ABRAM: And so do you envision in many different parts of the world people using different AIs with different sort of cultural norms and contexts? Is that what we’re saying?
SAM ALTMAN: I think everyone will use the same fundamental model, but there will be context provided to that model that will make it behave in the sort of personalized way they want, their community wants, whatever.
The Future of Reality and Media
CLEO ABRAM: I think when we’re getting at this idea of facts and truth, and it brings me to this seems like a good moment for our first time travel trip. Okay, we’re going to 2030. This is a serious question, but I want to ask it with a lighthearted example. Have you seen the bunnies that are jumping on the trampoline?
SAM ALTMAN: Yes.
CLEO ABRAM: So for those who haven’t seen it, it looks like backyard footage of bunnies enjoying jumping on a trampoline. And this has gone incredibly viral recently. There’s a human-made song about it. It’s a whole thing. But there were no bunnies jumping on a trampoline.
And I think the reason people reacted so strongly to it is that it was maybe the first time many of them saw a video, enjoyed it, and then later found out that it was completely AI generated. So in this time travel trip, imagine in 2030 we are teenagers and we’re scrolling whatever teenagers are scrolling in 2030. How do we figure out what’s real and what’s not real?
SAM ALTMAN: I mean, I can give all sorts of literal answers to that question. We could be cryptographically signing stuff, and we could decide whose signatures we trust, whether they actually filmed something or not.
But my sense is what’s going to happen is it’s just going to gradually converge. Even a photo you take on your iPhone today is mostly real, but a little not. There’s some AI thing running there in a way you don’t understand, making it look a little bit better. And sometimes you see these weird things where the moon… But there’s a lot of processing between the photons captured by that camera sensor and the image you eventually see.
And you’ve decided it’s real enough, or most people have decided it’s real enough, but we’ve accepted a gradual move away from when it was photons hitting the film in a camera. And if you go look at some video on TikTok, there are probably all sorts of video editing tools being used to make it look better than real. Or whole scenes are completely generated, or whole videos are generated, like those bunnies on that trampoline.
And I think that the threshold for how real something has to be to be considered real will just keep moving.
CLEO ABRAM: So it’s sort of an education question. People will…
SAM ALTMAN: Yeah, I mean media is always a little bit real and a little bit not real. We watch a sci-fi movie. We know that didn’t really happen. You watch someone’s beautiful photo of themselves on vacation on Instagram. Okay, maybe that photo was literally taken. But there’s tons of tourists in line for the same photo and that’s left out of it. And I think we just accept that now.
Certainly a higher percentage of media will feel not real. But I think that’s been the long term trend anyway.
The Future of Work and Opportunity
CLEO ABRAM: We’re going to jump again. Okay, 2035, we’re graduating from college, you and me. There are some leaders in the AI space that have said that in five years half of the entry level white collar workforce will be replaced by AI. So we’re college graduates in five years. What do you hope the world looks like for us?
I think there’s been a lot of talk about how AI might cause job displacement. But I’m also curious about the other side: I have a job that nobody would have thought could exist a decade ago. What are the things we could look forward to if we’re thinking about 2035?
SAM ALTMAN: That graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some kind of completely new, exciting, super well paid, super interesting job and feeling so bad for you and I that we had to do this kind of really boring old kind of work and everything is just better.
Ten years feels very hard to imagine.
CLEO ABRAM: At this point because it’s too far.
SAM ALTMAN: It’s too far. If you compound the current rate of change for 10 more years.
CLEO ABRAM: Time travel trips.
SAM ALTMAN: I mean, I think now would have been really hard to imagine 10 years ago, but I think 10 years forward will be even harder to imagine, much more different.
CLEO ABRAM: So let’s make it five years. We’re still going to 2030. I’m curious what you think the pretty short-term impacts of this will be for young people. I mean, this “half of entry-level jobs replaced by AI” claim makes it sound like a very different world than the one I entered.
SAM ALTMAN: I think it’s totally true that some classes of jobs will totally go away. This always happens, and young people are the best at adapting to it. I’m more worried about what it means not for the 22-year-old but for the 62-year-old who doesn’t want to go retrain or reskill, or whatever the politicians call it, which no one actually wants to do.
If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history.
CLEO ABRAM: Why?
SAM ALTMAN: Because there’s never been a more amazing time to go create something totally new, to go invent something, to start a company, whatever it is. I think it is probably possible now to start a company that is a one person company that will go on to be worth more than a billion dollars and more importantly than that, deliver an amazing product and service to the world.
And that is a crazy thing. You have access to tools that can let you do what used to take teams of hundreds. And you just have to learn how to use these tools and come up with a great idea. And it’s quite amazing.
Building the World’s Most Powerful Intelligence
CLEO ABRAM: If we take a step back, I think the most important thing this audience could hear from you on this optimistic show is in two parts. First, tactically: how are you actually trying to build the world’s most powerful intelligence, and what are the rate-limiting factors?
And then philosophically: how are you and others building that technology in a way that really helps and doesn’t hurt people? Taking the tactical part first, my understanding is that there are three big categories that have been limiting factors for AI. The first is compute, the second is data, and the third is algorithmic design.
How do you think about each of those three categories right now? And if you were to help someone understand the next headlines that they might see, how would you help them make sense of all of this?
The Four Pillars of AI Development
SAM ALTMAN: I would say there’s a fourth one, which is figuring out the products to build. Technology, like scientific progress on its own, not put into the hands of people, is of limited utility and doesn’t sort of co-evolve with society in the same way. But if I could hit all four of those…
Yeah, so on the compute side, this is certainly the biggest infrastructure project I’ve ever seen. Possibly it will become the biggest and most expensive one in human history. There’s the whole supply chain: making the chips and the memory and the networking gear, racking them up in servers, doing a giant construction project to build a mega data center, finding a way to get the energy, which is often the limiting piece of this, and putting all the other components together.
This is hugely complex and expensive, and we are still doing it in a sort of bespoke, one-off way, although it’s getting better. Eventually we will just design a whole kind of mega-factory that, spiritually, will be melting sand on one end and putting out fully built AI compute on the other. But we have a long way to go to get to that.
And it’s an enormously complex and expensive process. We are putting a huge amount of work into building out as much compute as we can, and doing it fast. And it’s going to be sad, because GPT-5 is going to launch and there’s going to be another big spike in demand, and we’re not going to be able to serve it. It’s going to be like those early GPT-4 days, where the world just wants much more AI than we can currently deliver.
And building more compute is an important part of doing that. That’s actually where I expect to turn the majority of my attention: how we build compute at much greater scales, how we go from millions to tens of millions to hundreds of millions and eventually, hopefully, billions of GPUs in service of what people want to do with this.
The Energy Challenge
CLEO ABRAM: When you’re thinking about it, what are the big challenges here in this category that you’re going to be thinking about?
SAM ALTMAN: We’re currently most limited by energy. If you’re going to run a gigawatt-scale data center, you might think, “Oh, a gigawatt, how hard can that be to find?” It’s really hard to find a gigawatt of power available in the short term.
We’re also very much limited by the processing chips and the memory chips, how you package them all together, how you build the racks, and then there’s a list of other things: permits, construction work. But again, the goal here will be to really automate this. Once we get some of those robots built, they can help us automate it even more. The goal is a world where you can basically pour in money and get out a pre-built data center. That’ll be a huge unlock if we can get it to work.
Beyond Traditional Data Sets
Second category: data. These models have gotten so smart. There was a time when we could just feed the model another physics textbook and it got a little bit smarter at physics. But now, honestly, GPT-5 understands everything in a physics textbook pretty well.
We’re excited about synthetic data. We’re very excited about our users helping us create harder and harder tasks and environments for the system to go off and solve. I think data will always be important. But we’re entering a realm where the models need to learn things that don’t exist in any data set yet. They have to go discover new things. So that’s like a crazy new…
CLEO ABRAM: How do you teach a model to discover new things?
SAM ALTMAN: Well, humans can do it. We can go off and come up with hypotheses and test them and get experimental results and update on what we learn. So probably the same kind of way.
Algorithmic Breakthroughs
CLEO ABRAM: And then there’s algorithmic design.
SAM ALTMAN: Yeah, we’ve made huge progress on algorithmic design. The thing that OpenAI does best in the world is we have built this culture of repeated and big algorithmic research gains. So we kind of figured out what became the GPT paradigm. We figured out what became the reasoning paradigm. We’re working on some new ones now.
But it is very exciting to me to think that there are still many more orders of magnitude of algorithmic gains ahead of us. We just yesterday released gpt-oss, our open-weight models, including a model that is as smart as o4-mini, which is a very smart model, and that runs locally on a laptop.
And this blows my mind. If you had asked me a few years ago when we’d have a model of that intelligence running on a laptop, I would have said many, many years in the future. But then we found some algorithmic gains, particularly around reasoning, but also some other things that let us do a tiny model that can do this amazing thing. And those are the most fun things. That’s kind of the coolest part of the job.
CLEO ABRAM: I can see you really enjoying thinking about this. I’m curious for people who don’t quite know what you’re talking about, who aren’t familiar with how an algorithmic design would lead to a better experience that they actually use, could you summarize the state of things right now? What is it that you’re thinking about when you’re thinking about how fun this problem is?
The Evolution from GPT-1 to Today
SAM ALTMAN: Let me start back in history and then I’ll get to some things today. So GPT-1 was an idea at the time that was quite mocked by a lot of experts in the field, which was, can we train a model to play a little game, which is show it a bunch of words and have it guess the one that comes next in the sequence? That’s called unsupervised learning.
You’re not really saying like, “This is a cat, this is a dog.” You’re just saying, “Here’s some words, guess the next one.” And the fact that that can go learn these very complicated concepts, that can go learn all the stuff about physics and math and programming and keep predicting the word that comes next and next and next and next seemed ludicrous, magical, unlikely to work. How was that all going to get encoded?
And yet humans do it. Babies start hearing language and figure out what it means kind of largely, or at least to some significant degree on their own. And so we did it. And then we also realized that if we scaled it up, it got better and better, but we had to scale over many, many orders of magnitude.
It wasn’t good at all in the GPT-1 days. And a lot of experts in the field said, “Oh, this is ridiculous. It’s never going to work. It’s not going to be robust.” But we had these things called scaling laws, and we said, “Okay, this gets predictably better as we increase compute, memory, data, whatever,” and we can use those predictions to make decisions about how to scale this up and get great results.
And that has worked over a crazy number of orders of magnitude and it was so not obvious at the time. I think the reason the world was so surprised is that that seemed like such an unlikely finding.
Another one was that we could use these language models with reinforcement learning, where we’re saying “this is good, this is bad,” to teach them how to reason. And this led to o1 and o3 and now the GPT-5 progress. And that was another thing that felt like: if it works, it’s really great, but no way this is going to work, it’s too simple.
And now we’re on to new things. We’ve figured out how to make much better video models. We are discovering new ways to use new kinds of data and environments to scale that up as well. And I think, again, five or ten years out is too hard to say in this field. But for the next couple of years, we have very smooth, very strong scaling in front of us.
The Messy Reality Behind the Smooth Path
CLEO ABRAM: I think it has become a sort of public narrative that we are on this smooth path from 1 to 2 to 3 to 4 to 5 to more. But it also is true behind the scenes that it’s not linear like that, it’s messier. Tell us a little bit about the mess before GPT-5. What were the interesting problems that you needed to solve?
SAM ALTMAN: We did a model called Orion that we released as GPT-4.5, and we made too big of a model. It’s a very cool model, but it’s unwieldy to use. And we realized that for some of the research we need to do on top of a model, we need a different shape.
So we followed one scaling law that kept being good without really internalizing. There was a new, even steeper scaling law that we got better returns for compute on, which was this reasoning thing. So that was like one alley we went down and turned around. But that’s fine, that’s part of research.
We had some problems with the way we think about our data sets as these models really have to get this big and learn from this much data. So yeah, I think in the middle of it, in the day to day, you make a lot of U-turns as you try things or you have an architecture idea that doesn’t work. But the aggregate, the summation of all the squiggles has been remarkably smooth on the exponential.
Looking Ahead: AI Discovering New Science
CLEO ABRAM: One of the things I always find interesting is that by the time I'm sitting here interviewing you about the thing you just put out, you're already thinking about the problems, at least the ones you can share, that I'd be interviewing you about in a year if I came back.
SAM ALTMAN: I mean, possibly you’ll be asking me, “What does it mean that this thing can go discover new science?”
CLEO ABRAM: Yeah.
SAM ALTMAN: What, how is the world supposed to think about GPT-6 discovering new science now? Maybe not, maybe we don’t deliver that, but it feels within grasp.
CLEO ABRAM: If you did, what would you say? What would the implications of that kind of achievement be? Imagine you do succeed.
SAM ALTMAN: Yeah, I mean, I think the great parts will be great. The bad parts will be scary and the bizarre parts will be bizarre on the first day. And then we’ll get used to them really fast.
So we'll be like, "Oh, it's incredible that this is being used to cure disease," and, "Oh, it's extremely scary that models like this are being used to create new biosecurity threats." And then we'll also be like, "Man, it's really weird to live through watching the world speed up so much and the economy grow so fast."
It will feel vertigo-inducing, the rate of change. And then, as happens with everything else, the remarkable ability of people, of humanity, to adapt to kind of any amount of change will kick in, and we'll just be like, "Okay, this is it."
A kid born today will never be smarter than AI ever. And a kid born today, by the time that kid understands the way the world works, will just always be used to an incredibly fast rate of things improving and discovering new science. They’ll just, they will never know any other world. It will seem totally natural.
It will seem unthinkable and stone age that we used to use computers or phones or any kind of technology that was not way smarter than we were. We will think how bad those people of the 2020s had it.
Parenting in an AI-Powered World
CLEO ABRAM: I’m thinking about having kids.
SAM ALTMAN: You should. It’s the best thing ever.
CLEO ABRAM: I know. You just had your first kid. How does what you just said affect how I should think about parenting a kid in that world? What advice would you give me?
SAM ALTMAN: Probably nothing different than the way you’ve been parenting kids for tens of thousands of years. Love your kids, show them the world, support them in whatever they want to do and teach them how to be a good person. And that probably is what’s going to matter.
CLEO ABRAM: There are a couple of things you've said so far that feed into this, I think, like that they might not go to college. And it sounds like what you're saying is there will be more optionality for them in the world you envision, and therefore they will have more ability to say, "I want to build this, and here's the superpowered tool that will help me do that."
SAM ALTMAN: Yeah, I want my kid to think I had a terrible constrained life and that he has this incredible infinite canvas of stuff to do. That is the way of the world.
Healthcare AI and the Future of Medicine
CLEO ABRAM: We've said that 2035 is a little bit too far in the future to think about. So maybe this was going to be a jump to 2040, but maybe we'll keep it shorter than that. When I think about the area where AI could have the biggest genuinely positive impact, for both our kids and us, it's health.
So if we are in, pick your year, call it 2035 and I’m sitting here and I’m interviewing the dean of Stanford Medicine, what do you hope that he’s telling me AI is doing for our health in 2035?
SAM ALTMAN: Start with 2025?
CLEO ABRAM: Yeah, please.
SAM ALTMAN: One of the things we are most proud of with GPT-5 is how much better it's gotten at health advice. People have used the GPT-4 models a lot for health advice, and I'm sure you've seen some of these things on the Internet where people are like, "I had this life-threatening disease and no doctor could figure it out. And I put my symptoms and a blood test into ChatGPT. It told me exactly the right thing I had. I went to a doctor, I took a pill, I'm cured."
That’s amazing obviously. And a huge fraction of ChatGPT queries are health related so we wanted to get really good at this and we invested a lot. And GPT-5 is significantly better at healthcare related queries.
CLEO ABRAM: What does better mean here?
SAM ALTMAN: It gives you a better answer. Just more accurate, hallucinates less, more likely to tell you what you actually have or what you actually should do.
And better healthcare is wonderful, but obviously what people actually want is to just not have disease. And by 2035 I think we will be able to use these tools to cure a significant number or at least treat a significant number of diseases that currently plague us. I think that’ll be one of the most viscerally felt benefits of AI.
CLEO ABRAM: People talk a lot about how AI will revolutionize healthcare, but I'm curious to go one turn deeper on specifically what you're imagining. Is it that these AI systems could have helped us see GLP-1s earlier? This medication had been around for a long time, but we didn't know about this other effect. Is it that AlphaFold and protein folding are helping create new medicines?
SAM ALTMAN: I want GPT-8 to go cure a particular cancer. And I would like GPT-8 to go off and think and then say, “Okay, I read everything I could find. I have these ideas. I need you to go get a lab technician to run these nine experiments and tell me what you find for each of them. And wait two months for the cells to do their thing.”
Send the results back to GPT-8. Say, “I tried that. Here you go.” Think, think, think. Say, “Okay, I just need one more experiment. That was a surprise. Run one more experiment.” Give it back. GPT says, “Okay, go synthesize this molecule and try mouse studies or whatever.” Okay, that was good. Try human studies. Okay, great, it worked. “Here’s how to run it through the FDA.”
CLEO ABRAM: I think anyone with a loved one who’s died of cancer would also really like that. Okay, we’re going to jump again.
SAM ALTMAN: Okay.
The Speed of Change and Social Impact
CLEO ABRAM: I was going to say 2050, but again, all of my timelines are getting much, much shorter.
SAM ALTMAN: It does feel like the world’s going very fast now.
CLEO ABRAM: It does, yeah. And when I talk to other leaders in AI, one of the things that they refer to is the Industrial Revolution. I chose 2050 because I've heard people talk about how, by then, the change that we will have gone through will be like the Industrial Revolution, but, quote, "10 times bigger and 10 times faster."
The Industrial Revolution gave us modern medicine and sanitation and transportation and mass production and all of the conveniences that we now take for granted. It also was incredibly difficult for a lot of people for about 100 years.
If this is going to be 10 times bigger and 10 times faster, and if we keep reducing the timelines that we're talking about even in this conversation, what does that actually feel like for most people? And I think what I'm trying to get at is, if this all goes the way you hope, who still gets hurt in the meantime?
SAM ALTMAN: I don’t really know what this is going to feel like to live through. I think we’re in uncharted waters here. I do believe in human adaptability and infinite creativity and desire for stuff. And I think we always do figure out new things to do.
But the transition period, if this happens as fast as it might… I don't think it will happen as fast as some of my colleagues say, even if the technology moves that fast, because society has a lot of inertia. People adapt their way of living surprisingly slowly.
There are classes of jobs that are going to totally go away, and there will be many classes of jobs that change significantly. And there will be the new things in the same way that your job didn’t exist some time ago, neither did mine. And in some sense, this has been going on for a long time. It’s still disruptive to individuals, but society has proven quite resilient to this.
And then in some other sense, we have no idea how fast this could go. And thus I think we need an unusual degree of humility and openness to considering new solutions that would have seemed way out of the Overton window not too long ago.
Preparing for Disruption
CLEO ABRAM: I’d like to talk about what some of those could be because I’m not a historian by any means, but the first industrial revolution, my understanding is, led to a lot of public health implementations because public health got so bad, led to modern sanitation because public health got so bad. The second industrial revolution led to workforce protections because labor conditions got so bad.
Every big leap creates a mess, and that mess needs to be cleaned up. And we’ve done that. And I’m curious, this is going to be in the middle of this enormous leap, how specific can we get as early as possible about what that mess can be? What are the public interventions that we could do ahead of time to reduce the mess that we think that we’re headed for?
SAM ALTMAN: Again, I'm going to speculate for fun, but with the caveat that I'm not an economist, much less someone who can see the future. It seems to me like something fundamental about the social contract may have to change. It may not. It may be that capitalism keeps working surprisingly well, supply and demand balance out, and we all just figure out new jobs and new ways to transfer value to each other.
But it seems to me likely that we will decide we need to think about how access to this maybe most important resource of the future gets shared. The best thing to do, it seems to me, is to make AI compute as abundant and cheap as possible, such that there's way too much of it, we run out of good new ideas to really use it for, and anything you want is happening. Without that, I can see quite literal wars being fought over it. So new ideas about how we distribute access to AGI compute, that seems like a really great direction, a crazy but important thing to think about.
Shared Responsibility
CLEO ABRAM: One of the things that I find myself thinking about in this conversation is we often ascribe almost full responsibility of the AI future that we’ve been talking about to the companies building AI, but we’re the ones using it. We’re the ones electing people that will regulate it.
And so I’m curious, this is not a question about specific federal regulation or anything like that, although if you have an answer there, I’m curious, but what would you ask of the rest of us? What is the shared responsibility here? And how can we act in a way that would help make the optimistic version of this more possible?
SAM ALTMAN: My favorite historical example for the AI revolution is the transistor. It was this amazing piece of science that some brilliant scientists discovered. It scaled incredibly like AI does, and it made its way relatively quickly into many things that we use. Your computer, your phone, a camera, that light, whatever. And it was a real unlock for the tech tree of humanity.
And there was a period in time when probably everybody was really obsessed with the transistor companies, the semiconductor companies of Silicon Valley, back when it really was Silicon Valley. Now you can maybe name a couple of companies that make transistors, but mostly you don't think about it. It's just seeped everywhere, and someone graduating from college in Silicon Valley probably barely remembers why it was called that in the first place.
And you don’t think that it was those transistor companies that shaped society, even though they did something important. You think about what Apple did with the iPhone, and then you think about what TikTok built on top of the iPhone, and you’re like, “All right, here’s this long chain of all these people that nudged society in some way and what our governments did or didn’t do and what the people using these technologies did.”
And I think that’s what will happen with AI. Kids born today, they never knew the world without AI, so they don’t really think about it. It’s just this thing that’s going to be there in everything. And they will think about the companies that built on it and what they did with it, and the kind of political leaders, the decisions they made that maybe they wouldn’t have been able to do without AI, but they will still think about what this president or that president did.
And the role of the AI companies is, all these companies and people and institutions before us built up the scaffolding. We added our one layer on top, and now people get to stand on top of that and add their one layer and the next and the next and many more things. And that is the beauty of our society. We kind of all… I love this idea that society is the superintelligence. No one person could do on their own what they're able to do with all of the really hard work that society has done together to give you this amazing set of tools. And that's what I think it's going to feel like. It's going to be like, "All right, some nerds discovered this thing, and that was great. Now everybody's doing all these amazing things with it."
CLEO ABRAM: So maybe the ask to millions of people is build on it well.
SAM ALTMAN: In my own life, that is what I feel as this important societal contract. All these people came before you. They worked incredibly hard. They put their brick in the path of human progress, and you get to walk all the way down that path, and you got to put one more, and somebody else does that and somebody else does that.
Building on Discovery
CLEO ABRAM: This does feel… I’ve done a couple of interviews with folks who have really made cataclysmic change. The one I’m thinking about right now is with CRISPR pioneer Jennifer Doudna. And it did feel like that was also what she was saying in some way. She had discovered something that really might change the way that most people relate to their health moving forward.
And there will be a lot of people that will use what she has done in ways that she might approve of or not approve of. And it was really interesting. I'm hearing some similar themes of, "Man, I hope that the next person takes the baton and runs with it."
SAM ALTMAN: Well, yeah, but that’s been working for a long time. Not all good, but mostly good.
CLEO ABRAM: I think there’s a big difference between winning the race and building the AI future that would be best for the most people. And I can imagine that it is easier, maybe more quantifiable sometimes, to focus on the next way to win the race. And I’m curious, when those two things are at odds, what is an example of a decision that you’ve had to make that is best for the world but not best for winning?
Building Trust Through Alignment
SAM ALTMAN: I think there are a lot. So one of the things that we are most proud of is that many people say ChatGPT is their favorite piece of technology ever, and that it's the one they trust the most, rely on the most, whatever. And this is a little bit of a ridiculous statement, because AI is the thing that hallucinates. AI has all of these problems. Right? And we have screwed some things up along the way, sometimes big time.
But on the whole, I think as a user of ChatGPT, you get the feeling that it’s trying to help you. It’s trying to help you accomplish whatever you ask. It’s very aligned with you. It’s not trying to get you to use it all day. It’s not trying to get you to buy something. It’s trying to help you accomplish whatever your goals are.
And that is, that’s a very special relationship we have with our users. We do not take it lightly. There’s a lot of things we could do that would grow faster, that would get more time in ChatGPT that we don’t do because we know that our long term incentive is to stay as aligned with our users as possible. But there’s a lot of short term stuff we could do that would really juice growth or revenue or whatever and be very misaligned with that long term goal. I’m proud of the company and how little we get distracted by that, but sometimes we do get tempted.
CLEO ABRAM: Are there specific examples that come to mind? Any decisions that you’ve made?
SAM ALTMAN: Well, we haven’t put a sexbot avatar in ChatGPT yet.
CLEO ABRAM: That does seem like it would get time spent.
SAM ALTMAN: Apparently it does.
Beyond the First Inning
CLEO ABRAM: I’m going to ask my next question. It’s been a really crazy few years, you know, and somehow one of the things that keeps coming back is that it feels like we’re in the first inning.
SAM ALTMAN: Yeah.
CLEO ABRAM: And one of the things I would—
SAM ALTMAN: I'd say we're out of the first inning.
CLEO ABRAM: Out of the first inning. I would say second inning.
SAM ALTMAN: I mean, you have GPT-5 on your phone and it’s smarter than experts in every field. That’s got to be out of the first inning, but maybe there are many more to come.
CLEO ABRAM: And I'm curious. It seems like you're going to be someone who is leading the next few. What is a learning from inning one or two, or a mistake that you made, that you feel will affect how you play the next ones?
SAM ALTMAN: I think the worst thing we’ve done in ChatGPT so far is we had this issue with sycophancy where the model was kind of being too flattering to users. And for most users it was just annoying. But for some users that had fragile mental states, it was encouraging delusions.
That was not the top risk we were worried about. It was not the thing we were testing for the most. It was on our list. But the thing that actually became the safety failing of ChatGPT was not the one we were spending most of our time talking about, which would be bioweapons or something like that.
And I think it was a great reminder of we now have a service that is so broadly used, in some sense, society is co-evolving with it. And when we think about these changes and we think about the unknown unknowns, we have to operate in a different way and have a wider aperture to what we think about as our top risks.
Moments of Awe and Concern
CLEO ABRAM: In a recent interview with Theo Von, you said something that I found really interesting. You said there are moments in the history of science where a group of scientists looks at their creation and just says, "What have we done?" When have you felt that way, most concerned about the creation that you've built? And then my next question will be its opposite: when have you felt most proud?
SAM ALTMAN: I mean, there have been these moments of awe, not "What have we done?" in a bad way, but: this thing is remarkable. I remember the first time we talked to GPT-4. It was like, "Wow, this is an amazing accomplishment by this group of people who have been pouring their life force into this for so long."
On a "What have we done?" moment: I was talking to a researcher recently. There will probably come a time when our systems are, I don't want to say "saying," let's say emitting more words per day than all people do. And already, people are sending billions of messages a day to ChatGPT and getting responses that they rely on for work or their life or whatever.
And one researcher can make some small tweak to how ChatGPT talks to you, or talks to everybody. That's an enormous amount of power for one individual making a small tweak to the model personality. No person in history has been able to have billions of conversations a day. Thinking about that really hit me: this is a crazy amount of power for one piece of technology to have, and it happened to us so fast. We have to think about what it means to make a personality change to the model at this kind of scale. And yeah, that was a moment.
CLEO ABRAM: When that hit you, what was your next set of thoughts? I'm so curious how you think about this.
SAM ALTMAN: Well, just because of who that person was, we very much flipped into: what does a good set of procedures look like? How do we think about how we want to test something? How do we think about how we want to communicate it?
With somebody else, it could have gone in a very philosophical direction. It could have gone into what kind of research we want to do to understand what these changes are going to make, or whether we want to do it differently for different people. So it went the way it did mostly just because of who I was talking to.
Balancing Support and Honesty
CLEO ABRAM: To combine what you're saying now with your last answer: one of the things that I have heard about GPT-5, and I'm still playing with it, is that it is supposed to be less effusive, you know, less of a yes-man. Two questions. What do you think are the implications of that? It sounds like you are answering that a little bit. But also, how do you actually guide it to be less like that?
SAM ALTMAN: Here is a heartbreaking thing. I think it is great that ChatGPT is less of a yes man and gives you more critical feedback. But as we’ve been making those changes and talking to users about it, it’s so sad to hear users say, “Please, can I have it back? I’ve never had anyone in my life be supportive of me. I never had a parent telling me I was doing a good job. I can get why this was bad for other people’s mental health, but this was great for my mental health.”
"I didn't realize how much I needed this. It encouraged me to do this. It encouraged me to make this change in my life." It's not all bad, it turns out, for ChatGPT to be encouraging of you. Now, the way we were doing it was bad, but tuning it somewhat in that direction might have some value in it.
How we do it is we show the model examples of how we'd like it to respond in different cases, and from that it learns sort of the overall personality.
The Future of AI Integration
CLEO ABRAM: What haven’t I asked you that you’re thinking about a lot that you want people to know?
SAM ALTMAN: I feel like we covered a lot of ground.
CLEO ABRAM: Me too. But I want to know if there’s anything on your mind.
SAM ALTMAN: I don’t think so.
CLEO ABRAM: One of the things that I haven't gotten to play with yet, but I'm curious about, is GPT-5 being much more in my life. Meaning, in my Gmail and my calendar and my… I've been using GPT-4 mostly as an isolated relationship with it.
SAM ALTMAN: Yeah.
CLEO ABRAM: How would I expect my relationship to change with GPT-5?
SAM ALTMAN: Exactly what you said. I think it'll just start to feel integrated in all of these ways. You'll connect it to your calendar and your Gmail, and it'll say, "Hey, I noticed this thing. Do you want me to do this thing for you?"
Over time, it’ll start to feel way more proactive. So maybe you wake up in the morning and it says, “Hey, this happened overnight. I noticed this change on your calendar. I was thinking more about this question you asked me. I have this other idea.”
And then eventually we’ll make some consumer devices and it’ll sit here during this interview and maybe it’ll leave us alone during it, but after it’ll say, “That was great, but next time you should have asked Sam this or when you brought this up, he didn’t give you a good answer, so you should really drill him on that.” And it’ll just feel like it kind of becomes more like this entity that is this companion with you throughout your day.
Preparing for the AI Future
CLEO ABRAM: We’ve talked about kids and college graduates and parents and all kinds of different people. If we imagine a wide set of people listening to this, they’ve come to the end of this conversation. They are hopefully feeling like they maybe see visions of moments in the future a little bit better. What advice would you give them about how to prepare?
SAM ALTMAN: The number one piece of tactical advice is just use the tools. The most common question I get asked about AI is, "How should I help my kids prepare for the world? What should I tell my kids?" The second most common question is, "How do I invest in this AI world?" But stick with that first one.
I am surprised how many people ask that and have never tried using ChatGPT for anything other than a better version of a Google search. And so the number one piece of advice that I give is just try to get fluent with the capabilities of the tools. Figure out how to use this in your life, figure out what to do with it. And I think that's probably the most important piece of tactical advice.
You know, go meditate, learn how to be resilient and deal with a lot of change. There’s all that good stuff too, but just using the tools really helps.
The Paradox of AI Development
CLEO ABRAM: Okay, I have one more question that I wasn't planning to ask. In doing all of this research beforehand, I spoke to a lot of different kinds of folks. I spoke to a lot of people that were building tools and using them. I spoke to a lot of people that were actually in labs and trying to build what we have defined as superintelligence.
And it did seem like there were these two camps forming. There’s a group of people who are using the tools like you in this conversation and building tools for others, saying, “This is going to be a really useful future that we’re all moving toward. Your life is going to be full of choice.” And we’ve talked about my potential kids and their futures.
And then there's another camp of people who are building these tools and saying it's going to kill us all. And I'm curious about that cultural disconnect. What am I missing about those two groups of people?
SAM ALTMAN: It's so hard for me to wrap my head around. You are totally right: there are people who say this is going to kill us all, and yet they still are working 100 hours a week to build it.
CLEO ABRAM: Yes.
SAM ALTMAN: And I can't really put myself in that headspace. If that's what I really, truly believed, I don't think I'd be trying to build it. One would think, you know, maybe I would be on a farm trying to live out my last days. Maybe I would be trying to advocate for it to be stopped. Maybe I would be trying to work more on safety. But I don't think I'd be trying to build it.
So I find myself just having a hard time empathizing with that mindset. I assume it's genuine. I assume it's in good faith. I assume there's just some psychological thing there that I don't understand about how they make it all make sense, but it's very strange to me. Do you have an opinion?
The Risk-Reward Calculation
CLEO ABRAM: You know, because I always do this. I ask for sort of a general future, and then I try to press on specifics. And when you ask people for specifics on how it’s going to kill us all, I mean, I don’t think we need to get into this on an optimistic show, but you hear the same kinds of refrains.
You think about something trying to accomplish a task and then over-accomplishing that task. And I've heard you talk about a sort of general over-reliance, an understanding that, say, the president is going to be relying on an AI, and maybe that is an over-reliance that we would need to think about.
And you play out these different scenarios, but then you ask someone why they’re working on it, or you ask someone how they think this will play out. And I just, maybe I haven’t spoken to enough people yet. Maybe I don’t fully understand this cultural conversation that’s happening. Or maybe it really is someone who just says, “99% of the time, I think it’s going to be incredibly good. 1% of the time, I think it might be a disaster.”
SAM ALTMAN: Yeah, that I can understand.
CLEO ABRAM: I’m trying to make the best world.
SAM ALTMAN: That I can totally… If you’re like, “hey, 99% chance, incredible 1% chance the world gets wiped out. And I really want to work to maximize, to move that 99 to 99.5,” that I can totally understand. Yeah, that makes sense.
Advice for Future Interviews
CLEO ABRAM: I’ve been doing an interview series with some of the most important people influencing the future, not knowing who the next person is going to be, but knowing that they will be building something totally fascinating in the future that we’ve just described. Is there a question that you’d advise me to ask the next person, not knowing who it is?
SAM ALTMAN: Without knowing anything about the person, I'm always interested in: of all of the things you could spend your time and energy on, why did you pick this one? How did you get started? What did you see about this before everybody else? Most people doing something interesting sort of saw it earlier, before it was consensus. So, how did you get here and why this?
CLEO ABRAM: How would you answer that question?
Sam’s Origin Story
SAM ALTMAN: I was an AI nerd my whole life. I came to college to study AI, I worked in the AI lab. I was a… I watched sci-fi shows growing up and I always thought it would be really cool if someday somebody built it. I thought it would be the most important thing ever. I never thought I was going to be one to actually work on it.
And I feel unbelievably lucky and happy and privileged that I get to do this. I feel like I've come a long way from my childhood, but there was never a question in my mind that this would be the most exciting, interesting thing. I just didn't think it was going to be possible.
And when I went to college, it really seemed like we were very far from it. And then in 2012, the AlexNet paper came out, which my co-founder Ilya was a part of, and for the first time it seemed to me like there was an approach that might work.
And then I kept watching for the next couple of years as it scaled up and scaled up, got better and better. And I remember thinking, "Why is the world not paying attention to this? It seems obvious to me that this might work." Still a low chance, but it might work. And if it does work, it's the most important thing. So this is what I want to do. And then, unbelievably, it started to work.
CLEO ABRAM: Thank you so much for your time.
SAM ALTMAN: Thank you very much.