Read the full transcript of Y Combinator CEO Garry Tan in fireside conversation with Sam Altman on June 16, 2025 at AI Startup School in San Francisco, on “The Future of OpenAI, ChatGPT’s Origins, and Building AI Hardware”.
The Bold Decision to Pursue AGI
SAM ALTMAN: We said, okay, we’re going to go for AGI. 99% of the world thought we were crazy. 1% of the world it really resonated with. You know, in 10 or 20 years, unless something goes hugely wrong, we’ll all have like unimaginable superintelligence. This is the best fucking time ever in the history of technology, ever, period, to start a company.
GARRY TAN: Well, Sam, thank you so much for joining us and thanks for all the inspiration. I mean, OpenAI itself is a true inspiration for any really, really ambitious person. Maybe we just start with that. I mean, what were some of the decisions early that seemed small, that turned out to be incredibly pivotal?
SAM ALTMAN: I mean, just deciding to do it was a big one. Like, we got very close to not starting OpenAI. AGI sounded crazy. I had Garry’s job then and, you know, there was like all this other great stuff to do that would work, all these great startups, and AGI was like kind of a pipe dream.
And also, even if it was possible, DeepMind seemed like impossibly far ahead. And so we had this year, over the course of 2015, where we were talking about starting it and, you know, it was like kind of coin flippy.
And I think this is the story of like, many ambitious things where they seem so difficult and there’s such good reasons not to do them that it really takes a core of people that like, sit in a room, look each other in the eye and say, all right, let’s do this.
Overcoming the Billion Reasons Not to Start
GARRY TAN: So there were just a billion things, a billion reasons why people might say you shouldn’t do it. I mean, off the bat, like even one of the things you figured out was the scaling laws.
SAM ALTMAN: It’s so hard to remember what it was like. Next year will be our 10-year anniversary, so not quite yet. Yeah, thank you. But to, like, remember what the vibes were like about AI 10 years ago, that was like way before the first language models that worked.
We were trying to like play video games and we had this little robotic hand that could sort of barely do a Rubik’s Cube. And we had no ideas for products, no revenue, no real idea that we were ever going to have revenue. And we were like sitting around at conference tables and whiteboards trying to come up with ideas for papers to write.
It’s, like, hard to explain now, because it looks so obvious now, how improbable it seemed at the time and how the idea of ChatGPT was, like, completely in the realm of science fiction.
Attracting the World’s Smartest People
GARRY TAN: I mean, one of the things that really jumped out at me was you sort of, you know, rallied this idea that you should be working on AGI and then simultaneously you found the smartest people in the world who were working on that thing.
SAM ALTMAN: That second part was sort of easier than it sounds. If you say we’re going to, like, do this crazy thing and it’s, and it’s exciting and it’s important, if it works and other people aren’t doing it, you can actually, like, get a lot of people together.
And so we said, okay, we’re going to go for AGI. 99% of the world thought we were crazy. 1% of the world it really resonated with, and it turned out there were a lot of smart people in that 1%, and there wasn’t really anywhere else for them to go. So we were able to really concentrate the talent, and it was a mission that people cared about.
So even though it seemed unlikely, if it worked, it seemed super valuable. And we’ve observed this many times with startups: if you are doing the same thing as everyone else, it is very hard to concentrate talent and it’s very hard to get people to really believe in a mission. And if you’re doing a one-of-one thing, you have a really nice tailwind there.
Starting Small: The Zero Dollar Startup Principle
GARRY TAN: Okay, so some people in this room might be thinking, like, should I try to start an OpenAI-scale thing off the bat? You also worked on Loopt your first time around. Were there lessons from that?
SAM ALTMAN: OpenAI was not an OpenAI-scale thing off the bat. OpenAI was like eight people in a room and then it was 20 people in a room. And it was very unclear what to do. And we were just like trying to write a good research paper. So the things that eventually become really big do not start off that way.
I think it’s important to dream that it could be big if it works, but nothing big starts that way. And Vinod Khosla has this quote that I’ve always liked, which is, there’s a very big difference between a zero-million-dollar startup and a zero-billion-dollar startup, but they both have zero dollars of revenue. They’re both like a few people sitting in a room, just trying to get the first thing to work.
So the only advice I have about trying to start something big is: pick a market where it seems like there’s some version of the future where it could be big if it works. But other than that, it’s like one dumb foot in front of the other for a long time.
The Product Overhang: A Golden Opportunity
GARRY TAN: How people use ChatGPT has changed a lot. How people use your API has changed a lot. What surprises you the most with the latest models like o3, and what emergent behaviors or use cases are standing out to you right now?
SAM ALTMAN: I think we’re in a really interesting time. We haven’t been in one of these for a while. But right now we’re in an interesting time where there’s a product overhang: what the models are capable of is way up here, and the products that people have figured out how to build are way down here. Even if the models got no better, which of course they will, there’s a huge amount of new stuff to build.
And also, like, last week o3 cost five times as much as it does this week, and that’s going to keep going. I think people will be astonished at how much the price per performance falls. We have an open source model coming out soon. I don’t want to, like, steal the team’s glory and pre-announce this, but I think you all will be astonished. I think it will be much better than you’re hoping for.
And the ability to, like, use it to run incredibly powerful models locally is going to really, really surprise people with what’s possible. So you have this world where the model capability has gone into, like, a very new realm, the cost of the APIs is going to keep falling quite dramatically, and the open source models are going to be super great.
And I think we have not yet seen the level of new product innovation that the reasoning models are capable of, which makes sense, they’re pretty new, but this is like an exceptional time to go build a company that takes advantage of this sort of new thing that exists, this sort of new square on a periodic table that no one has built with yet. So only in the last month I think have we really started to see startups that are saying, okay, like reasoning models are different, you know, the whole interaction model is different and really building for that.
Memory: The Path to AI Companionship
GARRY TAN: I mean, for me, even memory has turned into... it feels like I’m talking to someone who knows me, which is interesting.
SAM ALTMAN: Yeah, memory is my favorite feature that we’ve launched this year. I don’t think most people at OpenAI would say that because we’ve launched a lot of stuff, but I love memory in ChatGPT, and I think it points to where we will hopefully go with the product, which is you will have this entity that gets to know you, that connects to all your stuff, and that is like proactively helping you.
It won’t just be like, you send a message and it sends you one back, but it’ll be running all the time. It’ll be looking at your stuff, it’ll know when to send you a message, it’ll know when to go do something on your behalf. You’ll have special new devices and it’ll be integrated on every other service you use, and you’ll just have this thing running with you throughout your life. I think memory is the first time where people can sort of see that coming.
The “Her” Future: Gradual but Inevitable
GARRY TAN: Back in the day you tweeted a little bit about Her. When is that coming? Can you give us an alpha leak around that?
SAM ALTMAN: I think gradually is the answer. No, no. If I had a date in mind, I would probably just be excited and tell you. But, like, it’s a little bit here with memory, right? It’ll be a little more here when it’s persistently running in the background and sending you stuff. It’ll be a lot more here when we ship the first new device. But I think the key of Her is not the little piece of hardware. It’s that this thing got to a point where it could run in the background and feel like a sort of AI companion.
MCP Integration and Data Connections
GARRY TAN: I guess we’re starting to see the power of LLMs with integrations into your real data. You know, I’ve heard rumors that MCP is coming to OpenAI.
SAM ALTMAN: I think today.
GARRY TAN: Yeah. Oh, today?
SAM ALTMAN: I think so.
GARRY TAN: Fantastic. What has been surprising about the actual integrations? Like, have you been seeing people actually operating on their core database? You know, at YC, we actually have that agent infrastructure internally and we use it all the time.
SAM ALTMAN: Definitely people are starting to use ChatGPT as this, like, operating system with everything with their whole lives in it. And integrating into as many data sources as possible is important. Devices that are always with you, like new kinds of web browsers, the connection to all data sources, memory, and then a model that’s persistently running. Put all that together, I think you get to, like, a pretty powerful place.
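For readers unfamiliar with the MCP integration mentioned above, here is a minimal sketch of what exposing a data source over the Model Context Protocol can look like, using the open-source Python `mcp` SDK (FastMCP). The `lookup_customer` tool, its in-memory data, and the server name are hypothetical placeholders for illustration, not anything OpenAI or YC ships.

```python
# Minimal MCP server sketch: exposes one hypothetical data-source tool that
# an MCP-capable client (for example, a chat assistant) could call.
# Requires the open-source Python SDK:  pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")  # hypothetical server name

# Hypothetical in-memory "database" standing in for a real data source.
CUSTOMERS = {
    "acme": {"plan": "enterprise", "open_tickets": 2},
    "globex": {"plan": "starter", "open_tickets": 0},
}

@mcp.tool()
def lookup_customer(name: str) -> dict:
    """Return basic account info for a customer by name."""
    return CUSTOMERS.get(name.lower(), {"error": "not found"})

if __name__ == "__main__":
    # Speaks MCP over stdio so a client can attach it as a data connection.
    mcp.run()
```

The point of the protocol is the separation Sam describes: the assistant sees a named tool it can call, while the data and access logic stay on your side of the connection.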
Cloud vs. Local: The Future of AI Computing
GARRY TAN: Do you think that’ll be in the cloud in the future or will it be on our desktop or some mix of both?
SAM ALTMAN: Some mix of all of that. Definitely people will run local models for some things. Like, man, if we could push, like, half the ChatGPT workload onto your local devices, no one would be happier than us. As for our cloud, I think we will run the largest and most expensive piece of infrastructure in the world pretty soon. So if we could push some of that off, that’d be great. But a lot of it will run on the cloud.
Scaling to the Fifth Biggest Website
GARRY TAN: Is it surprising to you how hard it is to get compute?
SAM ALTMAN: I mean, we’ve gotten really good at it, but it is hard. We went from, like, zero, chatgpt.com didn’t exist two and a half years ago, to, like, the fifth biggest website in the world. It’ll be the third at some point, and hopefully someday the first if our current growth rates continue. And I think doing that is just hard no matter what. You usually get longer than we’ve gotten to scale up infrastructure for a new company. But, you know, there are a lot of people that want to help.
The Convergence: GPT-5 and Beyond
GARRY TAN: Well, it’s incredible, incredible work that you guys have been doing. We’re seeing reasoning models like o3 and o4-mini evolve in parallel with multimodal models like GPT-4o. What happens when these two threads converge, and what’s the vision for GPT-5 and beyond?
SAM ALTMAN: I mean, we won’t get all the way there with GPT-5, but eventually we do want one integrated model that can, like, reason really hard when it needs to and generate real-time video when it needs to do that. If you ask a question, you could imagine it thinking super hard, doing some research, writing a bunch of code just in time for, like, a brand new app only for you to use, or kind of rendering live video that you can interact with.
So I think that will feel like a real new kind of computer interface. The AI sort of somewhat already does. But when we get to a model that has, like, true complete multimodality, like perfect video, perfect coding, everything, and deep reasoning, that will feel quite powerful.
The Robot Future: Free Humanoids with Subscriptions
GARRY TAN: It seems like that might be a hop step over to do the embodied aspect. You know, having vision, having speech, and having reasoning is a hop step to, you know, basically the robot we want.
SAM ALTMAN: Our strategy has been to nail that first and then make sure we can connect that to a robot. But the time for the robot is coming soon. I think I am very excited about a world where, when you sign up for, like, the highest tier of the ChatGPT subscription, we send you a free humanoid robot.
GARRY TAN: I mean that future is going to be pretty wild being able to have robots that do real work in the real world.
The Future of Robotics and Manufacturing
SAM ALTMAN: I think we’re not that far away now. The mechanical engineering of robots has been quite difficult and the sort of AI for the cognitive part has been quite difficult too, but it feels within grasp and I think in a few years robots will start to do super useful stuff.
Making a billion robots is still going to take a while, but I don’t know. I’m interested in the question of how many robots you need to fully automate the supply chain. Like, if you make a million humanoid robots the old-fashioned way, can they run the entire supply chain, drive the mining equipment, drive the container ships, run the foundries, and make the new robots? Then maybe you actually can get a lot of robots in the world quickly. But the demand for humanoid robots in the world will be far more than we know how to think about with the current supply chain.
GARRY TAN: I guess when you were sitting in my seat, one of the things you led was a lot more investment into hard tech at YC. Sitting here where we are geopolitically, what do we need to do to make sure that America can actually have manufacturing and industrial capacity? We can’t even build precision screws and large sheet metal without crazy cost overruns. What can we do to make sure that happens here?
SAM ALTMAN: There are all of these answers that people throw around and have thrown around the same things for a while and it clearly hasn’t worked. So I think all of the policy is worth trying, but my instinct is we need to try something new. We shouldn’t keep trying the same failed stuff.
And like AI and robotics does give us a new possibility of a way to bring manufacturing back here and to bring sort of these complex industries here in a really important new way. And I would say that’s at least worth trying.
Building Defensible AI Startups
GARRY TAN: Yeah. What does defensibility look like here? One of the classic questions is, how do I start a startup that doesn’t get run over by OpenAI? That’s sort of the number one question that’s in our chat.
SAM ALTMAN: We don’t want to run you over. Look, we’re going to do our thing hopefully very well. We are going to try to make the best super assistant out of ChatGPT that we can. We’re going to add the things that we think we need to add to that. But that is like one small part of the opportunity in front of us.
And it makes us sad when people are like, I’m going to start a new startup and I’m going to like make a version of ChatGPT because we think we’re going to do that pretty well. And we have like, kind of a big head start, but there is so much more space to go after and there are so many incredible other companies that have been built using our platform.
We would like to make it easier for you all. We would like to do more things. Like, finally, now you can imagine that ChatGPT could drive a lot of traffic to new startups, and that there’s, like, a new kind of app or agent store, or whatever you want to call it, that we could do inside of ChatGPT and use to drive traffic to new startups to help.
You could imagine that we could do like a sign in with OpenAI and people could bring their personalized model and easily connect it to a new startup and that would probably help in a bunch of ways. So we want to be a platform for other people to build stuff. Our advice is like, don’t build our core chat assistant.
But there is another problem, which is, and this is the same for every kind of moment that I’ve seen in startup history: people get excited about the same thing at the same time. And so rather than go build the thing that you have thought of that is not what everybody else is doing, we are, like, very social creatures and we get very influenced by what other people are doing.
And I bet if Garry listed off the five ideas that he hears most often of what people want to build with AI, like half the room would raise their hands for working on one of those five. And there is hopefully in this room, the person who’s going to start a company that is much bigger than OpenAI someday. And I would bet that person is not working on any of the five.
So it is hard to build something defensible if everybody else is trying to do the same thing. Sometimes it works. It’s not impossible. But the best, the most enduring companies are usually not doing the same thing as everybody else. And that gives you time to figure out what the great product is, how to build the technology, before you have to answer the defensibility question.
It took us a long time to figure out how to answer the defensibility question for ChatGPT. We had built this thing, and for a long time the only defensibility was that we had the only product out in the market, and then we kind of had a brand that started to be well known, and now we have things like memory and connections and a whole bunch of other stuff that really is defensible.
But that was like a fair criticism of us for a long time. We didn’t have any defensibility strategy. We just, like, had the only good thing out there, and then you have some window before which you have to build defensibility.
Being Contrarian and Facing Criticism
GARRY TAN: One of the things we’ve talked about in the past is that we both are big followers of Peter Thiel in that he talks a lot about being contrarian, but I think that you’ve…
SAM ALTMAN: Peter is a genius.
GARRY TAN: Absolutely. And you’ve been contrarian in really fundamental ways. I mean, going back to the beginning of the conversation, people thought, oh, this idea that the scaling laws are valuable, today it’s taken as basic truth, but it was exactly the opposite of ground truth not that many years ago. When you got that pushback, what did you and your team feel? Did you say, I won’t do what you tell me, I’m going to push back against it, getting pushed back means that this is a contrarian area and we’re going to bet here and we’re going to be right?
SAM ALTMAN: It is hard to have conviction in the face of a lot of other people telling you you’re wrong. And I think people who don’t, who say it’s easy are not being honest. It gets easier over time.
But, like, I remember one time, I can say this one because it got publicized, early, not early, a few years into OpenAI, where Elon sent us this really mean email. We’d been working together for a while, and he said we had a 0% chance of success. Not 0.1%, that we were totally failing. We had showed him, like, GPT-1 recently. He was like, this is crap. It’s not going to work. It doesn’t make sense.
And he was really a hero of mine at the time. And I remember going home that night and being like, what if he’s right? Like, this sucks. You’re working so hard on this thing, like, you’re pouring your life force into it, and you have these people who are smart and that you look up to, and they say you are totally wrong, or this is just never going to work, or you don’t have defensibility, someone’s going to kill you, this is going to happen, that’s going to happen.
And I don’t have a magic answer other than it’s really tough and it gets significantly easier over time, but it’s going to happen to all of you. And you just, like, get knocked down and get back up and brush yourself off and try to keep going.
The Year of AI Agents
GARRY TAN: Let’s talk AI agents. That’s sort of level three AGI. This is the year, I think Greg Brockman talked about recently, this is the year of the agent. With tools like Operator and Code Interpreter, what kind of workflows do you think will disappear or appear that we just aren’t ready for yet?
SAM ALTMAN: For a long time, ChatGPT was like a Google replacement. You could ask it something that was about as long as a Google query, maybe, like, half an hour worth of Google queries that it could assemble together. And that was still pretty good, but it still felt like a more advanced version of search.
But now you start to see things where you can really give a task to Codex, for example, or to Deep Research. And you have this thing go off and do a bunch of stuff and come back to you with a proposal. It’s like a very junior employee that can work on something for a short period of time. And if you think about how much of the work that the world does is work that can be done in front of a computer in, like, few-hour chunks, where you then have someone say, like, okay, that was good enough or not, it’s quite a lot.
This is part of that overhang we were talking about earlier, but I think this is going to go quite far. And I think with current o3, to say nothing of our next model, you can build a lot of experiences like this.
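As a rough sketch of the delegate-and-report pattern described here, the snippet below hands a task to a model and lets it call a single tool in a loop until it returns a proposal. It uses the OpenAI Chat Completions tool-calling API; the `run_shell` tool, the model name, and the example task are illustrative assumptions, not how Codex or Deep Research actually work internally.

```python
# Sketch of a "junior employee" agent loop: give the model a task, let it
# call a tool repeatedly, and collect a final written proposal.
import json
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command in the project checkout and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def run_shell(command: str) -> str:
    out = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
    return (out.stdout + out.stderr)[-4000:]  # truncate to keep context small

def run_task(task: str, model: str = "o3", max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(model=model, messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content or ""          # the model's final proposal
        messages.append(msg)                   # keep the assistant's tool request
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_shell(**args),  # feed the tool result back
            })
    return "Stopped after max_steps without a final answer."

if __name__ == "__main__":
    print(run_task("Look at ./src, run the tests, and propose a short cleanup plan."))
```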
The Future of Human-Computer Interaction
GARRY TAN: How do you see the future of human computer interaction and interfaces, and what are sort of the limitations of those interfaces that motivated you?
SAM ALTMAN: One of the things that I think sci-fi got right is the idea that the interface almost melts away. Like, voice interfaces today we think of as something that is kind of sucky because they don’t work that well. But in theory, if you could say to a computer, this is exactly what I want to happen today, and if there are any changes, if, like, I’m delayed, if something happens, I trust you to go off and do all those things, but I don’t want to be interrupted, I don’t want to think about it, and it just did it all, and you trusted that it worked.
That would be an interface that almost melted away, except when, like a super great human assistant, it needed to talk to you. But you would be, like, really thrilled. When I use my phone today, I feel like I am walking down Times Square in New York getting bumped into by people. I love my phone. It’s an incredible piece of technology. But it’s like a notification here, this thing happening, this thing popping up, bright colors, all kinds of flashing things at me. It’s just stressful.
And I can imagine an interface where the computer mostly melts away. It does the stuff I need, but I really trust that it’s going to do a great job of surfacing information to me, making judgment calls about when I have to do it, acting on my behalf when it should. And I’m quite excited for that. I’m not going to tell you what the new device is. Well, I’ll tell you, like, one-on-one, but I’m not going to tell everyone here. But I hope we can sort of show people a different way to have computers.
GARRY TAN: Is that one of the reasons why you brought on one of the greatest living designers on the planet in Jony Ive?
SAM ALTMAN: Yeah, he is amazing. He really lives up to all the hype. I think we’ve only had kind of two big revolutions in computer interfaces really in the last, like 50 years. We had the keyboard and mouse and screen and then we had touch and phones. And the opportunity to do a new one doesn’t come along that often. And I think AI really does totally open the playing field for something completely new. And I think if you’ve got to pick one person to bet on to figure that out, he’s the obvious bet.
Just-in-Time Software and the Future of Development
GARRY TAN: Yeah. So one of the things that we’ve been debating at YC, and I don’t know if this is good, it might be scary for a lot of software engineers who want to create B2B SaaS, is this idea that, what if in the future you had your underlying database, you have an API layer that is your access control and enforces your business logic, and then the interface is the LLM? Like, your computer is literally the agent and you have just-in-time software. There are, like, complex flows, and you’re just going to go in and it’ll code-gen an artifact or a pane for you that does that thing you wanted, and it’ll go in a file and it’ll bring it back if you ever need it.
SAM ALTMAN: That’s going to happen.
GARRY TAN: Yeah.
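Here is a minimal sketch of the just-in-time software flow Garry describes: business logic stays behind a fixed API layer, and the model only generates a small, cached UI pane on demand. The `/api/invoices` endpoints, the artifact directory, the prompt, and the model name are all hypothetical assumptions made for illustration.

```python
# Sketch of just-in-time UI generation: access control and business logic
# live behind a fixed API layer; the LLM only generates a throwaway
# front-end pane, which is cached on disk and reused on the next request.
import hashlib
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
ARTIFACT_DIR = Path("artifacts")
ARTIFACT_DIR.mkdir(exist_ok=True)

API_CONTRACT = """
GET  /api/invoices?status=overdue   -> JSON list of invoices
POST /api/invoices/{id}/remind      -> sends a reminder (auth enforced server-side)
"""  # hypothetical endpoints; the UI never bypasses this layer

def get_pane(request: str, model: str = "gpt-4o") -> Path:
    """Return a cached HTML pane for this request, generating it if needed."""
    key = hashlib.sha256(request.encode()).hexdigest()[:16]
    path = ARTIFACT_DIR / f"{key}.html"
    if path.exists():                      # "bring it back if you ever need it"
        return path
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                "Generate a single self-contained HTML+JS page that fulfills this "
                f"request: {request}\nOnly call these endpoints:\n{API_CONTRACT}"
            ),
        }],
    )
    path.write_text(resp.choices[0].message.content)  # may need fence stripping in practice
    return path

if __name__ == "__main__":
    print(get_pane("Show overdue invoices and let me send reminders"))
```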
The Best Time to Start a Company
SAM ALTMAN: Look, there are two ways you can look at this. First of all, I assume you all are, like, starting startups, or have started startups, or are thinking about starting startups. This is the best fucking time ever in the history of technology, ever, period, to start a company. And part of the reason it’s the best is because, like, the ground is shaking. And it’s true, there are a lot of these challenges.
So on one hand you can look at something like that and say, we have been a SaaS company and now like all of the code can just be generated right in time when someone needs it and what does that mean for us? Or you can look at it and say, wow, this is going to happen, but it’s going to happen to everybody. And the way startups win is when they can iterate faster than big companies and they can do it at a much lower cost.
Like, big companies have a lot of advantages, but they iterate very slowly, and, you know, if something is like very cheap, then a lot of their big advantages go away. So you can look at all of these problems one way or another. But the way I would recommend looking at them is: everybody is going to face the same challenges and opportunities, but when the clock cycle of the industry changes this much, startups almost always win. And we’ve probably never seen it change this much. If you act on it from that direction, I think you’ll be in incredible shape.
Maybe you can invite me sometime to do a talk about what the areas of defensibility that you can build over time are, because I think that is the inherent question. People are like, oh, okay, I’m a SaaS company. There’s going to be just in time software. I think the question behind the question is like, what are actual defensibility strategies? So that would be a fun talk someday, I guess.
GARRY TAN: Backstage at one of the last events we had, we were talking about this book that’s sort of like the classic McKinseyism, which is Seven Powers. And I was just thinking about that. I never would have thought the two of us technologists would be sitting around actually citing a book that McKinsey consultants are known for.
SAM ALTMAN: Feels so wrong.
GARRY TAN: Yeah, I don’t know.
SAM ALTMAN: Aesthetically it feels terrible, but yes, let’s.
The Age of Intelligence
GARRY TAN: Seven Powers. I guess we’re entering this age of intelligence. I love that essay of yours. What do you think this era will mean for, you know, how we live, how we work, and how we create value for each other as a society?
SAM ALTMAN: You know, in some sense, the whole arc of technology is one story, which is: we discover more science, we build better tools, all of society builds the scaffolding a little bit higher, and we have this more impressive tool chain. And the whole point of it is that one person can do way more than they could before. And this has been going on for a long, long time, each generation certainly. I mean, if you compare a person today to a person 100 or 1,000 years ago, one person is incredibly more capable.
And the kind of, like, social contract is that you put something in, you know, you build the next layer of scaffolding. But what someone can do now with this new set of tools, with this new layer that’s built in, is pretty incredible. And I think one of the things that will feel most different about these next 10 years versus these last 10 years is how much a single person, or a small group of people with a lot of agency, can get done.
And that is a bigger deal than it sounds like, because coordination costs are huge. And when we can empower people with more knowledge, more tools, more resources, whatever, I think we won’t just see a little bit more stuff get built; because of these coordination costs across people, we’ll see a real step change. I think the amount that one person or a small team can get done, the satisfaction in doing that, and most importantly, like, the quality of stuff we’ll all get for each other will be quite remarkable.
When I think back about the OpenAI story, I often think about just the key few tens of people that did the amazing work that led to what we all have now. But I try to remember that I also have to think about, like, the tens of millions of people, maybe it’s more, throughout history, that started, like, digging rocks out of the ground, figuring out how semiconductors work, building computers, building the Internet, and on and on and on, that let this small group be able to work at such a high level of impact that they never would have been able to without the collective output of society.
Living on the Leading Edge
GARRY TAN: Is it surprising to you to what degree... I mean, this room, you’re preaching to the room of the converted. And this is awesome, by the way. I mean, this is like the collected set of people who are going to go create the future. But there’s, you know...
SAM ALTMAN: Yeah, there’s maybe, like, never been a gathering like this in one place before. This is very cool to see.
GARRY TAN: But at the same time, you know, in some ways this is the leading cutting edge of all of society, because there are seven and a half billion people who probably, you know, have not even tried this stuff yet. And not only that, their main interaction with it is that it doesn’t work, that it hallucinates. What do you have to say to the 3,000 people in front of you right now? Just, this is the thin edge of the spear. We are literally teaching people and giving people this technology.
SAM ALTMAN: First of all, that’s, like, a great place to be. One of the most fun things about working at YC is you get to live on the leading edge and you get to be around the people who are the advance guard, and that’s just, like, a fun way to live your life, and you get to see what’s coming and, you know, hopefully have some small amount of input into shaping it.
But I don’t know, I think AI is, like, somewhat mainstream right now. The way that it’s not is that most people still think of AI as ChatGPT, and a lot of people use ChatGPT, but they use it like a chatbot and they have not yet wrapped their heads around what’s coming next. And probably you all have. But I don’t know, it’s, like, a great privilege to get to live a little bit in the future and go build stuff for everybody else coming along.
Hiring Lessons
GARRY TAN: So you’re sort of one of the best people in the world at bringing together the smartest people. What are some of the hardest lessons you’ve had to learn about hiring? A lot of the people in this room, they have never managed a person before, let alone gotten someone to quit their, you know, six to seven figure job at some big company to come work on their revolution.
SAM ALTMAN: Hiring really smart people who are clearly really driven and really productive and can work as part of a team I think does get you 90% of the way there. And the degree to which people focus on other things to hire for always surprises me. So I think, you know, given that we can’t do the full 45 minutes right now: really smart people, driven, curious, self-motivated, hardworking, with a good track record of accomplishment, who can work really well as part of a team and are sort of aligned with the company’s vision, so that everybody’s at least going in the same direction. That works pretty well.
GARRY TAN: I mean, by strong track record, do you mean the person who’s, like, you know, sort of been an administrator and had, like, you know, the top name at the top institution for 20 years? Or do you mean, like... because you went the other way.
SAM ALTMAN: I don’t, especially early in a startup, I don’t believe in hiring those people. Their experience is valuable and there are times when you really need that. But I have not had success, and to be frank, YC has not had that much success, trying to start with, like, the very senior, eminent administrator as, you know, the first hire in a startup. I would take young, scrappy, but clearly gets stuff done, over the person who has, like, the extremely polished track record. There will come a time when you need some of those people later.
But I don’t know how you do it. When I was, like, reading YC applications, I would, like, never look at the resume items. You know, you worked at, like, Google or went to this college, I never cared. I would always go right to, like, what’s the most impressive stuff you’ve done? And then sometimes I would, like, not be convinced by that and go look at the resume. But that was always, like, a backup to me, a secondary thing.
GARRY TAN: So you sort of look at what they’ve actually coded, what they’ve built, like, their velocity, how they think about problems and solve them.
SAM ALTMAN: I see PB back there. He has this quote, I hope it’s his quote because I’ve attributed it to him a bunch of times: hire for slope, not y-intercept. And I think that’s just, like, unbelievably great advice.
Being CEO of OpenAI
GARRY TAN: Let’s talk about being CEO of OpenAI. What are some of the hardest lessons there?
SAM ALTMAN: Just overall I don’t recommend it. No one single challenge would be that hard. But the number of things we have to do at the same time and the kind of like number of other big companies that are gunning for us in various ways, it’s just like more context than I thought it was possible to handle at once and more sort of like switching from like big, big decision to like totally unrelated but also huge decision.
AI for Science
GARRY TAN: Looking ahead 10 to 20 years, what are you sort of most personally excited about? You know, and what should people be building now to make that future possible? You know, there are people who are scientists, there are people who are software engineers, there are people who are, I mean this is an all technical crowd.
SAM ALTMAN: Look, there’s a lot. You know, in 10 or 20 years, unless something goes hugely wrong, we’ll have, like, unimaginable superintelligence, and I’m very excited to see how that goes. But forced to pick one thing, to not just leave it as, like, a vague answer, I think AI for science is what I’m personally most excited about.
I am a believer that, to a first-order approximation, all long-term sustainable economic growth in the world, like, everything that leads to people’s lives getting better, is basically discovering new science and having reasonably good governance and institutions so that that science can get developed and shared with the world. And if we could vastly increase the rate of new scientific discovery with AI, I believe that would compound to just incredible increases and wonders for everyone’s lives. So I think I’d pick that, on that time frame.
Energy and AI Connection
GARRY TAN: I guess one of the things I’ve always been really impressed by is, you know, you personally recruited Helion to come do Y Combinator, and they’re doing incredible things over on the fusion side. Was that something that you were thinking about even all the way back then? Or, you know, obviously energy and climate were sort of a part of, you know, what everyone was worried about even back then.
SAM ALTMAN: But this is a little bit embarrassing. I’ve been obsessed with energy and AI as, like, the two things that I thought would be the most important, or at least the ones that I felt most passionate about, for a long time, and really the two areas that I knew I wanted to concentrate time and capital towards.
I cannot recall ever thinking, until, like, after starting OpenAI, that they were going to be so obviously related, that energy would eventually be the fundamental limiter on how much intelligence we could have. And I don’t know how I missed that, because I usually am good at thinking about things like that. But I really did think of them as very independent: you know, we were going to need AI to have all the ideas, and energy to make all this stuff happen in the world. And obviously, right after starting OpenAI, I got obsessed with energy for AI. But, like, pre-2015, I think I thought of them as orthogonal vectors.
GARRY TAN: I mean, I’m sure you’ve seen that chart that all the effective accelerationists in the room have seen, around basically having a high standard of living. Like, the sort of... really, it’s...
SAM ALTMAN: I’m obsessed with this chart. I’ve been obsessed with that chart a long time.
GARRY TAN: It’s directly related to the amount of energy that any given person has access to.
SAM ALTMAN: Yeah, I think this is one of the most amazing charts: over a long, long period of human history, the correlation of quality of life with abundance of energy and cost of energy. That chart, and charts like that, were a significant reason that I got obsessed with energy in the first place. It is just this, like, crazy high-impact thing.
Building Abundance Through Technology
GARRY TAN: It sounds like it wasn’t entirely interdependent. It was more that you had twin interests, and you’ve literally woven them together.
SAM ALTMAN: I had like the one interest of like radical abundance and just like what were the kind of technological leverage points to just like make the future like wildly different and better. And these are the two kind of key things for that, but not as much as the same vector.
Now I think a lot about, like, how much energy can we actually build on Earth before we just heat the planet too much from running the GPUs and like, how long can we go before we have to put all the GPUs in space? But at the time, yeah, I really thought of them differently.
GARRY TAN: I mean, it seems like one of the defining beliefs that technologists uniquely, ideally, have is that they believe we can actually create that sort of abundance. You know, if you have intelligence on tap and then you have energy on tap, then how does that go? It’s like, you know, all watched over by machines of loving grace.
SAM ALTMAN: I’ve never been to one of those degrowther conferences in Europe or whatever, but I’ve always kind of wanted to go to one.
GARRY TAN: This is the anti-degrowth conference.
SAM ALTMAN: This is the anti-degrowth conference, totally. But I would, like, love to be sitting, you know, in the dark, in the cold, with no one pulling out their phones, and just, like, talking about how horrible everything was and how there was no hope. Like, I would love to experience that mindset once, because I’ve never felt it.
And I think it is, like, one of the movements that has been hardest for me to ever identify with. Obviously this is, like, my crew and my world, but the sort of optimism of startups, of San Francisco, of the technology industry, of AI, of what all of you will do, that is, like, the natural space my brain abides in. And it’s very hard for me to really empathize with the other side of that. But I’m pretty sure we’re right and they’re wrong.
The Path to Technological Abundance
GARRY TAN: How do we get there, though, right? This incredible vision of technology actually creating abundance for others. I mean, you’ve already done so much, but, you know, point us the way. Like, how else do we get there? How do we make it faster? You know, does government play a role in this?
SAM ALTMAN: Just about five years ago, like, pretty much this week, we put GPT-3 into an API and people started playing with it, and it was barely usable. It was quite embarrassing. And in five years we have gone from this thing that could barely write a sentence to a thing that is, like, you know, PhD-level intelligence in most areas.
For five more years, I think we’ll be able to maintain the same rate of progress. And I think if we do that, and if we also build out the infrastructure to serve that to people, then everybody in this room will figure out how to take that technology and adapt it to what everybody needs.
The analogy I like most for AI is the transistor, as the historical technical analogy. Some people figured out a new, really important scientific discovery, and society, the economy, whatever you want to call it, just got to work, just did its thing. The magic of that: it just figured out how to make incredible value for people and, really, over a fairly short period of decades, significantly ramp up quality of life.
I think this will be even faster and steeper than that, but I think it’ll go in the same direction. In the same way, we need to make the great technology and figure out the remaining scientific stuff, which I don’t think there’s much left of. We need to figure out how to build out the infrastructure that you all will need to be able to serve this. And then you all have got to go figure out what people in the world need with this new magic.
Early Y Combinator Days
GARRY TAN: So let’s flash back to 2005, the very first batch of Y Combinator. How did you hear about Paul Graham? You were reading his essays.
SAM ALTMAN: I was reading his essays. So I’d heard about, like, he kind of had this cult following on the Internet. But I heard about what was then called the Summer Founders Program, and now is just called Y Combinator, from Blake Ross, who I lived in the same freshman dorm with and who posted about it on Facebook.
GARRY TAN: And then I think Paul said, oh, you’re a freshman. You know, there’s, like, another batch coming. And what did you reply to him?
SAM ALTMAN: By email. You know, funny you bring that up, I just dug up the email, like, a couple of days ago, because I felt I had been misquoted over time.
GARRY TAN: I’m curious.
SAM ALTMAN: And his telling of the story is I said, I’m a sophomore and I’m coming, but I wrote a much nicer thing. It was like, oh, maybe there was some misunderstanding. Actually, I’m a sophomore and I can still make it. And I would love to, if that’s still okay to come the next day.
Advice for Entrepreneurs
GARRY TAN: So in some ways, the wild thing is you’re sitting in front of 3,000 people who are kind of, you know, sitting where you were back in 2005. What would you say to, you know, the Sam Altman from that time, given, you know, all the things you’ve seen, all the things you’ve learned since? Like, what are the things that you’re most surprised you didn’t know? I mean, it just took... I mean, you’ve been through it, you know, like, you’ve done it.
SAM ALTMAN: I wish someone had, like, taught me the importance of, like, conviction and resilience over a long period of time. People don’t really talk about how hard that is. It’s, like, easier for a little while, but your reserves kind of, like, wear down, and you have to figure out how to keep that going for a long period of time. Also just, sort of, like, trust that it’s eventually going to work out.
Like, obviously my first startup didn’t work that well. I think a lot of people kind of give up after one failed startup, but startups don’t work out all the time. And learning how to keep going through that, keep working through that is I think really important.
Developing like trust in your own instincts and increasing that trust as you refine your decision making and instincts over time, I think that’s really important. Kind of courage to work on stuff that is out of fashion but is what you believe and what you care about. I think that’s really important.
I had a kid recently, and the thing everyone tells you when you have a kid is that it is the best thing you will ever do, but also it is the hardest thing you will ever do. Like, the good parts are much better than you can imagine, and the hard parts are much harder. That is all totally true. And that is also basically what I feel like being an entrepreneur is like. The good parts are really great, better than you think. And the hard parts are, like, shockingly much harder than anyone can express in a way that makes any sense to you. And you have to just keep going.
GARRY TAN: Sam Altman, everyone. Thank you.
SAM ALTMAN: Thank you.