Read the full transcript of former Google CEO Eric Schmidt’s interview on the Moonshots with Peter Diamandis podcast, “What Artificial Superintelligence Will Actually Look Like,” July 17, 2025.
Welcome Back to Moonshots
INTERVIEWER: Eric, welcome back to Moonshots.
ERIC SCHMIDT: It’s great to be here with you guys.
INTERVIEWER: Thank you. It’s been a long road since I first met you at Google. I remember our first conversations were fantastic. It’s been a crazy month in the world of AI, but I think every month from here is going to be a crazy month. And so I’d love to hit on a number of subjects and get your take on them.
ERIC SCHMIDT: Of course.
AI is Under-Hyped: The Learning Machine Acceleration
INTERVIEWER: I want to start with probably the most important point that you’ve made recently, one that got a lot of traction and attention: that AI is under-hyped, while the rest of the world is either confused, lost, or thinks it’s not impacting us. We’ll get into more detail, but quickly, what’s the most important point to make there?
ERIC SCHMIDT: AI is a learning machine. And in network effect businesses, when the learning machine learns faster, everything accelerates. It accelerates to its natural limit. The natural limit is electricity. Not chips – electricity.
INTERVIEWER: Really?
The Energy Crisis: Nuclear Power and AI’s Massive Demands
INTERVIEWER: Okay, so that gets me to the next point here, which is discussion on AI and energy. So we saw recently Meta announcing that they signed a 20-year nuclear contract with Constellation Energy. We’ve seen Google, Microsoft, Amazon, everybody buying basically nuclear capacity right now. That’s got to be weird that private companies are basically taking over into their own hands what was utility function before?
ERIC SCHMIDT: Well, just to be cynical, I’m so glad those companies plan to be around the 20 years that it’s going to take to get the nuclear power plants built.
INTERVIEWER: And there have been two in the last, what, 30 years?
ERIC SCHMIDT: There’s excitement that there’s an SMR, a small modular reactor, coming in at 300 megawatts, but it won’t start till 2030. As important as nuclear power is, both fission and fusion, it’s not going to arrive in time to get us what we need as a globe to deal with our many problems and the many opportunities that are before us.
INTERVIEWER: So if you look at the sort of three-year timeline toward AGI, do you think if you started a fusion reactor project today that won’t come online for 5, 6, 7 years, is there a probability that the AGI comes up with some other breakthrough, fusion or otherwise, that makes it irrelevant before it even gets online?
ERIC SCHMIDT: A very good question. We don’t know what artificial general intelligence will deliver, and we certainly don’t know what superintelligence will deliver. But we know it’s coming. So first we need to plan for it. And there are lots of issues as well as opportunities there.
But the fact of the matter is that the computing needs that we need now are going to come from traditional energy suppliers in places like the United States and the Arab world and Canada and the Western world. And it’s important to note that China has lots of electricity, so if they get the chips, it’s going to be one heck of a race.
The Scale of Computing Power Requirements
INTERVIEWER: Yeah, they’ve been scaling it at two or three times the US. The US has been flat for how long in terms of energy production?
ERIC SCHMIDT: From my perspective, infinite. In fact, electricity demand declined for a while, as have overall energy needs, because of conservation and other things. But the data center story is the story of the energy people, right? And you sit there and you go, “How could these data centers use so much power?”, especially when you think about how little power our brains use.
Well, these are our best approximation in digital form of how our brains work. But when they start working together, they become super brains. The promise of a superbrain with a 1 GW data center, for example, is so palpable, people are going crazy.
And by the way, the economics of these things are unproven. How much revenue do you have to have to justify $50 billion in capital? Well, if you depreciate it over three or four years, you need $10 or $15 billion of capital spend per year just to handle the infrastructure. These are huge businesses requiring huge revenue, which in most places is not there yet.
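The capital arithmetic Schmidt sketches can be checked with a straight-line depreciation estimate. The $50 billion figure and the three-to-four-year lifetimes are from the conversation; the calculation itself is purely illustrative:

```python
# Back-of-envelope check of the capex arithmetic in the conversation:
# straight-line depreciation of a data-center build-out.

def annual_capex_burden(total_capital: float, depreciation_years: int) -> float:
    """Straight-line depreciation: capital cost spread evenly per year."""
    return total_capital / depreciation_years

capital = 50e9  # the $50 billion build-out Schmidt mentions

for years in (3, 4):
    burden = annual_capex_burden(capital, years)
    print(f"Depreciated over {years} years: ${burden / 1e9:.1f}B/year")
```

Three-year depreciation gives roughly $16.7B per year and four-year gives $12.5B, broadly in line with the “10 or 15 billion” range quoted.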
The Race for Energy Efficiency
INTERVIEWER: I’m curious, there’s so much capital being invested and deployed right now in nuclear, bringing Three Mile Island back online, in fusion companies. Why isn’t there an equal amount of capital going into making the entire chipset and compute just a thousand times more energy efficient?
ERIC SCHMIDT: There is a similar amount of capital going in. There are many, many startups that are working on non-traditional ways of doing chips. The transformer architecture, which is what is powering things today, has new variants. Every week or so I get a pitch from a new startup that’s going to build inference time, test time computing which are simpler and they’re optimized for inference.
It looks like the hardware will arrive just as the software needs expand. And by the way, that’s always been true. We old timers had a phrase: “Grove giveth and Gates taketh away.” Intel would improve the chipsets, and the software people would immediately use it all, suck it all up with higher-level code. I have no reason to believe that that law, Grove and Gates’ law, has changed.
The Computational Demands of Advanced AI
If you look at the gains in the Blackwell chip from Nvidia or the MI350 chip from AMD, these chips are massive supercomputers, and yet, according to the people building these systems, we need hundreds of thousands of them just to make a data center work. That shows you the scale of what these thinking algorithms require.
Now you sit there and you go, “What could these people possibly be doing with all of these chips?” I’ll give you an example. We went from language-to-language models, which is how ChatGPT can be understood, to reasoning and thinking. If you want an OpenAI example, look at OpenAI o3, which does forward-and-back reinforcement learning and planning.
Now the cost of doing the forward and back is many orders of magnitude more than just answering your question for your PhD thesis or your college paper. That planning, the back and forth, is computationally very, very expensive.
The Path to Human-Level Intelligence
So with the best energy and the best technology today, we are able to show evidence of planning. Many people believe that if you combine planning and very deep memories, you can build human-level intelligence. Now of course it will be very expensive to start with, but humans are very, very industrious.
And furthermore, the great future companies will have AI scientists – that is, non-human scientists – AI programmers as opposed to human programmers who will accelerate their impact. So if you think about it, going back to you being the author of the abundance thesis, as best I can tell, Peter, you’ve talked about this for 20 years. You saw it first.
It sure looks like if we get enough electricity we can generate the power – in the sense of intellectual power – to generate abundance along the lines that you predicted two decades ago.
Real-World Applications and Economic Impact
ERIC SCHMIDT: Let me throw some numbers at you just to reinforce what you said. We have a couple of companies in the lab that are doing voice customer service and voice sales with the new technology, just as of the last month. The value of these conversations is ten to a thousand dollars, and the cost of the compute is maybe two or three concurrent GPUs, something like 10 or 20 cents. So they would buy massively more compute to improve the quality of the conversation.
There aren’t even close to enough. We count about 10 million concurrent phone calls that should move to AI in the next year or so. And my view of that is that’s a good tactical solution and a great business.
Enterprise Transformation Through AI
Let’s look at other examples of tactical solutions that are great businesses. And I obviously have a conflict of interest talking about Google because I love it so much. So with that in mind, look at Google’s strength in GCP. Google’s cloud product, where they have a completely fully served enterprise offering for essentially automating your company with AI.
And the remarkable thing, and this is shocking to me, is that in an enterprise you can write the tasks that you want, and then, using something called the Model Context Protocol (MCP), you can connect your databases to that, and the large language model can produce the code for your enterprise.
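As a rough sketch of the pattern Schmidt is describing, where MCP lets a model reach enterprise data: the runtime exposes a database query as a tool, the model emits a structured tool call, and the results flow back into its context. This is not the real MCP SDK; the tool names and the stand-in model function are illustrative:

```python
# Minimal sketch of the pattern MCP (Model Context Protocol) standardizes:
# the runtime exposes a database as a "tool," the model emits a structured
# tool call, and the runtime executes it and returns results.
# `fake_model_tool_call` stands in for a real LLM; all names are illustrative.
import json
import sqlite3

# An enterprise data source the model can query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "EMEA", 1200.0), (2, "APAC", 800.0), (3, "EMEA", 400.0)])

TOOLS = {
    "query_orders": {
        "description": "Run a read-only SQL query against the orders table.",
        "handler": lambda sql: db.execute(sql).fetchall(),
    }
}

def fake_model_tool_call(task: str) -> dict:
    # A real LLM would produce this JSON given the task and the tool schemas.
    return {"tool": "query_orders",
            "args": "SELECT region, SUM(total) FROM orders GROUP BY region"}

call = fake_model_tool_call("Total sales by region")
rows = TOOLS[call["tool"]]["handler"](call["args"])
print(json.dumps(rows))  # these results go back into the model's context
```

The point of the protocol is that the tool schema, not custom middleware, is the interface, which is exactly what squeezes the interstitial software Schmidt mentions next.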
Now there’s 100,000 enterprise software companies, middleware companies that grew up in the last 30 years that I’ve been working on this, that are all now in trouble because that interstitial connection is no longer needed with their business. And of course they’ll have to change as well. The good news for them is enterprises make these changes very slowly.
The Future of Programming and Mathematics
If you built a brand new enterprise architecture for ERP and MRP, you would be highly tempted not to use any of the ERP or MRP suppliers but instead to use open source libraries: build on BigQuery, or the Amazon equivalent, Redshift, and essentially build that architecture yourself. It gives you infinite flexibility, and the computer system writes most of the code.
Now, programmers don’t go away. At the moment it’s pretty clear that junior programmers go away, the sort of journeyman of the stereotype, if you will, because these systems aren’t good enough yet to automatically write all the code they need; very senior computer scientists and computer engineers are watching over them. Eventually that will go away too.
The San Francisco Consensus
One of the things to say about productivity, and I call this the San Francisco consensus, because it’s largely the view of people who operate in San Francisco, goes something like this: We’re just about to the point where we can do two things that are shocking. The first is we can replace most programming tasks by computers and we can replace most mathematical tasks by computers.
Now you sit there and you go, “Why?” Well, if you think about programming and math, they have limited language sets compared to human language. So they’re simpler computationally and they’re scale free. You can just do it and do it and do it with more electricity. You don’t need data, you don’t need real world input, you don’t need telemetry, you don’t need sensors.
So it’s likely, in my opinion, that you’re going to see world class mathematicians emerge in the next one year that are AI based and world class programmers that can appear within the next one or two years when those things are deployed at scale.
The Acceleration of Scientific Discovery
Remember, math and programming are the basis of everything. It’s an accelerant for physics, chemistry, biology, material science. So going back to things like climate change, can you imagine if we – and this goes back to your original argument, Peter – imagine if we can accelerate the discoveries of the new materials that allow us to deal with a decarbonized world. It’s very exciting.
The Timeline for AI Superintelligence
INTERVIEWER: Okay, I just want to hit this because it’s important: the potential for there to be, and I don’t want to use the phrase PhD-level other than thinking in terms of research at the PhD level, AIs that can basically attack any problem and solve it, and solve math, if you would, and physics. This idea of an AI intelligence explosion: Leopold [Aschenbrenner] put that at like ’26, ’27, heading towards digital superintelligence in the next few years. Do you buy that timeframe?
The San Francisco Consensus and Timeline Predictions
ERIC SCHMIDT: So, again, I consider that to be the San Francisco consensus. I think the dates are probably off by one and a half or two times, which is pretty close. So a reasonable prediction is that we’re going to have specialized savants in every field within five years. That’s pretty much in the bag as far as I’m concerned. And here’s why.
You have some number of humans, and then you add a million AI scientists to do something, and your slope, your rate of improvement, goes way up. We should get there. The real question is, once you have all these savants, do they unify? Do they ultimately become a superhuman?
The term we’re using is superintelligence, which implies intelligence beyond the sum of what humans can do. The race to superintelligence is incredibly important, because imagine what a superintelligence could do that we ourselves cannot imagine; it’s so much smarter than we are. And it has huge proliferation issues, competitive issues, China-versus-the-US issues, electricity issues, and so forth. We don’t even have the language, or the imagination, for the deterrence and proliferation issues of these powerful models.
Redefining Intelligence Benchmarks
Totally agree. In fact, it’s one of the great flaws actually in the original conception. You remember Singularity University and Ray Kurzweil’s books and everything. And we kind of drew this curve of rat level intelligence, then cat, then monkey, and then it hits human, and then it goes super intelligent. But it’s now really obvious when you talk to one of these multilingual models that’s explaining physics to you, that it’s already hugely super intelligent within its savant category.
And so Demis Hassabis keeps redefining AGI as well: “When it can discover relativity the same way Einstein did, with the data that was available up until that date, that’s when we have AGI.” So long before that, yeah. I think it’s worth getting the timeline right.
The Agentic Revolution
Yeah. So the following things are baked in. You’re going to have an agentic revolution, where agents are connected to solve business processes, government processes, and so forth. They will be adopted most quickly in companies that have a lot of money and a lot at stake in time and latency.
It will be adopted most slowly in places like government, which do not have an incentive for innovation and fundamentally are job programs and redistribution-of-income kinds of programs. So call it what you will. The important thing is that there will be a tip of the spear in places like financial services, certain kinds of biomedical things, startups, and so forth. And that’s the place to watch.
So all of that is going to happen. The agents are going to happen, this math thing is going to happen, the software thing is going to happen. We can debate the rate at which the biological revolution will occur, but everyone agrees that it’s right after that. We’re very close to these major biological understandings.
Physics and Synthetic Data Generation
In physics, you’re limited by data, but you can generate it synthetically. There are groups, which I’m funding, that are generating physics: essentially foundation models that can approximate algorithms that are otherwise incomputable. In other words, you have a model that can answer the question well enough for the purposes of doing physics without having to spend a million years doing the computation of, you know, quantum chromodynamics and things like that. All of that’s going to happen.
National Security Implications
The next questions have to do with what is the point in which this becomes a national emergency. And it goes something like this. Everything I’ve talked about is in the positive domain, but there’s a negative domain as well. The ability for biological attacks, obviously cyber attacks. Imagine a cyber attack that we as humans cannot conceive of, which means there’s no defense for it because no one ever thought about it. Right. These are real issues.
A biological attack: you take a virus that’s bad, and you make it undetectable by some changes in its structure. I won’t go into the details; we released a whole report at the national level on this issue.
So at some point the government, and it doesn’t appear to understand this now, is going to have to say this is very big because it affects national security, national economic strengths and so forth.
China’s AI Strategy
Now, China clearly understands this, and China is putting an enormous amount of money into this. We have slowed them down by virtue of our chip controls, but they found clever ways around them. There are also proliferation issues: many of the chips that they’re not supposed to have, they seem to be able to get.
And more importantly, as I mentioned, the algorithms are changing. And instead of having these expensive foundation models by themselves, you have continuous updating, which is called test time training. That continuous updating appears to be capable of being done with lesser power chips.
Open Source Challenges
So there are so many questions that I think we don’t know. We don’t know the role of open source because remember, open source means open weights, which means everyone can use it. A fair reading of this is that every country that’s not in the west will end up using open source because they’ll perceive it as cheaper, which transfers leadership in open source from America to China. That’s a big deal, right?
If that occurs, how much longer do the chip bans, if you will, hold, and how long before China can answer? What are the effects of the current government’s policies of getting rid of foreigners and foreign investment? What happens with the Arab data centers, assuming they work (and I’m generally supportive of them), if those things are then misused to help train Chinese models? The list just goes on and on. We just don’t know.
Regulatory Challenges and Innovation
INTERVIEWER: Okay, can I ask you probably one of the toughest questions? I don’t know if you saw Marc Andreessen. He went and talked to the Biden administration, the past administration, and said, “How are we going to deal with exactly what you just talked about: chemical, biological, radiological, and nuclear risks from big foundation models being operated by foreign countries?” And the Biden answer was, “You know, we’re going to keep it in the three or four big companies like Google, and we’ll just regulate them.” And Marc was like, “That is a surefire way to lose the race with China, because all innovation comes from a startup that you didn’t anticipate; that’s just American history, and you’re cutting off the entrepreneur from participating in this.”
So as of right now, with the open source models, the entrepreneurs are in great shape. But if you think about the models getting crazy smart a year from now, how are we going to have the balance between startups actually being able to work with the best technology, but proliferation not percolating to every country in the world?
ERIC SCHMIDT: Again, a set of unknown questions. And anybody who knows the answer to these things is not telling the full truth. The doctrine in the Biden administration was called 10 to the 26 flops. It was a consensus threshold above which the models were considered powerful enough to cause some damage. So the theory was that if you stayed below 10 to the 26, you didn’t need to be regulated, but if you were above that, you did. And the proposal in the Biden administration was to regulate both the open source and the closed source.
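For a sense of scale, the 10^26 threshold can be compared against the common “6 FLOPs per parameter per token” rule of thumb for estimating training compute. The model sizes below are made-up illustrations, not claims about any particular lab’s runs:

```python
# The Biden-era reporting threshold was 10^26 operations for a training run.
# A common rough estimate of training compute is ~6 FLOPs per parameter per
# token (the "6ND" rule of thumb). Model sizes here are illustrative only.

THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: 6 * parameters * tokens."""
    return 6 * params * tokens

runs = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),
    "1T params, 30T tokens": training_flops(1e12, 30e12),
}
for name, flops in runs.items():
    status = "above" if flops > THRESHOLD else "below"
    print(f"{name}: {flops:.1e} FLOPs ({status} 1e26)")
```

Under this estimate a 70B-parameter run on 15T tokens sits around 6.3e24 FLOPs, well below the line, while a trillion-parameter run on 30T tokens crosses it, which is roughly the regime the rule was meant to capture.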
That’s the summary. It has, of course, been ended by the Trump administration. They have not yet produced their own thinking in this area. They’re very concerned about China getting ahead, so they’ll come out with something.
Key Strategic Questions
From my perspective, the core questions are the following. Will the Chinese, even with chip restrictions, be able to use architectural changes that allow them to build models as powerful as ours? And let’s assume they’re government funded. That’s the first question.
The next question is: how will you raise $50 billion for your data center if your product is open source? In the American model, part of the reason these models are closed is that the business people and the lawyers are correctly saying, “I’ve got to sell this thing because I’ve got to pay for my capital.” These are not free goods. And the US government, correctly, is not giving $50 billion to these companies. So we don’t know that.
The DeepSeek Challenge
To me, the key thing to watch is DeepSeek. A week or so ago, Gemini 2.5 Pro got to the top of the leaderboards in intelligence, a great achievement for my friends at Gemini. A week later, DeepSeek comes in and is slightly better than Gemini. And DeepSeek is trained on the existing hardware that’s in China, which includes stuff that’s been pilfered, some of the Huawei Ascend chips, and a few others.
What happens now? The US people say, “Well, you know, the DeepSeek people cheated: they cheated by doing a technique called distillation, where you take a large model, ask it 10,000 questions, get its answers, and then use that as your training material.” So the US companies will have to figure out a way to make sure that the proprietary information they’ve spent so much money on does not get leaked into these open source things. I just don’t know.
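The distillation technique described here can be sketched in miniature: sample inputs, record the teacher model’s output distributions, and train a student to match them. The “models” below are tiny linear softmax classifiers on random data, a sketch of the mechanism only, not of how DeepSeek actually trained:

```python
# Toy illustration of distillation: query a fixed "teacher," record its soft
# answers, and train a "student" on them without ever seeing the teacher's
# weights. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A fixed teacher whose weights the student never sees directly.
W_teacher = rng.normal(size=(8, 4))

# Step 1: send the teacher many "questions" (inputs)...
X = rng.normal(size=(2000, 8))
# Step 2: ...and record its answers (soft output distributions).
Y_teacher = softmax(X @ W_teacher)

# Step 3: train the student to match the teacher's answers
# (cross-entropy against soft labels, plain gradient descent).
W_student = np.zeros((8, 4))
lr = 0.5
for _ in range(300):
    P = softmax(X @ W_student)
    grad = X.T @ (P - Y_teacher) / len(X)
    W_student -= lr * grad

# The student now agrees with the teacher on held-out inputs.
X_test = rng.normal(size=(500, 8))
agree = np.mean(
    softmax(X_test @ W_student).argmax(1) == softmax(X_test @ W_teacher).argmax(1)
)
print(f"student/teacher agreement: {agree:.0%}")
```

The punchline is the policy problem Schmidt raises: nothing in the mechanism requires access to the teacher’s internals, only to its answers at scale.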
Nuclear and Biological Safety Measures
With respect to nuclear, biological, chemical, and other such issues, the US companies are doing a really good job of looking for that. There’s a great concern, for example, that nuclear information would leak into these models as they’re training, without us knowing it. And by the way, that’s a violation of law.
INTERVIEWER: Oh, really?
ERIC SCHMIDT: They work. And the whole nuclear information thing is there’s no free speech in that world for good reasons, and there’s no free use and copyright and all that kind of stuff. It’s illegal to do it. And so they’re doing a really, really good job of making sure that that does not happen. They also put in very significant tests for biological information and certain kinds of cyber attacks.
What happens there? Do they have an incentive to continue, especially if it’s not required by law? The government has just gotten rid of the safety institutes that were in place under Biden and is replacing them with a new term, which is largely a safety assessment program. Which is a fine answer.
I think collectively we in the industry just want the government at the secret and top secret level to have people who are really studying what China and others are doing. You can be sure that China really has very smart people studying what we’re doing. We at the secret and top secret level should have the same thing.
The AI 2027 Paper and Future Scenarios
INTERVIEWER: Have you read the AI 2027 paper?
ERIC SCHMIDT: I have.
INTERVIEWER: And so for those listening who haven’t read it, it’s a future scenario of the US and China racing towards AI, and at some point the story splits into “we’re going to slow down and work on alignment” or “we’re going full out.” And, you know, spoiler alert: in the race to infinity, humanity vanishes.
Mutual AI Malfunction Theory
ERIC SCHMIDT: So the right outcome will ultimately be some form of deterrence and mutually assured destruction. I wrote a paper with two other authors, Dan Hendrycks and Alex Wang, where we named it “Mutual AI Malfunction.” The idea goes something like this. You’re the United States, I’m China. You’re ahead of me. At some point you, Peter, cross a line, and I, China, go, “This is unacceptable.” At some point, it becomes, in terms…
INTERVIEWER: Of amount of compute and amount of.
ERIC SCHMIDT: It’s something you’re doing that affects my sovereignty. It’s not just words and yelling and occasionally shooting down a jet. It’s a real threat to the identity of my country, my economy, what have you. Under this scenario, I would be highly tempted to do a cyber attack to slow you down.
Okay. In Mutual AI Malfunction, if you will, we have to engineer it so that you have the ability to then do the same thing to me. And that causes both of us to be careful not to trigger the other. That’s what Mutually Assured Destruction is. That’s our best formulation right now.
Chip Tracking and Monitoring
We also recommend in our work, and I think it’s very strong that the government requires that we know where all the chips are. And remember, the chips can tell you where they are because they’re computers. And it would be easy to add a little crypto thing which would say, “Yeah, here I am and this is what I’m doing.” So knowing where the chips are, knowing where the training runs are, and knowing what these fault lines are are very important.
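A minimal sketch of the “chips can tell you where they are” idea: each device holds a key and emits signed heartbeats (location, workload) that a registry can verify. A real scheme would rest on hardware roots of trust and public-key attestation; the HMAC below is just the simplest stand-in, and all identifiers are made up:

```python
# Sketch of a signed chip heartbeat: a device reports where it is and what
# it is running, and a registry checks the report hasn't been forged.
# HMAC with a shared key stands in for real hardware attestation.
import hashlib
import hmac
import json

DEVICE_KEYS = {"gpu-00042": b"factory-provisioned-secret"}  # illustrative

def sign_heartbeat(device_id: str, payload: dict, key: bytes) -> dict:
    msg = json.dumps({"device": device_id, **payload}, sort_keys=True).encode()
    return {"msg": msg.decode(),
            "mac": hmac.new(key, msg, hashlib.sha256).hexdigest()}

def verify_heartbeat(beat: dict) -> bool:
    device = json.loads(beat["msg"])["device"]
    key = DEVICE_KEYS.get(device)
    if key is None:
        return False  # unknown chip: exactly the proliferation signal we want
    expect = hmac.new(key, beat["msg"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, beat["mac"])

beat = sign_heartbeat("gpu-00042",
                      {"site": "datacenter-A", "job": "training-run-7"},
                      DEVICE_KEYS["gpu-00042"])
print(verify_heartbeat(beat))  # untampered report verifies
beat["msg"] = beat["msg"].replace("datacenter-A", "unknown-site")
print(verify_heartbeat(beat))  # a falsified location fails the MAC check
```

The design point is that the chip, not the operator, vouches for the report, so a diverted or smuggled accelerator either goes silent or fails verification.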
Now, there are a whole bunch of assumptions in this scenario that I described. The first is that there was enough electricity. The second is that there was enough computing power. The third is that the Chinese had enough electricity, which they do, and enough computing resources, which they may or may not…
INTERVIEWER: Have, or may in the future have.
The Nuclear Analogy and Geopolitical Implications
ERIC SCHMIDT: Or may in the future have. And also I’m asserting that everyone arrives at this eventual state of superintelligence at roughly the same time. Again, these are debatable points, but the most interesting scenario is we’re saying it’s 1938, the letter has come from Einstein to the President and we’re having a conversation and we’re saying, “Well, how does this end?”
Okay, so if you were so brilliant in ’38, what you would have said is this ultimately ends with us having a bomb, the other guys having a bomb, and then we’re going to have one heck of a negotiation to try to make sure that we don’t end up destroying each other. And I think the same conversation needs to get started now. Well before the Chernobyl events, well before the build ups.
INTERVIEWER: Can we just take that one more step? And don’t answer if you don’t want to. If it was 1947, 1948, so before the Cold War really took off, you’d say, “Well, that’s similar to where we are with China right now.” We have a competitive lead, but it may or may not be fragile. What would you do differently in 1947, 1948, or what would Kissinger do differently in 1947, 1948, 1949 than what we did?
Lessons from Kissinger’s Realism
ERIC SCHMIDT: I wrote two books with Dr. Kissinger, and I miss him very much. He was my closest friend. And Henry was very much a realist, in the sense that when you look at his history, in roughly ’36 to ’38, he and his family, who were Jewish, were forced to emigrate from Germany because of the Nazis, and he watched the entire world that he’d grown up with as a boy be destroyed by the Nazis and by Hitler. And then he saw the conflagration that occurred as a result.
And I can tell you that whether you like him or not, he spent the rest of his life trying to prevent that from happening again. So we are today safe because people like Henry saw the world fall apart. So I think from my perspective, we should be very careful in our language and our strategy to not start that process.
Henry’s view on China was different from that of other China scholars. His view was that we shouldn’t poke the bear, that we shouldn’t talk about Taiwan too much, and that we should let China deal with her own problems, which were very significant. But he was worried that we, or China, would in some small way start World War Three, the same way World War One was started.
You remember that World War One started with essentially a small geopolitical event which was quickly escalated for political reasons on all sides. And then the rest was a horrific war, the war to end all wars at the time. So we have to be very careful when we have these conversations not to isolate each other.
Henry started a number of what are called Track 2 Dialogues, and I’m part of one of them, to try to make sure we’re talking to each other. Somebody who’s a hardcore person would say, “Well, we’re Americans and we’re better,” and so forth. Well, I can tell you, having spent lots of time on this, the Chinese are very smart, very careful, very capable, very much peers. And if you’re confused about that, again, look at the arrival of DeepSeek a year ago. I said they were two years behind. I was clearly wrong. With enough money and enough power, they’re in the game.
The DeepSeek Surprise and Inference Time Computing
INTERVIEWER: Let me actually drill in just a little bit more on that too, because I think one of the reasons DeepSeek caught up so quickly is because it turned out that inference time generates a lot of IQ and I don’t think anyone saw that coming. And inference time is a lot easier to catch up on. And also if you take one of our big open source models and distill it and then make it a specialist like you were saying a minute ago, and then you put a ton of inference time compute behind it, it’s a massive advantage and also a massive leak of capability within CBRN, for example, that nobody anticipated.
ERIC SCHMIDT: And CBRN, remember, is chemical, biological, radiological, and nuclear. Let me rephrase what you said. Say the structure of the world in five to ten years is 10 models, and I’ll make some numbers up: five in the United States, three in China, two elsewhere. And those models run in data centers that are multi-gigawatt. They will all be nationalized in some way; in China they will be owned by the government. The stakes are too high.
In my military work, one day I visited a place where we keep our plutonium. And we keep our plutonium in a base that’s inside another base, with even more machine guns and even more specialization, because the plutonium is so interesting and obviously very dangerous. And I believe we have only one or two such facilities in America.
So in that scenario, these data centers will have the equivalent of guards and machine guns because they’re so important. Now, is that a stable geopolitical system? Absolutely. You know where they are. President of one country can call the other. They can have a conversation, they can agree on what they agree on and so forth.
The Open Source Proliferation Problem
But let’s say that is not true. Let’s say that the technology improves, again unknown, to the point where the kinds of technologies I’m describing are implementable on the equivalent of a small server. Then you have a humongous data center proliferation problem. And that’s where the open source issue is so important, because those servers, which will proliferate throughout the world, will all be running open source. We have no control regime for that.
Now, I’m in favor of open source; as you mentioned earlier with Marc Andreessen, open competition tends to allow people to run ahead. In defense of the proprietary companies, collectively they believe, as best I can tell, that the open source models can’t scale fast enough because they need this heavyweight training.
I’ll give you an example: Grok is trained on a single cluster, built by Nvidia in 20 days in Memphis, Tennessee, with 200,000 GPUs. A GPU is about $50,000, so you can say it’s about a $10 billion supercomputer in one building that does one thing. If that is the future, then we’re okay, because we’ll be able to know where they are.
If in fact the arrival of intelligence is ultimately a distributed problem, then we’re going to have lots of problems with terrorism, bad actors, North Korea, which is my greatest concern.
INTERVIEWER: Right. China and the US are rational actors.
ERIC SCHMIDT: Yeah.
INTERVIEWER: The terrorist who has access to this. And I don’t want to go all negative on this podcast; it’s important to wake people up to the deep thinking you’ve done on this. My concern is the terrorist who gains access. Are we spending enough time and energy, and are we training enough models, to watch them?
Monitoring Superintelligent Systems
ERIC SCHMIDT: So first, the companies are doing this. There’s a body of work happening now which can be understood as follows. You have a superintelligent model. Can you build a model that’s not as smart as the student it’s studying — a professor that watches the student, even though the student is smarter than the professor? Is it possible to watch what it does? It appears that we can. It appears that there’s a way: even if you have this rogue, incredible thing, we can watch it, understand what it’s doing, and thereby control it.
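The professor-watching-a-smarter-student setup works because verifying an answer is often far easier than producing it. As a toy analogy only (not the actual safety research, which targets language models), a cheap checker can confirm work done by a much more capable solver — here, factoring a number versus verifying the factors:

```python
def strong_student_factor(n: int):
    """Stand-in for the capable 'student': does the hard work (trial-division factoring)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

def weak_professor_check(n: int, answer) -> bool:
    """Stand-in for the weaker 'professor': cannot factor, but can verify in one step."""
    if answer is None:
        return False
    p, q = answer
    return p > 1 and q > 1 and p * q == n  # verification is a single multiplication

n = 104_729 * 1_299_709            # product of two primes (hypothetical workload)
answer = strong_student_factor(n)
print(weak_professor_check(n, answer))  # → True
```

The asymmetry is the point: the checker never needs to match the solver’s capability to confirm its output.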
Another example of where we don’t know is that it’s very clear that these savant models will proceed. There’s no question about that. The question is, how do we get the Einsteins? So there are two possibilities. One, and this is to discover completely new schools of thought, which is what’s…
INTERVIEWER: That’s the most exciting thing, you know. And imagine…
The Age of Millions of Polymaths
ERIC SCHMIDT: And in our book Genesis, Henry and I and Craig talk about the importance of polymaths in history. In fact, the first chapter is on polymaths. What happens when we have millions and millions of polymaths? Very, very interesting.
Okay, now it looks like the great discoveries, the greatest scientists and people in our history had the following property. They were experts in something and they looked at a different problem and they saw a pattern in one area of thinking that they could apply to a completely unrelated field. And they were able to do so and make a huge breakthrough. The models today are not able to do that.
So one thing to watch for, algorithmically, is: when can they do that? This is generally known as the non-stationarity problem, because the reward functions in these models are fairly straightforward — beat the human, beat the question, and so forth. But when the rules keep changing, is it possible to say the old rule can be applied to a new rule to discover something new? And again, the research is underway. We won’t know for years.
The Evolution of AI Scaffolding
INTERVIEWER: Peter and I were over at OpenAI yesterday, actually, and we were talking to many people — Noam Brown in particular. And I said, “The word of the year is scaffolding.” And he said, “Yeah, maybe the word of the month is scaffolding.” I was like, “Okay, what did I step on there?”
He said, “Look, right now, if you try to get the AI to discover relativity or some greenfield opportunity, it won’t do it. If you set up a framework — kind of like a lattice, like a trellis — the vine will grow on the trellis beautifully, but you have to lay out those pathways and breadcrumbs.”
He was saying the AI’s ability to generate its own scaffolding is imminent. That doesn’t make it completely self-improving; it’s not Pandora’s box. But it is much deeper down the path of creating an entire breakthrough in physics, or an entire feature-length movie, or these prompts that require 20 hours of consecutive inference-time compute. He’s pretty sure that will be a 2025 thing, at least from their point of view.
Warning Signs of Recursive Self-Improvement
ERIC SCHMIDT: So recursive self-improvement is the general term for the computer continuing to learn. We’ve already crossed that, in the sense that these systems are now running and learning things, and they’re learning from the way they think within limited functions. When does the system have the ability to generate its own objective and its own question? It does not have that today. That’s another sign.
Another sign would be that the system decides to exfiltrate itself and it takes steps to get itself away from the command and control system. That has not happened yet.
INTERVIEWER: It hasn’t called you yet and said, “Hi Eric.”
ERIC SCHMIDT: But there are theoreticians who believe that the systems will ultimately choose that as a reward function because they’re programmed to continue to learn. Another one is access to weapons. Right. And lying to get it. So these are tripwires. Each of which is a tripwire that we’re watching.
And again, each of these could be the beginning of a mini Chernobyl event that would become part of consciousness. I think at the moment, the US government is not focused on these issues. They’re focused on other things. Economic opportunity, growth and so forth. It’s all good. But somebody’s going to get focused on this and somebody’s going to pay attention to it and it will ultimately be a problem.
Clarifying the Centralized vs. Distributed AI Model
ERIC SCHMIDT: Can I clean up one kind of common misconception there? Because I think it’s a really important one. In the movie version of AI you described, hey, maybe there are 10 big AIs: five are in the US, three are in China, and two are somewhere else — one’s not in Brussels; probably one’s in Dubai or Israel.
INTERVIEWER: Israel. Okay, there you go. Somewhere like that.
INTERVIEWER: Yeah.
The Proliferation Problem and AI Distribution
ERIC SCHMIDT: In the movie version of this, if it goes rogue, you know, the SWAT team comes in, they blow it up and it’s solved. But the actual real world is when you’re using one of these huge data centers to create a super intelligent AI. The training process is 10^26, 10^28, you know, or more flops. But then the final brain can be ported and run on four GPUs, eight GPUs. So a box about this size and it’s just as intelligent.
And that’s one of the beautiful things about it. This is called “stealing the weights.” And the new thing is that weight file: if you have an innovation in inference-time speed and you say, “Oh, same weights, no difference — distill it or just quantize it or whatever, but I made it 100 times faster,” now it’s actually far more intelligent than what you exported from the data center.
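The quantization step mentioned here can be sketched in a few lines: store a weight matrix as 8-bit integers plus a scale factor instead of 32-bit floats, so the same weights run smaller and faster. This is a minimal illustration only — production schemes are per-channel and far more careful:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric int8 quantization: keep weights as int8 plus a single float scale."""
    scale = float(np.abs(w).max()) / 127.0    # map the largest weight to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy "weight file"
q, s = quantize_int8(w)

print(w.nbytes // q.nbytes)                     # → 4 (the int8 copy is 4x smaller)
print(np.abs(w - dequantize(q, s)).max() <= s)  # → True (rounding error under one step)
```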
But all of these are examples of the proliferation problem. And I’m not convinced that we will hold these things in the ten places. And here’s why. Let’s assume you have the ten, which is possible. They will have subsets of models that are smaller but nearly as intelligent. And so the tree of knowledge of systems that have knowledge is not going to be 10 and then zero, it’s going to be 10, 100, a thousand, a million, a billion at different levels of complexity.
So the system that’s on your future phone may be three or four orders of magnitude smaller than the one at the very tippy top, but it will be very, very powerful.
There’s some great research going on at MIT — it’ll probably move to Stanford, just to be fair, it always does — on the following: if you have one of these huge models, it’s been trained on movies, it’s been trained on Swahili. A lot of the parameters aren’t useful for this savant use case, but the general knowledge and intuition is. So what’s the optimal balance between narrowing the training data and narrowing the parameter set to be a specialist, without losing general learning?
The Big Model Camp vs. Specialization
So the people who are opposed to that view, and again, we don’t know, would say the following. If you take a general purpose model and you specialize it through fine tuning, it also becomes more brittle. Their view is that what you do is you just make bigger and bigger and bigger models. Because they’re in the big model camp, right? And that’s why they need gigawatts of data centers and so forth. And their argument is that that flexibility of intelligence that we are seeing will continue.
Dario wrote a piece called, I think, “Machines of Loving Grace.” And he argued that there are three scaling laws at play. The first one is the one you know of, which is foundation model growth; we’re still on that. The second is a test-time training law, and the third is a reinforcement learning training law. Training laws are where, if you just put in more hardware and more data, the models just get smarter in a predictable way. In his view, we’re just at the beginning of this — the second and third especially.
That’s why — I’m sure our audience would be frustrated — why do we not know? We don’t know. Right. It’s too new, it’s too powerful. And at the moment, all of these businesses are incredibly highly valued. They’re growing incredibly quickly. The uses I mentioned earlier — going back to Google, the ability to refactor your entire workflow in a business — are a very big deal. There’s a lot of money to be made there for all the companies involved. We will see.
The Future of Jobs and Employment
INTERVIEWER: Eric, shifting the topic. One of the concerns that people have in the near term and people have been ringing the alarm bells is on jobs. I’m wondering where you come out on this and flipping that forward to education. How do we educate our kids today in high school and college? And what’s your advice? So on the first thing, do you believe that as Dario has gone on TV shows now and speaking to significant white collar job loss, we’re seeing obviously a multitude of different drivers and robots coming in. How do you think about the job market over the next five years?
ERIC SCHMIDT: Let’s posit that in 30 or 40 years there’ll be a very different picture of employment, of robotic-human interaction, or the definition of…
INTERVIEWER: Do we need to work at all?
ERIC SCHMIDT: The definition of work, the definition of identity. Let’s just posit that. And let’s also posit that it will take 20 or 30 years for those things to work through the economy of our world. Now in California and other cities in America, you can get on a Waymo taxi. Waymo is 2025. The original work was done in the late 90s. The original challenge at Stanford was done, I believe, in 2004.
INTERVIEWER: DARPA Grand Challenge. It was 2004. Sebastian Thrun.
ERIC SCHMIDT: That’s right. So more than 20 years from a visible demonstration to our ability to use it in daily life. Why? It’s hard, it’s deep tech, it’s regulated and all of that. I think that’s going to be true especially in robots that are interacting with humans. They’re going to get regulated. You’re not going to be wandering around and the robot’s going to decide to slap you. Society is not going to allow that. So in the shorter term, five or 10 years, I’m going to argue that this is positive for jobs in the following way.
The History of Automation and Economic Growth
If you look at the history of automation and economic growth, automation starts with the lowest-status and most dangerous jobs and then works up the chain. So if you think about assembly lines in cars, and furnaces, and all these very, very dangerous jobs that our forefathers did — they don’t do them anymore. They’re done by robotic solutions of one kind or another. And typically not a humanoid robot, but an arm.
So the world dominated by arms that are intelligent will automate those functions. What happens to the people? Well, it turns out that the person who was working with the welder, who’s now operating the arm has a higher wage and the company has higher profits because it’s producing more widgets. So the company makes more money and the person makes more money.
Now you sit there and say, “Well, that’s not true because humans don’t want to be retrained.” But in the vision that we’re talking about, every single person will have a computer assistant that’s very intelligent, that helps them perform. And if you take a person of normal intelligence or knowledge and you add a sort of accelerant, they can get a higher paying job.
So you sit there and you go, “Well, why are there more jobs? There should be fewer jobs.” That’s not how economics works. Economics expands because the opportunities expand, profits expand, wealth expands, and so forth. So there’s plenty of dislocation. But in aggregate, are there more people employed or fewer? The answer is more people, with higher-paying jobs.
Global Demographics and AI Adoption
INTERVIEWER: Is that true in India as well?
ERIC SCHMIDT: It will be. And you picked India because India has a positive demographic outlook, although their birth rate is now down to 2.0.
INTERVIEWER: That’s good.
ERIC SCHMIDT: The rest of the world is choosing not to have children. If you look at Korea, it’s now down to 0.7 children per 2 parents. China is down to 1 child per 2 parents.
INTERVIEWER: It’s evaporating.
ERIC SCHMIDT: Now, what happens in those situations? They completely automate everything, because it’s the only way to increase national productivity. So the most likely scenario, at least in the next decade, is that it’s a national emergency to use more AI in the workplace — to give people better-paying jobs and create more productivity in the United States, because our birth rate has been falling.
And what happens is people have talked about this for 20 years. If you have this conversation and you ignore demographics, which is negative for humans, and economic growth, which occurs naturally because of capital investment, then you miss the whole story. Now, there are plenty of people who lose their jobs, but there’s an awful lot of people who have new jobs.
And the typical simple example would be all those people who work in Amazon distribution centers and Amazon trucks. Those jobs didn’t exist until Amazon was created, right? The number one job shortage in America right now is truck drivers. Why? Truck driving is a lonely, hard, low-paying, low-status job. They don’t want it. They want a better-paying job, right?
The Need for Universal Education Technology
Going back to education, it’s really a crime that our industry has not invented the following product. The product that I want it to build is one that teaches every single human who wants to be taught, in their language, in a gamified way, the stuff they need to know to be a great citizen in their country, right? That can all be done on phones now. It can all be learned. So why do we not have that product? The investment in the humans of the world is the best return; knowledge and capability are always the right answer.
INTERVIEWER: Let me try and get your opinion on this, because you’re so influential. So I’ve got about a thousand people in the companies where I’m the controlling shareholder, and I’ve been trying to tell them exactly what you just articulated. A lot of these people have been in the company for 10, 15 years. They’re incredibly capable and loyal, but they’ve learned a specific white-collar skill, and they worked really hard to learn it. And the AI is coming within no more than three years, maybe two.
And the opportunity to retrain and have continuity is right now. But everyone’s attitude seems to be “let’s just wait and see.” What I’m trying to tell them is: if you wait and see, you’re really screwing over that employee.
Corporate Adaptation and Innovation
ERIC SCHMIDT: So we are in wild agreement that this is going to happen, and the winners will be the ones who act. Now, what’s interesting is, when you look at innovation history, the biggest companies — who you would think would be the slowest — have economic resources that the little companies typically don’t. They tend to eventually get there, right?
So watch what the big companies do. Their CFOs and the people who measure things carefully, who are very, very intelligent, say, “I’m done with that thousand-person engineering team that doesn’t do very much. I want 50 people working in this other way, and we’ll do something else for the other people.”
INTERVIEWER: And when you say big companies, we’re thinking Google, Meta. We’re not thinking of the big bank that hasn’t done anything.
ERIC SCHMIDT: I’m thinking about big banks. When I talk to CEOs — and I know a lot of them in traditional industries — what I counsel them is: you already have people in the company who know what to do. You just don’t know who they are.
So call a review of the best ideas to apply AI in our business. And inevitably the first ones are boring. Improve customer service, improve call centers and so forth. But then somebody says, “You know, we could increase revenue if we built this product.”
The End of Traditional User Interfaces
I’ll give you another example. There’s this whole industry of people who work on regulated user interfaces. I think user interfaces are largely going to go away because if you think about it, the agents speak English typically, or other languages, you can talk to them, you can say what you want, the UI can be generated.
So I can say, “Generate me a set of buttons that allows me to solve this problem,” and it’s generated for you. Why do I have to be stuck in what is called the WIMP interface — windows, icons, menus, and pull-downs? That was invented at Xerox PARC, right? 50 years ago. Why am I still stuck in that paradigm? I just want it to work.
INTERVIEWER: Kids in high school and college now, any different recommendations for where they go?
The Digital Native Generation
ERIC SCHMIDT: When you spend any time in a high school or… I was at a conference yesterday where we had a drone challenge and you watch the 15-year-olds, they’re going to be fine. They’re just going to be fine. It all makes sense to them. And we’re in their way.
They’re digital natives, but they’re more than digital natives. They get it, they understand the speed, it’s natural to them. They’re also frankly, faster and smarter than we are. That’s just how life works, I’m sorry to say. So we have wisdom, they have intelligence, they win.
So in their case, I used to think the right answer was to go into biology. I now actually think going into the application of intelligence to whatever you’re interested in is the best thing you can do as a young person. Purpose driven, any form of solution that you find interesting.
Most kids get into it for gaming reasons or something and they learn how to program very young. So they’re quite familiar with this. I work at a particular university with undergraduates and they’re already doing different algorithms for reinforcement learning as sophomores. This shows you how fast this is happening at their level. They’re going to be just fine.
They’re responding to the economic signals, but they’re also responding to their purpose. So an example would be: you care about climate, which I certainly do. If you’re a young person, why don’t you figure out a way to simplify the climate science, to use simple foundation models to answer these core questions? Why don’t you figure out a way to use these powerful models to come up with new materials that allow us, again, to address the carbon challenge? And why don’t you work on energy systems, to have better and more efficient energy sources that emit less carbon? You see my point?
INTERVIEWER: I’ve noticed, because I have kids of exactly that era, that there’s a very clear step-function change, largely attributable, I think, to Google and Apple: they have the assumption that things will work. And if you go just a couple of years older, to the WIMP era like you described it — which I’ll attribute more to Microsoft — the assumption is nothing will ever work. Like, if I try to use this thing, it’s going to crash. I’m going to be tortured.
The End of the Off Button
ERIC SCHMIDT: What’s also interesting is that in my career I used to give these speeches about the Internet, which I enjoyed, where I said, “You know, the great thing about the Internet is it has an off button. You can turn it off and actually have dinner with your family, and then you can turn it on after dinner.” This is no longer possible.
So the distinction between the real world and the digital world has become confusing. But none of us are offline for any significant period of time. And indeed the reward system in the world has now caused us to not even be able to fly in peace, drive in peace, take a train in peace.
INTERVIEWER: Starlink is everywhere, right?
ERIC SCHMIDT: And that ubiquitous connectivity has some negative impact in terms of psychological stress, loss of emotional physical health and so forth. But the benefit of that productivity is without question.
AI’s Impact on Hollywood
INTERVIEWER: Google I/O was amazing. I mean, just hats off to the entire team there. Veo 3 was shocking. And we’re sitting here eight miles from Hollywood, and I’m just wondering your thoughts on the impact this will have. We’re going to see the one-person feature film, like we’re potentially seeing one-person unicorns in the future with agentic AI. Are we going to see an individual be able to compete with a Hollywood studio? And should they be worried about their assets?
ERIC SCHMIDT: Well, they should always be worried because of intellectual property issues and so forth. I think blockbusters are likely to still be put together by people with an awful lot of help from AI. I don’t think that goes away.
If you look at what we can do with generating long-form video, it’s very expensive to do, although that will come down. And also there’s an occasional extra leg or extra clock or whatever. It’s not perfect yet, and that requires human editing. So even in the scenario where a lot of the video is created by a computer, there are going to be humans producing it and directing it, for those reasons.
My best example in Hollywood: I was at a studio where they were showing me this. They had an actor, a young man, who was recreating William Shatner’s movements. And they had licensed the likeness from William Shatner, who’s now older, and they put his head on this person’s body, and it was seamless. Well, that’s pretty impressive.
That’s more revenue for everyone. The unknown actor becomes a bit more famous. Mr. Shatner gets more revenue. The whole movie genre works. That’s a good thing.
Another example is that nowadays they use green screens rather than sets. And furthermore, in the alien department, when you have scary movies, instead of having the makeup person, they just add the makeup digitally. So who wins? The costs are lower, the movies are made quicker. In theory, the movies are better because you have more choices. So everybody wins.
Who loses? Well, there was somebody who built that set and that set isn’t needed anymore. That’s a carpenter and a very talented person who now has to go get a job in the carpentry business.
So again, I think people get confused. If I look at the digital transformation of entertainment — subject to intellectual property being held, which is always a question — it’s going to be just fine. There’s still going to be blockbusters. The cost will go down, not up, relative to the income.
Because in Hollywood they essentially have their own accounting and they essentially allocate all the revenue to all the key producing people. The allocation will shift to the people who are the most creative. That’s a normal process.
Remember we said earlier that automation gets rid of the poorest quality jobs, the most dangerous jobs, the jobs that are sort of straightforward are probably automated, but the really creative jobs… Another example, the script writers. You’re still going to have script writers, but they’re going to have an awful lot of help from AI to write even better scripts. That’s not bad.
AI’s Persuasive Power
INTERVIEWER: I saw a study recently out of Stanford that documented AI being much more persuasive than the best humans.
ERIC SCHMIDT: Yes.
INTERVIEWER: That set off some alarms. It also set off some interesting thoughts on the future of advertising. Any particular thoughts about that?
The Power of AI Manipulation and Democratic Concerns
ERIC SCHMIDT: So we know the following. We know that if the system knows you well enough, it can learn to convince you of anything. So what that means in an unregulated environment is that the systems will know you better and better. They’ll get better at pitching you. And if you’re not savvy, if you’re not smart, you could be easily manipulated.
We also know that the computer is better than humans trying to do the same thing. So none of this surprises me. The real question — and I’ll ask this as a question — is this: we will have unregulated misinformation engines, and there will be many of them — advertisers, politicians, criminals, people trying to evade responsibility. All sorts of people have free speech, and that includes the ability to use misinformation to their advantage. What happens to democracy?
We’ve all grown up in democracies where there’s a sort of consensus around trust, and there’s an elite that more or less administers the trust vectors and so forth. There’s a set of shared values. Do those shared values go away? In our book Genesis, we talk about this as a deeper problem: what does it mean to be human when you’re interacting mostly with these digital things? Especially if the digital things have their own scenarios.
The Emotional Impact of AI Companions
My favorite example: you have a son or a grandson, a child or a grandchild, and you give them a bear. And the bear has a personality, and the child grows up — but the bear grows up too. So who regulates what the bear says to the kid? Most people haven’t actually experienced the super, super empathetic voice that can be any inflection you want. When they see that — which will be in the next probably two months — they’re going to completely open their eyes.
Well, remember that voice casting was solved a few years ago and that you can cast anyone else’s voice onto your own. And that has all sorts of problems. Have you seen an avatar yet of somebody that you love that’s passed away, or Henry Kissinger or anything? We actually created one with the permission of his family. Did you start crying instantly? It’s very emotional. It’s very emotional because it brings back – I mean, it’s a real human. It’s a real memory, a real voice. And I think we’re going to see more of that now.
One obvious thing that will happen is, at some point in the future, when we naturally die, our digital essence will live in the cloud, and it will know what we knew at the time. And you can ask it a question. So can you imagine — going back to Einstein — asking, “What did you really think about this other guy? Did you actually like him, or were you just being polite in your letters?” Right. And in all those famous contests that we study as students, can you imagine being able to ask the people, with today’s retrospective: what did you really think?
The Evolution of Media and Attention
INTERVIEWER: The education example you gave earlier is so much more compelling when you’re talking to Isaac Newton or Albert Einstein instead of just talking about them. But coming back to the movies: one of the first companies we incubated out of MIT was Course Advisor. We sold it to Don Graham at the Washington Post, and so I was working for him for a year after that. And the conception was: here’s the Internet, here’s the newspaper, let’s move the newspaper onto the Internet. We’ll call it washingtonpost.com.
And if you look at where it ended up today — with Meta, TikTok, YouTube — it didn’t end up anything like “the newspaper moves to the Internet.” So now, here’s movies. You can definitely make a long-form movie much more cheaply. But I just had this experience with somebody that I know — this director will try to make a tearjerker by leading me down a two-hour-long path. But I can get you to that same emotional state in about five minutes if it’s personalized to you.
ERIC SCHMIDT: Well, one of the things that’s happened because of the addictive nature of the Internet is we’ve lost sort of the deep state of reading. So I was walking around and I saw a Barnes & Noble bookstore.
INTERVIEWER: Oh my God, my old home. Did you go back?
ERIC SCHMIDT: I went in and I felt good. But it’s a very fond memory. The fact of the matter is that people’s attention spans are shorter; they consume things quicker.
One of the interesting things about sports is that the sports-highlights business is a huge business — licensed clips around highlights — because it’s more efficient than watching the whole game. So I suspect that if you’re with your buddies and you want to be drinking and so forth, you put the game on; that’s fine. But if you’re a busy person, busy with whatever you’re busy with, and you want to know what happened with your favorite team, the highlights are good enough.
INTERVIEWER: Yeah, you’ve got four panes of it going at the same time too.
The Battle for Human Attention
ERIC SCHMIDT: And so this is, again, a change — and a more fundamental change to attention. I work with a lot of 20-somethings in research, and one of the questions I had is: how do they do research in the presence of all of these stimulations? And I can answer the question definitively: they turn off their phone. You can’t think deeply as a researcher with this thing buzzing.
And remember that part of the industry’s goal was to fully monetize your attention, aside from sleeping — and we’re working on having you sleep less, I guess, from stress. We’ve essentially tried to monetize all of your waking hours with something: some form of ad, some form of entertainment, some form of subscription. That is completely antithetical to the way humans traditionally work — the long, thoughtful examination of principles, the time that it takes to be a good human being. These are in conflict right now.
There are various attempts at this. My favorite are these digital apps that make you relax. Okay — the correct thing to do to relax is to turn off your phone. Right. And then relax in the traditional way, as humans have for the 70,000 years of our existence.
Deep Work with AI
INTERVIEWER: I had an incredible experience. I’m doing the flight from MIT to Stanford all the time. And like you said, attention spans are getting shorter and shorter — the TikTok extreme, the clips are so short. This particular flight was my first time brainstorming with Gemini for six hours straight, and I completely lost track of time. I’m trying to figure out a circuit design and a chip design for inference-time compute, and it’s so good at brainstorming with me and bringing back data. As long as the Wi-Fi on the plane was working, time went by. So it was my first experience with technology that went the other direction.
ERIC SCHMIDT: But notice that you also were not responding to texts and annoyances. You weren’t reading ads. You were deep inside of a system for which you paid a subscription. So if you look at the deep research stuff — one of the questions I have when you do a deep research analysis… I was looking at factory automation for something: where is the boundary of factory automation versus human automation? It’s an area I don’t understand very well — a very, very deep technical set of problems. It took 12 minutes or so to generate this paper. 12 minutes of these supercomputers is an enormous amount of time. What is it doing? And the answer, of course: the product is fantastic.
The Business Model Evolution
INTERVIEWER: Yeah. You know, to Peter’s question earlier too, I keep the Google IPO prospectus in my bathroom up in Vermont. It’s 2004. I’ve read it probably 500 times — it’s getting a little ratty, actually.
ERIC SCHMIDT: You’re the only person besides me who did the same thing. I read it 500 times because I had to — it was legally required.
INTERVIEWER: Well, I still read it because of the misconceptions. It’s just such a great learning experience.
But even before the IPO, if you think back, there was this big debate: will it be ad revenue, will it be subscription revenue, will it be paid inclusion, will the ads be visible? All this confusion about how you’re going to make money with this thing. Now, the Internet moved to almost entirely ad revenue. But if you look at the AI models, you’ve got your $20 — now $200 — subscription, and people are signing up like crazy. So, you know, it’s ultra, ultra convincing.
Is that going to be a form of ad revenue, where it convinces you to buy something? Or is it going to be subscription revenue, where people pay a lot more and there’s no advertising at all? With Netflix, there was this whole discussion about how you would fund movies through ads. And the answer is: you don’t. You have a subscription. The Netflix people looked at having free movies, without a subscription, advertising-supported, and the math didn’t work.
So I think both will be tried. I think the fact of the matter is that deep research, at least at the moment, is going to be chosen by the well-to-do or for professional tasks. You are capable of spending that $200 a month; a lot of people cannot afford it. And that free service, remember, is the stepping stone for that young person, man or woman, who just needs that access.
My favorite story there is from when I was at Google and went to Kenya, which is a great country. I was with this computer science professor and he said, “I love Google.” And I said, “Well, I love Google too.” And he goes, “Well, I really love Google.” I said, “I really love Google too.” And I said, “Why do you really love Google?” He said, “Because we don’t have textbooks.” And I thought: the top computer science program in a nation does not have textbooks.
INTERVIEWER: A couple of things here, Eric. In the next few years, what moats actually exist for startups as AI comes in and disrupts? Do you have a list?
ERIC SCHMIDT: Yes. I’ll give you a simple answer.
INTERVIEWER: And what do you look for in the companies that you’re investing in?
The Future of Startup Moats
ERIC SCHMIDT: So first, the deep tech hardware stuff. There are going to be patents, filings, inventions, you know, the hard stuff. Those things move much more slowly than the software industry, and they’re just as important. You know, power systems, all those robotic systems we’ve been waiting on for a long time. It’s just slower, for all sorts of reasons.
INTERVIEWER: Hardware is hard.
ERIC SCHMIDT: Yes, hardware is hard for those reasons. In software, it’s pretty clear to me it’s going to be really simple. Software is typically a network-effect business where the fastest mover wins. The fastest mover is the fastest learner in an AI system. So what I look for is a company where they have a loop. Ideally they have a couple of learning loops.
I’ll give you a simple learning loop: as you get more people, more people click, and you learn from their clicks; they express their preferences. So let’s say I invent a whole new consumer thing, which I don’t have an idea for right now, but imagine I did. And furthermore, say I don’t know anything about how consumers behave, but I’m going to launch this thing, and the moment people start using it, I’m going to learn from them. I’ll have instantaneous learning to get smarter about what they want.
So I start from nothing. If my learning slope is steep enough, I’m essentially unstoppable. I’m unstoppable because by the time my competitor figures out what I’ve done, my learning advantage is too great. Now, how close can my competitor be and still lose? The answer is a few months, because the slopes are exponential.
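Schmidt’s “a few months” claim can be made concrete with a toy model (my own illustration, not something from the interview, with a hypothetical growth rate): if a product’s capability compounds with every month of operation, a head start shows up as a multiplicative lead that the follower never closes.

```python
# Toy model of the "learning loop" head-start argument.
# Assumption (mine, for illustration): capability compounds at a fixed
# monthly rate from the moment a product launches.

def capability(month: float, launch_month: float, monthly_growth: float = 0.3) -> float:
    """Capability at `month` for a product launched at `launch_month`.

    Compounds at `monthly_growth` per month of operation; the rate is a
    hypothetical stand-in for "learning speed".
    """
    months_active = max(0.0, month - launch_month)
    return (1 + monthly_growth) ** months_active

# Incumbent launches at month 0; competitor follows 4 months later.
for month in (6, 12, 24):
    lead = capability(month, 0) / capability(month, 4)
    print(f"month {month:2d}: incumbent is {lead:.2f}x ahead")
```

Under this model the incumbent’s lead settles at (1.3)^4 ≈ 2.86x and never shrinks, no matter how long the competitor runs: with equal learning rates, the only way to catch up is to learn faster, which is why a gap of even a few months can be decisive.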
And so it’s likely to me that there will be another 10 fantastic Google-scale, Meta-scale companies. They’ll all be founded on this principle of learning loops. And when I say learning loops, I mean in the core product, solving the current problem as fast as you can. If you cannot define the learning loop, you’re going to be beaten by a company that can define it.
INTERVIEWER: And you said 10 Meta-, Google-size companies. Do you think there will also be a thousand? If you look at the enterprise software business, you know, Oracle on down, PeopleSoft, whatever, there are thousands of those. Or will they all consolidate into those 10 domain-dominant learning loop companies?
ERIC SCHMIDT: I think I’m largely speaking about consumer scale, because that’s where the real growth is. The problem with learning loops is that if your customer is not ready for you, you can only learn at a certain rate. So it’s probably the case that the government is not interested in learning, and therefore there’s no growth in a learning loop serving the government. I’m sorry to say that needs to get fixed. And educational systems are largely regulated and run by the unions; they’re not interested in innovation, so they’re not going to be doing any learning. I’m sorry to say that has to get fixed too.
So the ones where there’s a very fast feedback signal are the ones to watch. Another example: it’s pretty obvious that you can build a whole new stock trading company where you learn. If you get the algorithms right, you learn faster than everyone else, and scale matters. So in the presence of scale and fast learning loops, that’s the moat. Now, I don’t know that there are many other moats you do have.
INTERVIEWER: Do you think brand would be a moat?
The Future of Brand Loyalty and Learning Loops
ERIC SCHMIDT: Brand matters, but less so. What’s interesting is people seem to be perfectly willing now to move from one thing to the other, at least in the digital world. And there’s a whole new set of brands that have emerged that everyone is using that are the next generations that I haven’t even heard of.
INTERVIEWER: Within those learning loops, do you think domain-specific synthetic data is a big advantage?

ERIC SCHMIDT: Well, the answer is whatever causes faster learning. There are applications where you have enough training data from humans. There are applications where you have to generate the training data from what the humans are doing.
You could imagine a situation where you had a learning loop where there’s no humans involved, where it’s monitoring something, some sensors, but because you learn faster on those sensors, you get so smart you can’t be replaced by another sensor management company. That’s the way to think about it.
The Capital Challenge for Academic Research
INTERVIEWER: So what about the capital for the learning loop? Do you know Daniela Rus, who runs CSAIL? Daniela and I are really good friends. We’ve been talking to our governor, Maura Healey, who’s one of the best governors in the world.
INTERVIEWER: I agree.
ERIC SCHMIDT: So there’s a problem in our academic system: the big companies have all the hardware because they have all the money, and the universities do not have the money for even reasonably sized data centers. I was with one university where, after lots of meetings, they agreed to spend $50 million on a data center, which buys fewer than a thousand GPUs for the entire campus and all of research. And that doesn’t even include the terabytes of storage and so forth.
So I and others are working on this as a philanthropic matter. The government is going to have to come in with more money for universities for this kind of thing, which is among the best investments. When I was young, I was on a National Science Foundation scholarship, and by the way, I made $15,000 a year. The return to the nation on that $15,000 has been very good, shall we say, based on the taxes that I pay and the jobs that we have created.
So creating an ecosystem for the next generation to have the access to the systems is important. It’s not obvious to me that they need billions of dollars. It’s pretty obvious to me that they need a million dollars, two million dollars. That’s the goal.
The Timeline to Superintelligence and Human Purpose
INTERVIEWER: I want to take us in a direction of wrapping up on superintelligence and the book. We didn’t finish the timeline on superintelligence and I think it’s important to give people a sense of how quickly the self referential learning can get and how rapidly we can get to something a thousand times, a million, a billion times more capable than a human.
On the flip side of that, Eric, when I look at my greatest concerns, once we get through this five-to-seven-year period you describe, with rogue actors and destabilization and such, one of the biggest concerns I have is the diminishment of human purpose. You wrote in the book – and I’ve listened to it, haven’t read it physically; my kids say, “You don’t read anymore. Attention span: you listen to books, you don’t read” – but you said “the real risk is not Terminator, it’s drift.” You argue that AI won’t destroy humanity violently but might slowly erode human values, autonomy, and judgment if left unregulated and misunderstood. So it’s really a Wall-E-like future versus a Star Trek boldly-go future.
ERIC SCHMIDT: In the book, and in my own personal view, it’s very important that human agency be protected. Human agency means the ability to get up in the day and do what you want, subject to the law. And it’s perfectly possible that these digital devices can create a form of virtual prison where you don’t feel that you, as a human, can do what you want. That is to be avoided.
INTERVIEWER: I’m not worried about that case. I’m more worried about the case where, if you want to do something, it’s just so much easier to ask your robot or your AI to do it for you. The human spirit wants to overcome challenges; the unchallenged life is what concerns me. That’s so critical.
ERIC SCHMIDT: But there will always be new challenges. When I was a boy, one of the things that I did is I would repair my father’s car. I don’t do that anymore. When I was a boy, I used to mow the lawn. I don’t do that anymore.
INTERVIEWER: Sure.
ERIC SCHMIDT: So there are plenty of examples of things that we used to do that we don’t need to do anymore, but there will be plenty of new things. Just remember, the complexity of the world that I’m describing is not a simple world. Just managing the world around you is going to be a full-time and purposeful job, partly because there will be so many people pushing misinformation and fighting for your attention. And there’s obviously lots of competition and so forth. There are lots of things to worry about. Plus you have all of the people trying to get your money, create opportunities, deceive you, what have you. So I think human purpose will remain, because humans need purpose.
INTERVIEWER: That’s the point.
The Future of Work and Human Creativity
ERIC SCHMIDT: And there’s lots of literature showing that the people who have what we would consider to be low-paying, worthless jobs enjoy going to work. So the challenge is not to get rid of their jobs; it’s to make their jobs more productive using AI tools. They’re still going to go to work. And to be very clear, this notion that we’re all going to be sitting around doing poetry is not happening.
In the future, there will be lawyers, and they’ll use tools to bring even more complex lawsuits against each other. There will be evil people who will use these tools to create even more evil problems. There will be good people trying to counter the evil people. The tools change, but the structure of humanity, the way we work together, is not going to change.
Peter and I were on Mike Saylor’s yacht a couple of months ago, and I was complaining that the curriculum is completely broken in all these schools. What I meant was that we should be teaching AI. And he said, “Yeah, they should be teaching aesthetics.” And I looked at him like, what the hell are you talking about? He said, “No, in the age of AI, which is imminent, look at everything around you, whether it’s good or bad, enjoyable or not enjoyable. It’s all about designing aesthetics.”
When AI is such a force multiplier that you can create virtually anything, what are you creating and why? That becomes the challenge. If you look at Wittgenstein and theories of that sort, fundamentally the conversation we’re having is the one America always has, about tasks and outcomes. It’s our culture. But there are other aspects of human life: meaning, thinking, reasoning. We’re not going to stop doing that.
So imagine if your purpose in life in the future is to figure out what’s going on, and to be successful, just figuring that out is sufficient, because once you’ve figured it out, it’s taken care of for you. That’s beautiful. That provides purpose. It’s pretty clear that robots will take over an awful lot of mechanical or manual work. I liked repairing the car; I don’t do it anymore, and I miss it. But I have other things to do.
The Arrival of Digital Superintelligence
INTERVIEWER: Take me forward. When do you see what you define as digital superintelligence?
ERIC SCHMIDT: Within 10 years.
INTERVIEWER: Within 10 years. And what do people need to know about that? What do people need to understand and prepare themselves for, whether as a parent, an employee, or a CEO?
ERIC SCHMIDT: One way to think about it is that when digital superintelligence finally arrives and is generally available and generally safe, you’re going to have your own polymath. So you’re going to have the sum of Einstein and Leonardo da Vinci in the equivalent of your pocket. I think thinking about how you would use that gift is interesting. And of course evil people will become more evil. But the vast majority of people are good.
INTERVIEWER: Yes.
ERIC SCHMIDT: They’re well meaning. So going back to your abundance argument: there are people who’ve studied the notion of productivity increases, and they believe we’ll see 30% year-over-year economic growth through abundance and so forth. That’s a very wealthy world. That’s a world of much less disease, many more choices, much more fun, if you will. Just taking all those poor people and lifting them out of the daily struggle they have – that is a great human goal. Let’s focus on that.
Economic Transformation and New Challenges
INTERVIEWER: That’s the goal we should have. Does GDP still have meaning in that world?
ERIC SCHMIDT: If you include services, it does. Everyone’s focused on manufacturing and trade deficits, and they don’t understand that the vast majority of modern economies are service economies, not manufacturing economies. If you look at the percentage of people in farming, it went from roughly 98% to roughly 2 or 3% in America over 100 years. If you look at manufacturing, the heyday was in the 30s, 40s, and 50s; those percentages are now well below 10%. It’s not because we don’t buy stuff; it’s because the stuff is automated. You need fewer people, and there are plenty of people working in other jobs.
So again, look at the totality of the society. Is it healthy? If you look at China, it’s easy to complain about them. They now have deflation. They have a term, “lying flat,” for people who stay at home and don’t participate in the workforce, which is counter to their traditional culture. And if you look at reproduction rates, these countries are essentially having no children; that’s not a good thing. Those are problems that we’re going to face. Those are the new problems of the age.
INTERVIEWER: I love that. Eric, so grateful for your time.
ERIC SCHMIDT: Thank you. Thank you both. I love your show.
INTERVIEWER: Yeah, thank you buddy.
ERIC SCHMIDT: Thank you. Thank you guys.