Read the full transcript of Anton Korinek, professor of economics at the University of Virginia and a leading AI economist, in conversation with Barbara DeLollis of Harvard Business School on “The $100 Trillion Question: What Happens When AI Replaces Every Job?”
The Urgency of AI Governance
ANTON KORINEK: I think the time to acquire expertise is now, to make sure that our governmental institutions have the expertise to deal with AI systems and AI companies, so that they can make well-informed decisions, also in the competition sphere. If companies cut corners and create ever riskier systems just because they don’t want to fall behind, that could be bad for society. And I think we don’t have a lot of global cooperation on this question; in some sense you can see we are in a big race between the AI superpowers over who makes progress faster.
If AI takes off and we do reach AGI, that in itself would be an absolutely radical development on the economic front. And that kind of radical development would also require a radical response. My research is on the economics of artificial general intelligence, meaning AI systems that surpass human intellectual capabilities across the board. I started focusing on this 10 years ago, when it was very much a niche activity. But I think now we are so close, just a couple of years away, that the research is suddenly extremely urgent and relevant on much shorter time scales.
And within this field, the questions I’m looking at are how will AGI affect labor markets? How will it affect growth and productivity? How will it affect market concentration? And then a second strand of research that I’m looking at is if we think that these AGI systems are going to be so powerful, how shall we envision the process of integrating them into the economy and integrating them into activities like my own research?
AI’s Current Capabilities
BARBARA DELOLLIS: Are we nearing the point where AI matches human intelligence in a lot of domains?
ANTON KORINEK: I think we have already crossed that point. So in some sense, AIs are better than most humans at performing math. They are much better at analyzing large quantities of text. They are much better in a growing number of domains. But of course, right now I think it is clear that AI is nowhere near as good as the best humans, the best human experts in specific areas.
BARBARA DELOLLIS: How do you track that?
ANTON KORINEK: Oh, it’s difficult. There are technical benchmarks in different fields. They develop benchmarks of, for example, how good are AI systems at writing computer code? How good are AI systems at solving math problems, and so on. And in all these benchmarks, we can rapidly see how AI is getting better. And many of them are what people call saturated, meaning the AI can solve all the questions even though humans typically can’t. So they are getting better real fast.
The Speed of Technological Change
BARBARA DELOLLIS: So speaking of speed, tech is evolving so quickly. In fact, Perplexity CEO and founder Aravind Srinivas has said that, from a business perspective, he plans in months instead of years because technology is evolving so quickly.
ANTON KORINEK: Crazy, right?
BARBARA DELOLLIS: What does the short planning horizon say about the urgency of asking the question? Is big tech too big?
ANTON KORINEK: So I think those short horizons are something that I can also feel. In some sense, AI systems are improving so rapidly that it’s completely unpredictable what the world will look like a couple of years down the road. Many of us were advised when we were younger that we should have a five-year plan, right? In five years we may have artificial general intelligence, AI systems that are better than humans, or even artificial superintelligence, AI systems that are far beyond our human intellect. And it’s almost impossible to imagine what the world would look like under such scenarios. I think ultimately the best plan is to follow what’s happening in AI and make sure that you are constantly up to date and that you update the plans you have been making.
Economic Impact of AI
BARBARA DELOLLIS: When you’re talking to business leaders, how do you describe AI’s impact on our economy?
ANTON KORINEK: So right now I would say we actually see only a very small impact. AI is not yet visible in the productivity statistics. It’s not yet visible in our macroeconomic variables. But in some sense we are all expecting the impact to be really massive within the next couple of years. And businesses across the country, across the world have been investing massively in AI. They have started incorporating AI into their processes. And so far, some of them have seen some small payoffs of that. But I think the biggest payoffs are yet to come.
Preventing Inequality in the AI Age
BARBARA DELOLLIS: As AI evolves, how do we prevent technological advancements from benefiting only a few, while leaving many people behind?
ANTON KORINEK: I think from an economic perspective, that’s going to be the main challenge that we’ll experience in the age of AI. And what I anticipate is that our current system of income distribution, which revolves largely around people receiving most of their income from work, or from having worked in the past and receiving a pension, is just not going to work that way anymore after we have AGI, after we have artificial general intelligence.
So I think we need to fundamentally rethink our systems of income distribution. We need something like a universal basic capital or a universal basic income, whatever that may be and however exactly we structure it, to make sure that when AI takes off, when we reach the threshold where AI systems become better than humans at most cognitive tasks and our economy can suddenly produce so much more, humans can also share in some of those gains and the masses are not immiserated.
BARBARA DELOLLIS: We heard Sam Altman make the case for that on Harvard’s campus last May. Do you think that is a radical idea? Is it something that will increasingly become in vogue with governments around the world?
ANTON KORINEK: It’s absolutely a radical idea. And I think right now, at this very moment, we don’t need or want something like a universal basic income, because it’s hugely expensive and it would provide disincentives to work for a lot of people, while our economy still really relies on labor. We want people who are able to contribute to the economy to do so. But if AI takes off and we do reach AGI, that in itself would be an absolutely radical development on the economic front. And that kind of radical development would also require a radical response.
BARBARA DELOLLIS: Can you explain why that is the case? Is it simply because with AGI we would not need as many people producing or doing things?
ANTON KORINEK: Yeah, AGI would, by definition of it being general, be able to do essentially anything that a human worker can do. And that means human workers, including you and me, would become easily substitutable by AI. And once you’re substitutable, and the technology exists and is rapidly getting cheaper, which always happens in the technology sphere, then our wages, our labor market value, would also decline in tandem.
Changing Attitudes Toward AI
BARBARA DELOLLIS: So when you’re having conversations with business leaders or policymakers and giving them this scenario, what is the typical response that you’re receiving?
ANTON KORINEK: It has changed rapidly over the past two years. Two years ago, I could tell that people were not taking this seriously; they thought, oh yeah, that’s some weird sci-fi scenario. And in the past half year, in the past couple of months especially, I can tell that more and more business leaders and political leaders are taking this very seriously. I think it’s in part because they can see how rapidly AI is moving, how AI is able to produce output that was just unimaginable a year ago, and how the trajectory is going in only one direction, which is upwards. And if you follow that trajectory, I think you can see the writing on the wall: it’s just a question of time until AI reaches the level of AGI. And whenever that happens, the economic, social, and political implications are just going to be severe.
Education in the AI Era
BARBARA DELOLLIS: With it only a matter of time before machines surpass human capabilities, what practical changes should we make in education?
ANTON KORINEK: That’s the million-dollar question. To be sure, we don’t know exactly when this moment will happen. There are still a lot of very smart people who say, well, it may never happen. I personally think it’s plausible that it could be just a couple of years away. But it’s not implausible that it could take a decade or a little more, either.
But I think one thing in education is clear: right now, the ability to leverage AI systems and use them as a force multiplier is probably the most useful thing we can possibly teach our students. It’s also one of the most useful things we can teach our employees, and one of the most useful capabilities for leaders to acquire. And that’s advice that, no matter what your exact future scenario looks like, is going to be useful.
Political Stability and AI
BARBARA DELOLLIS: How can we ensure AI doesn’t destabilize political systems? And what measures should we be taking now?
ANTON KORINEK: I think there is a big risk that it will be destabilizing. One of the greatest risks that I can see as an economist is that if we allow AI to create massive labor market disruption, where lots of people lose their jobs, their source of income, their livelihood, then that is likely to destabilize us. So probably one of the best things we can do to prepare is to ensure that we have a system of income distribution under AI that makes sure people can share in the benefits. I think that would be, from an economic perspective, the best preparation.
Competition and Market Concentration
BARBARA DELOLLIS: Tech markets are dominated by a very small number of players. What new rules are essential to keep competition fair?
ANTON KORINEK: That’s a very interesting question, and I’ve just written a paper on this topic. The funny thing is, right now the level of competition in the AI market is fierce. You rarely see an industry where there’s so much competition, with companies undercutting and outdoing each other on an almost daily basis. And yet I think a lot of us have the concern that at some point, as these models get more and more expensive, only a small number of players will be able to afford to stay in the game and produce the kinds of systems of the future that we have been talking about.
And if that’s the case, and I think it’s a plausible case to make, then how to govern those few players is going to be a big challenge. One strategy that I’m almost certain will be useful is to make sure that our governmental institutions have the expertise to deal with AI systems and AI companies, so that they can make well-informed decisions, also in the competition sphere. We probably want to make sure that there is some competition, but we also want to make sure that the competition doesn’t turn into something too reckless. Because if companies cut corners and create ever riskier systems just because they don’t want to fall behind, that could be bad for society as well.
Current State of AI Regulation
BARBARA DELOLLIS: In the United States, what would you say is the level of progress being made with regulating AI?
ANTON KORINEK: Right now we don’t have a lot of AI regulation. And I guess you can also make the case that right now we don’t need a lot of it. Part of it is that companies are self-regulating, but part of it is also that the systems we have are not particularly powerful yet.
BARBARA DELOLLIS: When do governments need that level of expertise?
ANTON KORINEK: I think the time to acquire expertise is now. We need actors within government who really understand the frontier of AI, who understand the best systems, so that when the time is ripe, when systems are sufficiently capable and powerful that they actually pose very significant risks, those experts can contribute to the regulatory debate. They can make sure that we regulate in a smart way, in a way that mitigates the risks but doesn’t hold back progress too much, because we don’t want to pay too big a price for it.
And I think it can be done. I think we can mitigate the risks and still allow for a lot of progress because the risks arise in some very specific areas, like for example, these systems creating dangerous things in the chemical, biological, nuclear space and so on. And we can kind of ensure that systems don’t do that while still producing the economically useful work that I think we ultimately all may benefit from.
Global Cooperation and AI Safety
BARBARA DELOLLIS: Why is global cooperation vital for AI governance? And what dangers do you think we face if countries don’t collaborate?
ANTON KORINEK: Right now, I think we don’t have a lot of global cooperation on this question. In some sense, you can see we are in a big race between the AI superpowers over who makes progress faster. And right now I don’t think these systems are particularly dangerous yet. But as they get better, it would be in the interest of all the parties involved in this race to talk to each other, to establish common safety standards, and to make sure that this technology does not get out of hand.
Because nobody in the world, not the US, not China, not any of the other players, wants this technology to create massive risks for humanity as a whole. So I think when we have systems that would be capable enough to create those risks, then it would be absolutely desirable for the leading players to talk to each other. And then we will need a global governance framework for how we mitigate those risks, just like we have done in the past with dangerous technologies.