Editor’s Notes: In this episode of The Weekly Show, Jon Stewart sits down with MIT economists Daron Acemoglu and David Autor to explore the “third existential threat” facing our world: artificial intelligence. The conversation moves beyond typical tech-utopian hype, focusing instead on how AI might disrupt the American workforce, particularly white-collar jobs, and the potential for it to centralize power in the hands of a few tech giants. Together, they discuss actionable “pro-worker” policies—like wage insurance and data property rights—intended to ensure that this technological revolution creates shared prosperity rather than a permanent underclass. (April 23, 2026)
TRANSCRIPT:
Introduction: AI, Work, and the Future of the Economy
JON STEWART: Oh, ladies and gentlemen, welcome. My name is Jon Stewart. It’s another Weekly Show podcast on this Earth Day Eve. Is that— do you celebrate? I love Earth. I can’t wait. The pitter-patter of little feet at 6:00 in the morning running downstairs to open up their Earth Day presents.
And as this glorious Earth is being celebrated while simultaneously being destroyed on the back end of it, I thought it would be appropriate not to worry about Iran, not to worry about climate change, but to worry about a third existential threat, which is AI. Artificial intelligence. It is happening, people.
And it’s about time that we had a sober conversation about its deleterious effects, but also its opportunities. And so we’re going to go straight to the source. We’re going to go to two brilliant, brilliant MIT economists who are going to talk to us a little bit about the possibilities of AI, the collateral damage of AI, and the various ways we might be able to mitigate that. So we’re just going to get right into it with those cats right now.
Here they are, folks. We’re going to break it down today in terms of the AI revolution and what will be the repercussions for the American people, the American worker, the world writ large. Who do you go to for this kind of thing? You go to the experts, you go to the brilliant people, you go to Daron Acemoglu, Nobel laureate. I don’t throw that around. Nobel laureate in economics, MIT Institute professor, and David Autor, Rubenfeld Professor of Economics at MIT. Guys, thank you so much for joining us today.
DAVID AUTOR: Thanks for having us on.
DARON ACEMOGLU: Oh, our pleasure. Absolutely.
How Soon Will We Feel the Full Impact of AI?
JON STEWART: David and Daron, I am beginning to get increasingly discomfited by the speed at which AI seems to be infiltrating into not just sort of the popular consensus in culture, but the workforce. So I want to ask you guys, what is our timeframe as this technology is— when are we going to really feel the full effect of this new technology?
DARON ACEMOGLU: Just beginning to get worried about it now, Jon.
JON STEWART: Daron, you know me. We know each other. No, I’ve been worried about everything.
DARON ACEMOGLU: So am I. And I’m very worried about this too. Not about the timeline because the timeline is so uncertain. It’s hard for me to worry about something that’s so uncertain, but with all of the consequences, I think we are definitely not ready for AI. The workforce isn’t ready for AI. We don’t know what it’s going to do.
I think the people who are really not ready for AI are the students, whose learning is going to be affected in so many different ways. And we don’t know, we have no guardrails, no ways of ensuring that students are actually learning how to learn and they can actually become experts in anything in the age of AI when they can get a lot of answers from AI. So there’s just so many things to be concerned with.
JON STEWART: Now, David, will they need to learn anything? Because won’t AI— what will they need to learn?
The Value of Human Expertise in an AI World
DAVID AUTOR: If they don’t need to learn anything, then they’re just not needed as workers. And we don’t want to be in that scenario, right? So we do need people to have expertise and mastery, and I do think AI has both potential and risk, right? And I think Daron will talk more about the risk, so I’ll probably talk more about the potential.
Let me point out that although I do not have a Nobel Prize, around here at MIT it’s more distinguished to not have one than to have one.
JON STEWART: Dave, can I tell you, I love how you’ve set yourself apart from your colleagues. By not getting a Nobel Prize.
DAVID AUTOR: Someone’s got to stand out.
JON STEWART: You know what, the idea that you have that rebellious spirit at MIT to go against the grain. And not get a Nobel Prize. Well, then let’s start with that, David. The real concern is, look, and let’s step back for a moment. We talk about disruptions for workers over time, you know, industrial revolution, globalization. Those were sort of the dynamics that really impacted workers, but those took place over time. So, David, you’re going to talk more about the potential. Talk us through the previous disruptions and how AI fits into those paradigms.
Lessons from Past Technological Disruptions
DAVID AUTOR: So let me first say, just to bring it to the present first, what we should be concerned about is not running out of jobs per se, but having jobs where expert labor is not needed. So a future in which everyone is carrying the box from the UPS truck to the front door is very different from a future in which everyone is doing medical care, right? So it’s not the quantity per se, but whether specialized human labor is still needed.
I think it will be, but it really matters whether we are replaceable, whether we are all kind of redundant versions of one another, or whether we have real added value in this economy.
Now, we’ve been through lots of technological transitions.
A lot of those who worked in those dark satanic mills were basically unmarried women and indentured children doing dirty, dangerous, unskilled work. And it took decades, really into the late 1800s, I’m sorry, until we started to—
JON STEWART: See, this is why you don’t have the Nobel.
DAVID AUTOR: I know, this is what holds me back.
JON STEWART: You gotta know the right century.
DAVID AUTOR: That’s right. Until we actually started to use specialized skills again, where people needed to follow rules, they needed to master tools, and their expertise was really needed. And so that was a very traumatic technological transition, and eventually we came through it okay, but most of the people who were there at the outset did not.
And a lot of these transitions — young people adapt to them usually more successfully by choosing different careers. People don’t make big career transitions in mid-adulthood. They don’t go from being a steelworker to a doctor or a programmer to a nurse. Those transitions are generational. And so when it moves really fast, as it did in the era of the China trade shock, for example, people just get left behind. Places eventually recover, but individuals much less so.
Will AI Disrupt White-Collar Work the Way Automation Disrupted Blue-Collar Work?
JON STEWART: And you talk about — it’s very interesting, and Daron, maybe we’ll ask you — we’re talking about specialized labor, and David is talking about the craftspeople who knew weaving and those things, and they’re replaced by automation and these kinds of things. Manufacturing jobs that were replaced in the China shock maybe weren’t considered as specialized, but still blue collar. Is AI going to bring about those same disruptions, but in what you would call white-collar labor, or less specialized knowledge and more administrative knowledge?
DARON ACEMOGLU: I think it certainly will. The timeframe is unclear. Just to add to what David said, this kind of experience is not a distant one. As David’s own work shows, the China shock, when it led to cheap imports coming and destroying parts of manufacturing, had the same effect.
JON STEWART: You’re talking about in the 2000s when they were— when China was admitted to the WTO and things like that.
DARON ACEMOGLU: Yeah. Starting in the 1990s, but especially starting in 2000.
DAVID AUTOR: But really after 2000.
DARON ACEMOGLU: Right. And robots at a much smaller scale had exactly the same effects — huge increase in productivity for steel, electronics, cars, but blue-collar workers lost their jobs. Many communities, just like with the Chinese imports shock, were thrown into recession.
And the same thing can happen if there is very rapid displacement of white-collar jobs. Now, the timing is very unclear. There is a lot of hype and a lot of reality to the capabilities of AI models. So far, we’re not seeing mass layoffs. We may be seeing some slowdown in hiring. It’s unclear. And white-collar jobs are less concentrated geographically compared to, say, textiles or toys — the things that were affected by Chinese imports — or cars, definitely, or steel. But the number of jobs in white-collar occupations is high. So there could be a lot of people who lose their jobs.
Now, the thing is that despite the tremendous advances in AI over the last 8 months or so, these models are not yet able to do the whole occupation for many of the white-collar jobs. That may be to come, or it may take a while. That’s why there’s so much uncertainty, but uncertainty is a very bad reason to be complacent.
Human Labor as a “Tax”: How AI Companies Talk to Their Investors
JON STEWART: David, the story that those that are behind AI tell us is very different. When the people that are creating these AI models talk, they talk in utopian terms, right? We will be freed from the burden of the toil. We will paint and write poetry, even though AI’s probably going to do that as well.
But when they talk to their investors, they speak very differently. And I want to ask you about a quote that I heard. There was a gentleman who was talking to his investors about AI and he said, “It will allow you the benefit of productivity without the tax of human labor.” He referred to human labor — us — as a tax, as something that a company wants to avoid paying to retain productivity. That’s what worries me, that we talk a lot about this and it’s always framed in terms of productivity.
DARON ACEMOGLU: So wouldn’t you like to be freed from your podcasting job, Jon?
JON STEWART: Listen, man, I’ve been toiling in the podcast mines for — I’m getting podcast lung. It’s a terrible, crippling addiction.
DAVID AUTOR: So most of us are both workers and consumers, and we’re not going to be able to consume if we’re not working. But of course, from the perspective of a firm, they want their customers. They’d rather not have their workers, right? Labor — economists will tell you this — labor demand is derived demand.
JON STEWART: It’s not that firms want labor. Explain that, derived demand. What is that?
DAVID AUTOR: Yeah, they want to make stuff, right? And usually making stuff requires space and people and electricity and stuff. But if they could make it without the people, they would be just as happy. It’s like Spinal Tap, you know? If they had the sex and the drugs, they could do without the rock and roll.
And so far, people have always been necessary. Although firms have always had this fantasy that they could fully automate, they’ve never been able to do so. And often it’s kind of turned out not how they expected, right? So during the era of numerically controlled machines, they thought they would de-skill and replace workers. Actually, they turned manufacturing workers into programmers. So it doesn’t always work out the way that firms expect it to, but it may this time.
What Makes AI Different from Previous Automation
DAVID AUTOR: There’s certainly many, many more things that are subject to AI automation than were subject to the previous era, because AI has a whole new set of capabilities, right? Previous computers could do routine tasks. They could follow rules — rules specified so tightly that a non-sentient, non-improvisational, non-problem-solving, non-creative machine could just carry it out without having to understand what it’s doing. That really limited the set of activities that we could subject to computer programming.
But now AI learns inductively, right? It learns from unstructured information. It infers rules. It solves problems without our even understanding how it’s solving them. That allows it to enter many, many new realms.
Now, to make this very concrete, it’s useful, I think, to contrast two occupations — one that people talk about all the time and one they should be talking about. So the one they talk about all the time is long-haul truck drivers, right? And there are about 3.5 million of them in the United States, and they say, “They’re going to be replaced by autonomous vehicles.”
That is a problem we can handle because it’s going to go very slowly, right? The day that, let’s say, Elon Musk announces tomorrow he has a self-driving truck — and let’s just pretend we believe him —
JON STEWART: That’s how I’ve been operating for years.
DAVID AUTOR: And so it totally works. We’re not going to throw all our trucks into the Atlantic Ocean and buy new ones tomorrow. It’s going to take decades to replace all of that capital and all the infrastructure. So that’s going to be a slow transition, and labor markets can deal with transitions that happen at a couple percentage points a year because people retire, new people don’t enter. That’s manageable.
JON STEWART: You’re saying if it takes place over a generation —
DAVID AUTOR: Absolutely.
JON STEWART: — then that’s something that even though it will be disruptive, it won’t be catastrophic.
The Vulnerability of Cognitive Work
DAVID AUTOR: Exactly. Now let’s think of call center workers. There are about as many of them in the United States as there are long-haul truckers. They’re paid less, they’re primarily women, but there are just as many. Those jobs can go very, very quickly, right? Because automation can encroach rapidly. I don’t think they’ll all go. The ones that remain will actually be more specialized. They’ll be at the top of the queue, right? When the AI says, “I give up,” you’ll be handed over to the last 20 people standing.
JON STEWART: So rather than 20 people, 5 people will handle what’s left of the human tasks that need to be handled.

DAVID AUTOR: Exactly. And let’s just say that’s a mixed bag, right? Those will be better jobs. They’ll be higher paid, they’ll be more expertise intensive, but there’ll be fewer of them, right? And we’ll see this in language translation. We’ll see this in call centers. We may see this in software as well, right?
Software will bifurcate. We’ll have a small number of people who build AI models, who run data centers, who run enterprise software, and they’ll be highly paid and highly specialized. And then we’ll have infinity vibe coders, right? And they’ll be like Uber drivers, right? You’ll call them up to write an app for you. They won’t be paid nothing, but there will be a lot of them, and they won’t be highly paid.
So we’re going to see a variety of impacts, but the work that is just fully cognitive work, right? Is much, much more vulnerable. It can change much more quickly. Eventually robotics will also more and more enter the physical realm, but that’s still some ways off.
Reduction, Stagnation, and the Risk of Armageddon
JON STEWART: AI feels like it’s strip-mined the entirety of human accomplishment. The 10,000 years that we have spent developing these areas of expertise, these areas of knowledge, the kinds of things that made us feel relevant to the progress of the human condition. AI comes in and 6 months later goes, “Okay, what else you got? What else are you going to feed me?” And then it starts to move forward.
So what David’s talking about is already a reduction of the human workforce. Is that the thing that you are most concerned about, Daron? Or is it the eradication?
DARON ACEMOGLU: Yeah. Reduction is first and eradication is later, and in the process wages will be stagnating or even declining. And everything David said, I agree with, but there’s one other thing to add. Again, it’s a wild card because we don’t know how quickly these AI capabilities will develop and how quickly they will be adopted.
But all of our earlier examples of displacement, which as I said and David said haven’t been so good for workers — such as during the first 80 years or so of the British Industrial Revolution, or during the China and robot shocks — were confined to a few occupations. Even then, it was very hard for displaced people to relocate and get jobs, and for newcomers to find jobs.
Weavers during the British Industrial Revolution, once power looms came in, they lost about two-thirds of their earnings. But they could then become unskilled factory operators. Blue-collar workers went to construction or other things, or some of them withdrew from the labor force.
If Dario Amodei or some of the other people who are most vocal about the capabilities of these models and what they will do to the workforce are correct, there are going to be many sectors at the same time being hit. If the rest of the economy was booming and 3.5 million customer service representatives were laid off, you could find other jobs for them, perhaps with somewhat lower pay. But what if all occupations are going in the same direction? That is Armageddon. Now, I don’t think that’s going to happen anytime soon.
JON STEWART: David just sighed. You said “Armageddon” and David sighed. I will let David—
DARON ACEMOGLU: I mean, that’s not going to happen anytime soon. But I think we have to be prepared for it because some people are saying it’s going to happen in the next 2, 3, 4, 5 years. Either those claims are driving trillions of dollars of investment that are going to come to nothing, or there’s going to be a grain of truth in some aspect of it. But either way, we have to be prepared for that. Displacement is real.
Bubble or Catastrophe?
JON STEWART: So you’re talking about either this is a financial bubble where an incredible amount of capital is being poured into a technology that ultimately will be a bubble that resolves nothing and is not worth the investment, which causes a kind of financial catastrophe — or it’s real and it causes a personal human labor catastrophe. Is that—
DARON ACEMOGLU: I would say I’m somewhere in between. I think the speed at which this happens will be much slower, which will then lead to a lot of money being lost, because the investments need to be monetized, and they need to be monetized soon if they are going to pay off. So I am in the middle. I think that these capabilities will come at some point, but not as soon as the timelines motivating these investments assume.
But I am uncertain enough about both scenarios, all of it being a bubble or all of it happening within the next 5 years, that I cannot say in good conscience either is a zero-probability event. So many technologists are saying, “Look, in our labs, we have these even more amazing models.” I don’t believe it, but I can’t say that’s necessarily wrong.
JON STEWART: How do you means test these hypotheses? How do you—
DARON ACEMOGLU: I cannot. We cannot. Nobody can, because they’re all based on what’s going to come next year and we don’t have access to it.
JON STEWART: So everything we’re doing is looking backwards, not forwards. David, you were going to say something.
The Upside Potential of AI
DAVID AUTOR: Okay. So first thing, I don’t think that the success of AI companies and the value of their investments entirely depends on them displacing labor. If we just got much more productive, that would also pay off, right? So if we got more efficient in healthcare, if we got better at transportation, if we did education better. So it doesn’t all have to come from just throwing people out of work.
And it’s also important to remember that although these transitions have been wrenching, we’re infinitely more wealthy than we were 200 years ago. We are much better off.
JON STEWART: None of us wants to live on the margin. But obviously, I don’t think the Rust Belt would say, “Yeah, globalization was great for us.”
DAVID AUTOR: No, they’re not starving, right? They have— look, I don’t mean to be unsympathetic. The standard of living in almost anywhere in America, including in the least privileged places — people have indoor plumbing. They are not food deprived by and large. They have some access to education. They have some safety. It’s much better than conditions in pre-industrial England, 250 years ago.
So although there’s always cost, and I don’t mean to minimize them — I think they’re real and the transitional costs are enormous, and the beneficiaries are not the same as those who are harmed — I think we should recognize there’s enormous upside potential here as well. We shouldn’t only be sentimental about what would be lost. We should also recognize the opportunity to accelerate science, to improve our adaptation to climate change and energy generation, to improve medicine, to do education better. We might do it worse, we could do it better.
And distribute more of the world’s wealth to more of the people in the world. I actually think artificial intelligence, like mobile telephony, can be potentially beneficial to the developing world in a way — by increasing self-sufficiency, by giving access to expertise in engineering, in medicine, that is not readily available.
Pro-Worker vs. Automation-Focused: The Core Disagreement
DARON ACEMOGLU: Can I just jump in there? Because David and I have been studying these things together and separately for the last 30 years, and almost everything you’ll hear from David I agree with. But there is one place of disagreement between me and David, and David put his finger on it. So let me expand, because I think this just again underscores the uncertainty.
David and I completely agree that there is a potential to use AI in what we call a pro-worker way. Meaning you make workers more productive. They become better at their jobs. They gain additional expertise. They start performing new and more important and interesting problem-solving tasks.
The place of disagreement between me and David is that I think that direction requires a complete change in the focus of the industry, and we won’t get it on their current path. The current path is very automation-focused. Whereas I think David thinks, well, whatever the companies do, somehow better things might come out.
DAVID AUTOR: I’m not sure I agree with that.
DARON ACEMOGLU: So I think he’s more optimistic about those productivity gains that could then create meaningful jobs. I think we really are squandering that opportunity. That opportunity is there, but we’re squandering it. And that’s the most important reason why I love being in shows like yours where people listen, because I think we need to change the conversation. The conversation shouldn’t just be about the doom and the gloom or the amazing promise of AI. It should be about, are we actually using these models, these capabilities for the right thing or the wrong thing? That’s the main conversation.
JON STEWART: Well, let me mediate the dispute between you and David before—
DARON ACEMOGLU: Oh, I think we’ve tried. Many people have tried.
JON STEWART: Before it turns physical. I don’t want to get there. I don’t know how close you are to each other’s squares.
DAVID AUTOR: No, I know — I’ve seen a lot of fistfights on this podcast.
JON STEWART: That’s exactly right. And things do get out of control. And if we need to take it to the octagon, we’ll take it to the octagon. I don’t have a problem with any of that.
But I think what we’re talking about are sort of two separate things. So I want to see if we can tease those out a little bit. You said a phrase, Daron, that I think is interesting — “pro-worker.” What David is talking about, I think, is sort of the patina over society that these advances allow us to fight diseases that we didn’t have the ability to fight before. I agree with that.
DARON ACEMOGLU: I agree with that.
Pro-Worker AI and the Ideology of AGI
JON STEWART: But it’s pro-human to a certain extent, not necessarily pro-worker. So I guess, David, what I would say to you is generally those that are deploying these new things are not concerned about being pro-worker in any way. Now, the increase in productivity may help with that. You know, they always say a rising tide lifts all boats. And I always say, unless you don’t have a boat, right? And then really, it’s just water and you’re treading it.
But the people that run it, sort of like globalization, what they learned was capital travels and labor doesn’t. So if I can find ways to pay workers less, or to give them less safe working conditions, I will. Globalization was by no means pro-worker for workers that were accustomed to more first-world conditions. But if you were a worker in the Global South, those investments were wildly pro-worker because your conditions— so how do we tease out what we mean by pro-worker, and the standards of society that we’re talking about raising?
DAVID AUTOR: So, Daron and I, along with our colleague Simon Johnson, also a Nobel laureate, further increasing my distinction in not having one, just wrote a paper on pro-worker AI. And what we mean is tools that extend the usefulness of human expertise and the range of things that we can do, that give people new things to do, things that they didn’t do before. And let me say, what do we mean by new things to do? I don’t mean sorting blocks. There are 250,000 data scientists in the United States right now. They earn about $120,000 a year at the median. Those jobs didn’t exist 20 years ago.
JON STEWART: Now, what does a data scientist do? Give me—
DAVID AUTOR: A data scientist is someone who basically deals with— we have enormous amounts of data, we have enormous amounts of computing power. How do we process that? How do we organize that and make it accessible? The data that we have on the internet is so complex. It’s video, it’s text, it’s images, and data science is all about how you use that constructively. We had no tools, right? We had statistics, we had computer science, we had no tools for doing anything like that. And now there’s tons of expert work and a lot of new work.
A lot of where the value of human work comes from is demand for new forms of expertise. Like, we’ve had electricians and plumbers for a while. Now we have solar electricians and solar plumbers. People still do those fields, but they’re specialized even further. Much of our medical work, right? We didn’t have pediatric oncologists 50 years ago. Or even someone who’s a fitness coach, that’s also a new form of work. And often that creates demand, it creates specialization, and people earn a premium for that.
It needs to keep moving, right? And so expertise is always being devalued by automation and then reinstated by new ideas, new creativity, and new opportunity. And so both of those things happen, but we have much less control and predictability over the new work. It’s easy to predict what will be automated. It’s hard to predict how much new work there will be and where it will occur. And most important, who will do it.
Most of the new work of the last 40 years has been for people with high levels of education. And the majority of American adults do not have a college degree. It’s only about 40%. College graduates have done fine for the last 40 years. It’s the majority of people who are not college graduates that we should be concerned about.
And so in our view, pro-worker AI in particular is AI that enables people without elite credentials to do more valuable medical care, to do more programming, to do more legal services, to do contracting, skilled repair, right? And we think there’s opportunity there. But I agree with Daron, there’s no guarantee that that’s where we’re going, or where tech firms or even the market is pointing.
Now I’ll say, with some exceptions that I won’t name, I don’t think most of the tech bros are evil. I don’t think they mean to do harm.
JON STEWART: All right, now you and I are going to have a problem. Right, right.
DAVID AUTOR: But they don’t really know how to control this, right? If you told them, if you said, “Dario, this is how you make pro-worker AI,” I think he would be very interested in that. I don’t think he knows what that means precisely. I think he’s raising the alarm.
JON STEWART: But are they even interested in that?
DARON ACEMOGLU: No, they’re not interested, Jon. They’re not interested because they’ve been locked into this AGI, artificial general intelligence, craze. And your chops in this industry are measured by how close you can argue, or really go, towards this sort of AGI.
And AGI, if you take it seriously — hopefully we don’t have to take it seriously anytime soon — but if you do take it seriously, it means that these models can do everything, everything better than the very, very best experts. And then, once combined with advanced robotics that are flexible enough, they can do all the work better.
So a lot of economic intuitions are based on what David Ricardo introduced, which is comparative advantage. If you have an advantage in winemaking, fine, you’ll make the wine and I’ll do the podcasting. You won’t do both podcasting and winemaking because you have a limited amount of time. Now, if indeed we get to AGI, that framework is out of the window because these models can operate very cheaply and they’ll have an advantage over all human work. I don’t believe we’re getting there anytime soon, but that is the agenda and that’s the agenda that’s driving the industry. That’s the problem.
Owning the Operating System of Society
JON STEWART: Is the agenda AGI in the industry, or is the agenda to own the operating system of our society?
DARON ACEMOGLU: Both, both.
JON STEWART: That’s where I’m more concerned. We’re bringing up where it may go, but some of it does have to do with those that are the owners — Palantir, OpenAI, the owners of these new technologies — and how exploitative they want to be for workers. And also ideologically, what are they going to do if they own— When the companies were laying fiber optic cables or the companies were laying electricity or any of those kinds of things, there was not an ideological component. But when you listen to the guys that are laying the new pipelines for whatever this society is going to be, they are ideological.
DARON ACEMOGLU: 100%, Jon. You nailed it. You nailed it. I think there is an ideology of AI. AGI is part of it. But let me just try to illustrate that going back to what David said, which again, that part was based on our joint work. So I agree, sort of mostly.
DAVID AUTOR: You’re required to agree. You’re disavowing your own work. Your name’s on it, buddy.
DARON ACEMOGLU: So the capability of using AI with non-expert workers to increase their expertise, to allow them to do new things, is definitely there. And I think it’s the most exciting part, but fighting against that is the ideology and the practice of centralizing all information in the hands of a few companies and a few people. And if they control that information and if they want to use it in the way of not making the novices more expert, but getting rid of the novices, getting rid of the experts, then you have a very different world. And that’s the agenda. Now, can they achieve that agenda? Not necessarily, because there are technical barriers to it, but that’s what they’re trying to do. Yes, you’re absolutely right.
JON STEWART: David, okay.

DAVID AUTOR: So I would make 3 points. First, you shouldn’t take Daron and me too seriously about telling you the future of AI, right? We’re not experts in this. I don’t think you should take Dario Amodei very seriously about projecting the future of the economy. He means well, but people have been telling us forever that we’ll run out of work because we’re automating stuff. That hasn’t happened so far. It doesn’t mean it can’t happen. But it just means thinking about it mechanically is not the right way to think about it, right?
Second of all, I don’t even think when there’s AGI that that will actually put all humans out of work. Many, many problems are not computational problems. They’re political and interpersonal problems about who has control, who has ownership rights, who has the information. If I say today, “Here’s a better way to reorganize MIT, I’ve got it, I did it with my AGI,” MIT will not be reorganized tomorrow.
DARON ACEMOGLU: It’s a political problem. Depends on whether you have dictatorial powers or not. If they also have the dictatorial powers, then it will be reorganized.
DAVID AUTOR: Okay. Well, if we also throw democracy out, then we’re in more trouble.
The Atomic Analogy: Technology and Human Nature
JON STEWART: But David, let me talk about it in kind of— you made some really good points about the historical precursors of the Industrial Revolution and globalization. I just want to make a little bit of a point about human nature. When new technologies come along that are truly transformative, I’m thinking of splitting the atom, right? So you have brilliant people working on splitting the atom, and if you split it one way, you can use it to power the world. And if you split it another way, you can blow the world up. Which one did we try first?
So when we talk about AI and the technology, it doesn’t necessarily have to be transformative in the way that we’re talking about theoretically. We can talk about how powerful it is as a general tool that humans use to rule over other humans. And I’ll give you an example. Palantir comes along with these incredibly powerful AI-driven systems. And what do they do? They suck information out of the system, they funnel information about people who are undocumented, and the government then uses that information.
It’s not just about what it might do, it’s about how governments or individuals will use these new powers to game the system and gain advantage over their competitors. Isn’t that a more realistic conversation?
DARON ACEMOGLU: You nailed it exactly, Jon. So I think for the next version of our paper with Simon Johnson—
JON STEWART: Daron, when are we writing a paper together?
DARON ACEMOGLU: Exactly. I was just going to say, you have to become a co-author.
JON STEWART: I want my Nobel. Where’s my Nobel?
DARON ACEMOGLU: The direction of technology is highly malleable. And there is always a worse direction than the one you fear. And sometimes we find it — the more dictatorial, authoritarian, less democratic we are, the more likely we are to find that direction. Nuclear weapons are much more likely under times of war or times of authoritarian control, and nuclear energy becomes much more reasonable if it’s subject to democratic oversight.
Exactly. The centralization of information, the ideology of AGI, and the meeting of the minds between the surveillance state and the technology companies are very worrying, precisely because they open those bad doors for us. And many of the people in the industry would have no problem walking through those doors headfirst.
JON STEWART: And David, I want to ask you about that, because you’re making really good points about the ways that these new technologies can be used to uplift. But in my mind, I’m thinking atomic — it’s splitting the atom. And are you concerned? Because I think you are more optimistic about where this thing is going than I am about what I’m raising here.
DAVID AUTOR: Oh, absolutely.
The Enclosure of the Internet and AI Policy Solutions
DAVID AUTOR: I’m very concerned. I think AI is God’s gift to authoritarians. It’s great for centralized control. It’s great for monitoring. We already see it: if you want to see mass surveillance and censorship at scale, go to China, and they’re exporting that model. And we’ve privatized a lot of it here; we’re still doing it. I’m very concerned about that. So I’m trying to emphasize that there’s opportunity, not that we’re destined to get there.
I think we’re destined to have a range of outcomes, some of them quite terrible, some of them quite good, and very unevenly shared. The balance may tip towards the bad or towards the good, but if we don’t bear in mind that we have an opportunity, we’ll certainly squander it.
DARON ACEMOGLU: Yeah, absolutely. And this is the first and most important observation that David made: we also need to have the public conversation that those opportunities exist and that we’re not currently targeting them. We’re currently targeting something very different — mass automation, the surveillance state, a new sort of merger between the security apparatus and tech companies. Those are the things we are contemplating or practicing right now.
DAVID AUTOR: There’s another conversation we’re not having. I just want to loop back to a point you made, Jon, a little while ago, about all this stuff on the internet now being monetized. There’s a really fascinating book by Maximilian Kasy, an economist at Oxford, called “The Means of Prediction” — a play on the Marxian phrase, the means of production.
And he makes what I think is a brilliant analogy. He says, look, in the enclosure movement in medieval Europe, all of a sudden the lords said of the common land, “Hey, we own that, and we’re just going to farm it ourselves.” And it may actually have been a more efficient way of farming, but the commoners were just wiped out by it.
Well, you could say that AI is in some sense enclosing the internet. It’s taking all this common property and monetizing it. All of the stuff we put out there, all our photos and all of our writing and all of our movies. They’re not enclosing it — I mean, it’s still there just where you left it. But of course you never thought your artwork was going to compete with you. You never thought the story you wrote would be regurgitated and sold and you couldn’t sell your work anymore. So I do think this unilateral transfer of property rights is a huge thing that is underrecognized, underdiscussed.
DARON ACEMOGLU: Oh yeah, that’s so important. But can I add one thing?
JON STEWART: 100% agree with David. But he always wants to be the— he wants to be the black swan. He always wants to walk in and go really dark. The black swan. Go for it.
DARON ACEMOGLU: But it has an additional really bad effect. The useful things that David and I are mentioning that you can do — pro-worker AI — really require very high quality data. If you’re going to build a tool for electricians that lets novices perform the tasks that the best seasoned electricians can do, you need the data from those electricians dealing with the hardest problems. That data will not be produced unless there are property rights over data and data markets in which people can get returns on the data they create. But this enclosure that David described is a data extraction economy. So it’s creating the opposite.
AI as a Human Expertise Laundering Machine
JON STEWART: Guys, this is blowing my mind. It’s something that I had not thought of at all, but I think that’s what you’re bringing up is so interesting. So as AI strip mines the totality of human expertise and experience — let’s look at it in terms of music. You get royalties. If you write a song and somebody uses that song, they pay you a royalty. If somebody plagiarizes your lyrics or finds a way to take your melody and put it into their song, you are going to be paid for that.
AI is a human expertise laundering machine. It’s basically taking everything that we’ve created, training itself on it, and in some ways replacing us, but without that royalty payment. Where the royalty payment goes is to OpenAI or to Palantir or to any of these other places, and if you ask them what they’re doing with it, they’ll say that’s proprietary.
DAVID AUTOR: Yeah. We’re in the Napster era of AI, right? Remember Napster? Like just everybody’s music — rip it, burn it, and share it. That was not viable. We wouldn’t have a music industry if we hadn’t gotten control of that. With Spotify, with Apple Music, where we pay royalties when we listen to those songs — small royalties, but we do pay them.
DARON ACEMOGLU: But the difference is that in the Napster era, it was the consumers who were doing that replication. Now it’s the most powerful corporations humanity has ever seen who are doing it.
DAVID AUTOR: But this is a failure of property rights, a failure of legislation. People say, oh no, fair use allows that. Well, fair use never envisioned this. So who cares what the law said? It’s not applicable. We should be changing it. People should be compensated, and not just once — they should be compensated as their information is reused. And that’s actually a manageable problem. Talk to people at Google who’ve worked on this. They say, “Yeah, we know how to do that. We don’t have an incentive to do it, but we know how to do it. And if the laws changed, we would support it.”
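David’s claim that compensating people “as their information is reused” is a manageable problem can be illustrated with a toy pro-rata split, the same basic shape streaming royalties take. All names and numbers here are hypothetical; this is a sketch of the idea, not how any AI company actually accounts for training data:

```python
def distribute_royalties(pool, usage_counts):
    """Split a royalty pool among creators in proportion to how often
    each creator's work was used (a toy pro-rata scheme, like streaming)."""
    total = sum(usage_counts.values())
    if total == 0:
        return {creator: 0.0 for creator in usage_counts}
    return {creator: pool * count / total
            for creator, count in usage_counts.items()}

# Hypothetical: a $1,000 pool where one creator's work was used 3x as often
payouts = distribute_royalties(1000.0, {"alice": 3, "bob": 1})  # 750 / 250
```

The hard part in practice is the usage accounting, not the arithmetic — which is exactly the capability David says the platforms already have.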
So by not recognizing that this enclosure is going on, that property rights are being reallocated — economics doesn’t deal with that. It’s reverse socialism.
JON STEWART: Exactly. They’re taking from the workers and funneling it up to these 5 individuals. And to come back and torture this atomic analogy: you got the sense that people like Oppenheimer or Einstein were aware of the gravity of what was happening, and through the crucible of war maybe made some decisions they might not have made otherwise.
In this environment? When Altman was asked, “Should the human race flourish and continue to exist?” he took a 5-second pause, like, “Let me think about that for a second. That’s a tough one.” So the nuance that you are both bringing to the discussion seems utterly absent.
DARON ACEMOGLU: And you nailed it again — the war conditions. Einstein, who was a committed pacifist, supported the atomic weapons program because he was worried about Germany, the Third Reich, and several others did too. And you know what? Silicon Valley is also creating war conditions. The framing of AGI is: either China gets there first and we become their vassal state, or we have to get there first. And that’s creating this warlike condition — you have to allow us to do anything we want, even the worst things, because otherwise China is going to do them. So that’s creating the equivalent of a 21st-century war condition.
Policy Solutions: Wage Insurance and Beyond
DAVID AUTOR: And Oppenheimer, by the way, spent the rest of his career opposing the H-bomb and eventually was stripped of his security clearance and effectively died a broken man because he was persecuted for trying to control the invention that he was so instrumental in creating.
But maybe it makes sense to talk a little bit about some policies that we could have. I would put them in 3 buckets, but let me start with one that people call wage insurance — an idea that was actually experimented with during the presidential administration that reigned from 2008 to 2016. I’m not going to say who the president was, but you can guess.
JON STEWART: I don’t recall it, but I think I remember him in a tan suit.
DAVID AUTOR: Handsome guy.
JON STEWART: Very handsome guy. Very handsome guy.
DAVID AUTOR: Anyway, that’s all I remember. But they— it was an idea. The idea was, look, you lose a job in manufacturing. Say you’re making $50,000 a year, $25 an hour. And you can find another job, but it’s going to be like $15 an hour. And not only is that low wage, but you’re like, “Hey, that’s beneath my dignity. I’m not going to take that job.”
So wage insurance says, “Hey, look, we get that. We’re going to make up half the difference, up to $8,000, for up to 2 years. Just take the $15-an-hour job — you’ll make $20. And then you can look for something better.” It gets people back into the workforce more quickly. It’s like an Earned Income Tax Credit for returning workers. This program was so effective in terms of saving unemployment insurance money and generating additional payroll tax revenue that it paid for itself.
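David’s arithmetic can be sketched as a tiny function. A hedged note: the 50% replacement rate and the $8,000 figure come from his example, but the transcript doesn’t say whether the cap is per year or total, and the 2,000-hour work year is an assumption of this sketch — under those assumptions the cap binds, landing the effective wage just under the uncapped $20 he quotes:

```python
def wage_insurance_subsidy(old_hourly, new_hourly, hours_per_year=2000,
                           replacement_rate=0.5, annual_cap=8000.0):
    """Annual subsidy and effective hourly wage under the scheme David
    sketches: make up half the hourly wage loss, capped per year."""
    hourly_gap = max(old_hourly - new_hourly, 0.0)
    annual_subsidy = min(hourly_gap * replacement_rate * hours_per_year,
                         annual_cap)
    effective_hourly = new_hourly + annual_subsidy / hours_per_year
    return annual_subsidy, effective_hourly

# David's example: a $25/hr job lost, a $15/hr job found
subsidy, wage = wage_insurance_subsidy(25.0, 15.0)  # $8,000/yr, $19/hr
```

With no cap, the worker would net $20/hr ($15 plus half the $10 gap); the cap shaves that to $19 at full-time hours.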
JON STEWART: How is that different, David, than unemployment insurance?
DAVID AUTOR: Unemployment insurance, you get it while you’re not working. This, you get it if you return to work.
JON STEWART: I see. Yeah.
DAVID AUTOR: And now this needs to be scaled and it makes up—
JON STEWART: So I get what you’re saying. It makes up in some ways the difference that you would have gotten from a job that was paying a little bit more. Okay.
DAVID AUTOR: That’s right. And by the way, this is very politically viable. In America, we’re not very friendly towards people who aren’t working. If you’re working, that’s okay with us. And so an incentive to work — something that’s subsidizing work rather than subsidizing leisure — is something that many people can get behind, especially if it’s pretty cost-effective.
Now we need a bigger demonstration than what was done — and there are people like Brian Kovak at Carnegie Mellon University who are trying to stand up a multi-state demonstration of this. I’ve been speaking with funders trying to get it going. So that’s one really actionable policy. And let me say, this is a no-regrets policy.
JON STEWART: I’m in. I like it.
DAVID AUTOR: It’s not like if the Armageddon doesn’t come to pass, we go, “Oh damn, why did we do wage insurance after all?” This is just a good idea. It was a good idea 10 years ago. It’s a good idea now. So let me pause there and turn it over to Daron for the next idea.
DARON ACEMOGLU: Yeah, well, that’s a great policy. Fully behind it. But let me say, before I talk about the next policies, I think the most important step, even before the policies, is actually this conversation. This conversation that needs to just take place much more widely — that there are many different things we can do with AI, and it’s a choice what we do with AI.
That’s what’s lost in the current media environment. For about 10 years, the entire mainstream media was so excited about the tech barons that they couldn’t do anything wrong. Now they’re talking about killer robots and doom. Okay, that’s a useful corrective, but we’re actually missing the most important conversation.
The most important conversation: AI is not one thing. AI is a whole spectrum. And at the one end of the spectrum, as we’ve been emphasizing, there are some terrible things. And at the other end of the spectrum, there are feasible things that we can do that are much better. Who’s going to decide that? Who are we going to empower to make those civilization-changing decisions? Dario Amodei, Sam Altman, Peter Thiel? No, I think the democratic process should have a part in it and people should become more informed about it. I think that conversation is first, and then all the policies have to come on top of that.
Taxing Labor vs. Subsidizing Capital
DARON ACEMOGLU: And then there are many policies that we can worry about. Like, for example, in the United States, we tax labor heavily. We subsidize capital.
JON STEWART: It’s been that way for 50 years.
DARON ACEMOGLU: Well, it’s gotten much worse over the last 25 years, and much, much worse with the Trump administration. And how do you think that changes firms’ and technologists’ decisions? It makes them lean more towards automation, because automation is being subsidized. So let’s change that tax — and we can raise more revenue too, because right now we’re just giving a pass to all capital income.
JON STEWART: But it’s kind of a perpetual motion machine, because when these new technologies come along, capital flows towards them in such massive ways — trillions and trillions of dollars flowing in, building data centers, sucking up water and electricity and money. And then what they do with the profits is reinvest, not just in their technologies, but in their political power.
Oh, 100%. So they take their money and they bring it to bear on Washington. It was a shocking moment to me, at the inauguration of an American president, to see in the front row of the swearing-in not the people, but the tech executives who had the closest proximity and access to the president.
DARON ACEMOGLU: And you know what’s worse? We don’t even know who owned who, whether they owned the government or Trump owned them.
JON STEWART: We don’t know which is which. David, you were going to say something, though.
DAVID AUTOR: Well, I just want to talk about another policy.
JON STEWART: Oh, okay, great. I like Daron’s, though — changing the tax incentives. That could even out how we value capital over labor, and I think the pendulum needs to swing back. So I think that was a really important —
DAVID AUTOR: Let me suggest another policy related.
JON STEWART: Yeah, please do.
Universal Basic Capital: A New Approach to Ownership
DAVID AUTOR: Which is what people call universal basic capital, right? So not universal basic income, right? Which is like write people a check every month. But the notion that when people are born, we give them an endowment of capital with voting rights, right? Like shares.
And what does this do? Well, one, it diversifies. Most people’s entire income is bound up in their human capital — your income comes from your ability to produce valuable labor. And that’s a pretty risky bet for anyone, because the value of labor changes over time. Specialized skills sometimes become more valuable, sometimes they become worthless.
Right. So we distribute — and by the way, you can call them the Trump accounts if you want, right? They’re already being done.
JON STEWART: I think we’re calling it Trump everything. That’s right.
DAVID AUTOR: I think that’s what — That’s right.
JON STEWART: This is actually the Weekly Show Trump Podcast. That’s right. We just add the word Trump to everything.
DAVID AUTOR: Daron has the Trump Prize in Economics. That’s right. Yeah, just to return to our main theme. So what does this do? One, it gives people a more diversified portfolio. It’s something they can invest in, but they can’t spend it until they’re 18. Second, it gives them ownership rights. Basically, it’s just like getting a bond when you’re born.
DARON ACEMOGLU: Okay. Like the Alaska Fund for everybody.
DAVID AUTOR: That’s right. That’s right. But what it gives people is a somewhat more diversified income portfolio. It also redistributes voting rights — they have voting rights over capital. And you could even set it up so that even if you sell your stocks, you maintain the voting rights.
JON STEWART: But what is the voting right? Is it a — so the way that I would think about it is it’s reverse — it’s Benjamin Button Social Security. So rather than — it’s a large fund and then when you’re born — you’re the comedian.
DAVID AUTOR: That’s it.
JON STEWART: I just watch a lot of movies. So when you’re born, you are invested into this larger fund that has been — now then the questions come up, well, what is that fund invested in and how does it grow?
DAVID AUTOR: It owns, you know, it owns shares of these tech firms, for example. Right. It owns a piece of the economy. Right. And so then we all have some voting rights. And that’s really important because if labor — there is certainly a risk that labor will become less valuable, and capital more so. And if so, we want more people to have ownership stakes.
Part of the brilliance of the labor market is that in a country without slavery and without labor coercion, everyone owns at most one worker: themselves. So it’s intrinsically relatively equal. But capital is not like that. So we would like to —
JON STEWART: The reason why I’m slightly dubious about that is — and I’ll tell you why. Yeah. Companies won’t even do that for their own employees.
DAVID AUTOR: No, the government has to do it.
JON STEWART: Publicly? But is the government going to give away shares of privately owned companies? Or buy them?
DAVID AUTOR: That’s fine. Or buy them.
JON STEWART: Okay. Sure. All right. Yeah. All right. Now I’m feeling a little better.
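As a back-of-the-envelope sketch of the endowment idea the guests describe — note that the $1,000 seed and the 7% average annual return here are hypothetical assumptions for illustration, not figures from the conversation:

```python
def endowment_value(seed, annual_return, years):
    """Value of a birth endowment left to compound untouched
    until the holder can spend it (age 18 in David's description)."""
    return seed * (1.0 + annual_return) ** years

# Hypothetical: $1,000 at birth, 7% a year, locked until age 18
value = endowment_value(1000.0, 0.07, 18)  # roughly $3,380
```

The point of the sketch is only the mechanism: even a modest seed compounds into a meaningful stake by adulthood, and the voting rights attach from day one.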
The Risks of Universal Basic Income and the Two-Tier Society
DAVID AUTOR: I’m feeling a little better too.
DARON ACEMOGLU: But here is the problem. I completely agree with David — that would be a nice addition to a functioning labor market. But here is what I want to put a pin in: the tech solution to these problems, universal basic income —
DAVID AUTOR: I didn’t say I hate UBI.
DARON ACEMOGLU: Exactly. I just want to underscore that — UBI, or other schemes where people are somehow given a handout so that they can just not work. I think there are many problems with that.
First of all, I don’t think we know what to do with millions of people who don’t work. That would be very bad for their mental health, for social peace. But even worse: if you create any system like that, based on dividends or income or anything else, then as long as society thinks, “Oh, these are the creators — Peter Thiel, Elon Musk, et cetera — and the rest are living off the income they’ve created,” you create a horrible two-tier society, with a few people of very, very high status and then all the rest.
JON STEWART: Daron, we have a horrible two-tiered system as it exists now.
DARON ACEMOGLU: I know, but it will get even worse.
DAVID AUTOR: I mean, look at Norway. They have a sovereign wealth fund that’s worth twice their GDP, and it comes from oil, but the people are the public owners of it. And they’re doing okay.
DARON ACEMOGLU: And they’re working. And they’re working in Norway.
Makers, Takers, and the Myth of Meritocracy
JON STEWART: And I’m in favor of work, but I want to push back on a couple of things within that. The system that’s already been designed is a two-tiered system, and there’s already that sort of Randian philosophy that there are makers and takers. But when you have an economic system that requires labor at its cheapest level, and you have the outside pressure of globalization continuing to drive those wages and conditions down, well, we’ve created the conditions for that permanent underclass — and then we blame those people.
As though their poverty is a function of vice, is a function of a lack of virtue. And that’s what I want to push back on. I don’t view money that goes into those communities as handouts. I view them as investments. Sure. And we have to find a way within this.
I love the idea of giving people some ownership over the industries that drive the country. For too long we have allowed these companies the benefit of the stability of this country, the subsidies of this country, the investments of this country, and asked for no vig. And I do think the house should always win, and the house should be the American people, and there should be a rake.
DARON ACEMOGLU: Right. 100%, Jon. You nailed it. Now you’re definitely a co-author.
JON STEWART: Give me my prize.
DARON ACEMOGLU: But you also put your finger, in passing, on something that’s very important. And you might want to have Michael Sandel on the show to talk about this — this ideology of meritocracy: that somehow all of those who are so successful are well-deserving and virtuous, and all of those who have lost out from globalization, technological change, or social change are losers who deserve their fate.
I think that’s been very, very pernicious. I think you cannot understand the rise of Trump, the rise of anger in this country, without that faux-meritocracy ideology. And he’s been the most eloquent describer of this. It’s a very, very important thing you put your finger on.
DAVID AUTOR: Not Trump, Michael Sandel has been the most eloquent.
JON STEWART: Who would’ve thought there’s so much fun to be had at MIT?
DARON ACEMOGLU: Please don’t have Trump on your show, Jon.
Actionable Policy in a Dysfunctional Political System
JON STEWART: And these are really interesting, and I really do like them. What I love about them the most is that these are actionable, specific ideas. What so frustrates me about our political process in this moment is that we have this incredibly powerful technology sitting just on the horizon, but we have a political system that is unable to articulate anything but platitudes. “We have to start talking about kitchen table issues with working families” — that kind of thing.
DARON ACEMOGLU: So you think creating American AI dominance and cryptocurrency are not actionable issues? Well, let me tell you something.
JON STEWART: As a proud owner of Melania coin, I can tell you that my future is set. But we are in this position. What’s so — I don’t even want to say ironic about it — is that we could probably plug these questions into AI and come up with more specific, actionable, and interesting solutions than what’s being offered by our political system. And that’s the part I can’t wrap my head around. Why is that the case?
DAVID AUTOR: Well, I actually think the idea of wage insurance has currency. It’s being discussed. I’ve discussed it with people in the Trump administration; I’ve discussed it with people in the Democratic leadership. I think there’s enthusiasm for it. There are also new efforts around modernizing training in a way where we can measure it, monetize it, and return the revenues. Raj Chetty and the Opportunity Insights group at Harvard are working on this in a really innovative way.
JON STEWART: Harvard, safety school. Talking MIT, baby. Yeah, exactly.
DAVID AUTOR: So I do think there are a set of policies that are — that again, I call no regrets policies. We won’t be — we did them even if the worst doesn’t come to pass and we know how to do them well. They’re not totally out of reach.
So I absolutely agree with Daron — we need to shape the conversation, we need to deploy the technology constructively. But we also have to recognize we are in for a rough ride. Even if it goes well, we’re in for a rough ride, because the transition is going to be so fast. So we should have policies that support people, support their income, support job transitions, and give them an ownership stake, so that they’re on some of the upside of this, not just the downside. Distributing capital more broadly would have that effect.
Ownership, Regulation, and the Path Forward
JON STEWART: David, I can’t tell you how much I love that. In some ways, I think that’s what’s gone wrong with the economic condition in this country over the last 50 years: labor has never been offered an ownership stake in the value of its productivity.
And Daron, I want to ask you about that — and I’ve so appreciated this conversation. When we talk about productivity gains, because that’s always how it’s framed, productivity always outstrips wages. Always. And maybe that’s just the way the system is.
DAVID AUTOR: No, it’s not how it was until the mid-1970s.
JON STEWART: Yeah, but I’m saying since the ’90s — for 50 years, absolutely, since the Reagan revolution.
DARON ACEMOGLU: That’s right. But people say that about, oh, the capitalist system. Well, it was a capitalist system in Europe and in the United States from the 1940s to the mid-1970s, when wages grew faster than productivity. Workers with less than a college degree had faster wage gains than managers. That was feasible. There’s nothing in the laws of economics or in the laws of democracy against that. We just chose a different path after 1980.
JON STEWART: And do you think at this point those powerful corporations almost have us at an extortion point, where they say, “Oh, if you try to do anything to regulate us or tax us, we’ll leave”?
Can Big Tech Be Regulated?
DARON ACEMOGLU: This is such an important point, Jon. First of all, these corporations are absolutely enormous. I mean, it’s not a fair comparison, but I just did the calculation last week. Each one of the largest 7 tech companies has annual revenues in current dollars twice as large as the entire British Empire’s GDP in the middle of the 19th century. These are enormous, enormous corporations.
They need to be regulated. And the rhetoric that they cannot be regulated — that AI cannot be regulated — is false. China proves it. I don’t approve of what China does; I don’t approve of what they intend to do. But they show very clearly that AI can be regulated. Alibaba, for example, is now completely subservient to the interests of the Communist Party in China.
We could also make Google and OpenAI and Anthropic be much more in line with the democratic priorities in the United States. There is nothing in the laws of economics, in the laws of physics that says these companies cannot be regulated.
DAVID AUTOR: They’re not delicate flowers. When Sam Altman says, “Oh, if you charge us for intellectual property, we’ll be put out of business,” not only is that not true, it’s kind of pathetic — because they’re effectively saying, “We don’t produce anything of value. If you actually make us pay for our inputs, no one would buy it.” That’s crazy. And it’s not true.
So I mean, look, I think there are constructive ways to steer it. We don’t need to shut it down. We don’t need to regulate it to death so it can’t move. The US is innovative, and that’s great — we have a lot to be proud of in that we have led this technology. We’re building it out quickly. It’s valuable. But it’s an opportunity, and we could squander it. We need to steer it, 100%. Left to its own devices, it’s not going to be pro-worker.
DARON ACEMOGLU: What you’re hearing both from me and David is that AI is a very promising technology, but it’s precisely the reason why we’ve got to put the care to make sure that we use it for the right thing.
Closing Thoughts
JON STEWART: Gentlemen, you have done the impossible. You have done the impossible, which is: you have somehow not allayed my fears, but you’ve given me hope that the future has actually not yet been written. And that creates opportunity — the opportunity to write it the proper way. What you’ve done really well today is you’ve given specifics.
None of this is platitude. It’s all specifics: here’s what it could do, here’s the damage it could do, here’s a way to mitigate it, and here are some ways to get shared prosperity out of it. And I think that’s truly— I think that’s the conversation. Have the two of you thought about having a podcast?
DARON ACEMOGLU: We were hoping we would join you after this. What?
JON STEWART: Oh, yes. Unfortunately, what I’ve done is I’ve had my data scientists strip mining this conversation. I don’t need you. We’re done. I’ve created AI avatars of the two of you, and now we’re done. But guys, that frees up some time. Man, thank you so much for this conversation. I’ve truly appreciated it. Daron Acemoglu, Nobel Laureate in Economics and MIT Institute Professor, and David Autor, Rubinfeld Professor of Economics at MIT. Guys, fantastic. Really appreciate it, and I hope to continue the conversation with both of you.
DARON ACEMOGLU: This was fantastic. It was a lot of fun. Thanks, Jon.
DAVID AUTOR: Thank you so much for having us on.
JON STEWART: Very cool, guys.
DAVID AUTOR: This is superb, and we love what you’re doing, and it’s great to have this conversation.
JON STEWART: Oh, thanks, man.
Jon’s Closing Remarks
JON STEWART: All right, take care. Holy smokes. I’m feeling something. Are you feeling something at home? Are you listening to this? Are you feeling something? I’m feeling the possibility of futures unwritten, the opportunity that it gives us to correct our path, to put us on a righteous path towards a more positive, productive, equal future. My God.
And I apologize, we don’t have our normal staff chat today because as you can see, I’m on the road. And so we weren’t able to accomplish that. But man, I so appreciated what those gentlemen were saying and the specificity of it. And I hope you did too. And it’s put me in something that I’ve needed for a little bit, which is a better mood.
I am now, and, by the way, maybe I’m drinking the Kool-Aid too, but I am in a slightly better mood than I was at the beginning of this whole schmiggeggy. But man, I enjoyed that conversation tremendously.
And thanks as always to our fantastic team: lead producer Lauren Walker, producer Brittany Medvedevich, producer Jillian Spear, video editor and engineer Rob Vitola, who— he and Nicole Boyce, our audio engineer, they had to work today. Today was a day when I couldn’t figure out how to log into Riverside, so they had to do a little extra work today. And as always, our executive producers, Chris McShane and Katie Gray. Very nice. And we shall see you next week.
The Weekly Show with Jon Stewart is a Comedy Central podcast. It’s produced by Paramount Audio and Busboy Productions.