
Transcript: Anthropic CEO Dario Amodei’s Interview on Big Technology Podcast

Read the full transcript of Anthropic CEO Dario Amodei’s interview on Big Technology Podcast with host Alex Kantrowitz on “AI’s Potential, OpenAI Rivalry, GenAI Business, Doomerism”, Premiered July 30, 2025.

INTRODUCTION

ALEX KANTROWITZ: Anthropic CEO Dario Amodei joins us to talk about the path forward for artificial intelligence, whether generative AI is a good business, and to fire back at those who call him a doomer. And he’s here with us in studio at Anthropic headquarters in San Francisco. Dario, it’s great to see you again. Welcome to the show.

DARIO AMODEI: Thank you for having me.

ALEX KANTROWITZ: So, let’s recap the past couple months for you. You said AI could wipe out half of entry-level white collar jobs. You cut off Windsurf’s access to Anthropic’s top tier models when you learned that OpenAI was going to acquire them. You asked the government for export controls and annoyed Nvidia CEO Jensen Huang. What’s gotten into you?

The Urgency of AI Development

DARIO AMODEI: I think Anthropic, myself and Anthropic, are always focused on trying to do and say the things that we believe. And I think as we’ve gotten closer to AI systems that are more powerful, I’ve wanted to say those things more forcefully, more publicly, to make the point clearer.

I’ve been saying for many years that we have these scaling laws. AI systems are getting more powerful. A few years ago they were barely coherent. A couple of years ago they were at the level of a smart high school student. Now we’re getting to smart college student, Ph.D., and they’re starting to apply across the economy.

So I think all the issues related to AI, ranging from the national security issues to the economic issues, are starting to become quite near to where we’re actually going to face them. And so I think as these problems have come closer, even though in some form, Anthropic has been saying these things for a while, I think the urgency of these things has gone up.

I want to make sure that we say what we believe and that we warn the world about possible downsides, even though no one can say what’s going to happen. We’re saying what we think might happen, what we think is likely to happen. We back it up as best we can, although it’s often extrapolation about the future, where no one can be sure.

But I think we see ourselves as having the duty to warn the world about what’s going to happen. And that’s not to say I don’t think there’s an incredible number of positive applications of AI. I’ve continued to talk about that. I wrote this essay, “Machines of Loving Grace.” I feel, in fact, that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists.

So I think we probably appreciate the benefits more than anyone, but for exactly the same reason. Because we can have such a good world if we get everything right, I feel obligated to warn about the risks.

Short Timelines and Exponential Growth

ALEX KANTROWITZ: So all of this is coming from your timeline. Basically, it seems like you have a shorter timeline than most and so you are feeling a sense of urgency to get out there because you think that this is imminent.

DARIO AMODEI: Yes, I’m not sure. I think it’s very hard to predict, particularly on the societal side. So if you say, when are people going to deploy AI or when are companies going to use X dollars of spend of AI or when will AI be used in these applications, or when will it drive these medical cures? That’s harder to say.

I think the underlying technology is more predictable, but still uncertain. Still, no one knows. But on the underlying technology, I’ve started to become more confident, though that isn’t to say there’s no uncertainty about it. I think the exponential that we’re on could still peter out. I think there’s maybe a 20 or 25% chance that sometime in the next two years the models just stop getting better, for reasons we don’t understand, or maybe reasons we do understand, like data or compute availability.

And then everything I’m saying just seems totally silly and everyone makes fun of me for all the warnings I’ve made, and I’m just totally fine with that, given the distribution that I see.

ALEX KANTROWITZ: And I should say that our conversation is part of a profile I’m writing about you. I’ve spoken with more than two dozen people who’ve worked with you, who know you, who’ve competed with you, and I’m going to link that in the show notes. If anybody wants to read it, it’s free to read.

But one of the themes that has come through across everybody I’ve spoken with is that you have about the shortest timeline of any of the major lab leaders, and you just referenced it now. So why do you have such a short timeline, and why should we believe in yours?

DARIO AMODEI: Yeah, it really depends what you mean by timeline. So one thing, and I’ve been consistent on this over the years, is there are these terms in the AI world, like AGI and superintelligence. You’ll hear leaders of companies say, “We’ve achieved AGI, we’re moving on to superintelligence,” or “It’s really exciting that someone stopped working on AGI and started working on superintelligence.”

So I think these terms are totally meaningless. I don’t know what AGI is. I don’t know what superintelligence is. It sounds like a marketing term, something designed to activate people’s dopamine. So you’ll see that in public, I never use those terms. And I’m actually careful to criticize the use of those terms.

But I think despite that, I am indeed one of the most bullish about AI capabilities improving very fast. The thing I think is real that I’ve said over and over again is the exponential.