Read the full transcript of Anthropic CEO Dario Amodei’s interview on the Big Technology Podcast with host Alex Kantrowitz, covering “AI’s Potential, OpenAI Rivalry, GenAI Business, Doomerism.” Premiered July 30, 2025.
INTRODUCTION
ALEX KANTROWITZ: Anthropic CEO Dario Amodei joins us to talk about the path forward for artificial intelligence, whether generative AI is a good business, and to fire back at those who call him a doomer. And he’s here with us in studio at Anthropic headquarters in San Francisco. Dario, it’s great to see you again. Welcome to the show.
DARIO AMODEI: Thank you for having me.
ALEX KANTROWITZ: So, let’s recap the past couple months for you. You said AI could wipe out half of entry-level white-collar jobs. You cut off Windsurf’s access to Anthropic’s top-tier models when you learned that OpenAI was going to acquire them. You asked the government for export controls and annoyed Nvidia CEO Jensen Huang. What’s gotten into you?
The Urgency of AI Development
DARIO AMODEI: I think Anthropic, myself and Anthropic, are always focused on trying to do and say the things that we believe. And as we’ve gotten closer to AI systems that are more powerful, I’ve wanted to say those things more forcefully and more publicly to make the point clearer.
I’ve been saying for many years that we have these scaling laws. AI systems are getting more powerful. A few years ago they were barely coherent. A couple years ago they were at the level of a smart high school student. Now we’re getting to smart college student, Ph.D., and they’re starting to apply across the economy.
So I think all the issues related to AI, ranging from the national security issues to the economic issues, are starting to become quite near to where we’re actually going to face them.
I want to make sure that we say what we believe and that we warn the world about possible downsides, even though no one can say for certain what’s going to happen. We’re saying what we think might happen, what we think is likely to happen. We back it up as best we can, although it’s often extrapolations about the future where no one can be sure.
But I think we see ourselves as having the duty to warn the world about what’s going to happen. And that’s not to say there aren’t benefits; I think there’s an incredible number of positive applications of AI. I’ve continued to talk about that. I wrote this essay, “Machines of Loving Grace.” I feel, in fact, that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists.
So I think we probably appreciate the benefits more than anyone, but for exactly the same reason. Because we can have such a good world if we get everything right, I feel obligated to warn about the risks.
Short Timelines and Exponential Growth
ALEX KANTROWITZ: So all of this is coming from your timeline. Basically, it seems like you have a shorter timeline than most and so you are feeling a sense of urgency to get out there because you think that this is imminent.
DARIO AMODEI: Yes, I’m not sure. I think it’s very hard to predict, particularly on the societal side. So if you say, when are people going to deploy AI or when are companies going to use X dollars of spend of AI or when will AI be used in these applications, or when will it drive these medical cures? That’s harder to say.
I think the underlying technology is more predictable, but still uncertain. Still, no one knows. But on the underlying technology I’ve started to become more confident, though that’s not to say there’s no uncertainty about it. I think the exponential that we’re on could still peter out. I think there’s maybe a 20 or 25% chance that sometime in the next two years the models just stop getting better, for reasons we don’t understand, or maybe reasons we do understand, like data or compute availability.
And then everything I’m saying just seems totally silly and everyone makes fun of me for all the warnings I’ve made, and I’m just totally fine with that, given the distribution that I see.
ALEX KANTROWITZ: And I should say that this conversation is part of a profile I’m writing about you. I’ve spoken with more than two dozen people who’ve worked with you, who know you, who’ve competed with you, and I’m going to link that in the show notes. If anybody wants to read it, it’s free to read.
But one of the themes that has come through across everybody I’ve spoken with is that you have about the shortest timeline of any of the major lab leaders, and you just referenced it just now. So why do you have such a short timeline, and why should we believe yours?
DARIO AMODEI: Yeah, it really depends what you mean by timeline. So one thing, and I’ve been consistent on this over the years, is there are these terms in the AI world, like AGI and superintelligence. You’ll hear leaders of companies say, “We’ve achieved AGI, we’re moving on to superintelligence,” or “It’s really exciting that someone stopped working on AGI and started working on superintelligence.”
So I think these terms are totally meaningless. I don’t know what AGI is. I don’t know what superintelligence is. It sounds like a marketing term. Yeah, it sounds like something designed to activate people’s dopamine. So you’ll see in public, I never use those terms. And I’m actually careful to criticize the use of those terms.
But I think despite that, I am indeed one of the most bullish about AI capabilities improving very fast. The thing I think is real, that I’ve said over and over again, is the exponential: the idea that every few months we get an AI model that is better than the AI model we got before, and that we get that by investing more compute in AI models, more data, and new ways of training models.
Initially, this was done by what’s called pre-training, which is when you just feed a bunch of data from the Internet into the model. Now we have a second stage: reinforcement learning, or test-time compute, or reasoning, or whatever you want to call it. I think of it as a second stage that involves reinforcement learning.
Now both of those things are scaling up together, as we’ve seen with our models and as we’ve seen with models from other companies. And I don’t see anything blocking the further scaling of that. There’s some stuff about how we broaden the tasks: on the RL side, we’ve seen more progress on, say, math and code, where the models are getting pretty close to a high professional level, and less on more subjective tasks. But I think that is very much a temporary obstacle.
The Power of Exponential Growth
So when I look at it, I see this exponential and I say, look, people aren’t very good at making sense of exponentials. If something is doubling every six months, then two years before it happens, it looks like it’s only one-sixteenth of the way there.
And so we are sitting here in the middle of 2025 and the models are really starting to explode in terms of the economy. If you look at the capabilities of the models, they’re starting to saturate all the benchmarks. If you look at revenue, Anthropic’s revenue has grown 10x every year.
We’re conservative, and we say it can’t grow 10x this time. I never assume anything, and I’m actually always very conservative in saying I think it’s going to slow down on the business side. But we went from zero to $100 million in 2023. We went from $100 million to a billion in 2024. And this year, in the first half of the year, we’ve gone from $1 billion to, I think as of speaking today, well above $4 billion. It might be $4.5 billion.
And so if you think about it, suppose that exponential continued for two years. I’m not saying it will, but suppose it continued for two years. You’re well into the hundreds of billions. I’m not saying that’ll happen. I’m saying the situation is that when you’re on an exponential, you can really get fooled by it. Two years away from when the exponential goes totally crazy, it looks like it’s just starting to be a thing.
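To make the arithmetic of the exponential concrete, here is a small back-of-the-envelope calculation using only the figures quoted in this conversation (doubling every six months, and revenue growing roughly 10x per year). It is an illustrative sketch, not a forecast and not Anthropic financials.

```python
# Back-of-the-envelope arithmetic for the exponential described above.
# Figures are the ones quoted in this conversation, not projections.

# Doubling every six months: two years is four doublings, so two years
# before you reach a given level you are only 1/16 of the way there.
doublings_in_two_years = 24 // 6
print(1 / 2 ** doublings_in_two_years)  # 0.0625, i.e. one-sixteenth

# Revenue growing roughly 10x per year: ~$0.1B (2023), ~$1B (2024), ~$4.5B (mid-2025).
# If, hypothetically, the 10x pace held for two more years:
revenue = 4.5  # $B, the mid-2025 figure quoted above
for year in (2026, 2027):
    revenue *= 10
    print(year, f"~${revenue:.0f}B")  # ~$45B, then ~$450B: "well into the hundred billions"
```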
And so that’s the fundamental dynamic. We saw that with the Internet in the 90s, where networking speeds and the underlying speed of the computers were getting fast. And over a few years it became possible to basically build a digital global communications network on top of all this, when it wasn’t possible just a few years earlier. And almost no one except for a few people really saw the implications of that and how fast it would happen.
And so that’s where I’m coming from. That’s what I think now. I don’t know; if a bunch of satellites had crashed, maybe the Internet would have taken longer. If there had been an economic crash, maybe it would have taken a little longer. So we can’t be sure of the exact timelines. But I think people are getting fooled by the exponential and not realizing how fast it might be, how fast I think it probably will be, although I’m not sure.
Addressing Diminishing Returns
ALEX KANTROWITZ: But so many folks in the AI industry are talking about diminishing returns from scaling. Now, that really doesn’t fit with the vision you just laid out. Are they wrong?
DARIO AMODEI: Yeah. I can only speak in terms of the models at Anthropic, but from what I’ve seen of the models at Anthropic, if we look at coding: coding is one area where I think Anthropic models have advanced very quickly. Adoption has been very quick. We’re not just a coding company. We’re planning to expand to many areas.
But if you look at coding, we released 3.5 Sonnet, then a model we called 3.5 Sonnet V2 (let’s call it 3.6 Sonnet), then 3.7 Sonnet, and then 4.0 Sonnet and 4.0 Opus. And in that series of four or five models, each one got substantially better at coding than the last.
If you want to look at benchmarks, you can look at SWE-bench growing from, I think 18 months ago, 3% or something, all the way to 72 to 80%, depending on how you measure it. And the real usage has grown exponentially as well. We’re heading more and more towards a point where you can just use these models autonomously.
I think the actual majority of code that’s written at Anthropic is, at this point, written by, or at least with the involvement of, one of the Claude models, and various other companies have made similar statements. So we see the progress as being very fast, the exponential is continuing, and we don’t see any diminishing returns.
The Challenge of Continual Learning
ALEX KANTROWITZ: But there are some liabilities, it seems, with large language models. For instance, continual learning. We had Dwarkesh on a couple weeks ago. Here’s how he put it, and he wrote about it in his Substack: “The lack of continual learning is a huge, huge problem. The LLM baseline at many tasks might be higher than an average human, but you’re stuck with the abilities you get out of the box.” So you just make the model and that’s it. It doesn’t learn. That seems like a glaring liability. What do you think about that?
The Potential of Current AI Technology
DARIO AMODEI: So, first of all, I would say even if we never solved continual learning and memory, I think that the potential for LLMs to do incredibly well, to affect things at the scale of the economy, will be very high, right?
If I think of the field I used to be in, biology and medicine, let’s say I had a very smart Nobel prize winner, and I said, “Okay, you’ve discovered all these things, you have this incredibly smart mind, but you can’t read new textbooks or absorb any new information.” I mean, that would be difficult. But still, if you had like 10 million of those, they’re still going to make a lot of biology breakthroughs.
They’re going to be limited, they’re going to be able to do some things humans can’t. And there are some things humans can do that they can’t. But even that, even if we impose that as a ceiling, man, that’s pretty damned impressive and transformative. And even if I said you never solved that, I think people are underestimating the impact.
But look, context windows are getting longer, and models actually do learn during the context window, right? So as I talk to the model during the context window, I have a conversation. It absorbs information. The underlying weights of the model may not change. But just like I’m talking to you here and we’re having a conversation and I listen to the things you say and I think, and I respond to them, the models are able to do that.
And from a machine learning perspective, from an AI perspective, there’s no reason we can’t make the context length 100 million words today, right? Which is roughly what a human hears in their lifetime. There’s no reason we can’t do that; it’s really a matter of inference support. And so again, even that fills in many of the gaps. Not all the gaps, but it fills in many of the gaps.
And then there are a number of things, like learning and memory, that do allow us to update the weights. So there are a number of things around types of reinforcement learning training. We used to, many years ago, talk about inner loops and outer loops. The inner loop is: I have some episode, I learn some things in that episode, and I’m trying to optimize over the lifetime of that episode. And the outer loop is the agent learning across episodes. So I think maybe that inner-loop, outer-loop structure is a way to get continual learning.
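As a rough illustration of that inner-loop/outer-loop framing (a toy sketch under simplified assumptions, not a description of any Anthropic training setup): the inner loop adapts within a single episode without touching the weights, much like in-context learning, while the outer loop updates persistent weights based on how whole episodes went.

```python
import random

# Toy sketch of the inner-loop / outer-loop idea, not a real training recipe.
# The "weights" are a single persistent number; each episode has a hidden
# target that varies around a fixed base value the agent must adapt to.
weights = 0.0            # outer-loop state: persists across episodes
BASE_TARGET = 5.0

for episode in range(200):                 # outer loop: across episodes
    target = BASE_TARGET + random.uniform(-1, 1)
    context = weights                      # inner-loop state: lives only for this episode
    for step in range(10):                 # inner loop: within one episode
        error = target - context
        context += 0.5 * error             # in-context adaptation; weights untouched
    # Outer loop: nudge the persistent weights toward what worked this episode.
    weights += 0.05 * (context - weights)

print(round(weights, 2))  # ends near 5.0: learning that carries over between episodes
```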
One thing we’ve learned in AI is that whenever it feels like there’s some fundamental obstacle, it often turns out not to be. Like two years ago, we thought there was this fundamental obstacle around reasoning. It turned out just to be RL: you just train with RL and let the model write things down to try and figure out objective math problems, without being too specific. And we already have maybe some evidence to suggest that this is another of those problems that is not as difficult as it seems, that will fall to scale plus a slightly different way of thinking about things.
Scale vs. New Techniques
ALEX KANTROWITZ: Do you think your obsession with scale might blind you to some of the new techniques? Demis Hassabis says, you know, to get to AGI, or you might call it super-powerful AI, whatever we call the human-level intelligence we’re all talking about, we might need a couple of new techniques for that to happen.
DARIO AMODEI: So we’re developing new techniques. We’re developing new techniques every day.
ALEX KANTROWITZ: Okay.
DARIO AMODEI: You know, Claude is very good at code, and we don’t really talk externally that much about why Claude is so good at code.
ALEX KANTROWITZ: Why is it so good?
DARIO AMODEI: Like I said, we don’t talk externally about it.
ALEX KANTROWITZ: I have to ask.
DARIO AMODEI: So, you know, every new version of Claude that we make has improvements to the architecture, improvements to the data that we put into it, improvements to the methods that we use to train it. So we’re developing new techniques all the time. New techniques are a part of every model that we build. And that’s why we’ve said these things about trying to optimize for talent density as much as possible. You need that talent density in order to invent the new techniques.
The Resource Competition
ALEX KANTROWITZ: You know, there’s one thing that’s been hanging over this conversation, which is that maybe Anthropic is the company with the right idea but the wrong resources. Because you look at what’s happening with xAI and inside Meta, where Elon’s built his massive cluster and Mark Zuckerberg is building this 5-gigawatt data center. They are putting so many resources towards scaling up; is it possible to keep up? I mean, Anthropic, obviously you have raised billions of dollars, but these are trillion-dollar companies.
DARIO AMODEI: Yeah. So we’ve raised, I think at this point, a little short of 20 billion.
ALEX KANTROWITZ: It’s not bad.
DARIO AMODEI: So that’s not nothing. And I would also say, if you look at the size of the data centers that we’re building with, for example, Amazon, I don’t think our data center scaling is substantially smaller than that of any of the other companies in the space. You know, in many cases, these things are limited by energy; they’re limited by capitalization.
When people talk about these large amounts of money, they’re talking about it over several years, right? And when you hear some of these announcements, sometimes they’re not funded yet. We’ve seen the size of the data centers that folks are building, and we’re actually pretty confident that we will be within a rough range of the size of the data centers they build.
Talent Density and Company Culture
ALEX KANTROWITZ: You talked about talent density. What do you think about what Mark Zuckerberg is doing on the talent density front? I mean, combining that with these massive data centers, it seems like he’s going to be able to compete.
DARIO AMODEI: Yeah, so this is actually very interesting, because one thing we noticed is that, relative to other companies, a lot fewer people from Anthropic have been taken by these offers. And it’s not for lack of trying. I’ve talked to plenty of people at Anthropic who got these offers and just turned them down, who wouldn’t even talk to Mark Zuckerberg, who said, “No, I’m staying at Anthropic.”
And our general response to this was, I posted something to the whole-company Slack where I said, “Look, we are not willing to compromise our compensation principles, our principles of fairness, to respond individually to these offers.” The way things work at Anthropic is there’s a series of levels. When a candidate comes in, they get assigned a level, and we don’t negotiate that level because we think it’s unfair. We want to have a systematic way.
If Mark Zuckerberg throws a dart at a dartboard and hits your name, that doesn’t mean that you should be paid 10 times more than the guy next to you who’s just as skilled, who’s just as talented. And my view of the situation is that the only way you can really be hurt by this is if you allow it to destroy the culture of your company by panicking, by treating people unfairly in an attempt to defend the company.
And I think actually this was a unifying moment for the company where we didn’t give in. We refused to compromise our principles because we have the confidence that people are at Anthropic because they truly believe in the mission. And I think that gets to kind of how I see this. I think that what they are doing is trying to buy something that cannot be bought and that is alignment with the mission. And there are selection effects here. Are they getting the people who are most enthusiastic, who are most mission aligned, who are most excited to—
ALEX KANTROWITZ: But they have talent and GPUs. You’re not underestimating them.
DARIO AMODEI: We’ll see how it plays out. I am pretty bearish on what they’re trying to do.
The Business of Generative AI
ALEX KANTROWITZ: So let’s talk a little bit about your business, because a lot of people have been wondering: is the business of generative AI a real thing? And I’m also curious; I have questions all the time. You talked about how much money you’ve raised, close to $20 billion. You’ve raised $3 billion from Google, $8 billion from Amazon, $3.5 billion from a new round led by Lightspeed, who I’ve spoken with. What is your pitch? Because you are not part of a big tech company. You’re out there on your own. Do you just bring the scaling laws and say, “Can I have some money?”
DARIO AMODEI: So my view of this has always been that talent is the most important thing. So if you go back three years ago, we were in a position where we had raised mere hundreds of millions. OpenAI had already raised 13 billion from Microsoft. And of course, the large hypercap tech companies were sitting on 100 billion, 200 billion.
And basically the pitch we made then is: we know how to make these models better than others do, right? There may be a curve of scaling laws. But look, if we are in a position where we can do for 100 million what others can do for a billion, and we can do for 10 billion what they can do for 100 billion, then it’s 10 times more capital efficient to invest in Anthropic than it is to invest in these other companies.
Would you rather be in a position where you can do anything for 10 times cheaper, or where you start with a large pile of money? If you can do things 10 times cheaper, then the money is a temporary deficit that you can remedy, if you have this intrinsic ability to build things much better than anyone else for the same price, or as good as anyone else for a much lower price. Investors aren’t idiots, or at least they aren’t always idiots.
ALEX KANTROWITZ: Depends which one you go to.
DARIO AMODEI: I’m not going to name any names, but they basically understand the concept of capital efficiency. And so we were in a position three years ago where these differences were like 1000x, and now you’re saying, with 20 billion, can you compete with 100 billion? And my answer is basically yes, because of the talent density.
I’ve said this before, but Anthropic is actually the fastest-growing software company in history at the scale that it’s at. So we grew from 0 to $100 million in 2023, $100 million to a billion in 2024, and this year we’ve grown from $1 billion to, I think I said this before, $4.5 billion. So that’s 10x a year. Every year I suspect that we’ll grow at that scale, and every year I’m almost afraid to say it publicly because I’m like, “No, it couldn’t possibly happen again.” So I think the growth at that scale speaks for itself in terms of our ability to compete with the big players.
Revenue Breakdown and Business Strategy
ALEX KANTROWITZ: Okay, so CNBC says 60 to 75% of Anthropic sales are through the API. That was according to internal documents. Is that still accurate?
DARIO AMODEI: I won’t give exact numbers, but the majority does come through the API. Although we also have a flourishing apps business, and more recently the Max tier, which power users use, as well as Claude Code, which coders use. So I think we have a thriving and fast-growing apps business. But yes, the majority comes through the API.
ALEX KANTROWITZ: So you’re making the most pure bet on this technology. Like, you know, OpenAI might be betting on ChatGPT and Google might be betting on the fact that no matter where the technology goes, it can integrate into Gmail and Calendar. So why have you made this bet on this? The pure bet on the tech itself?
Business Strategy and API Focus
DARIO AMODEI: Yeah, I mean, I wouldn’t quite put it that way. I would describe it more as we’ve bet on business use cases of the model, more so than we’ve bet on the API per se. It’s just that the first business use cases of the model come through the API.
So you know, as you mentioned, OpenAI is very focused on the consumer side, Google is very focused on kind of the existing products that Google has. Our view is that if anything, the enterprise use of AI is going to be greater even than the consumer use of AI. I should say the business use because it’s enterprise, it’s startups, it’s developers, and it’s kind of, you know, power users using the model for productivity.
I also think that being a company that’s focused on the business use cases actually gives us better incentives to make the models better. A thought experiment that I think is worth running is: suppose I have this model and it’s as good as an undergrad at biochemistry, and then I improve it and it’s as good as a PhD student at biochemistry.
If I go to a consumer, right? If I give them a chatbot and I say, “Great news, I’ve improved the model from undergrad to graduate level in biochemistry.” Maybe, I don’t know, 1% of consumers care about that at all. 99% are just going to be like, “I don’t understand it either way.”
But now suppose I go to Pfizer and I say, “I’ve improved this from undergrad at biochemistry to grad at biochemistry.” This is going to be the biggest deal in the world. They might pay 10 times more for something like that. It might have 10 times more value to them.
And so the general aim is making the models solve the problems of the world: to make them smarter and smarter, but also able to bring many of the positive applications, right? The things I wrote about in “Machines of Loving Grace,” like solving the problems of biomedicine, solving the problems of geopolitics, solving the problems of economic development, as well as more prosaic things like finance, legal, insurance, or productivity. I think it gives a better incentive to develop the models as far as possible. And I think in many ways it may even be a more positive business.
So I would say we’re making a bet on the business use of AI because it’s most aligned with kind of the exponential.
The Coding Use Case Decision
ALEX KANTROWITZ: Okay, then briefly, how did you decide to go with the coding use case?
DARIO AMODEI: Yeah. So originally, as happens with most things, we were trying to optimize for making the model better at a bunch of stuff. And coding particularly stood out in terms of how valuable it was. I’ve worked with thousands of engineers, and there was a point about a year, year and a half ago where one of the best I’d ever worked with said, “Every previous coding model has been useless to me, and this one finally was able to do something I wasn’t able to do.”
And then after we released it, it started getting quick adoption. This was around the time that a lot of the coding companies, like Cursor, Windsurf, GitHub, Augment Code, started exploding in popularity. And when we saw how popular it was, we kind of doubled down on it.
My view is that coding is particularly interesting because A, the adoption is fast and B, getting better at coding with the models actually helps you to develop the next model. So it has a number of, I would say, advantages.
Pricing Model Challenges
ALEX KANTROWITZ: And now you’re selling your AI coding through Claude Code. But it’s very interesting; the pricing model has been confounding to some. You can spend $200 a month and get far more than that in usage. I spoke to one developer who got the equivalent of $6,000 a month from your API. Ethan has pointed out that the more popular your models get, the more money you’re going to lose if people are super users of this technology. So how does that make sense?
DARIO AMODEI: So actually, pricing schemes and rate limits are surprisingly complicated. Some of this is basically the result of, when we released Claude Code and the Max tier, which we eventually tied together, not fully understanding the implications of the ways in which people could use the models and how much they were actually able to get.
So over the last few days, as of the time of this interview, we’ve adjusted that, particularly on the larger models like Opus, I think it’s no longer possible to spend that much with a $200 subscription. And, you know, it’s possible more changes will come in the future, but we’re always going to have a distribution of users who use a lot and users who use some amount.
And it doesn’t necessarily mean we’re losing money just because there are some users who, if you were to measure via API credit spend, get a better deal on the consumer subscription than they would on the API products. Right? There are a lot of assumptions there, and I can tell you that some of them are wrong. We are not, in fact, losing money.
Cost and Pricing Dynamics
ALEX KANTROWITZ: But I guess there’s another question about whether you can continue to serve these use cases and not raise prices. So just to give you a couple stats: there are some developers that are upset because using Anthropic’s newer models in Cursor is costing them more than it ever has. Startups that I’ve spoken with say Anthropic is down a bunch because they can’t get access to the GPUs; at least that’s what they imagine is happening. And I was just with Amjad Masad at Replit, in an interview that we’re going to air next week, who said there was a period of time where the price per token, the price to use these models, was coming down, and it stopped coming down. So is what’s happening that these models are just so expensive for Anthropic to run that it’s hitting a wall of its own?
DARIO AMODEI: Again, I think you’re making assumptions here.
ALEX KANTROWITZ: That’s why I’m asking the CEO.
DARIO AMODEI: Yeah, you know, the way I think about it is we think about the models in terms of how much value they are creating, right? So as the models get better and better, I think about how much value they create. And there’s a separate question about how the value is distributed between those who make the model, those who make the chips, and those who make the underlying applications. So, again, without being too specific, I think there are some assumptions in your question that are not necessarily correct.
ALEX KANTROWITZ: You know, tell me which ones.
DARIO AMODEI: So I’ll say this. I expect the price of providing a given level of intelligence to go down. The price of providing the frontier of intelligence, which will provide increasing economic value, might go up or it might go down; my guess is it probably stays about where it is. But again, the value that’s created goes way up.
So two years from now, my guess is that we’ll have models that cost on the same order of magnitude as they cost today, except they’ll be much more capable of doing work much more autonomously, much more broadly, than they are capable of today.
Model Architecture and Costs
ALEX KANTROWITZ: One of the things that Amjad mentioned was he thinks that the bigger models are not as intensive to run as their size would suggest, because of the architecture, some of these techniques that we talked about, where they’re lighting up only certain sections of the model. So his idea, and I’m hopefully conveying this truthfully, is that Anthropic can run these models without too much bulk on the back end but is still keeping those prices where they are. And the line that I’m going to draw there is that to get to software margins (there were some reports that Anthropic is slightly below software gross margins), you’re going to have to charge a little bit more for these models.
DARIO AMODEI: So, yeah, again, I think larger models cost more to run than smaller models. The technique you’re referring to is maybe mixture of experts or something like that. Mixture of experts is a way to train and run models more cheaply for a given number of parameters.
But the comparison holds whether or not you use that technique: larger models that don’t use it cost more to run than smaller models that don’t use it, and larger models that use it cost more to run than smaller models that use it. So I think that’s sort of a distortion of the situation.
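To make the mixture-of-experts idea concrete, here is a minimal illustrative sketch (not a description of Anthropic’s or anyone’s actual architecture): each token is routed to only a few expert sub-networks, so the compute per token is a fraction of what the total parameter count would suggest, which is why a large MoE model can be cheaper to run than a dense model of the same size.

```python
import numpy as np

# Minimal top-2 mixture-of-experts routing sketch (illustrative only).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # expert weight matrices
router = rng.normal(size=(d_model, n_experts))                             # routing weights

def moe_layer(x):
    logits = x @ router                     # score every expert for this token
    chosen = np.argsort(logits)[-top_k:]    # keep only the top-k experts
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                    # normalize the kept scores
    # Only the chosen experts run; the others are skipped entirely, so
    # per-token compute is roughly top_k / n_experts of the dense equivalent.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,): output produced by just 2 of the 8 experts
```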
ALEX KANTROWITZ: Basically, I’m just guessing and I’m trying to find out what the truth is from you.
DARIO AMODEI: Yeah, look, in terms of the cost of the models, you’d be surprised. People kind of impute this thing, like, “Oh, man, it’s going to be really hard to get the margins from x percent to y percent.” We make improvements all the time that make the models, like, 50% more efficient than they were before. We are just at the beginning of optimizing inference. Inference has improved a huge amount from where it was a couple of years ago to where it is now. That’s why the prices are coming down.
Profitability and Investment Model
ALEX KANTROWITZ: And then how long is it going to take to be profitable? Because I think the loss is going to be like 3 billion this year.
DARIO AMODEI: So I would distinguish different things. There’s the cost of running the model, right? For every dollar the model makes, it costs a certain amount, and that is actually already fairly profitable. There are separate things: the cost of paying people, and buildings, which is actually not that large in the scheme of things. The big cost is the cost of training the next model.
And I think this idea of, like, the company’s losing money and not being profitable is a little bit misleading. And you start to understand it better when you look at the scaling laws.
So as a thought exercise, and these numbers are not exact, or even close, for Anthropic: let’s imagine that in 2023 you train a model that costs $100 million. Then in 2024 you deploy the 2023 model and it makes $200 million in revenue, but you spend a billion dollars to train a new model in 2024. And then in 2025, the billion-dollar model makes $2 billion in revenue and you spend $10 billion to train the next model.
So the company is unprofitable every year: it lost $800 million in 2024, and in 2025 it lost $8 billion. This looks like a hugely unprofitable enterprise. But if instead I think in terms of, is each model profitable, and think of each model as a venture: I invested $100 million in the model and then got $200 million out of it the next year. So that model had 50% margins and made me $100 million the next year.
The same holds for the billion dollars the company invested. So every model is profitable; they’re all profitable. But the company is unprofitable every year. I’m not claiming these numbers for Anthropic, or claiming these exact facts, but this general dynamic is the explanation for what is going on.
And so, at any time, if the models stopped getting better or if a company stopped investing in the next model, you would probably have a viable business with the existing models. But everyone is investing in the next model, and so eventually it’ll get to some scale. And the fact that we’re spending more to invest in the next model suggests that the scale of the business is going to be larger next year than it was the year before.
Now, of course, what could happen is that the models stop getting better, and there’s this kind of one-time cost that’s a boondoggle where we spend a bunch of money, but then the industry kind of returns to this plateau, to this level of profitability. Or the exponential can keep going. So that’s a long-winded way to say I don’t think it’s really the right way to think about things.
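Here is a minimal sketch of that thought exercise in numbers, using only the hypothetical figures above (explicitly not Anthropic’s actual financials): each model is treated as its own venture, trained in one year and earning revenue the next.

```python
# Hypothetical figures from the thought exercise above, in billions of dollars.
# Explicitly not Anthropic's real numbers.
training_cost = {2023: 0.1, 2024: 1.0, 2025: 10.0}   # spent training that year's model
model_revenue = {2023: 0.2, 2024: 2.0}               # what each model earns the following year

# Company view: every calendar year looks unprofitable.
for year in (2024, 2025):
    revenue = model_revenue[year - 1]                 # last year's model is deployed this year
    pnl = revenue - training_cost[year]
    print(f"{year}: revenue ${revenue}B, training spend ${training_cost[year]}B, P&L {pnl:+.1f}B")

# Per-model view: each model returns twice its training cost.
for vintage, revenue in model_revenue.items():
    print(f"{vintage} model: cost ${training_cost[vintage]}B, "
          f"revenue ${revenue}B, {revenue / training_cost[vintage]:.0f}x return")
```

On these numbers the company shows a $0.8 billion loss in 2024 and an $8 billion loss in 2025, even though each individual model doubles its money, which is the dynamic described above.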
The Open Source Challenge
ALEX KANTROWITZ: Right, but what about open source? Because let’s say you stopped investing in the models and open source caught up; then people could swap in open source. Now I’d love to hear your perspective on this, because one of the things people have talked to me about when it comes to the Anthropic business is that there is that risk eventually that open source gets good enough that you can take Anthropic out and put open source in.
The Open Source Debate in AI
DARIO AMODEI: So, people have this set of heuristics about how things work. Back when I was in AI in 2014, there was an existing AI and machine learning research community that thought about things in a certain way and were like, “This is just a fad, this is a new thing, this can’t work, this can’t scale.” And then because of the exponential, all those things turned out to be false.
Then a similar thing happened with people deploying AI within companies to various applications. Then there was the same thought in the startup ecosystem. And I think now we’re at the phase where the world’s business leaders, the investors and the business community, they have this whole lexicon of commoditization, moats, which layer is the value going to accrue to.
Open source is this idea that you can see everything that’s going on, that it has a significance, that it undermines the business. And I actually find, as someone who didn’t come from that world at all, who never thought in terms of that lexicon, this is one of these situations where not knowing anything often leads you to make better predictions than the people who have their way of thinking about things from the last generation of tech.
This is all a long-winded way of saying I don’t think open source works the same way in AI that it has worked in other areas, primarily because with open-source software you can see the source code; here, we can’t see inside the model. It’s often called “open weights” instead of “open source” to distinguish that. But a lot of the benefits, which are that many people can work on it, that it’s additive, don’t quite work in the same way.
I’ve actually always seen it as a red herring when I see a new model come out. I don’t care whether it’s open source or not. If we talk about DeepSeek, I don’t think it mattered that DeepSeek is open source. I think I ask, “Is it a good model? Is it better than us at the things that matter?” That’s the only thing that I care about.
It actually doesn’t matter either way because ultimately you have to host it on the cloud. The people who host it on the cloud do inference. These are big models, they’re hard to do inference on. And conversely, many of the things that you can do when you see the weights, we’re increasingly offering on clouds where you can fine tune the model. We’re even looking at ways to investigate the activations of the model as part of an interpretability interface. We did some little things around steering last time, so I think it’s the wrong axis to think in terms of.
When I think about competition, I think about which models are good at the tasks that we do. I think open source is actually a red herring.
ALEX KANTROWITZ: But if it’s free and cheap to run—
DARIO AMODEI: It’s not free. You still have to run inference on it, and someone has to make inference fast.
Growing Up in San Francisco
ALEX KANTROWITZ: All right, so I want to learn a little bit more about Dario, the person. So we have a little bit of time left. So I have some questions for you about early life and then how you became who you are.
DARIO AMODEI: Yes.
ALEX KANTROWITZ: So what was it like growing up in San Francisco?
DARIO AMODEI: Yeah. The city had not really gentrified that much when I was growing up here. The tech boom hadn’t happened yet. It happened as I was going through high school, and actually I had no interest in it. It was totally boring to me.
I was interested in being a scientist. I was interested in physics and math. The idea of writing some website or founding a company held no interest for me whatsoever. Those weren’t things that I was interested in at all. I was interested in discovering fundamental scientific truth, and I was interested in how I could do something that makes the world better.
I watched the tech boom happen around me, but I feel like there were all kinds of things I probably could have learned from it that would have been helpful now, but I just actually wasn’t paying attention and had no interest in it, even though I was right at the center of it.
ALEX KANTROWITZ: So you’re the son of a Jewish mother, Italian father?
DARIO AMODEI: That is true.
ALEX KANTROWITZ: Where I’m from in Long Island, we call that a pizza bagel.
DARIO AMODEI: A pizza bagel? I’ve never heard that term before.
Family Influence and Values
ALEX KANTROWITZ: So what was your relationship with your parents like?
DARIO AMODEI: I was always pretty close with them. I feel like they gave me a sense of right and wrong and what was important in the world. I feel like imbuing a strong sense of responsibility is maybe the thing that I remember most. They were always people who felt that sense of responsibility and wanted to make the world better. And I feel like that’s one of the main things that I learned from them.
It was always a very loving family, a very caring family. I was very close with my sister Daniela, who of course, became my co-founder. And I think we decided very early that we wanted to work together in some capacity. I don’t know if we imagined that it would happen at quite the scale that it has happened, but I think that was something we decided early that we wanted to do.
ALEX KANTROWITZ: The people that I’ve spoken with that have known you through the years have told me that your father’s illness had a big impact on you. Can you share a little bit about that?
The Path from Physics to Biology to AI
DARIO AMODEI: Yes, he was ill for a long time and eventually died in 2006. That was actually one of the things that drove me toward it. I don’t think we mentioned it yet in this interview, but before I went into AI, I went into biology.
I’d gone to Princeton wanting to be a theoretical physicist. And I did some work in cosmology for the first few months of my time there, and that was around the time that my father died. That did have an influence on me and was one of the things that convinced me to go into biology, to try and address human illnesses and biological problems.
So I started talking to some of the folks who worked on biophysics and computational neuroscience in the department that I was at at Princeton. And that was what led to the switch to biology and computational neuroscience. And then, of course, after that, I eventually went into AI.
The reason I went into AI was actually a continuation of that motivation, which is that as I spent many years in biology, I realized that the complexity of the underlying problems in biology felt like it was beyond human scale. In order to understand it all, you needed hundreds, thousands of human researchers, and they often had a hard time collaborating or sharing their knowledge and combining their knowledge.
AI, which I was just starting to see the discoveries in, felt to me like the only technology that could bridge that gap, could bring us beyond human scale to fully understand and solve the problems of biology. So yeah, there is a through line there.
ALEX KANTROWITZ: Right. And I could have this wrong, but one thing I heard was that his illness was largely incurable when he had it.
DARIO AMODEI: Yes.
ALEX KANTROWITZ: And there have been advances that have been—can you share a little bit more? Yes, there are advances that have made it much more manageable today.
DARIO AMODEI: Yes, that is true. Actually, in just the three or four years after he died, the cure rate for the disease that he had went from 50% to roughly 95%.
ALEX KANTROWITZ: I mean, it has to have felt so unjust to have your father taken away by something that could have been cured.
The Urgency of Progress and the “Doomer” Label
DARIO AMODEI: Of course. But it also tells you of the urgency of solving the relevant problems, right? There was someone who worked on the cure to this disease that managed to cure it and save a bunch of people’s lives, but could have saved even more people’s lives if they had managed to find that cure a few years earlier than they did.
And I think that’s one of the tensions here, right, that I think AI has all of these benefits, and I want everyone to get those benefits as soon as possible. I probably understand better than almost anyone how urgent those benefits are.
And so I really understand the stakes when I speak out about how AI has these risks and I’m worried about these risks. I get very angry when people call me a doomer. I got really angry when someone said, “This guy’s a doomer, he wants to slow things down.” You heard what I just said: my father died of something that was cured just a few years later. I understand the benefit of this technology.
When I sat down to write “Machines of Loving Grace,” I wrote out all the ways that billions of people’s lives could be better with this technology. Some of these people who on Twitter cheer for acceleration, I don’t think they have a humanistic sense of the benefit of the technology. Their brain’s just full of adrenaline and they want to cheer for something. They want to accelerate. I don’t get the sense they care.
And so when these people call me a doomer, I think they just completely lack any moral credibility in doing that. It really makes me lose respect for them.
The Quest for Impact
ALEX KANTROWITZ: And I’ve been wondering about this word “impact,” because it’s come up so often. Those who have been around you have said you’ve been singularly obsessed with having impact. In fact, I spoke with someone who knew you well, who said you wouldn’t watch Game of Thrones because it wasn’t tied to impact, that it was a waste of time and you wanted to be focused on impact.
DARIO AMODEI: Actually, that’s not quite right. I wouldn’t watch it because it was so negative. It was like, these people start off, and partly because of the situation and partly because they’re just horrible people, they create a situation where at the end of it, everyone is worse off than they were before. I’m really excited about creating positive-sum situations.
ALEX KANTROWITZ: I recommend you watch it. It’s a great show.
DARIO AMODEI: But I hear some parts of it. I was just very reluctant and didn’t watch it for a long time.
ALEX KANTROWITZ: Let’s get back to the impact.
DARIO AMODEI: Let’s get back to the impact.
ALEX KANTROWITZ: So that’s what impact is: effectively, your career has been this quest to have that impact, to be able, tell me if I’m going too far, to prevent other people from being in similar situations.
The Path to Impact Through AI
DARIO AMODEI: I think that’s a piece of it. I have looked at many attempts to help people, and some of them are more effective than others. I’ve always tried to ensure there’s strategy behind it, that there are brains behind trying to help people, which often means that there’s a long path to it. It can run through a company and many activities that are technical and not immediately tied to the kind of impact that you’re trying to have.
But the arc is – I’m always trying to bend the arc towards that. I think that’s my picture of it. That’s really why I got into this. Similar to the reason to get into AI was that I saw the problems of biology as almost intractable without it, or at least too slow moving.
I think my reason to start a company was that I had worked at other companies and I just didn’t feel like the way those companies were run was really oriented towards trying to have that impact. There was a story around it that was often used for recruiting, but it became clear to me over the years that story was not sincere.
ALEX KANTROWITZ: I’m going to circle around a little bit, because it’s clear that you’re referring to OpenAI here. From what I understand, you had 50% of OpenAI’s compute. I mean, you ran the GPT-3 project. So if anyone was going to be focused on impact and safety, wouldn’t it have been you?
The OpenAI Years and GPT Development
DARIO AMODEI: Yes, I was. There was a period during which that was true, that wasn’t true the entire time. That was, for example, when we were scaling up GPT-3. So when I was at OpenAI, I and a lot of my colleagues, including the people who eventually founded Anthropic…
ALEX KANTROWITZ: The name you gave them was “Pandas.”
DARIO AMODEI: That isn’t a name I gave them.
ALEX KANTROWITZ: The name they took.
DARIO AMODEI: That isn’t a name they took.
ALEX KANTROWITZ: That’s the name other people called them.
DARIO AMODEI: Maybe it’s a name other people called them. That’s not a name I ever used for my team.
ALEX KANTROWITZ: Okay, sorry, go ahead. That’s good clarification.
DARIO AMODEI: Thank you. So we were involved in scaling up these models. Actually, the original reason for building GPT-2 and GPT-3, it was an outgrowth of the kind of AI alignment work that we were doing. Myself and Paul Christiano and some of the Anthropic co-founders had invented this technique called RL from human feedback. And that was designed to help steer models in a direction to follow human intent.
It was actually a precursor to another method, called scalable supervision, that we were trying to scale up, which I think is just starting to work many years later, to help models follow more scalable human intent. But what we found is that even the more primitive technique, RL from human feedback, wasn’t working with the small language models, like GPT-1, that we applied it to and that had been built by other people at OpenAI.
And so the scaling up of GPT-2 and GPT-3 was done in order to study these techniques, in order to apply RL for human feedback at scale. This goes to one thing which is that I think in this field, the alignment of AI systems and the capability of AI systems is intertwined in this way that always ends up being more tied and more intertwined than we think.
Actually, what this made me realize is that it’s very hard to work on the safety of AI systems and the capability of AI systems separately. It’s very hard to work on one and not the other. I actually think the value, and the way to inflect the field in a more positive way, comes from organizational-level decisions: when to release things, when to study things internally, what kind of work to do on systems. And that was one of the things that motivated me and some of the other future Anthropic founders to go off and do it our own way.
The Decision to Leave OpenAI
ALEX KANTROWITZ: But again, if you think capabilities and safety are interlinked and you were the guy driving the cutting-edge models within OpenAI, then if you left, you knew they were going to be a company that was still doing this stuff.
DARIO AMODEI: That’s right.
ALEX KANTROWITZ: It seems like if you’re driving the capabilities, you’d be the one in the driver’s seat to help it be safe the way that you want it.
DARIO AMODEI: Again, I will say, if there’s a decision on releasing a model, if there’s a decision on the governance of the company, if there’s a decision on how the personnel of the company works, how the company represents itself externally, the decisions that the company makes with respect to deployment, the claims it makes about how it operates with respect to society – many of those things are not things that you control just by training the model.
And I think trust is really important. I think the leaders of a company, they have to be trustworthy people. They have to be people whose motivations are sincere. No matter how much you’re driving the company forward technically, if you’re working for someone whose motivations are not sincere, who’s not an honest person, who does not truly want to make the world better, it’s not going to work. You’re just contributing to something bad.
Addressing Industry Criticism
ALEX KANTROWITZ: So I’m sure you’ve heard the criticism from people like Jensen who say, “Well, Dario thinks he’s the only one who can build this safely.” And therefore, speaking of that word control, wants to control the entire industry.
DARIO AMODEI: I’ve never said anything like that. That’s an outrageous lie. That’s the most outrageous lie I’ve ever heard, by the way.
ALEX KANTROWITZ: I’m sorry if I got Jensen’s words wrong, but…
DARIO AMODEI: No, no, no. The words were correct. But what he’s saying – the words are outrageous. In fact, I’ve said multiple times, and I think Anthropic’s actions have shown it that we’re aiming for something we call a “race to the top.”
I’ve said this on podcasts over the years, and I think Anthropic’s actions have shown it. With a race to the bottom, everyone is competing to get things out as fast as possible. And so I say, when you have a race to the bottom, it doesn’t matter who wins; everyone loses, because you make the unsafe system that helps your adversary or causes economic problems or is unsafe from an alignment perspective.
The way I think about the race to the top is that it doesn’t matter who wins; everyone wins. The way the race to the top works is you set an example for how the field works. You say, “We’re going to engage in this practice.”
Setting Industry Standards
So a key example of this is responsible scaling policies. We were the first to put out a responsible scaling policy. And we didn’t say everyone else should do this or you’re bad guys. We didn’t try to use it as an advantage. We put it out, and then we encouraged everyone else to do it.
And then we discovered in the months after that there were people within the other companies who were trying to put out responsible scaling policies. But the fact that we had done it gave those people permission. It enabled those people to make the argument to leadership, “Hey, Anthropic is doing this, so we should do it as well.”
The same has been true of investing in interpretability. We release our interpretability research to everyone and allow other companies to copy it, even though we’ve seen that it sometimes has commercial advantages. Same with things like constitutional AI. Same with measuring the dangers of our systems: dangerous-capabilities evals.
So we’re trying to set an example for the field. But there’s an interplay where it helps to be a powerful commercial competitor. I’ve said nothing that anywhere near resembles the idea that this company should be the only one to build the technology. I don’t know how anyone could ever derive that from anything that I’ve said. It’s just an incredible and bad faith distortion.
The SBF Connection
ALEX KANTROWITZ: All right, let’s see if we can lightning-round one or two before I ask you the last one, which we’ll have five minutes for. What happened with SBF?
DARIO AMODEI: What happened with SBF?
ALEX KANTROWITZ: I mean, he was… Go ahead.
DARIO AMODEI: I couldn’t tell you. What was the…
ALEX KANTROWITZ: What are you answering?
DARIO AMODEI: I probably met the guy four or five times.
ALEX KANTROWITZ: Okay.
DARIO AMODEI: So I have no great insight into the psychology of SBF or why he did things as stupid or immoral as he did. I think the only thing I had ever seen ahead of time with SBF was a couple people mentioned to me that he was hard to work with, that he was a bit of a “move fast and break things” guy. And I was like, “Okay, welcome to Silicon Valley.”
And so I remember saying, “Okay, I’m going to give this guy non-voting shares. I’m not going to put him on the board. He sounds like a bad person to deal with every day. But he’s excited about AI. He’s excited about AI safety. He’s a bull on AI and he’s interested in AI safety.” So it seemed like a sensible thing to do.
In retrospect, that “move fast and break things” turned out to be much, much more extreme and bad than I ever imagined.
Balancing Impact and Safety
ALEX KANTROWITZ: Okay, so let’s end here. So you found your impact. I mean, you’re working the dream pretty much right now; think about all the ways that AI can be used for biology, just to start. You also say that this is a dangerous technology. And I’m curious if your desire for impact could be pushing you to accelerate this technology while potentially devaluing the possibility that controlling it might not be feasible.
DARIO AMODEI: So I think I have, more than anyone else in the industry, warned about the dangers of the technology. We just spent 10, 20 minutes talking about the frighteningly large array of people who run trillion-dollar companies criticizing me for talking about the dangers of these technologies.
I have US government officials, I have people who run $4 trillion companies, criticizing me for talking about the dangers of the technology, imputing all these bizarre motives that bear no relationship to anything I’ve ever said and are not supported by anything I’ve ever done. And yet I’m going to continue to do it.
I actually think that the revenues, the economic business of AI, are ramping up, and ramping up exponentially. If I’m right, in a couple years it’ll be the biggest source of revenue in the world. It’ll be the biggest industry in the world. And the people who run these companies already think it.
So we actually have this terrifying situation where hundreds of billions to trillions to, I would say, maybe $20 trillion of capital is on the side of “accelerate AI as fast as possible.” We have this company that’s very valuable in absolute terms but looks very small compared to that: $60 billion. And I keep speaking up even if it makes folks upset. There have been these articles; some folks in the US government are upset at us, for example, for opposing the moratorium on AI regulation, for being in favor of export controls on chips to China, for talking about the economic impacts of AI. Every time I do that, I get attacked by many of my peers.
ALEX KANTROWITZ: Right, but you’re still assuming that we can control it. That’s what I’m pointing out.
The Stakes and Responsibility of AI Development
DARIO AMODEI: But I’m just telling you how much effort and how much persistence this takes, despite everything that’s stacked up, despite all the dangers, despite the risk to the company of being willing to speak up. I’m willing to do it. And that’s why I’m saying, look, if I thought that there was no way to control the technology, even if I thought this is just a gamble… Some people are like, “Oh, you think there’s a 5 or 10% chance that AI could go wrong? You’re just rolling the dice.” That’s not the way I think about it.
This is a multi-step game, right? You take one step, you build the next step of most powerful models, you have a more intensive testing regime. As we get closer and closer to the more powerful models, I’m speaking up more and more and I’m taking more and more drastic actions because I’m concerned that the risks of AI are getting closer and closer. We’re working to address them. We’ve made a certain amount of progress.
But when I worry that the progress we’ve made on the risks is not fully aligned with, is not going as fast as, the speed of the technology, then I speak up. Then I speak up louder. And so, you started this interview by saying, “What’s gotten into you? Why are you talking about this?” It’s because the exponential is getting to the point that I worry our ability to handle the risks may not be keeping up with the speed of the technology. And that’s how I’m responding to it.
The Path Forward on AI Safety
If I believed that there was no way to control the technology (and I see absolutely no evidence for that proposition), it would be a different story. We’ve gotten better at controlling models. With every model that we release, all these things can go wrong, but you really have to stress test the models pretty hard to surface them. That doesn’t mean you can’t have emergent bad behavior.
And I think, if we got to much more powerful models with only the alignment techniques we have now, then I’d be very concerned. Then I’d be out there saying everyone should stop building these things, even China should stop building these. I don’t think they’d listen to me, which is one reason I think export controls are a better measure.
But if we got a few years ahead in models and had only the alignment and steering techniques we have today, then I would definitely be advocating for us to slow down a lot. The reason I’m warning about the risk is so that we don’t have to slow down: so that we can invest in safety techniques and continue the progress of the field.
The Multi-Party Race Challenge
It would be a huge economic sacrifice. Even if one company was willing to slow down the technology, that doesn’t stop all the other companies, and it doesn’t stop our geopolitical adversaries, for whom this is an existential fight for survival. So there’s very little latitude here, right? We’re stuck between all the benefits of the technology, the race to accelerate it, and the fact that it is a multi-party race.
And so I am doing the best thing I can do which is to invest in safety technology, to speed up the progress of safety. I’ve written essays on the importance of interpretability, on how important various directions in safety are. We release all of our safety work openly because we think that’s a public good, that’s the thing that everyone needs to share.
So if you have a better strategy for balancing the benefits, the inevitability of the technology, and the risks that it poses, I am very open to hearing it, because I go to sleep every night thinking about it. I have such an incredible understanding of the stakes in terms of the benefits, in terms of what it can do, the lives that it can save. I’ve seen that personally. I’ve also seen the risks personally.
Real-World Consequences
We’ve already seen things go wrong with the models. We have an example of that with Grok. And people dismiss this, but they’re not going to laugh anymore when the models are taking actions, when they’re manufacturing and when they’re in charge of medical interventions. Right? People can laugh at the risks when the models are just talking. But I think it’s very serious.
And so I think what this situation demands is a very serious understanding of both the risks and the benefits. These are high stakes decisions. They need to be made with seriousness.
Rejecting Extremes on Both Sides
And I think something that makes me very concerned is that on one hand we have a cadre of people who are just doomers. People call me a doomer. I’m not, but there are doomers out there. People who say they know there’s no way to build this safely. I’ve looked at their arguments. They’re a bunch of gobbledygook. The idea that these models have dangers associated with them, including dangers to humanity as a whole, that makes sense to me. The idea that we can kind of logically prove that there’s no way to make them safe, that seems like nonsense to me. So I think that is an intellectually and morally unserious way to respond to the situation we’re in.
I also think it is intellectually and morally unserious for people who are sitting on $20 trillion of capital, who all work together because their incentives all point in the same direction, who have dollar signs in their eyes, to sit there and say we shouldn’t regulate this technology for 10 years, and to say that anyone who worries about the safety of these models is someone who just wants to control the technology themselves. That’s an outrageous claim and it’s a morally unserious claim.
Anthropic’s Commitment to Research and Transparency
We’ve sat here and we’ve done every possible piece of research. We speak up when we believe it’s appropriate to do so. And when we make claims about the economic impact of AI, we’ve tried to back them up: we have an economic research council, we have an economic index that we use to track the model in real time, and we’re giving grants for people to understand the economic impact of the technology.
I think for people who are far more financially invested in the success of the technology than I am to just breezily lob ad hominem attacks, that is just as intellectually and morally unserious as the doomers’ position.
A Call for Thoughtfulness
I think what we need here is more thoughtfulness. We need more honesty. We need more people willing to go against their interests, willing to skip the breezy Twitter fights and hot takes. We need people to actually invest in understanding the situation, actually do the work, actually put out the research, and actually add some light and some insight to the situation that we’re in.
I am trying to do that. I don’t think I’m doing that perfectly; no human can. I’m trying to do it as well as I can. It would be very helpful if there were others who would try to do the same thing.
ALEX KANTROWITZ: Well, Dario, I said this off camera, but I want to make sure to say it on camera as we’re wrapping up. I appreciate how much Anthropic publishes. We have learned a ton from the experiments, everything from red-teaming the models to vending machine Claude, which we didn’t have a chance to speak about today. I think the world is better off just from hearing everything going on here. And to that note, thank you for sitting down with me and spending so much time together.
DARIO AMODEI: Thanks for having me.
ALEX KANTROWITZ: Thanks everybody for listening and watching. And we’ll see you next time on Big Technology Podcast.