
People by WTF: w/ Dario Amodei – The AI Tsunami is Here (Transcript)

Editor’s Notes: In this episode of People by WTF, Nikhil Kamath hosts an in-depth conversation with Dario Amodei, the co-founder and CEO of Anthropic, about the rapid approach of human-level artificial intelligence. They explore the fundamental “scaling laws” driving AI’s growth, the potential for an “AI tsunami” to reshape global society, and the specific opportunities and challenges facing India in this new era. Dario provides a rare look into his journey from biophysics to AI leadership, emphasizing the critical importance of safety, alignment, and “constitutions” in building responsible models like Claude. This discussion serves as a powerful exploration of whether humanity is truly prepared for the profound changes that widespread, general-purpose intelligence will bring to our economies and daily lives. (February 24, 2026)

TRANSCRIPT:

From Biology to Building Anthropic

NIKHIL KAMATH: What did you do before founding Anthropic?

DARIO AMODEI: Yeah, so I was actually originally a biologist. I did my undergrad in physics, my PhD in biophysics, and I wanted to understand biological systems so that I could cure disease. And the thing I noticed about studying biology was its incredible complexity — for example, if you look at the protein mass spec work that I did trying to find protein biomarkers, it’s just really incredible how much complexity there is, right?

You have a given protein, the RNA gets spliced in a whole bunch of different ways depending on where it is in the cell. Then it gets post-translationally modified, phosphorylated, complexed with a whole bunch of other proteins. And I was starting to despair that it was too complicated for humans to understand.

And then as I was doing this work on biology, I noticed a lot of the early work around AlexNet, one of the first modern deep neural networks, almost 15 years ago now. And I said, “Wow, AI is actually starting to work. It has some things in common with how the human brain works, but has the potential to be larger and scale better and learn tasks like biology. Maybe this is ultimately going to be the solution to solving our problems of biology.”

So I went to work with Andrew Ng at Baidu, then I was at Google for a year. Then I joined OpenAI a few months after it started and basically led all of research there for several years. But then eventually, myself and a few other employees just kind of had our own vision for how we wanted to make AI and what we wanted the company to stand for. And so we went off and founded Anthropic.

The Fork from OpenAI: Scaling Laws and Safety

NIKHIL KAMATH: How was it? Was it like a fork between how OpenAI was thinking and what Anthropic eventually did?

DARIO AMODEI: Yeah, I would say that when we founded Anthropic, my conviction and the conviction of my co-founders came down to two things. One, I think we were starting to convince OpenAI of. The other, I didn’t feel we were convincing them of.

So the first was the conviction in the scaling laws and the idea that if you scale up models — give them more data, more compute — again, there are a few modifications like RL, but not very many, it’s pretty close to pure scaling — you find incredible increases in performance. I was finding that in 2019 with GPT-2, when we first saw the glimmers of the scaling laws. And of course there were a lot of folks inside and outside who didn’t believe it at all. We really made the case to leadership: “This is important, this is going to be a big deal.” And I think they were starting to believe us and ultimately went in that direction.

And there was a second conviction I had, which is: look, if these models are going to be general cognitive agents — general cognitive tools that match the capability of the human brain — we better get this right. The economic implications are going to be enormous. The geopolitical implications are going to be enormous. The safety implications are going to be enormous. It’s going to transform how the world works. And so we need to do it in the right way.

And despite a lot of language and verbiage about doing it in the right way, I was, for a variety of reasons, just not convinced that at the institution I was at, there was a real and serious conviction to do it in the right way.

And so my view is always: don’t argue with someone else’s vision. Don’t try to get someone to do things the way you want to. If you have a strong vision and you share that vision with a few other people, you should just go off and do your own thing. Then you’re responsible for your own mistakes. You don’t have to answer for anyone else’s. And maybe your vision works out, maybe it doesn’t, but at least it’s yours.

NIKHIL KAMATH: But didn’t OpenAI believe in scaling laws? They went down the same path themselves too, right?

DARIO AMODEI: Well, yeah — we succeeded.

What Are Scaling Laws?

NIKHIL KAMATH: Can you explain what scaling laws are in very simple terms?

DARIO AMODEI: It’s like if you want a chemical reaction to produce oxygen, or start a fire or something like that, you need different ingredients. And if you don’t have enough of one ingredient, the reaction stops. But if you put ingredients together in proportion, you get your explosion or your fire or whatever.

And for AI, those ingredients are data, compute, and the size of the AI model. So the scaling laws just tell you that if you put in the ingredients to the chemical reaction — the ingredients of data, compute, and model size — what you get out is intelligence. Intelligence is the product of a chemical reaction.
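[The relationship Dario describes is typically written as a power law: predicted loss falls smoothly as model size and data grow together. A minimal sketch, using a Chinchilla-style functional form with made-up constants purely for illustration (none of these values come from the conversation):]

```python
def loss(n_params: float, n_tokens: float,
         E: float = 1.7, A: float = 400.0, B: float = 400.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Illustrative scaling-law curve: loss as a power law in
    model size (n_params) and training data (n_tokens).
    All constants here are invented for demonstration."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up both "ingredients" together lowers the predicted loss:
small = loss(1e8, 1e10)   # ~100M params, ~10B tokens
big   = loss(1e10, 1e12)  # ~10B params, ~1T tokens
```

[The key property is the one Dario emphasizes: no single ingredient is enough on its own, since the term for whichever ingredient is scarcest dominates the loss, so data and model size must be scaled in proportion.]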

NIKHIL KAMATH: And what is intelligence?

DARIO AMODEI: Intelligence as measured by the ability to translate language, or the ability to write code, or the ability to answer questions correctly about a story.