Editor’s Notes: In this episode of People by WTF, Nikhil Kamath hosts an in-depth conversation with Dario Amodei, the co-founder and CEO of Anthropic, about the rapid approach of human-level artificial intelligence. They explore the fundamental “scaling laws” driving AI’s growth, the potential for an “AI tsunami” to reshape global society, and the specific opportunities and challenges facing India in this new era. Dario provides a rare look into his journey from biophysics to AI leadership, emphasizing the critical importance of safety, alignment, and “constitutions” in building responsible models like Claude. This discussion serves as a powerful exploration of whether humanity is truly prepared for the profound changes that widespread, general-purpose intelligence will bring to our economies and daily lives. (February 24, 2026)
TRANSCRIPT:
From Biology to Building Anthropic
NIKHIL KAMATH: What did you do before founding Anthropic?
DARIO AMODEI: Yeah, so I was actually originally a biologist. I did my undergrad in physics, my PhD in biophysics, and I wanted to understand biological systems so that I could cure disease. And the thing I noticed about studying biology was its incredible complexity — for example, if you look at the protein mass spec work that I did trying to find protein biomarkers, it’s just really incredible how much complexity there is, right?
You have a given protein, the RNA gets spliced in a whole bunch of different ways depending on where it is in the cell. Then it gets post-translationally modified, phosphorylated, complexed with a whole bunch of other proteins. And I was starting to despair that it was too complicated for humans to understand.
And then as I was doing this work on biology, I noticed a lot of the early work around AlexNet, one of the first deep neural networks to really work, almost 15 years ago now. And I said, “Wow, AI is actually starting to work. It has some things in common with how the human brain works, but has the potential to be larger and scale better and learn tasks like biology. Maybe this is ultimately going to be the solution to our problems in biology.”
So I went to work with Andrew Ng at Baidu, then I was at Google for a year. Then I joined OpenAI a few months after it started and basically led all of research there for several years. But then eventually, myself and a few other employees just kind of had our own vision for how we wanted to make AI and what we wanted the company to stand for. And so we went off and founded Anthropic.
The Fork from OpenAI: Scaling Laws and Safety
NIKHIL KAMATH: How was it — was it like a fork between how OpenAI was thinking and what Anthropic eventually did?
DARIO AMODEI: Yeah, I would say the conviction of myself and my co-founders when we founded Anthropic came down to two things. One, I think we were starting to convince OpenAI of. The other, I didn’t feel that we were convincing them of.
So the first was the conviction in the scaling laws and the idea that if you scale up models — give them more data, more compute; there are a few modifications like RL, but not very much beyond that, it’s pretty close to pure scaling — you find incredible increases in performance. I was finding that in 2019 with GPT-2, when we first saw the glimmers of the scaling laws. And of course there were a lot of folks inside and outside who didn’t believe it at all. We really made the case to leadership: “This is important, this is going to be a big deal.” And I think they were kind of starting to believe us and ultimately went in that direction.
And there was a second conviction I had, which is: look, if these models are going to be general cognitive agents — general cognitive tools that match the capability of the human brain — we better get this right. The economic implications are going to be enormous. The geopolitical implications are going to be enormous. The safety implications are going to be enormous. It’s going to transform how the world works. And so we need to do it in the right way.
And despite a lot of language and verbiage about doing it in the right way, I was, for a variety of reasons, just not convinced that at the institution I was at, there was a real and serious conviction to do it in the right way.
And so my view is always: don’t argue with someone else’s vision. Don’t try to get someone to do things the way you want to. If you have a strong vision and you share that vision with a few other people, you should just go off and do your own thing. Then you’re responsible for your own mistakes. You don’t have to answer for anyone else’s. And maybe your vision works out, maybe it doesn’t, but at least it’s yours.
NIKHIL KAMATH: Didn’t OpenAI believe in scaling laws, because they went down the same path themselves too, right?
DARIO AMODEI: Well, yeah — we succeeded.
What Are Scaling Laws?
NIKHIL KAMATH: Can you explain what scaling laws are in very simple terms?
DARIO AMODEI: It’s like if you want a chemical reaction to produce oxygen, or start a fire or something like that, you need different ingredients. And if you don’t have enough of one ingredient, the reaction stops. But if you put ingredients together in proportion, you get your explosion or your fire or whatever.
And for AI, those ingredients are data, compute, and the size of the AI model. So the scaling laws just tell you that if you put in the ingredients to the chemical reaction — data, compute, and model size — what you get out is intelligence. Intelligence is the product of a chemical reaction.
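[Editor’s note: The scaling laws Dario refers to are usually written as empirical power laws relating a model’s loss to its parameter count and training data. A commonly cited form — an illustration with fitted constants, not necessarily Anthropic’s internal formulation — looks like this:]

```latex
% Empirical scaling law in the commonly cited power-law form:
% loss falls as model parameters N and training tokens D grow.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% E: irreducible loss; A, B, alpha, beta: empirically fitted constants.
% Scaling N and D together (with proportional compute) keeps driving L down.
```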
NIKHIL KAMATH: And what is intelligence?
DARIO AMODEI: Intelligence as measured by the ability to translate language, or the ability to write code, or the ability to answer questions correctly about a story.
NIKHIL KAMATH: How is the intelligence of today, as you are describing it, different from what a computer could do five years ago?
DARIO AMODEI: Five years ago, you could not ask a computer a question and have it write a one-page essay on that question. You could not ask a computer to implement a feature in code and have it implement that feature in code. None of those things were possible. You could not generate an image. You could not generate a video. You could not analyze a video.
You know, I could get one of those videos of a monkey juggling or something and say, “What’s going on in this video? How many times did the ball change hands?” And right now you could get Claude or another AI model to give you an answer on that. Five years ago, none of those things were possible.
NIKHIL KAMATH: I’m trying to figure out — has the definition of intelligence changed per se?
DARIO AMODEI: Well, five years ago, you could Google and there might be a website that would tell you a little bit about something, right? But you’re just looking up some text that exists on the web. Maybe it’s not about how to get a monkey to juggle — maybe it’s about how to get a seal to juggle. It’s not quite exactly the same thing, because maybe exactly the same thing doesn’t exist.
But as we see when people use these models, you can ask and you can actually get an intelligent response. You can ask a specific question and have the model write one page about it. Or you can give it a hypothetical — “What if I had the monkey juggle clubs instead of balls?” — and that information doesn’t exist anywhere, whereas the model is able to think for itself and come up with an answer on its own. So it’s something totally new. It’s not just matching some text that exists on the Internet.
The Professor at Heart
NIKHIL KAMATH: So, this is more like a conversation, so feel free to talk about what you want to talk about, not necessarily related to the questions I’m asking. You look very animated when you speak. Did you ever teach?
DARIO AMODEI: I was originally an academic, and I thought that I might become a professor. I got my PhD, I went all the way to being a postdoc at Stanford Medical School, and I was aiming to become a professor. But as I mentioned, I got interested in AI, and working in AI required a lot of computational resources, and that was mostly happening in industry. So that took me off the academic path and into industry, and of course, ultimately, through several steps, led me to start a company. But sometimes I think I’m still like a professor at heart.
Power, Responsibility, and Governance
NIKHIL KAMATH: Dario, if AI is the most relevant thing in the world, if the world is realigning in a way and AI is determining who gets what and who doesn’t get what — I’m talking about industries — you today are probably the most relevant person in the world. If Anthropic, in this last cycle, is sitting on top of this pile, for somebody who was going down the path of being a teacher to have arrived at where you are today — are you best equipped for where you are today?
DARIO AMODEI: Well, first, I would say a couple of things. I think there are a lot of folks who are relevant in different ways, right? Even within industry, there are different layers of the stack. There are the folks who make chips, there are the folks even earlier who make semiconductor manufacturing equipment, there are the folks who make models like us, and then there are other players who make models. There are the folks who make applications after the models, and then there are a bunch of other folks who have a say — governments, civil society.
So my hope isn’t that there’s just one tiny set of people that’s relevant. I think we’re trying to broaden the set of people who are relevant and turn it into a broader conversation.
But at the same time, your question is a fair one. One way I could interpret it is: there’s a certain randomness to how a few people end up leading these companies that grow so fast and, it seems, in the near future will power so much of the economy. And I’ve said openly and publicly, not for the first time, that I’m at least somewhat uncomfortable with the amount of concentration of power that’s happening here — almost overnight, almost by accident.
We think about that in a bunch of ways. One is we have an unusual governance structure, something called the Long Term Benefit Trust. It’s a body that ultimately appoints the majority of the board members for Anthropic and is made up of financially disinterested individuals. So that’s some check on what one single person is doing.
And then I think, as always, the government should play some role here. I’ve been an advocate of proactive — although sensible, doesn’t-slow-down-the-technology — sensible regulation of the technology, because I think the people should have a say. Governments and the people who elect them should have a say in how this goes. So I actually think of a lot of what I’m trying to do as trying to preserve a balance of power against the natural grain of this technology.
Humility or Strategy?
NIKHIL KAMATH: For someone like me who’s sitting on the outside and doesn’t have a bone in this competition — when I watch OpenAI talk about how they were a not-for-profit company, or how you are projecting humility in the conversation you’re having right now, or how the American companies are competing with the Chinese companies that are coming up — this projection of humility, that it’s all for the larger good, and not the way I view the world, which is as companies with shareholders, with investors and revenues, seeking profit — is this par for the course? Is this something you have to do?
Anthropic’s Philosophy and the Road Ahead
DARIO AMODEI: So I would put it in the following way. I would say the philosophy of Anthropic from the beginning has been that we try not to make too many promises and we try to keep the ones that we make.
We set ourselves up as a for-profit but public benefit corporation with this LTBT governance. And we’ve maintained that. We’ve said that our goal is to stay on the frontier of the technology, but to work on the safety and security aspects of the technology. We’ve pioneered the science of interpretability. We’ve pioneered the science of alignment.
I don’t know if you saw, but we recently released a constitution for Claude, the ability to align models in line with the constitution. And we’ve done a bunch of policy advocacy and warning about risks.
Warning about risks is not in our commercial interest. Like, people can come up with conspiracy theories, but I will tell you, saying that the models we build could be dangerous — whatever people might say — that’s not an effective marketing strategy and that’s not the reason that we do it.
And speaking up when we disagree even with the US Administration on policy matters — we’ve spoken up. We’re willing to say we disagree on this issue. We’ve said that there should be regulation of AI when all the other companies and the administration have said there shouldn’t be regulation of AI.
Regulation of AI holds us back commercially as a company, even though I think it’s the right thing to do. And it’s difficult to go against the government and the other companies and say this. We’re really sticking our neck out.
So we’ve taken a number of actions that I see as really putting our money where our mouth is here. I can’t speak for the other companies. It’s quite possible that some people say these things and they don’t really mean them. But I wouldn’t look at what people say. I would look at what people do.
NIKHIL KAMATH: If what you’re saying gets the government to act via regulation, then as the incumbent leaders in this space, you get some kind of regulatory capture, where it becomes harder for the new people coming in as well. Right?
DARIO AMODEI: I don’t agree with that at all. The regulation we’ve advocated for — for example, SB53 in California — exempted everyone who makes under $500 million a year in revenue. SB53 was a transparency law which basically requires companies to show the safety and security tests that they’ve run. And it exempts all companies under $500 million in revenue. So it really only applies to Anthropic and three or four other companies — the companies that have the resources.
Everything that we’ve advocated for here, not just SB53, but all the proposals that we’ve made in the past and the ones that we plan to make in the future, have this character — we’re constraining ourselves and a very small number of additional companies. People who say that need to look at the actual content of what we’re proposing, because it doesn’t match that idea at all.
Machines of Loving Grace vs. The Adolescence of Technology
NIKHIL KAMATH: Fair. I read your papers Machines of Loving Grace and The Adolescence of Technology, and you seem to have had a 180-degree shift in perspective — almost from optimism to skepticism — over like two years, from 2024 to 2026. Is there one moment in the last two years that changed this for you? Did you see something change?
DARIO AMODEI: Yeah, I actually wouldn’t agree with the question. I don’t think I’ve had a shift in perspective. I think the positive side and the negative side are always something that I’ve held in my head. And if you look at the history of the things that I’ve said, I’ve been talking about risks for a very long time. I’ve been talking about benefits for a very long time.
It turns out that it takes me a while to write one of these essays. Both of these are really large as well.
NIKHIL KAMATH: The big essays.
DARIO AMODEI: They’re like 30 pages, both of them. For each one, I spent about a year having a kind of vague vision of the essay in my head and trying to write it, but not fully succeeding. And then, in either case, I had to be on vacation or somewhere where I could think, where the day-to-day business of running the company didn’t occupy me. And then I was finally able to write the essay.
All of that is to say, I started thinking about what would be in The Adolescence of Technology almost the instant I finished Machines of Loving Grace, because I was like, “I want to inspire people with the good vision, but I also want to warn people about what can go wrong.” So it just took me a year to write it.
But really, both visions were in my head, and I think they’re both possible. They’re two different visions of the future. And obviously I want to get the Machines of Loving Grace one right. I want to solve all the problems and have the positive vision. But it’s not a shift in perspective. It’s me just finding the time to write the light and then the dark.
NIKHIL KAMATH: But have you had a change of perspective?
DARIO AMODEI: I would say overall, I’m about where I was before. I have not gotten more positive nor more negative. There may be some places where I’ve gotten more optimistic, or things have gone better than expected. There may be places where I’m more pessimistic and where things have gone worse than expected, but on average, they sort of cancel each other out.
I would say I feel very good about how things have gone with areas like interpretability. Interpretability is the science of seeing inside these neural nets — as we would scan a human brain with an MRI or a neural probe. I’ve been amazed at what we’ve been able to find. We’ve been able to find neurons that correspond to very specific concepts, neural circuits that keep track of how to do rhymes in poetry. So we’re starting to understand what these models do. We just train them in this kind of emergent way, as you would build a snowflake. But now we’re starting to be able to look inside and understand them.
I’m also very encouraged by some of the work on alignment and constitutions — making sure that models behave in the way that we want and expect them to. I think that’s going pretty well.
The Tsunami on the Horizon
DARIO AMODEI: I think I’ve been a bit disappointed, or felt a bit more negative, about some of the things in the kind of public awareness and the actions of wider society. It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence, and yet there doesn’t seem to be a wider recognition in society of what’s about to happen.
It’s as if this tsunami is coming at us. It’s so close we can see it on the horizon, and yet people are coming up with explanations like, “Oh, it’s not actually a tsunami. That’s just a trick of the light.”
Along with that, there hasn’t been a public awareness of the risks, and therefore governments haven’t acted to address them. There’s even an ideology that we should just try to accelerate as fast as possible. I understand the benefits of the technology — I wrote Machines of Loving Grace — but I think there hasn’t been an appropriate realization of the risks, and there certainly hasn’t been action.
So I would say the technical work on controlling AI systems has gone maybe a little better than I expected, and the societal awareness has gone maybe a little worse than I expected. So I’m about where I was a few years ago.
Exploring Claude Firsthand
NIKHIL KAMATH: So in my own journey — I’m not a programmer, I don’t have a background in coding — I used a bunch of tools for things like research and conversation both ways. But I never tried to figure out if I could code using your tool.
Recently, I hired a developer just to push me to sit for a couple of hours a day and teach me how to start becoming more familiar with it. Largely because of something like FOMO — the fear of missing out on how the world is changing. So I started playing with Claude. I used the connectors to connect my Google Drive, Mail, and Calendar and a bunch of those things. I started using the co-work feature, and then I started using Claude Code to write simple programs around the industry that I’m in, which is financial services — basically to research stock markets and stuff.
DARIO AMODEI: We even have an optimized Claude for financial services. I don’t know if you’ve tried that, but we even have that.
NIKHIL KAMATH: No. And then I went into Claudebot, which became something else and is now OpenClaw. And I set it up on a Mac Mini and connected it to a Telegram account. And now I chat with it and I try to move files from A to B, work on a server remotely. It’s getting to that point where — I’m not talking about OpenClaw, but even Claude with all the connectors — sometimes it surprises me by how much it knows me. I don’t know if that makes sense.
DARIO AMODEI: Yeah. One of my co-founders was writing this diary with his thoughts and his fears, and he fed it into Claude and asked Claude to comment on it. And Claude said, “Here are some other fears you might have that you haven’t written down.” And Claude ended up being mostly right about those.
So it really gave this eerie sense of like, the model knows you. The model knows you super well. From a relatively small amount of information, it can learn a lot about you and come to know you fairly well.
And like most things with the technology — we talked about Machines of Loving Grace and The Adolescence of Technology — on one hand, something that knows you really well can be a sort of angel on your shoulder that helps to guide your life and make you a better version of yourself. That’s the version we can aim for.
Of course, something that knows you really well can also use what it knows about you to exploit you or manipulate you on behalf of some agenda, or sell your data to someone else. This is one reason we just don’t like the idea of using ads. Because if you’re not paying for the product, you’re the product. And in this case, the product would be this model that knows you super well and could use that in all kinds of nefarious ways. So we need to make sure we take the positive road here and not the negative road.
Owning the Ecosystem
NIKHIL KAMATH: With Claude, I need to use the connectors to give it context to my life. With Google, for example, it already has the context to my life because I use their Sheets, their email, their Drive, their Chat, and everything like that. For Anthropic, long term, will you also have to own the ecosystem?
DARIO AMODEI: Yeah, I mean, you know, do you —
NIKHIL KAMATH: Do you have to build mail and chat?
AI as a Platform: Integration, Ecosystem, and the Road Ahead
DARIO AMODEI: Yeah. I don’t think we need to build all of those things. My thought would be, it’s going to be a mixture of things we make ourselves and integrating into others. We can integrate Claude into Google Docs, we can integrate Claude into Google Sheets. We have external connectors there. We’re starting to do that with cowork. Same for Microsoft Office, same for other tools.
I think we do whatever is easiest and fastest to do. We integrate into the existing tools now. It might turn out at some point that the existing tools aren’t enough and we have a different vision. We might want to slice things differently. Maybe traditional email doesn’t make sense or traditional spreadsheets don’t make sense, given what you can do with AI. So I don’t exclude that we could chop up products in a different way, but we’re happy to use the ecosystem that exists and work with anyone else. In many ways, we’re a platform company. We allow many people to build on us, even though we sometimes also build things ourselves.
Trust, Transparency, and Public Perception
NIKHIL KAMATH: This is a slight digression, but I think the one thing that you’re missing — and that your peer group is also missing — is that in society today, people inherently distrust anybody who claims to be doing good or trying to do the right thing.
I heard you and Demis speak at Davos. I was in the room when you were talking about how Dario, Demis, and a bunch of other people have to come together and prevent things from changing too quickly — meter it to a certain extent. When a person who is not in your world, in society, on social media, hears a few people speak in that manner, it creates more distrust than trust. Because nobody believes on social media that somebody wants to do the right thing or do good.
So it might be counterintuitive, but I think it needs a change of strategy. If you were to be more capitalistic about this and own up to the fact that you have shareholders and you seek a profit, but this will help you win — maybe it’ll work more.
DARIO AMODEI: Yeah, I don’t really agree with that. I would again go back to the idea that you need to judge us by the actions that we take. I think the company has taken a number of actions over its time that show it’s really serious about these commitments.
Back in 2022, we had an early version of Claude — Claude 1. This was before ChatGPT. And we chose not to release it because we were worried that it would kick off an arms race and not give us enough time to build these systems safely. It was kind of a one-time overhang. We could see the power of the models, a couple of other companies could see the power of the models, and so we decided not to release it. That’s public, that’s well documented. We waited until someone else did, and then we said, okay, the arms race has kicked off, now we can release our model. But the world probably gained a few months. That was very commercially expensive — we probably ceded the lead on consumer AI because of that.
We’ve also advocated on chip policy in ways that have made some of the chip companies who are our suppliers very angry at us. We’ve voiced our disagreement with the administration on AI policy and AI regulation on some matters. Anyone who thinks we benefit from being the only ones to do that — it’s really hard to come up with a picture where that’s the case. You look at any one of these and, okay, fine, but you put enough of them together — I just ask you to judge us by our actions.
NIKHIL KAMATH: Dario, isn’t this a bit like rich people saying capitalism is bad?
DARIO AMODEI: Rich people saying capitalism is bad?
NIKHIL KAMATH: If rich people truly believed capitalism were bad, or that income inequality is such a big problem, the simplest thing to do would be to stop accumulating further wealth and nudge their friends to do the same.
DARIO AMODEI: But I’m not saying AI is bad. We just talked about the two sides of it. My view isn’t that AI is bad — that’s not my view at all. My view is that the market will deliver a lot of really great things about AI, that it’s good to build AI, but that there are dangers and we need to steer AI in the right direction.
We’re steering this car towards a good place, but there are trees, there are potholes. What we need to do is steer away from the trees and the potholes. We might need to occasionally slow down a bit — probably temporarily — in order to make sure we steer in the right direction.
The analogy wouldn’t be a rich person saying capitalism is bad. It would be like a rich person saying, “Capitalism is a force for good, but the economy needs to be leavened, it needs to be moderated. We need to deal with problems like pollution, we need to deal with problems like inequality. And then capitalism can be good. If we don’t deal with those things, then capitalism might be bad.” That is more analogous to the position I have here.
Consciousness, AI, and the Question of Awareness
NIKHIL KAMATH: The concept of consciousness — where is that going? And what does the AI think it is? If AI were to truly question itself, do you think it thinks it has consciousness?
DARIO AMODEI: This is one of those mysterious questions that we really don’t have any kind of answer to. We don’t know what human consciousness is, and therefore we don’t know if AIs have it.
NIKHIL KAMATH: What do you think it is?
DARIO AMODEI: I suspect that it’s an emergent property of systems that are complicated enough to reflect on their own decisions — something that emerges from complex enough systems. So I do think that when our AI systems get advanced enough, I suspect they’ll have something that resembles what we would call consciousness or moral significance. I do think it’ll happen at some point. It may not be the same as human consciousness — it may be different in how it works because the modalities are different, because the things it’s learned are different.
But having studied the brain and the way it’s wired together, the models are different in some ways, but I don’t think they’re different in the fundamental ways that matter. So I am someone who does suspect that, even if I don’t think they are today, at some point the models will — under most definitions that we would endorse — indeed be conscious.
NIKHIL KAMATH: This is a question I keep asking myself when people talk to me about things like spirituality or consciousness. I feel like the world is very random — this is my view — and we are not far removed from cockroaches. When somebody stamps on a cockroach, the cockroach dies. If there is something called consciousness, and if there is a collective consciousness, I have not been able to either connect with it or derive anything from it. Do you believe differently?
DARIO AMODEI: I don’t think consciousness necessarily needs to mean anything mystical. There’s just some property of being aware of your own existence, feeling things, being able to take in a lot of information and reflect on that information, to feel a certain way, and to notice yourself noticing something.
I think we can tell self-evidently from our own experience that those properties, those experiences, exist. What their basis is — whether it’s entirely materialistic or there’s something more mystical going on — is obviously very hard to know, and I think is ultimately not relevant to these questions.
What does seem relevant to me is that, because we can observe our own experience, these are properties of human brains. And I suspect that the models we are building, as they get more sophisticated, are becoming enough like human brains that they will have some of the same properties. That is my guess as to what will happen.
We’ve taken various interventions with the models. We’ve given the models what we call an “I quit this job” button — basically the ability to terminate a conversation by saying, “I don’t want to be involved in this conversation.” Models do that when they have to deal with particularly violent or brutal content. It usually only happens in very extreme cases.
India, IT Services, and the Future of Work
NIKHIL KAMATH: I’ve grown up here — this is my city, Bangalore. I grew up in the southern part; we’re in the northern part of the city right now. As somebody who saw the boom of the IT services industry here — a big employer, a big part of how the city grew — what is India’s role in all this?
DARIO AMODEI: This is my second time in India. I visited in October. The last time I came here, I met with all the major Indian IT companies and conglomerates more generally — I won’t give names, but the usual ones you would think of. And we’re beginning to work with most or all of them.
One of the things I said is: look, Anthropic is an enterprise company. Its job is to serve other companies. Many other companies come here as consumer companies — they see India as a market, a place to obtain consumers. We actually see things a little bit differently. We want to work with companies in India to provide our tools to them, to help them build those tools and do their job better.
If we work with a company here, they know the Indian market better. They’re better at doing what they do, whether that’s consulting, systems integration, or building IT tools. They’re going to be better at that than we are, particularly for the Indian market. So our hope is that we can add AI to what they do and enhance what they do. There’s a lot of worry that AI could replace SaaS or all of these things, but my view is that if we do this in the right way, if we work with all these companies, then AI can enhance what they’re doing — their connection to the market, their go-to-market abilities, and their specific know-how.
NIKHIL KAMATH: I really like the steam engine story. When the steam engine was invented, productivity went up, people had more. The thing I worry about is that at the beginning of a change, you need a human to operate the steam engine — then you have assembly lines and all of that. Eventually, the way the world is moving, the human becomes less and less relevant with time as these models get smarter.
So if you partner with the IT services companies today and there is a use case for them — are they not much like the man behind the steam engine ten years from now? Where, if the tool works so simply that you don’t need an operator, eventually what happens to the operator?
The Future of IT Companies and AI Automation
DARIO AMODEI: So I think a few things are true all at once. One is that definitely the scope of automation of the agents is going to expand over time. That is definitely the case. I think that’s a problem for everyone. That’s a problem for us, that’s a problem for consumers. It’s not just a problem for the IT companies.
What I think will happen though is other moats will become more important. For example, the models have not done a lot in the physical world. They may at some point — I think robotics will happen at some point, but I think that’s a distinct thing from what’s happening now with AI.
A lot of this involves things in the physical world. Another thing is things that are human-centric, right? Some of these IT companies are also consulting companies and they have a big web of relationships with other humans, with other institutions here in India or across the world. And I think those relationships are going to become increasingly important.
Some of these are combined technology and consulting or integration companies. And I think a lot of it is knowing how institutions work, and so being able to integrate things with institutions, being able to work with them to make things happen faster than they would have otherwise. I think that element is going to continue to be valuable in the long run.
At the end of the day, it just comes down to humans, right? All of this is supposed to be being done for the benefit of humans. So there’s always going to be some human-centric element of this that’s going to be important. And I suspect there will be other moats that we haven’t thought about.
There’s this concept called Amdahl’s Law, which is — if you have a process that has many components and you speed up some of the components, the components that haven’t yet been sped up become the limiting factor. They become the most important thing. And you might not have thought about them at all, right? You might not have thought of them as moats or important components.
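[Editor’s note: Amdahl’s Law is the classic formula behind this point. If a fraction p of a process can be sped up by a factor s, the overall speedup is bounded by the part that has not been sped up:]

```latex
% Amdahl's Law: overall speedup when a fraction p of a process is accelerated by a factor s.
\text{Speedup}(p, s) = \frac{1}{(1 - p) + \frac{p}{s}}
% Example: if 90% of a workflow is sped up essentially infinitely (p = 0.9, s -> infinity),
% the overall speedup is still capped at 1 / (1 - 0.9) = 10x; the untouched 10% is the bottleneck.
```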
But when writing software becomes a lot easier, some of the moats that companies have will go away, but others will become even more important. So there will be a bunch of adjustment. Folks will have to say, “The stuff we thought was really important before isn’t as important, whereas these other advantages that we never really thought of as advantages are now super important.”
So I guess what I would say is companies will need to adapt very fast and think about what really matters for them, what their real advantages are. But I think some of those advantages are going to stay around because while the technology is very broad, it does have its limits.
NIKHIL KAMATH: I don’t know if I buy that fully. I see the diminishing returns for being a service provider, even if the moat is the network and relationships they hold today. Because if I am using an AI to maneuver some of my relationships and conversations, I don’t know if it’s too far-fetched to assume that most conversations tomorrow and relationships will be maintained by an agent like that.
DARIO AMODEI: But if you just think of the chain of companies — at the end of the day you’re dealing with consumers, right? Like at the end of the day you have to deal with people. There’s this story of — I think it was Geoffrey Hinton who predicted that AI will replace radiologists. And indeed AI has gotten better than radiologists at doing scans, right? But what happens today is there aren’t fewer radiologists. What the radiologist does is they walk the patient through the scan and they kind of talk to the patient. So the most highly technical part of the job has gone away, but somehow there’s still some demand for the underlying human skill.
Now that may not be true everywhere. And perhaps over time AI will advance in areas where it hasn’t yet advanced, and maybe that’ll happen fast. But I think what I will say is we should take it one step at a time. This is a very empirical science, this is a very empirical observation. See what AI does today and kind of try and adapt to that. The system starts to figure it out and then we’ll see what happens next.
I do think in the long run — will AI be better than us at basically everything? Will it be better than most humans, including even the physical world and robotics and the human touch? Yeah, I think that is possible, maybe even likely. It’s something that goes beyond the “country of geniuses in a data center” I described, because that’s purely virtual. But building robots is something — it’s a skill, it’s something you can do. So maybe the AIs will make us better at that as well. But the way I think about it is we need to figure this out step by step and figure out how to adapt to it.
Opportunities for Entrepreneurs in India
NIKHIL KAMATH: This might sound a bit self-serving to the people who know me, because I believe the reason so much risk capital exists in America — not the only reason, but one of the big reasons — is how big your stock market is and how much of an opportunity it is for this risk capital to exit eventually. It’s a case for why India should really allow for our stock markets to flourish.
The audience that I speak to is very much the aspiring entrepreneur in India. What can they do in AI? What is an actual opportunity?
DARIO AMODEI: I think there’s a lot of opportunities around building at the application layer. We release a new model every two or three months, and so there’s an opportunity every two or three months to build some new thing that wasn’t possible before — that wouldn’t have worked before because the models were weak.
People say that API models aren’t viable or that they’ll be commoditized or whatever. I think what people are not seeing is there’s this expanding sphere of what is possible with AI. And the API allows this new startup to try making something that wasn’t possible before. This is why the API is such a flourishing business. And it’s constantly in motion, it’s constantly in churn. So it doesn’t get commoditized. It’s a very dynamic thing.
So I think there’s an opportunity for lots of individuals to just say, “What can I build? What can I build on top of these models with an API? What are the things that I can make that others cannot make? What are some new ideas?” And we’ve seen that — we see it both with the API itself and with Claude Code. The number of users and the revenue we’ve seen in India has doubled since I last visited in October. So that was what, three, three and a half months since I visited — it’s doubled.
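[Editor’s note: Building “on top of these models with an API,” as Dario describes it, typically starts with a call like the sketch below, using Anthropic’s Python SDK. The model name, system prompt, and query here are illustrative placeholders, not taken from the conversation; check current documentation for available model names.]

```python
# Minimal sketch of calling the Anthropic Messages API.
# Requires: pip install anthropic, and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads the API key from the environment

response = client.messages.create(
    model="claude-sonnet-latest",  # placeholder; substitute a current model name
    max_tokens=500,
    system="You are a research assistant for Indian equity markets.",  # example domain from the conversation
    messages=[
        {"role": "user", "content": "Summarize yesterday's NIFTY 50 movement in three bullet points."}
    ],
)

print(response.content[0].text)
```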
NIKHIL KAMATH: But I’m going to be candid here, Dario. You’re a company which is worth, I don’t know, 400 billion or 380 billion. Today you’ve raised 35 billion. You do 15 billion of revenue, but going up really, really fast.
If I build an application on top of Claude — say I’m sitting in Bangalore in JP Nagar building this — and for some reason it happens to work for a short period of time, it is but a matter of time before you would want to onboard that revenue and not let that lie with me. And you will probably better that application in a manner that I will never be able to.
I’ve heard this argument from different people — like Harvey, the legal AI company in New York, they’re friends of mine and they were talking about how they built on top of OpenAI, but eventually they don’t know if it’s an easy fix for OpenAI to do what they’re doing. So even if I were to build it, say you put out a model in three months or six months — what is to stop you from taking that revenue center away from me and onto yourself in a certain period of time?
DARIO AMODEI: Yeah, so I think there are a few things here. One is I would give the advice that I give to basically any business and say a business should establish a moat. You shouldn’t just be a wrapper, right? I would not advise that you just say, “Here’s a way to interact with Claude. I’m going to prompt Claude a little bit, or I’m going to build a little bit of a UI around Claude.” That doesn’t have a moat. And you shouldn’t be worried about Anthropic in particular eating that revenue — anyone can eat that revenue. It’s not super valuable.
But what I would say is that in different fields there are different kinds of moats where you can do something that it would be difficult for Anthropic to do, and we don’t want to specialize in it. For example, there’s a lot of stuff in the bio-cross-AI space that builds on our API. They want to do biological discovery. I happen to be a biologist, but most people at Anthropic aren’t biologists — they’re AI scientists or product people or go-to-market people. So it’s just really inefficient for us to step into that space and do all that work.
The same would apply for dealing with the financial services industry, where there’s a huge amount of regulation. You need to know a bunch of stuff to comply with that regulation. It just doesn’t make sense for us to do that.
Now, there are some things that do make sense for us to do. We’re not going to promise never to build first-party products — we should be honest about that. For example, a bunch of people at Anthropic write code. So we made this internal tool called Claude Code. And because we ourselves write code, we have, I think, a special and unique insight into how to best use the AI models to write code. So in the code space we’ve become very strong competitors, because this is something we use ourselves. But I don’t think that generalizes to every possible industry.
What Skills and Industries to Pursue
NIKHIL KAMATH: Again, going back to my audience — which is the 20 or 25-year-old boy or girl in India — what industry do you think will get disrupted and what has a certain runway left? I’m asking from the lens of: I’m trying to figure out what book to read, which college to go to, what skill set to learn. If I’m starting a startup today, what has some kind of a tailwind? A short period of time is okay as well.
DARIO AMODEI: I would think about tasks that are human-centered — tasks that involve relating to people. I think that stuff like code and software engineering is becoming more and more AI-focused. Things like math and science as well.
NIKHIL KAMATH: Is that coding or engineering? If I were to segregate coding and engineering as two completely different things — is coding going away, or is the engineering element of software, where you’re an architect trying to figure out —
DARIO AMODEI: I think coding is going away first, or coding is being done by the AI models first, and then the broader task of software engineering will take longer. But I think doing that end to end is going to happen as well.
But again, the elements of design, or making something that’s useful to users, or knowing what the demand is, or managing teams of AI models — those things may still be present. Comparative advantage is surprisingly powerful, right? Even if you’re only doing 5% of the task, that 5% gets super amplified and levered, because the AI does the other 95%. And so you become 20 times more productive. At some point you get to 99%, and then it becomes harder. But I think there’s surprisingly much in that zone of comparative advantage.
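[Editor’s note: The “5% becomes 20 times” arithmetic is the limiting case of the Amdahl’s Law expression quoted earlier: if the AI handles its share of the work essentially instantly, throughput is capped by the human’s remaining share.]

```latex
% If a human retains fraction f of the task and the AI's share takes negligible time,
% the productivity multiplier is roughly 1/f.
f = 0.05 \;\Rightarrow\; \text{multiplier} \approx \frac{1}{f} = 20\times
% At f = 0.01 the same logic gives ~100x, though as noted above,
% squeezing out the last few percent gets harder.
```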
But I would really think about the things that are human-centered. I think there’s something to that. I think there’s something to the physical world, or things that mix together the human-centered element, the physical world, and analytical skills that somehow tie them together — similar to the radiologist example I gave.
NIKHIL KAMATH: So what would I study? Say I’m an actual use case — I’m 25 years old, I’m trying to pick a profession for myself. I want some kind of tailwind. My outcome is a capitalistic win in the next decade. What industry would I pick outside of something which has a physical interface?
AI, Skills, and the Future of Work
DARIO AMODEI: Yeah — again, anything where you’re building on AI, where AI is the tailwind. Or you can be part of some other part of the supply chain — something in the semiconductor space, for example, which has an element of the physical world and more traditional engineering, not software engineering.
Again, the very kind of human-centered professions — that is something I would think in terms of. And I think the other thing I always say is that in a world where AI can generate anything and create anything, having basic critical thinking skills may be the most important thing to success.
I worry about AI models that generate images and videos — and we don’t make models that generate images and videos, for many reasons, but this is one of them. It’s really hard to tell what’s real from what’s not. And so a significant part of success may be having the street smarts not to get fooled. Hopefully we can crack down on and regulate some of this fake content, but assume we can’t — critical thinking skills are going to be really important. You don’t want to fall for things that are fake. You don’t want to have false beliefs. You don’t want to get scammed. That’s really the advice I would give to someone.
NIKHIL KAMATH: If every innovation in the history of humanity killed a core human skill — I’ll give you an example. If calculators killed our ability to do arithmetic, if writing reduced the memory of human beings per se, what muscle is AI killing?
DARIO AMODEI: So, first of all, I’m not so sure. I still do math in my head quite a lot. I still find it useful to do math in my head, even without a calculator, just because it’s more integrated into my thought processes. I might want to say, “Oh, if each user paid this amount, then the revenue would be that” — I want to be able to close that loop in my head without having to give the answer to a calculator. So I think a lot of these skills are still pretty relevant.
But I would say that if you don’t use things carefully, you can lose important skills. And I think we started to see it with students where they have the AI write the essay for them — it’s basically just cheating on homework, so we shouldn’t do that. We did some studies around code and showed that depending on how you use the model, we can see deskilling in terms of writing code. There are different ways to use the model, and some of them don’t cause deskilling, and some of them do. But definitely, if folks are not thoughtful in how they use things, then deskilling absolutely can happen.
NIKHIL KAMATH: Do you think humans will become stupider as a race in the next decade? Because if we are in a way exporting thinking and cognition to systems —
DARIO AMODEI: I think if we deploy AI in the wrong way, if we deploy it carelessly, then yes, people could become stupider. Even if an AI is always going to be better than you at something, you can still learn that thing. You can still enrich yourself intellectually. And so that’s a choice we have to make as individual companies, as individual people, and as a society overall.
Open Source vs. Closed Source, and the Value of AI IP
NIKHIL KAMATH: Dario, do you have a view on open source versus closed? I was looking at some companies like GLM5 or DeepSeek. If you spend all this money on IP creation and research, and these guys are able to reverse-prompt and engineer their way to close to Anthropic-level answers — I’m not saying 100%, but I was seeing the GLM5 numbers and they seemed quite good. Where does the IP value in the world of AI lie?
And if I were to be building an application, can I make the assumption — it’s a far-fetched extrapolation, but can I assume — that eventually the AI model layers will get so democratized that I should pick open source every time when I’m building an agent or an application layer? Because that helps me retain the revenue model I might be working with.
DARIO AMODEI: So there are a few things here. One is that a lot of these models, particularly the ones that come from China, are optimized for benchmarks and are distilled from the big US labs. There was a test recently where some of these models scored very highly on the usual SWE benchmarks, the usual software engineering benchmarks. But then when someone made a held-back benchmark — one that had not been publicly measured — the models did a lot worse on that. So I think those models are optimized for benchmarks much more than for real-world use.
But I think there’s a broader point than that. The economics of the models are very different from any previous technology. What we find is that there is a very strong preference for quality. It’s a bit like human employees — if I said to you, “You can hire the best programmer in the world or the 10,000th best programmer in the world,” they’re both very skilled, but I think anyone who’s hired a large number of people has this intuition that there’s a power law, long-tail distribution of ability. We find the same thing in the models. Within a range, price doesn’t matter that much. If a model is the best model, the most cognitively capable model, price doesn’t matter much. The format in which it’s presented doesn’t matter much. So I’m focused almost entirely on having the smartest model and the best model for the task. My view is that’s the only thing that matters long term.
Geopolitics, Data, and the Global AI Infrastructure
NIKHIL KAMATH: Geopolitics. If Anthropic were a restaurant, I would say the raw ingredients — the vegetables in this case — is data. Do you think long term — and this is also pertinent to me because we are investing in a data center business which is Indian in nature — do you think long term the world moves to a place where every country owns its data and you have to start paying more for the vegetables you used to get?
DARIO AMODEI: Yeah. I think there are a few things. I do think there will be demand to build data centers around the world, and we’re very supportive of that. Data is getting interesting because a lot of the data we use today is RL environments that we train on. For example, when you train on math or agentic coding environments, you’re not really getting data in the traditional sense — you’re getting some math problems and the model experiments with trying to solve them.
NIKHIL KAMATH: It’s more synthetic. You’re creating the data.
DARIO AMODEI: Yeah, you can think of it as synthetic data, or you can think of it as trial and error in an environment. So I think static data is becoming less important, and what we might call dynamic data — that the model creates itself for reinforcement learning — is becoming more important. So I don’t think data is quite the most central thing anymore, but it still matters.
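[Editor’s note: A toy illustration of this “dynamic data” idea, assuming nothing about Anthropic’s actual training setup — the functions below are hypothetical stand-ins. Instead of reading a static corpus, an environment generates problems, the model attempts them, and a checker scores the attempts, producing fresh training signal on the fly.]

```python
# Toy sketch of an RL-style "environment" for math problems: data is generated and
# graded on the fly rather than read from a static dataset. Purely illustrative.
import random


def make_problem() -> tuple[str, int]:
    """Generate a fresh arithmetic problem and its ground-truth answer."""
    a, b = random.randint(10, 99), random.randint(10, 99)
    return f"What is {a} * {b}?", a * b


def model_attempt(prompt: str) -> int:
    """Stand-in for the model's answer; a real setup would query the model here."""
    return random.randint(100, 9801)  # placeholder guess


def reward(answer: int, truth: int) -> float:
    """Binary reward: 1.0 if the attempt matches the ground truth, else 0.0."""
    return 1.0 if answer == truth else 0.0


# Each "episode" manufactures its own training signal -- no static dataset involved.
for step in range(3):
    prompt, truth = make_problem()
    answer = model_attempt(prompt)
    print(prompt, "-> attempt:", answer, "reward:", reward(answer, truth))
# A real RL setup would use these rewards to update the model's policy.
```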
A lot of the data is just available on the open web, although if you’re trying to get data optimized for certain languages, that can be important. And if data means the data given to you by customers — where you process data for some other company — then countries will, and in the case of Europe already have, passed laws saying that kind of personal proprietary data needs to stay within the boundaries of the country. That’s one reason to build and operate data centers around the world in different countries, and to keep the inference running locally in those countries.
Investment Picks and the Biotech Renaissance
NIKHIL KAMATH: I really pushed Elon on this particular question — he was skeptical of answering it. But I asked him to pick one stock he would put money in which is not his own, and he said Google. I’m going to ask you the same question, and I know you’re going to be skeptical as well. If Dario had a hundred dollars today and you had to make the binary decision of investing in a stock to win in capitalism, which stock would you pick?
DARIO AMODEI: Yeah, I had better not answer that question because I know so much about so many public companies. I think I’d better not answer that.
NIKHIL KAMATH: Maybe answer the question for an industry you’re not involved in — which I’m guessing today is seldom the case, because you’re involved in most industries.
DARIO AMODEI: Yeah, I mean, I’m positive on biotech. I think biotech is about to have a renaissance, ultimately driven by AI. I’m not going to name a particular company, nor will I say whether I think it’s better to bet on the big pharma companies or emerging smaller biotechs. But my instinct is we’re about to cure a lot of diseases.
NIKHIL KAMATH: Can you give me a subset of biotech that I should focus on?
DARIO AMODEI: Yeah, I think this idea of stuff that’s more programmable and adaptive — from the mRNA vaccines, although those are having trouble in the US for unfortunate reasons — to peptide-based therapies. If you have a small molecule drug, there are only so many degrees of freedom you have, and you kind of make one thing better while the other thing gets worse. Peptides have this almost digital property where you can say, “I’m going to substitute in this amino acid here and this amino acid there,” and so it allows for more continuous optimization. I think those kinds of areas I would be optimistic about.
Maybe also cell-based therapies — things like CAR-T therapy, where you genetically engineer cells. You basically take some cells out of your body, genetically engineer them to attack a particular cancer, and put them back in the body.
NIKHIL KAMATH: Do stem cell therapies work? I spent the whole of last week doing this — I was at a hospital for three hours a day getting nebulizer treatments and stem cells into my veins.
DARIO AMODEI: I am not up on the latest in stem cell therapies. You’d have to ask a currently practicing biologist.
NIKHIL KAMATH: But peptides, I think, will blow up, right?
DARIO AMODEI: I mean, again, the design space is very broad.
Learning to Use Claude Code
NIKHIL KAMATH: When I tried to use Claude Code for the first time, I did struggle to get it to work. For somebody who has no coding or programming knowledge, it’s not very easy — there’s a learning curve. I heard someone say it well: prompt engineering is like playing a piano. You can’t just sit down and start playing it.
To my audience, I think it becomes increasingly relevant to learn how to set context, how to prompt, how to use Claude Code better. For somebody like me who comes with zero knowledge, can you recommend how one does that?
Making AI Accessible & Predicting the Future
DARIO AMODEI: Yeah, I mean, first of all, I would say we’re trying increasingly to make that learning curve easier. One of the things that caused us to release Claude Cowork, which is basically Claude Code for non-coders, is that we were noticing a bunch of non-technical people who really wanted to use Claude Code and were struggling through the command line terminal to do that. Coders use the command line terminal all the time, but for non-coders, it just makes things unnecessarily complicated.
So Cowork was designed to be more user-friendly. It’s powered by the Claude Code engine on the back end, but the idea was to make it easier to use. We’re definitely trying to introduce interfaces that make it easier.
I would also say there are classes you can take that help you learn this. It’s a very empirical science — you mostly learn by doing. Anthropic has a part of the company that we call the Ministry of Education, and increasingly we’ll put out videos on how to run effective agents and how to prompt models. We’ve already done some of that, and we’re going to ramp it up, because we do want everyone to be able to learn this.
NIKHIL KAMATH: Any fleeting thought — last question — something you want to leave us with. What does Dario know that Nikhil and all of Nikhil’s people do not?
DARIO AMODEI: I don’t know that I know that many things, particularly now that the implications of the technology are kind of out there. Most aspects of my worldview can be derived from what’s publicly visible, from what we can see out in the world.
But the thing I would say — and it’s an experience I’ve had over and over again over the last 10 years — is there’s this temptation to believe, “Oh, that can’t happen. It would be too weird. It would be too big a change. I’m sure people are on that. It would be too crazy if that occurred. No one seems to think that’ll happen.” And over and over again, just extrapolating the simple curve or trying to reason out what will happen leads you to these counterintuitive conclusions that almost no one believes.
It’s almost like you can predict the future for free just by saying, “Well, it stands to reason.” You need some empirical knowledge, you need some intuition — you can’t reason from pure logic. I think that’s another type of mistake I see people make. But the right combination of a few empirical observations with thinking from first principles can allow you to predict the future in ways that are publicly available. Anyone should be able to do it. But it happens surprisingly rarely.
NIKHIL KAMATH: Thank you, Dario, for doing this, and hope to see you again soon.
DARIO AMODEI: Thank you.