
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Transcript)

Read the full transcript of Professor of Computer Science at Princeton University Arvind Narayanan’s talk titled “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” on April 17, 2025.

The presentation was followed by a discussion with Daron Acemoglu, MIT Institute Professor and Co-Director of the Shaping the Future of Work Initiative, along with audience Q&A.

Opening Remarks

ASU OZDAGLAR: Maybe we should get started, right? Hi, everyone. It’s a pleasure to welcome you all to tonight’s talk with Professor Arvind Narayanan. The Schwarzman College of Computing is honored to co-host this event with MIT’s Shaping the Future of Work initiative. We’re excited to have this unique convergence of minds and missions at the intersection of technology, society, and the future of work.

We’re honored to be joined by Professor Arvind Narayanan from Princeton, co-author of the book AI Snake Oil. At such a critical time, when there’s so much debate and discussion around the promise and peril of AI, with many people focusing on existential risk, Arvind and Sayash’s book brings a breath of fresh air and provides a balanced perspective on how we can navigate the hype and reality of AI. I personally recommend this book to everyone.

In the book, Arvind draws a very effective parallel with snake oil, whose sellers promise miracle cures under false pretenses: sometimes ineffective but harmless, but in other cases causing harms extending to loss of health or life. AI can be very similar. AI snake oil is AI that does not and cannot work. And the goal of the book is to identify AI snake oil and to distinguish it from places where AI can work very effectively, especially in high-stakes settings such as hiring, health care, and justice.

I’m thrilled to represent the Schwarzman College of Computing as the deputy dean of academics. And our dean, Dan Huttenlocher, is also here with us tonight. And it’s truly a pleasure to be here with the dynamic leaders of the Shaping the Future of Work initiative, Daron Acemoglu and David Autor. Simon is not here.

The Shaping the Future of Work initiative brings an evidence-based lens to the economic and policy impacts of automation. And the Schwarzman College is reimagining how we do research and teach computing, with social implications at our core. What unites these efforts, and why we’re so excited to have Arvind here tonight, is a shared commitment to clarity, rigor, and technical expertise in how AI technology is developed and deployed.

Tonight’s presentation and conversation promise to enlighten us and make us think about these important issues. And with that, please join me in welcoming Professor Daron Acemoglu from the Department of Economics, Institute Professor and Faculty Co-Director of the Shaping the Future of Work Initiative.

DARON ACEMOGLU: Thank you very much. Thank you, Asu, and thank you to everybody for being here. This is a great event, and I’m delighted that people have recognized it as a great event and are here.

I want to say just two more words about the initiative for shaping the future of work, which is co-led by myself, David Autor, and Simon Johnson, who unfortunately couldn’t be here. And part of the reason why I want to say that is because I want to emphasize how synergistic Arvind’s agenda is with what we want to do. We’ve launched this initiative because we’re worried about the future of work, the future of inequality, and the future of productivity in the age of digital technologies and AI.

And part of the reason we are concerned is precisely about how AI and other technologies are going to be used. And the perspective, as the word shaping suggests, is one in which we argue that the future of these technologies is not given, is not preordained, but different technologies have different consequences and we want to understand those consequences and we want to steer technology by a variety of channels, mostly coming from the academic research we’re doing and our collaborators are doing and our affiliates are doing, towards the more socially beneficial directions.

And I think I cannot imagine somebody better than Arvind to give much greater depth and breadth to this. Arvind is a professor of computer science at Princeton and the director of the Center for Information Technology Policy, and he brings, even apart from the book, a unique perspective: great technical expertise combined with a very clear-eyed and deep understanding of many applications of AI.

And that is exactly the space where we need to be: not excessive optimism, not excessive pessimism, but understanding what are the things that AI can do productively, what are the things it cannot do at the moment, perhaps never, and what are the things that it can do but are not going to be great. So Arvind’s book, AI Snake Oil, which you’re going to hear about, is full of amazing insights ranging from predictive AI to generative AI, large language models, social media, and machine learning and the mistakes you can make with machine learning. I think we’re going to get a glimpse of many of these excellent points and hopefully a lot of food for thought for everybody.

Arvind is going to speak for twenty, twenty-five minutes, and then we’re going to have a little bit of a conversation for fifteen minutes or so, and then we’re going to open it up for Q and A. So please give a warm welcome to Arvind, and we’re really delighted to have him here.

Presentation

ARVIND NARAYANAN: Hello, everybody. Thank you, Daron and Asu, for such kind words. It’s really my pleasure to be here today. And I really mean it because the origin story of this book is actually right here at MIT. So let me tell you how that happened.

This was way back in 2019 when I kept seeing hiring automation software. And the pitch of these AI companies to HR departments was, look, you’re getting hundreds of applications, maybe a thousand for each open position. You can’t possibly manually review all of them.