
Why AI Is Incredibly Smart and Shockingly Stupid: Yejin Choi (Transcript)

Here is the full transcript and summary of Yejin Choi’s talk titled “Why AI Is Incredibly Smart and Shockingly Stupid” at TED conference.

In this TED talk, computer scientist Yejin Choi speaks about the incredible intelligence of large-scale language models, as well as their limitations, such as small mistakes and concerns about safety and sustainability. She argues that the development of common sense is vital in ensuring ethical decision-making and that blindly scaling up AI models and training them with raw web data is not effective due to misinformation and societal biases.


TRANSCRIPT:

So I’m excited to share a few spicy thoughts on artificial intelligence. But first, let’s get philosophical by starting with this quote by Voltaire, an 18th century Enlightenment philosopher, who said, “Common sense is not so common.”

Turns out this quote couldn’t be more relevant to artificial intelligence today. Despite that, AI is an undeniably powerful tool, beating the world-class “Go” champion, acing college admission tests and even passing the bar exam.

I’ve been a computer scientist for 20 years, and I work on artificial intelligence. I am here to demystify AI. So AI today is like a Goliath. It is literally very, very large. It is speculated that the recent ones are trained on tens of thousands of GPUs and a trillion words. Such extreme-scale AI models, often referred to as “large language models,” appear to demonstrate sparks of AGI, artificial general intelligence.

Except when it makes small, silly mistakes, which it often does. Many believe that whatever mistakes AI makes today can be easily fixed with brute force, bigger scale and more resources.

What possibly could go wrong? So there are three immediate challenges we face already at the societal level. First, extreme-scale AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power.

But what’s worse for AI safety, we are now at the mercy of those few tech companies because researchers in the larger community do not have the means to truly inspect and dissect these models. And let’s not forget their massive carbon footprint and the environmental impact. And then there are these additional intellectual questions.

Can AI, without robust common sense, be truly safe for humanity? And is brute-force scale really the only way and even the correct way to teach AI? So I’m often asked these days whether it’s even feasible to do any meaningful research without extreme-scale compute. And I work at a university and nonprofit research institute, so I cannot afford a massive GPU farm to create enormous language models.

Nevertheless, I believe that there’s so much we need to do and can do to make AI sustainable and humanistic.

Know Your Enemy

We need to make AI smaller, to democratize it. And we need to make AI safer by teaching human norms and values. Perhaps we can draw an analogy from “David and Goliath,” here, Goliath being the extreme-scale language models, and seek inspiration from an old-time classic, “The Art of War,” which tells us, in my interpretation, know your enemy, choose your battles, and innovate your weapons.

Let’s start with the first, know your enemy, which means we need to evaluate AI with scrutiny. AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know.

So suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good.

A different one. I have a 12-liter jug and a six-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense. Step one, fill the six-liter jug. Step two, pour the water from the six-liter to the 12-liter jug. Step three, fill the six-liter jug again. Step four, very carefully, pour the water from the six-liter to the 12-liter jug. And finally, you have six liters of water in the six-liter jug, which should be empty by now.
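To see why this recipe is nonsense, you can trace the pours by hand or, as a quick sketch (not part of the talk; the `pour` helper is hypothetical), simulate them:

```python
# Simulate GPT-4's proposed pour sequence for the 12-liter and six-liter jugs.

def pour(src, dst, dst_cap):
    """Pour from src jug into dst jug until src is empty or dst is full."""
    amount = min(src, dst_cap - dst)
    return src - amount, dst + amount

small, big = 0, 0                     # six-liter jug, 12-liter jug
small = 6                             # step 1: fill the six-liter jug
small, big = pour(small, big, 12)     # step 2: pour six -> twelve
small = 6                             # step 3: fill the six-liter jug again
small, big = pour(small, big, 12)     # step 4: pour six -> twelve again

print(small, big)  # prints "0 12"
```

All the water ends up in the 12-liter jug and the six-liter jug is empty, contradicting the claim that six liters remain in it; the only sensible move was to fill the six-liter jug once and stop.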

OK, one more. Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws, and broken glass? Yes, highly likely, GPT-4 says, presumably because it cannot correctly reason that if a bridge is suspended over the nails, screws, and broken glass, then the surface of the bridge doesn’t touch the sharp objects directly.


OK, so how would you feel about an AI lawyer that aced the bar exam yet randomly fails at such basic common sense? AI today is unbelievably intelligent and then shockingly stupid. It is an unavoidable side effect of teaching AI through brute-force scale.

Some scale optimists might say, “Don’t worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI.” But the real question is this. Why should we even do that? You are able to get the correct answers right away without having to train yourself with similar examples.

Choose Your Battles

Children do not even read a trillion words to acquire such a basic level of common sense. So this observation leads us to the next wisdom, choose your battles. So what fundamental questions should we ask right now and tackle today in order to overcome this status quo with extreme-scale AI?

I’ll say common sense is among the top priorities. So common sense has been a long-standing challenge in AI. To explain why, let me draw an analogy to dark matter. So only five percent of the universe is normal matter that you can see and interact with, and the remaining 95 percent is dark matter and dark energy.

Dark matter is completely invisible, but scientists speculate that it’s there because it influences the visible world, even including the trajectory of light.