Here is the full transcript of Graham Morehead’s talk titled “Why Can’t AI ‘Think’ Like Us?” at TEDxSpokane conference.
TRANSCRIPT:
The Mysterious Shed in Rural Japan
You find yourself somewhere in rural Japan, walking along. You find this little shed, some little structure. A woman approaches. In her hands, she has something written on paper. It’s Japanese characters. She slides it into an input slot, goes around the other side, waits a couple of minutes, and another piece of paper emerges. She looks at it, smiles, and walks away.
Then a man approaches, and he does the same. He's got something written in Japanese on a piece of paper, slides it in, walks around the other side of the shed, waits a minute, and receives something. You run up to him and ask, "What are you doing? What is this?" He says, "Well, it's the question box. Anything I want to know, I can just write it down, put it in the input slot, and the answer comes out the output slot."
You see a hundred people wait in line, and all during the day, they keep asking questions, and answers keep coming out. At the end of the day, the door is open. Inside is some American dude who doesn't speak a lick of Japanese. "How are you doing this?" you ask him. He says, "Come here." And he folds out this massive book, and this book has huge pages with two columns.
He says, “Whenever I get a question, I look for it in the left column of this page. When I find it, I just draw whatever’s to the right. I didn’t even know I was answering questions until you told me.” Those people felt understood.
Were they? If you put a fluent Japanese speaker in that shed, he would understand them, but the answers would be the same. Is there any difference at all? What does it mean to be understood?
Understanding and Connection
There could be a way to distinguish between the two cases, when you have the guy who doesn’t speak Japanese and the guy who does. You have them put up all the pieces of paper on a board and connect them with red yarn, like a conspiracy board. You may have seen those in the movies, like “A Beautiful Mind.” How would the non-Japanese speaker connect the dots?
How would the Japanese speaker do it? Certainly differently, right? However they each did it, the two boards would be different. This is a little like the difference between the connections made in your mind and in an AI. Now, I hear a lot of you are afraid of AI. You shouldn't be, but we need to understand it a little bit.
Now, you might be afraid it’s going to take your job. That’s one of the questions I get. And maybe every company in the future will have two employees, a man and a dog. Because you can do so much with AI, why have people do all the jobs, right? And the man’s job would be to feed the dog. The dog’s job is to keep the man away from the computer.
AI Misunderstandings
GPT is not ready to take your job. I asked GPT, "Please tell me about the early life and great works of the 17th-century philosopher, McGruff the Crime Dog." It very happily gave me a confident answer about his early life, his influences. He had a philosophy of positivity toward humanity. He was quite controversial at the time, but his parents supported him and he did well. Good for him.
What we connect in our mind, the shape of those connections, is everything here. How do humans connect thoughts? “Go home. Blue sky.” We know those thoughts are connected because they’re next to each other. Adjacency communicates something.
But there are limits to this. "Jack gave Jill a book." Well, gave is connected to three things: Jack, the giver; Jill, the recipient; and the book, the gift. How can gave be next to three words at once? You can't do that, because every word comes out of your mouth one at a time. Somehow, in your brain, you turn that sequence into a tree. A tree like the one you see there.
And the links have different colors because there’s a giver, there’s a recipient, and there’s a gift. And they’re all treated differently by your brain. In English, we use word order. In some languages, they use suffixes. There’s other ways. But every language creates that tree inside your head. Language is tree-shaped. AI thought is not tree-shaped.
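The labeled tree the talk describes can be sketched in a few lines of code. This is my own illustration, not the speaker's slide: "gave" sits at the root, linked at once to three words by edges of different "colors" (giver, recipient, gift) — something a flat left-to-right sequence of words can never express directly.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    word: str
    children: dict = field(default_factory=dict)  # edge label -> child Node

# The tree behind "Jack gave Jill a book": one root, three labeled links.
tree = Node("gave", {
    "giver":     Node("Jack"),
    "recipient": Node("Jill"),
    "gift":      Node("book"),
})

for role, child in tree.children.items():
    print(f"gave --{role}--> {child.word}")
```

Different languages build the same labeled links differently (word order in English, suffixes elsewhere), but the structure that lands in your head is this tree.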
Let's look at the structure of AI. LLMs, like GPT, and what everyone's excited about these days, start off with something called self-attention. Now I'm going to get a little hand-wavy. If you're a professional in this field, I'm skipping some details. You can thank me later. Attention is like a number between zero and one. If you put all your attention on one thing, that's like a one. If you're totally ignoring something, it's like a zero.
Well, we didn’t have time to teach the computer those trees I showed you. So we let it guess. And it looks like that. Some connections are dark, which means they’re closer to one. And some are light, so they’re closer to zero. And then you get a matrix. It’s just the words all connected to each other. That’s all it is.
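That word-to-word matrix can be sketched in a toy form. This is my own illustration, not the talk's: every word attends to every word with a weight between zero and one, and a softmax makes each row sum to one. The scores here are random stand-ins; a real model computes them from learned query and key vectors.

```python
import math
import random

random.seed(0)
words = ["Jack", "gave", "Jill", "a", "book"]
n = len(words)

# Stand-in raw scores; a trained model would learn these, not guess them.
scores = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def softmax(row):
    """Turn arbitrary scores into weights between 0 and 1 that sum to 1."""
    exps = [math.exp(x) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

# The attention matrix: words all connected to each other, fuzzily.
attention = [softmax(row) for row in scores]

for w, row in zip(words, attention):
    print(f"{w:>5}:", [round(a, 2) for a in row])
```

Notice there is no giver, recipient, or gift label anywhere in that matrix — just soft weights. That is what the talk means by a fuzzy amalgamation of trees rather than a tree.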
But it's just numbers. It's a fuzzy version of a tree. It's no longer a tree. It's a fuzzy amalgamation of trees. Well, they tried this on some text. It didn't quite work, but they liked the idea. So let's try two. It didn't quite work, but they liked the idea. So let's try four. Didn't work, but they liked the idea. So they said, "Let's try eight." Didn't work yet, but they liked the idea so much, they said, "Let's stack it together with another neural network." We'll call it an encoder.
And then we'll make a stack of these encoders and a stack of decoders, all of them filled with matrices. We're so far removed from human thought now that it's not even funny. It's matrices all the way down. I call it a meat grinder for matrices. When my daughter was three years old, I could have a reasonable conversation with her. But GPT, the one you use, has been trained on over a million years of English at a natural exposure of 20,000 words a day.
Over a million years! It's not learning the way we learn. It's not like us. There's no simple tree structure inside there. So I'm not afraid of AI. As an AI researcher of more than two decades, I don't fear AI. Instead, I think about what business I'm in. I'm in the biomimicry business, trying to reverse engineer a human brain.
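The "over a million years" figure can be sanity-checked with quick arithmetic. The corpus size below is my own assumption (roughly ten trillion words, which is the right order of magnitude for current GPT-class training sets), paired with the talk's 20,000-words-a-day exposure rate:

```python
# How many years would a human need to hear a GPT-scale training corpus?
# Assumptions (mine, not the talk's): ~10 trillion words of training text.
words_in_training_corpus = 10_000_000_000_000  # ~10 trillion, order of magnitude
words_per_day = 20_000                         # natural daily exposure (from the talk)
days_per_year = 365

years_of_exposure = words_in_training_corpus / (words_per_day * days_per_year)
print(f"{years_of_exposure:,.0f} years")  # comfortably over a million
```

A three-year-old, by contrast, has heard on the order of twenty million words — which is the talk's point about how differently the two systems learn.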
No matter what task you're doing, the principles of reverse engineering apply the same, whether you're teaching a computer to talk, listen, or walk. When you have to teach a computer to do something, you start from scratch. You feel like you've been thrust into a dark forest of unknowns, and you have to sit there and figure out: how do I even get started?
It's only then that you realize just how many untold lessons Mother Nature must have learned over the billions of years that life has been here on Earth. No, I don't fear that artificial brain. You shouldn't either. Instead of fearing that artificial brain, you should stand in awe of the one you've been given. Thank you.