Here is the transcript and summary of a Practical Wisdom Podcast conversation with Noam Chomsky titled ‘ChatGPT, Universal Grammar and the Human Mind: Unlocking Language and AI Mysteries’.
TRANSCRIPT:
Samuel Marusca: Hello, and welcome to the Practical Wisdom podcast with me, Samuel Marusca. The following is a conversation with Noam Chomsky. The focus of this episode is ChatGPT, Language and Mind. Noam is an MIT linguist and philosopher who has made significant contributions to the study of language, psychology, logic, philosophy, and political science.
Noam is perhaps best known for his theory of universal grammar, which revolutionized the study of language and provided new insights into the workings of the human mind. Noam is a prolific writer and has authored over a hundred books on topics ranging from language and cognition to politics. He is, without a doubt, one of the most brilliant minds of our time.
And now, dear friends, here’s my conversation with Noam Chomsky.
Thank you very much for joining, and welcome to Practical Wisdom. If I may just go ahead with the first question, I’d like to have a little bit of a discussion about ChatGPT, consciousness, and the language of thought.
So, you recently published an article in the New York Times entitled ‘The False Promise of ChatGPT’, in which you argue that machines like ChatGPT are far from achieving true intelligence, and you discuss the limitations of artificial intelligence, especially weak AI.
Limitations Of AI
Although I can see the potential and many applications of ChatGPT, I also notice that AI in general performs better at programming, writing code, or playing chess. A chess computer can beat a grandmaster, for example. So AI performs well in closed systems governed by strict rules. But ChatGPT, for instance, performs much worse when it comes to open systems and wider everyday situations, like simple reasoning, semantics, or history, where it produces inaccurate output. Why do you think this is the case, and what does this tell us about the limitations of AI?
Noam Chomsky: Defeating a grandmaster in chess is a triviality. That was a PR campaign, and it was perfectly obvious back in the 1950s. I remember discussing this in the early days of AI: if you bring in a dozen grandmasters, have them sit for ten days working out every possible move, every likely move, and you feed that into a huge memory, then you’ll be able to defeat a grandmaster who has 45 minutes to think about the next move. Very exciting. It was PR for IBM; it has no further significance.
You look at the current systems and they’re not doing the things you described. These are systems that are scanning astronomical amounts of data, with billions of parameters and supercomputers, and are able to put together, from the materials they’ve scanned, something that looks more or less like the kind of thing a person might produce. It’s essentially high-tech plagiarism.
I mean, maybe it has some use, maybe not. It’s teaching us nothing. It does just as well for impossible systems as for possible systems. It’s certainly going to cause plenty of harm, that’s clear. Whether it might have some utility, we don’t know. Maybe. I don’t think it’s been shown. But all the talk about sentience and intelligence and so on is beside the point.
Samuel Marusca: You mentioned that ChatGPT, for instance, makes use of a huge amount of data. And in that New York Times article you said of the human mind, and I quote, “The human mind is a surprisingly efficient and even elegant system that operates with small amounts of information.” I think this is related to what is known as the poverty of the stimulus.
So while the human mind can operate with small amounts of data, ChatGPT, as you say, uses a vast amount of data from the internet, produced by humans. How do you reconcile the idea of the poverty of the stimulus in human language acquisition with the fact that AI models require vast amounts of data to produce such output?
Noam Chomsky: It simply shows they’re irrelevant. A two- or three-year-old child has basically mastered the essentials of language with very sparse data. We want to understand language learning and cognition. We want to see how that works. The fact that some program that scans extraordinary amounts of data gets something superficially similar to what a two-year-old child does tells us basically nothing.
Samuel Marusca: So I guess you would probably agree with Alan Turing, who wrote in his famous 1950 paper about the question of whether machines can think. He wrote, and I quote, “This may be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” End quote.
ChatGPT – Form Of Thinking Or Consciousness?
So the creators of ChatGPT say that the system is trained to do some kind of reasoning, although that’s probably not the best term to use. And it does produce human-like responses, which is what most people can see. What is your view on Turing’s perspective and on ChatGPT’s capabilities as a form of thinking? Do you think this is just a problem of definition?
Noam Chomsky: It’s like asking whether submarines swim. You want to call that swimming? Okay, submarines swim. What the programs are doing is, again, scanning huge amounts of data and finding statistical regularities, enough so that you can make a fair guess as to what the next word will be in some sequence. Is that thinking? Do submarines swim? It’s the same question.
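To make the mechanism Chomsky describes concrete, here is a minimal sketch of next-word prediction from statistical regularities, using simple bigram counts in Python. This is only an illustration of the general idea; a system like ChatGPT uses a neural network with billions of parameters rather than raw counts, and the toy corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge amounts of text a real system scans.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram statistics: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Make a 'fair guess' at the next word: the most frequent follower."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("sat"))  # 'on'  (every "sat" in the corpus is followed by "on")
print(guess_next("the"))  # 'cat' (ties are broken by first occurrence)
```

Nothing in this procedure models meaning or understanding; it only reproduces regularities in the data it was fed, which is exactly the point of the submarine analogy.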
Samuel Marusca: One of the reasons for this, I guess, is of course that these systems don’t have consciousness. Now, there are several theories of consciousness. There’s Bertrand Russell’s theory of consciousness and matter. There’s also John Searle’s famous Chinese room argument, which challenges the idea that machines can truly understand language or have genuine consciousness. What is your opinion on Searle’s argument and the idea of weak AI? Do you think machines can ever truly understand language or have genuine consciousness? Or is there something fundamentally different about human cognition that can never be replicated by machines?