
Transcript: Noam Chomsky on ChatGPT, AI, Universal Grammar, Language and Mind

Here is the transcript and summary of a Practical Wisdom Podcast conversation with Noam Chomsky titled ‘ChatGPT, Universal Grammar and the Human Mind: Unlocking Language and AI Mysteries’.


Samuel Marusca: Hello, and welcome to the Practical Wisdom podcast with me, Samuel Marusca. The following is a conversation with Noam Chomsky. The focus of this episode is ChatGPT, Language and Mind. Noam is an MIT linguist and philosopher who has made significant contributions to the study of language, psychology, logic, philosophy, and political science.

Noam is perhaps best known for his theory of universal grammar, which revolutionized the study of language and provided new insights into the workings of the human mind. Noam is a prolific writer and has authored over a hundred books on topics ranging from language and cognition to politics. He is, without a doubt, one of the most brilliant minds of our time.

And now, dear friends, here’s my conversation with Noam Chomsky.

Thank you very much for joining, and welcome to Practical Wisdom. If I may just go ahead with the first question, I’d like to have a little bit of a discussion about ChatGPT, consciousness, and the language of thought.

So, you recently published an article in the New York Times entitled “The False Promise of ChatGPT,” in which you argue that machines like ChatGPT are far from achieving true intelligence, and you discuss the limitations of AI, artificial intelligence, especially weak AI.

Limitations Of AI

Although I can see the potential and many applications of ChatGPT, I also notice that AI in general performs better at programming or writing code, or at playing chess, for example. A chess computer can beat a grandmaster. So AI performs well in closed systems governed by strict rules. But ChatGPT, for instance, performs much worse in open-ended, everyday situations, like simple reasoning, or semantics, or history, where it produces inaccurate output. Why do you think this is the case, and what does this tell us about the limitations of AI?
