
Here is the transcript and summary of a Practical Wisdom Podcast conversation with Noam Chomsky titled ‘ChatGPT, Universal Grammar and the Human Mind: Unlocking Language and AI Mysteries’.
TRANSCRIPT:
Samuel Marusca: Hello, and welcome to the Practical Wisdom podcast with me, Samuel Marusca. The following is a conversation with Noam Chomsky. The focus of this episode is ChatGPT, Language and Mind. Noam is an MIT linguist and philosopher who has made significant contributions to the study of language, psychology, logic, philosophy, and political science.
Noam is perhaps best known for his theory of universal grammar, which revolutionized the study of language and provided new insights into the workings of the human mind. Noam is a prolific writer and has authored over a hundred books on topics ranging from language and cognition to politics. He is, without a doubt, one of the most brilliant minds of our time.
And now, dear friends, here’s my conversation with Noam Chomsky.
Thank you very much for joining, and welcome to Practical Wisdom. If I may just go ahead with the first question, I’d like to have a little bit of a discussion about ChatGPT, consciousness, and the language of thought.
So, you recently published an article in the New York Times entitled The False Promise of ChatGPT, in which you argue that machines like ChatGPT are far from achieving true intelligence, and you discuss the limitations of AI, artificial intelligence, especially weak AI.
Limitations Of AI
Although I can see the potential and many applications of ChatGPT, I also notice that AI in general performs better at programming, writing code, or playing chess, for example. A chess computer can beat a grandmaster. So AI performs well in closed systems governed by strict rules. But ChatGPT, for instance, performs much worse in open-ended, everyday situations, like simple reasoning, semantics, or history, where it produces inaccurate output.
Noam Chomsky: Defeating a grandmaster in chess is a triviality. That was a PR campaign, perfectly obvious back in the 1950s. I remember discussing this in the early days of AI: if you bring in a dozen grandmasters, have them sit for 10 days working out every possible program, every likely move, and you feed it into a huge memory, then you’ll be able to defeat a grandmaster who has 45 minutes to think about the next move. Very exciting PR for IBM, but it has no further significance.
You look at the current systems, they’re not doing the things you described. These are systems that are scanning astronomical amounts of data with billions of parameters and supercomputers, and are able to put together from the materials they’ve scanned, something that looks more or less like the kind of thing that a person might produce. It’s essentially high-tech plagiarism.
I mean, maybe it has some use, maybe not. It’s teaching us nothing. It does just as well for impossible systems as for possible systems. It’s certainly going to cause plenty of harm, that’s clear. Whether it might have some utility, we don’t know. Maybe. I don’t think it’s been shown. But all the talk about sentience and intelligence and so on is beside the point.
Samuel Marusca: You mentioned that ChatGPT, for instance, makes use of a huge amount of data. And in your New York Times article, you said that the human mind, and I quote, “The human mind is a surprisingly efficient and even elegant system that operates with small amounts of information.” So I think this is also known as the poverty of the stimulus.
So while the human mind can operate with small amounts of data, ChatGPT, as you say, uses a vast amount of data from the internet produced by humans. How do you reconcile the idea of the poverty of the stimulus in human language acquisition with the fact that AI models require vast amounts of data to produce these articles?
Noam Chomsky: It simply shows they’re irrelevant. A two- or three-year-old child has basically mastered the essentials of language with very sparse data. We want to understand language learning, cognition. We want to see how that works. The fact that some program that scans extraordinary amounts of data gets something superficially similar to what a two-year-old child does basically tells us nothing.
Samuel Marusca: So I guess you would probably agree with Alan Turing, who wrote in his famous 1950 paper about the question of whether machines can think. He wrote, and I quote, “This may be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words in general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” And I end the quote here.
ChatGPT – Form Of Thinking Or Consciousness?
So the founders of ChatGPT say that this system is trained, the AI is trained to do some kind of reasoning, although it’s probably not the best term to use. And it does produce human-like responses, which is what most people can see. What is your view on Turing’s perspective and ChatGPT’s capabilities as a form of thinking? Do you think this is just a problem of definition?
Noam Chomsky: It’s like asking whether submarines swim. You want to call that swimming? Okay, submarines swim. What the programs are doing is, again, scanning huge amounts of data, finding statistical regularities, enough so that you can make a fair guess as to what the next word will be in some sequence. Is that thinking? Do submarines swim? It’s the same question.
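The statistical guessing described here can be sketched as a toy bigram model. This is purely illustrative; systems like ChatGPT use vastly larger neural models and data, but the underlying idea of predicting the next word from observed regularities is the same:

```python
from collections import Counter, defaultdict

# Toy illustration: gather statistics from text, then guess the most
# likely next word given the previous one (a bigram model).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", since "cat" follows "the" most often here
```

No understanding is involved anywhere: the model only tabulates and replays frequencies, which is Chomsky’s point about simulation versus explanation.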
Samuel Marusca: One reason for this, I guess, is of course that these systems don’t have consciousness. Now, there are several theories of consciousness. There’s Bertrand Russell’s theory of consciousness and matter. There’s also John Searle’s famous Chinese room argument, which challenges the idea that machines can truly understand language or have genuine consciousness. What is your opinion on Searle’s argument and the idea of weak AI? Do you think machines can ever truly understand language or have genuine consciousness? Or is there something fundamentally different about human cognition that can never be replicated by machines?
Noam Chomsky: First of all, let’s disentangle the terminology. Machines don’t do anything. I have a computer in front of me. It’s basically a paperweight. It doesn’t do anything. What the computer in front of me is capable of doing is implementing a program. That’s it.
What’s a program? Well, a program is a theory written in a notation that machines can implement. It’s a strange kind of theory, the kind you don’t find in the sciences. For a program to function, every question has to be answered. You can’t have unanswered questions. It’s not like the sciences, where there are many unanswered questions, even in physics.
You could arbitrarily give an answer where you don’t know it. Okay, that would be like a program. So the question is whether this strange kind of theory can be a theory of intelligence, of consciousness, and so on. Why not? We could have theories of consciousness. These approaches aren’t getting anywhere near it, but it’s possible, certainly imaginable, that there’ll be a scientific theory of human intelligence.
In fact, we know quite a bit about that already. Lots of unanswered questions, but progress. Maybe you can say something about consciousness. If you can, you could program it. If you answer the unanswered questions, you could run it on a computer. There’s nothing magical about this.
Samuel Marusca: So do we actually know what causes consciousness? Is it the neuron firings in the brain? What exactly causes consciousness?
Noam Chomsky: We can talk about what causes consciousness. You can find out what kinds of neural structures are involved in conscious experience. It’s a scientific problem, not very easy. One reason it’s a hard problem is that, first of all, the brain is an extremely complex object. Very little is understood about it.
The basic models that are used, the neural net models, are probably the wrong models. There are principled reasons to believe that. But another problem is that you can’t do experiments with humans, for ethical reasons. You can’t raise human children in artificial environments. You can’t put electrodes into single cells in the cortex to figure out what’s going on. We know a lot about human vision, but that’s because of invasive experiments with other animals, which have about the same visual systems as humans do.
You can’t do that with language and consciousness, because there aren’t any other organisms. So it’s a very hard problem, and the problem is not advanced in the least by complex simulations. That tells us nothing.
As far as John Searle’s argument is concerned, I wasn’t particularly impressed by it. It’s about the way we use the word think: rooms don’t think. We’re back to submarines swimming. It’s a point that Wittgenstein made. He said, people think, and maybe dolls and spirits. That’s his aphoristic way of saying that think is a word we use for what people do. It has some open texture. We may apply it to things that are sort of like people, but that’s essentially terminology. Nothing more is involved.
Wanna Contractions
Samuel Marusca: We know that a very small part of language is external, and you claim that probably 99% of language is internal to the mind. You wrote extensively about I-language, the idea that language is internal, individual, and intensional, spelled with an s. Most people think that we have introspective access into our mind and consciousness by means of our language. Is this really the case? How accessible is language to consciousness? Can you explain how wanna contractions work, and what happens in our mind when we make these contractions?
Noam Chomsky: Virtually none of what’s going on in our use of language internally is available for introspection. You have to study it the way you study other systems of the body. For example, we have a second nervous system called the enteric nervous system, the gut brain, sometimes called the second brain: a huge nervous system, billions of neurons, with many of the same properties as the nervous system that’s up here. It’s the system that keeps our body functioning. You can’t introspect into it.
The only time we know anything about it is if you have a stomachache, something wrong with it. Well, take a look at the systems of language, thought, reasoning, reflection, and so on. From introspection, we have no idea what’s going on. We know almost nothing. There is something called inner speech, talking to yourself. That’s actually external speech. It has the properties of externalized language. You just aren’t using the articulatory apparatus, but it’s not what’s going on internally.
We have good evidence of the normal scientific kind of what’s happening internally, but you can’t introspect into it any more than you can introspect into how your enteric nervous system is functioning. That shouldn’t surprise us very much. What reaches consciousness, awareness, is fragments, little fragments of whatever is going on in our minds, but the actual processes are beyond awareness. You’re going to have to study them the way you study any other topic in science.
You can’t introspect into how your visual system is converting saccadic eye motions, which give successive dots on the retina. You can’t introspect into how that’s turning into my seeing a person. You have to study that from the outside, from what philosophers sometimes call the third person point of view. Same with language. It’s not going to be any different. You should have no illusions about that. Language and thought have to be studied like other topics in science. You’re not going to get very far by introspection, mostly misleading.
Samuel Marusca: So, what about wanna contraction? There are some rules based on which we make these contractions. So, sometimes we use wanna and sometimes we use want to. What is the basis for that and what happens in our mind when we make these contractions?
Noam Chomsky: You mean you’re thinking of things like wanna contraction?
Samuel Marusca: Yes.
Noam Chomsky: So, “what do you wanna read?” is fine, but not “who do you wanna take the train tomorrow?”, meaning “who do you want to take the train?”. Well, there’s theory there. You can’t introspect into it. We can get the data by introspection, yes. But then to explain the data, you have to have a theoretical analysis. There are several theories about what’s in fact going on in the mind when you make these distinctions, and they have to be evaluated like other scientific theories.
Samuel Marusca: So, when we make a decision, is that based on a mix of conscious and unconscious mental acts?
Noam Chomsky: Certain parts, probably superficial, of decision-making are conscious, but an awful lot is going on that’s simply inaccessible to our consciousness. Consciousness, remember, gives a superficial picture of complicated things that are going on in the mind.
I mean, just producing this sentence that I’m now producing requires extensive computation, which is extremely rapid; in fact, if you think about it, more rapid than neurons can even transmit. But all we can do is say, well, here’s the output. I have some data about it. Now I can treat it like any other data. And data in itself doesn’t tell you anything.
I mean, take familiar cases. Take, say, the moon illusion. You look at the moon at the horizon, it’s much bigger than when it’s high in the sky. Nobody understands that. There’s no successful theory about it. Nevertheless, every scientist assumes that the moon hasn’t changed size, even though you don’t have an explanation. So you dismiss the data because you don’t understand it, and you try to find a theory.
The point is that data does not wear its explanation on its sleeve. It doesn’t tell you what it means. Data is not evidence. Evidence is a relational concept: evidence for something. Data is just data. You don’t know what it is until you have some theoretical framework in which you can interpret it.
And the same is true of all of these questions. So, just looking at the data, if you look at astronomical amounts of data, like current AI, then you can simulate things. But simulation is not explanation.
The Sally–Anne Experiment
Samuel Marusca: The Sally–Anne experiment, as we know, shows that children under the age of four cannot understand false belief. I’d like to talk about that for a brief moment. Children at this age don’t recognize that other people exist as independent thinking beings. They cannot read their minds. Children can’t entertain the idea that someone else could have a representation of the world which deviates from reality. So, what happens between the ages of three and four?
We know even as adults it’s difficult to hold or represent two distinct points of view in our mind. So, in your opinion, is the theory of mind autonomous, or dependent on the language faculty?
Noam Chomsky: Nobody has any idea. I mean, there’s millions of questions you can ask where the answer is not given, not available. So, why do we see the moon as larger on the horizon? Well, actually, there’s no good explanation for that. Where is 90% of what constitutes the universe? Where is it? Physicists can’t find it. They know from theoretical reasons that mass energy in the universe has got to be there. Otherwise, the theories don’t work. They can’t find it.
Well, physics doesn’t go out of business for that reason because they can’t find 95% of what they know is there. What’s a particle? You ask a dozen quantum physicists, what’s a particle? They’ll say, well, we’re not really sure. You know, it could be this, could be that. There are lots of unanswered questions in the sciences. It’s a strange belief among humans that we should have answers in the domain of mental life of the kind that we don’t even find in the most advanced sciences. They’re hard questions.
The Nature Of Human Thought And Language
And in the case of human mental life, it’s multiply hard, because you cannot do the experiments. You can think of lots of experiments that could give you answers to these questions, but you can’t do them. And since humans are alone, with no comparable organisms, you can’t do the experiments on other organisms and draw conclusions, as you can with the visual system.
Well, that’s the situation we’re in when we try to investigate the nature of human thought, human reflection, human language. There is a tradition that goes back to classical Greece and classical India, and included the main figures in the early scientific revolution, Galileo, Descartes, others, a tradition which holds that language and thought are closely interrelated, intimately interrelated, maybe the same thing. Language generates thought; thought is what is generated by language.
Well, if so, when you study language, you’re studying our most fundamental properties. And there are many things about it that we know nothing about, like how do I decide to produce this sentence instead of talking about the weather? Well, we can say something about it, but it’s not an explanation, basically. The right response is nobody knows. It’s a question that we have no idea about, like many other questions.
Samuel Marusca: You mentioned Galileo and Descartes and the fact that previously people thought that language and thought are the same thing. Many people even today say…
Noam Chomsky: Galileo and Descartes thought that as well, and for centuries afterwards.
Samuel Marusca: Many people today say that if you want to be fluent in a language, you need to think in that language. And we know Jerry Fodor wrote extensively about the language of thought hypothesis. So, do we think in the language we speak, or do we think in a different language of thought, as Fodor claims? If we think in a distinct language from our natural language, what is the relation between the natural language and the language of thought?
Noam Chomsky: What is the language of thought other than natural language? Jerry Fodor, who was a close friend, did very important work. What did he say about the language of thought? Well, it’s basically English. I mean, whatever language you speak yields linguistic expressions, which are the formulation of thoughts. It’s very possible that all languages are identical or virtually identical in these systems. I don’t know for certain, but that’s the way research is tending.
If that’s the case, then what the internal language yields is a language of thought. Is there another language of thought? Well, you need some argument for that. Why isn’t this the language of thought?
Theory of Universal Grammar
Samuel Marusca: Universal grammar is a famous theory you proposed, which holds that humans are born with an innate ability to learn language. So, what you propose is that there are specific grammatical rules that are hardwired into our brains, which allow us to understand and produce language. These rules are believed to be universal across languages.
Eric Weinstein made an analogy and claimed that the brain has a Chomskyan pre-grammar or universal grammar of religion, meaning that there is an innate longing for something sacred in all of us. Do you believe that there are universal cognitive structures that underlie both language acquisition and religious beliefs? What’s your take on this?
Noam Chomsky: No point having an opinion about it, since nothing is known about the general structure of religious belief if there is such a thing. If somebody can come along with an account, an explanatory account of the nature of the fundamental properties that enter into religious belief in all humans, then we’ll be able to talk about it. Until that time, you can’t.
Incidentally, the idea that there’s a universal grammar common to humans that leads to the capacity to acquire language, that’s not my belief. It’s your belief. It’s the belief of everybody who thinks about it. If you didn’t have some kind of innate structure, an infant would just hear a lot of noise, the way a monkey or a chimpanzee does. You put a monkey, a chimpanzee, and an infant in exactly the same environment: the infant instantly, at birth, probably even before birth, is picking language-related elements out of the noise and pursuing a determined course of development and growth, which yields basically full knowledge of the essentials of language by three or four. A chimpanzee is just hearing noise.
Well, either that’s magic, or there’s some innate capacity in the human infant. Since we don’t believe in magic, we assume there’s an innate capacity in the human being. There’s a name for the theory of that, whatever it is. We don’t know. We try to learn what it is, but the theory of it is called universal grammar.
There is good empirical evidence that whatever this is, it is shared among humans. If you take an infant from a Papua New Guinea tribe that hasn’t had outside contact for 20,000 years and raise it in Cambridge, Massachusetts, it’ll go to MIT and become a quantum physicist.
And conversely, we don’t know any distinction in this respect. So, you don’t know everything, but there’s good reason to believe that it’s a common human capacity. And we then try to investigate to find out what its properties are. There’s been a fair amount of progress in that, plenty unknown.
Samuel Marusca: What about music and arithmetic? Can music or arithmetic be considered as components of language? Do you think there are any striking similarities between the structure of language and the structure of music and arithmetic?
Noam Chomsky: Well, on that, there is quite interesting work. With regard to arithmetic, we now have some plausible answers, not established, but plausible, to questions that greatly troubled Charles Darwin and Alfred Russel Wallace, the two founders of the theory of evolution. They were very much concerned with what they regarded as a serious paradox. They assumed, they didn’t have the evidence, but they assumed, apparently correctly, that all humans have arithmetical capacity. Maybe the capacity has to be brought out by triggering stimulation, but that’s normal for instinctive behavior.
But all humans basically know that the numbers go, the natural numbers go on forever, addition works this way, and so on. Well, they were troubled by that, because it obviously couldn’t have been developed by natural selection, since the capacity was never even used until very recently in human evolution, and then by small numbers of people.
So, Darwin and Wallace disagreed. Wallace thought there must be some other factor in evolution, and Darwin disagreed. He thought there must be some way to do it, but they left with a paradox. Well, we now have a possible answer. If you look at current contemporary theories about the nature of universal grammar, it turns out if you take those assumptions and you simplify them to the limit, you imagine a language that has only one element in it, one word, if you like, and uses the simplest possible forms, you get something like arithmetic.
Could be the answer. Could be the reason why it’s there. Might be either an offshoot of language, or the same evolutionary step that yielded language, yielded this general property.
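The idea that a one-element language under the simplest combinatorial operation yields something like arithmetic can be illustrated with a small sketch. The encoding below (Merge modeled as pairing, a single arbitrary “lexical item”) is hypothetical and for intuition only, not a claim about the actual formalism:

```python
# Illustrative sketch: with a single "lexical item" and the simplest
# combinatorial operation (Merge, modeled here as pairing), iterated
# application generates structures that behave like the natural numbers
# with a successor function.
ZERO = ()  # the single element; its representation is arbitrary

def merge(x, y):
    """Combine two objects into one (an ordered pair, for simplicity)."""
    return (x, y)

def successor(n):
    """One more application of merge: the analogue of n + 1."""
    return merge(ZERO, n)

def value(n):
    """Read a number back off as the nesting depth of the structure."""
    count = 0
    while n != ZERO:
        n = n[1]
        count += 1
    return count

three = successor(successor(successor(ZERO)))
print(value(three))  # 3
```

Nothing beyond repeated combination is needed: counting falls out of the structure itself, which is the shape of the proposed answer to Darwin and Wallace’s paradox.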
With regard to music, the question was really opened for discussion about 50 years ago by Leonard Bernstein, in his Charles Eliot Norton lectures at Harvard on language and music, which raised some interesting questions about commonality of structure. Since then, there’s been quite a bit of research, and interesting ideas about common properties of certain musical genres, particularly tonal music in the Western classical tradition, and linguistic structure. Could be that they, again, kind of like arithmetic, have the same roots. There’s interesting work on this.
There’s even John Mikhail, a philosopher who now teaches at Georgetown Law School, who did a very interesting thesis on these topics about 30 years ago, I guess, where he also discussed how systems of morality might have common features, which, again, relate to the structures that we discover in the nature of language and so on, its basic generative structure. He also initiated empirical work on this that’s been extended significantly since, by Marc Hauser particularly, so that may be another domain which pulls together. These are all important and interesting research topics.
Samuel Marusca: You mentioned Leonard Bernstein’s lectures on music and the phonology of music and language. Does this support the idea of language universals, the idea that we all speak the same language, basically?
Noam Chomsky: Well, it raises questions about whether there is something very fundamental in human cognition which yields all of these consequences. Language, as I said before, may turn out to be internal. The internal language, the one that’s functioning inside, the one we can’t gain any conscious access to, could be pretty well shared among people.
There, by now, is reasonably good evidence, not total, but reasonably good evidence that the variety and the apparent complexity of language lies mostly in very peripheral aspects of language. Namely, the way that the internal system is translated into, mapped into, some sensory motor system, usually speech, could be sign, could be touch. These systems of externalization are not really part of language, and they differ from the internal language in fundamental respects.
So, for example, external speech has words appearing in linear order, one word after another. There’s very good reason to believe that the internal system, the system we use in thinking and reasoning and so on, has no linear order. It just deals with abstract structures, not order. It’s a fundamental difference. Well, languages differ in how they use this property of linear order. Looks like quite a lot of variety.
In fact, it was believed until pretty recently that there are languages in which there’s totally free word order. Some Australian indigenous languages were thought to have completely free word order, as distinct from English, which has a fairly rigid word order. Well, deeper study has shown that that’s actually not true. They do have free external order, but internally, you look at the structure of these languages and the way the thoughts are interpreted and so on, they seem to have the same structural properties as languages like English. That’s how research proceeds.
The same is true in biology. Go back 50 or 60 years: it was believed widely among biologists that organisms vary almost without limit, and each organism has to basically be studied on its own. Well, now it’s known that that’s not true, that there are deep homologies which hold over billions of years and yield the same basic structure for organisms, with superficial variation. That’s the progress of science. And it’s taken place in the study of the mind as well, to an extent.
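The contrast between unordered internal structure and externalized word order can be sketched with a toy example. The “grammar” and linearization rules below are hypothetical simplifications, not real linguistic analyses: one and the same hierarchical object yields different word orders under different externalization rules.

```python
# Illustrative sketch: a hierarchical object with no inherent word order
# can be externalized in different linear orders.
structure = ("read", ("the", "book"))  # hierarchy only; order is imposed later

def head_first(node):
    """English-like externalization: head before complement."""
    if isinstance(node, str):
        return [node]
    head, comp = node
    return head_first(head) + head_first(comp)

def head_last(node):
    """Head-final externalization (Japanese-style ordering)."""
    if isinstance(node, str):
        return [node]
    head, comp = node
    return head_last(comp) + head_last(head)

print(head_first(structure))  # ['read', 'the', 'book']
print(head_last(structure))   # ['book', 'the', 'read']
```

The internal object is the same in both cases; only the mapping to a linear sequence differs, which is the distinction Chomsky draws between the internal language and its externalization.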
Nature of Language And The Relationship Between Word And Object (Quine)
Samuel Marusca: You previously mentioned Wittgenstein. Wittgenstein raised some interesting ideas about language in the Philosophical Investigations, his later work, especially in relation to semantics. And in fact, some of these ideas existed before, in The Meaning of Meaning, for example, by Ogden and Richards. I think it’s 100 years since the publication of that work; it was published in 1923.
And Ogden is famous in the UK also. He’s the one who succeeded in recording James Joyce’s voice, and he was also the first translator of Wittgenstein into English. Wittgenstein developed his theory of language games, and famously said in his earlier work that the limits of my language are the limits of my world.
I know in your earlier work, you were critical of Wittgenstein, just as you’ve been of B. F. Skinner. What is your perspective on Wittgenstein’s ideas about language games, and the relationship between language and the world?
Noam Chomsky: The relationship between language and the world is what is called semantics in technical terminology, the terminology of Frege, Tarski, Carnap, Quine, and so on. Semantics is the relation between elements of language and things in the world. That’s why you have books like Quine’s Word and Object. What’s the relation between a word and an object?
Well, it turns out it’s very likely that there is no such relationship for natural language. There is no relation of what’s called reference, a word referring to a thing. Intuitively, it seems like that, but when you think about it, it just doesn’t work. What you find is that people use words to refer to things, but that’s an action. It’s what John Austin called a speech act. There is an act of referring, but it doesn’t follow that there’s a relation of reference.
And when you look closely, it turns out there are no fixed relations between words or the minimal meaning-bearing elements of language and entities in the outside world. There’s a much more indirect relationship between them. Actually, this was known by Aristotle, discussed by him, in fact. The observations, I think, were correct. We now have to expand them in many ways.
But the relation between language and the world is one that’s not well understood. We have ways of referring to the world, talking about the world. You can find many of the properties, but much of it is cloaked in mystery.
Samuel Marusca: You mentioned John Austin’s work, How to Do Things with Words, and his speech act theory. He also wrote about performative utterances. And obviously, you wrote a lot about external language, which is the idea that language exists outside individual minds. Did John Austin’s speech act theory influence the development of your ideas of external language?
Noam Chomsky: Well, I knew Austin pretty well, spent some time with him in the 50s. Very intelligent, very thoughtful, a perceptive analyst, very much interested in language structure. In fact, he was teaching a monograph of mine in his last course. He was talking about language use and the structure of language. These are different topics, and he was very clear about the distinction.
You look at his work on performatives and speech act theory: he’s asking, exactly as the title of his book says, how do we do things with words? How do we use language? That’s different from asking what the nature of language is, just as asking what the nature of arithmetic is differs from asking how we use arithmetic. It’s a distinction that Aristotle made, between possession of knowledge and use of knowledge. That’s a fundamental distinction. And Austin’s work was on the use of knowledge: how we use language in normal interchanges among people, in thinking about the world. A very important topic.
It’s not the topic of structure of language, the nature of the instrument. You can ask, what’s the nature of the instrument? How is it used? You can study the nature of a violin. It doesn’t tell you how the great violinist plays a violin. That’s a different topic.
Samuel Marusca: I guess Aristotle made the distinction between the possession of language and the use of language, sound and meaning. And you, of course, made the distinction between competence and performance.
Noam Chomsky: Competence and performance are modern terms for Aristotle’s possession of knowledge and use of knowledge. Those are my terms. The reason why I use those terms instead of Aristotle’s terms is that contemporary philosophy has developed a certain ideology about the notion of knowledge. And I just want to avoid those discussions. I think they lead us astray.
The conception of knowledge: there’s a technical notion of knowledge used in philosophy which is not the same as the ordinary word knowledge. And just to avoid being embroiled in debates about the technical discussion, I decided to change the term. It didn’t help very much, I should say. This is another point that Wittgenstein made: you should not be trapped in the technical usage.
The technical usage is an interesting one, but it's not the notion of knowledge. So there's no notion in the philosophical literature of knowledge of something. You can talk about knowing how to do something, or knowing that something is the case, but you can't talk about knowing something. He knows the construction business, he knows American history, he knows English. Those don't fall under knowing how, and they don't fall under knowing that.
Our use of the word knowledge and our conception of knowledge are not based on justified true belief. They have other features and aspects. Justified true belief is an interesting concept to study, but it's not the concept of knowledge.
The Sapir-Whorf hypothesis
Samuel Marusca: The Sapir-Whorf hypothesis is also known as linguistic relativity. This hypothesis proposes that the language we speak influences the way we think and perceive the world around us. It suggests that language shapes our understanding of reality and that different languages may create different cognitive structures and worldviews. I think today most linguists accept only a weak form of the hypothesis.
But in your opinion, what is your stance on the Sapir-Whorf hypothesis and its claim that language influences our perception of reality?
Noam Chomsky: This is the so-called Sapir-Whorf hypothesis. It has a kind of superficial plausibility. It's been studied empirically for about 70 years now, and it's been very hard to find any evidence for it. There are cases where evidence seems to appear, but further studies, a lot of them by Lila Gleitman and others, showed that it was just mistaken, that if you looked at it more deeply, it wasn't happening.
There's doubtless some effect in very superficial things, for example in the colors. This was shown by Eric Lenneberg 70 years ago. In the color spectrum, languages differ in where they have a word for a particular color. There are languages that don't make a distinction between, say, pink and red, somewhere in that spectrum.
Well, you can show that that has influences on remembering what color patch you saw if it's at the border, but that's pretty trivial. Of course that's true. If there are things beyond that, they're not well supported. In fact, take a look at Whorf's main example. What he argued is that Hopi, the language he was studying, didn't have a tense system of past, present, and future. He said that speakers of what he called standard average European, like English, view time as a kind of line with ourselves standing on it, looking forward in one direction, over our shoulder in the other direction. He said that's a reflection of the tense system that we have. Hopi doesn't have that tense system, so he argued Hopi speakers see time relativistically. There's a problem with that, which was pointed out 70 years ago.
English doesn't have that tense system either. If you look at English, it has past and non-past, but nothing else; we don't have a future tense. Future is a modality: will, may, must. So the structure of English doesn't have that tense system. Nevertheless, we see time as a line with ourselves standing on it, which means that the structure of the language is telling us nothing about our perception of time.
Well, if it's true for us, why isn't it true for the Hopi? In fact, it's probably universally true, whatever the language is. That's the kind of proposal that was made, and it fell apart on analysis, not very deep analysis in this case. So far there's virtually no substantial evidence for the hypothesis.
Samuel Marusca: You mentioned Eric Lenneberg’s work on colours, and I know Claude Lévi-Strauss, the structuralist, also analysed colours, particularly red and green, using binary oppositions. How did we, as a society, come to make this distinction that red means no or stop, and green means yes or go? What is the origin of this cultural distinction and the semantics of it?
Noam Chomsky: It’s possible that it’s perfectly arbitrary, but I doubt it. I suspect it would be pretty hard to devise a culture in which red means go and green means stop, because it’s probably something about our innate conception of the way colours relate to action. But again, we’re entering into unknown terrain here. You have to investigate this.
Here I should say that I think Wittgenstein, Kripke, and others who've followed this have been very seriously misled. Wittgenstein raised a famous question about a line with an arrowhead at one end, something like that. He asked, why do we follow the arrow in this direction and not in the other direction? And he argued it's just social convention, but it surely isn't.
If you did experimental work, you would find that probably dogs would interpret it that way, probably infants would, because it’s just part of our built-in nature to interpret geometrical structures as entailing some kinds of things with regard to action. It hasn’t been studied, but it undoubtedly would show that result. So the idea that these are some kind of social conventions I think is very unlikely.
A good deal of philosophical argument, like Kripke’s, I think falls apart when you think about these things. Philosophers have been extremely unwilling to consider the possibilities of innate built-in structure. I think that’s due to a residual impact of the empiricist tradition, which was just mistaken completely.
And in fact, even Hume recognized this. The ultimate empiricist, if you read Hume carefully, he pointed out that, as he put it, the experimental reasoning itself is based on animal instinct, instinctive capacities of some kind. It has to be, otherwise we get nowhere.
Samuel Marusca: I find the arrow analogy very interesting. And of course there are around 7,000 languages on earth, most of them still oral. Not all of them have a writing system; perhaps fewer than half of the languages of the world have one. So is writing also a similar convention? Most of us write from left to right and follow this linear activity.
Noam Chomsky: Well, writing follows externalized language, not what's internal. It couldn't follow the internal language, because nobody even knows what that is. Writing systems exist not only in very few languages, but also only very recently in human history, going back to maybe Sumer or Egypt. And even in a language that had writing, most of the population didn't know anything about it. So it's a narrow element of pretty modern human history. It follows the structure of externalized language in various ways.
There are different writing systems, of course, but in one or another way, they reflect the properties of the externalized language and try to mimic it in some fashion.
Universal Grammar And The Poverty Of The Stimulus
Samuel Marusca: As we close, can I just ask you, how did the concept of UG, universal grammar, develop from its inception until now?
Noam Chomsky: Well, the earliest proposals were about 75 years ago. There's been a kind of conundrum, a problem that has been faced over these years and is becoming sharper and clearer today. Whatever universal grammar is, it must be rich enough to account for the gap between the data available to a child, which is very scanty, and the rich knowledge that's attained by age three or four. That's basically what universal grammar is: the innate structure that bridges that gap. Same for every other kind of growth.
So it looks on the surface as if it has to be very rich. On the other hand, if you look at the evolutionary record, about which not much was known until quite recently, but by now we have some knowledge of it, there's very strong evidence that language emerged pretty suddenly, along with modern humans. Before that, there's no evidence for symbolic activity of any kind.
Shortly after that, humans began to disperse. We know that from genomic evidence. They all seem to have the same faculty of language, basically, so it seems it was all in place before they dispersed. Well, shortly after the dispersal, you get pretty rich symbolic activity, almost modern.
Well, all of this seems to indicate that whatever the fundamental core of universal grammar is, it appeared pretty much along with modern humans. So it should be quite simple. So we have a conundrum: how can it be very simple and very complex? How can it account for the diversity of languages? Well, the work in universal grammar, which is the same topic as linguistic theory, has over the years moved to simplify and sharpen fundamental principles, to the point where maybe we can now see how the conundrum is resolved: fundamental, very simple principles for universal grammar, complemented by rich reliance on principles of computational efficiency.
These are not part of language; they're just natural laws, basically, the way the world works in terms of computational efficiency. These are elements in the explanation of language which were not understood or thought of until pretty recently. When you bring these in, rich concepts of computational efficiency together with the simple elementary operations of composition and relation to semantic interpretation that are specific parts of language, you can explain a fair amount, not everything by any means, but that's the direction in which, in my view, things are moving.
Samuel Marusca: And you make this observation that languages don't evolve, they just change. So we are born with the faculty of language, which remains the same throughout human history, but languages change; they don't evolve. Is that right?
Noam Chomsky: There's no evidence that the language faculty has changed, and there seems to be no difference in language or other cognitive faculties among the variety of existing humans. So it looks as if it's been fixed and hasn't changed. Languages, of course, change; they change from generation to generation.
My grandchildren speak somewhat differently than I do. They have locutions that I don't use, and so on and so forth. So, superficially, language changes quite rapidly. But that seems to be the external language, not the internal part, as far as we know. That's where research is tending, I think.
Samuel Marusca: Finally, now, this podcast is called Practical Wisdom. It’s based on Aristotle’s ideas in his Ethics, namely that wisdom is not just an academic exercise, but a virtue of moral reasoning that we should use in our daily life. Can I, as we close, can I ask you, what does wisdom mean to you?
Noam Chomsky: Well, I think here I could mention again the work of John Mikhail that I mentioned before, which opened the study of how the basis of our moral reasoning, our practical wisdom, if you like, might have fundamental properties of generation and construction that relate in interesting ways to the basic cognitive processes that underlie language, arithmetic, and maybe other mental faculties. These are areas at the border of inquiry and investigation.
Samuel Marusca: Now, thank you very much for your time. It’s been a pleasure talking to you. I really appreciate your time and your insights on language and mind and AI. Thank you very much.
Noam Chomsky: Thank you. It’s good to talk to you.
Want a summary of this insightful conversation? Here it is.
SUMMARY:
The conversation with Noam Chomsky, titled ‘ChatGPT, Universal Grammar and the Human Mind: Unlocking Language and AI Mysteries,’ delves deep into the intricate relationship between language, cognition, and artificial intelligence. Chomsky, a renowned linguist and cognitive scientist, sheds light on various intriguing concepts, offering his perspectives on language’s fundamental nature, its ties to human cognition, and its potential implications for AI.
Chomsky emphasizes that while AI systems like ChatGPT are remarkable achievements, they remain far from understanding language and cognition as humans do. He contends that true language comprehension requires a grasp of internal structures, something that current AI models lack. He notes that humans possess a universal grammar, an innate cognitive structure enabling language acquisition, and suggests that AI might benefit from a similar foundational structure.
Discussing the Sapir-Whorf hypothesis, which posits that language shapes thought, Chomsky acknowledges its initial appeal but stresses the lack of substantial evidence to support its strong form. He challenges the notion that language dictates thought, pointing out that our innate cognitive structures likely influence both language and perception.
Chomsky also addresses the nature of human thought and the challenges it poses for study. He draws parallels between language, music, arithmetic, and other cognitive domains, suggesting that they might share common underlying structures. He points to the complex interplay of internal structures and externalization, like speech and writing systems, in shaping linguistic expression.
The conversation touches on the evolving nature of linguistic theories and Chomsky’s concept of universal grammar. He proposes that this core cognitive structure likely emerged alongside modern humans, while languages themselves change over time. Chomsky also highlights the significance of computational efficiency principles in understanding language evolution and change.
In conclusion, the conversation with Noam Chomsky offers a captivating exploration of the intricate relationship between language, cognition, and AI. While ChatGPT and AI systems showcase impressive feats, Chomsky’s insights highlight the enduring complexities that language and cognition present, pushing the boundaries of both scientific inquiry and technological advancement.