Here is the full transcript of Luca Longo’s TEDx talk, “The Turing Test, Artificial Intelligence and the Human Stupidity,” delivered at the TEDxVicenza conference.
Luca Longo – AI researcher
In 2016, I was awarded a prize by the National Forum for Teaching & Learning, supported and sponsored by the Ministry of Education in Ireland, with the peculiar name “National Teaching Hero”. The reason for this award was my availability to my students and my ability to create less formal, more comfortable educational environments.
Let’s imagine this place as a large university classroom. For the first ten minutes, while the students come in and take their seats, I plug my computer into the speakers and put on some classical music. I think this is the first step in building less formal, more comfortable educational environments and in keeping students’ attention high.
Unfortunately, this is not always easy. In my lessons I employ a method in use since ancient times: storytelling! Pedagogically speaking, storytelling is a method based on the use of narratives, aimed at transmitting knowledge to students.
I will start this lesson with exactly this method, by describing a topic that is on everyone’s lips nowadays: Artificial Intelligence. Like every story told to children, I’d like to begin mine with “Once upon a time”: the Second World War, 1942, the United Kingdom, Bletchley Park, a mansion north of London.
There was a thirty-year-old man there: Alan Turing. Alan had graduated from King’s College, Cambridge, and had obtained a doctorate in logic at Princeton University, in the USA. At that time, the Germans were using a special machine: Enigma. It looked like a typewriter: the operator typed some keys, but instead of those letters being printed on a sheet of paper, other letters appeared, according to an encoding set mechanically inside the machine.
The Germans used this machine to communicate with each other. Anybody who intercepted such a message saw only meaningless text in front of them. It was encrypted!
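To make the idea concrete, here is a minimal sketch of such a letter-substitution encoding in Python. It is far simpler than the real Enigma, whose mapping changed with every keystroke as the rotors advanced; the alphabet permutation below is invented purely for illustration.

```python
# A fixed substitution cipher: each typed letter is replaced by another one.
# The real Enigma also re-wired the mapping after every keystroke; this sketch
# only illustrates the basic idea of "type one letter, get another".
import string

PLAIN = string.ascii_uppercase
CIPHER = "QWERTYUIOPASDFGHJKLZXCVBNM"  # an arbitrary permutation of A-Z

def encode(text: str) -> str:
    table = str.maketrans(PLAIN, CIPHER)
    return text.upper().translate(table)

def decode(text: str) -> str:
    table = str.maketrans(CIPHER, PLAIN)
    return text.upper().translate(table)

print(encode("ATTACK AT DAWN"))  # -> "QZZQEA QZ RQVF"
```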
Alan Turing was one of the leading figures at Bletchley Park: he and his team built a machine, the one you can see behind me, able to decipher the texts written by the Germans.
Thanks to this invention, it is believed that the war ended two years earlier, saving many human lives. After the war, Alan Turing continued his research in logic, and he is considered the father of Computer Science and the father of Artificial Intelligence. With his Turing machine, he formalised the concept of the computer even before the computer was actually built.
In 1950 he published a paper in the journal “Mind”, “Computing Machinery and Intelligence,” where he proposed the Turing test. The question behind the Turing test is a simple, well-defined one: can machines think? It is at that moment that Artificial Intelligence began.
Probably most of you have watched the movie “The Imitation Game”. The imitation game: let me describe it to you. Suppose a person P sits here, typing questions on a computer keyboard, and on the other side of the connection there are a machine M and a human operator O. The machine M and the operator O take turns answering the person P. We say that the machine M passes the Turing test if the person P cannot tell when the answers come from the machine and when they come from the operator.
This is the imitation game. To pass the Turing test, the machine must have special capabilities. It has to interpret natural language: the questions asked by the person. It has to represent knowledge in order to formulate answers. It has to reason automatically in order to produce those answers. And it has to learn automatically.
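As a rough sketch of the protocol just described, assuming stub responders and a blind, random guess for the interrogator (a real interrogator would of course study the answers; all names here are invented for illustration):

```python
import random

def machine_reply(question: str) -> str:
    return "I would rather not say."        # placeholder for the machine M

def operator_reply(question: str) -> str:
    return "Let me think about that..."     # placeholder for the human operator O

def interrogator_guess(answer: str) -> bool:
    """P's guess: True means 'this answer came from the machine'.
    A real interrogator would analyse the answer; here the guess is blind."""
    return random.choice([True, False])

def run_round(question: str) -> bool:
    """One round of the game; returns True if P misidentifies the source."""
    answered_by_machine = random.choice([True, False])
    answer = machine_reply(question) if answered_by_machine else operator_reply(question)
    return interrogator_guess(answer) != answered_by_machine

# The machine 'passes' if, over many rounds, P does no better than chance.
rounds = 1000
errors = sum(run_round("Can machines think?") for _ in range(rounds))
print(f"P misidentified the responder in {errors}/{rounds} rounds")
```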
There are many approaches to the study of Artificial Intelligence. One of them is the cognitive approach: it is based on human thinking. According to this approach, there are two ways to study human thinking: either we try to capture thoughts as they occur, or we try to model thought at the psychological level. For this reason, we say that Artificial Intelligence is closely connected to neuroscience, the cognitive sciences and psychology.
The assumption behind this approach is that, if we can obtain a faithful representation of human thought, then we can transfer it to a machine. Another approach is the one based on the laws of rational thought. Probably most of you have heard of Aristotle’s syllogism: Socrates is a man; all men are mortal; therefore Socrates is mortal. This is deductive reasoning: if we have two true premises, we can infer a true conclusion.
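In standard first-order notation (used here only as a convenient shorthand), the same deduction reads:

```latex
% Aristotle's syllogism written as a first-order deduction:
% from "all men are mortal" and "Socrates is a man", infer "Socrates is mortal".
\[
  \forall x\,\bigl(\mathrm{Man}(x)\rightarrow\mathrm{Mortal}(x)\bigr),\quad
  \mathrm{Man}(\mathrm{Socrates})
  \;\vdash\;
  \mathrm{Mortal}(\mathrm{Socrates})
\]
```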
Deductive logic comes from here. In the laws-of-rational-thought approach, we try to build deductive arguments and transfer them to a machine. Yet another approach is the rational agent approach. A rational agent is an entity that has to act, to adapt itself to its context, to set goals for itself and be able to pursue them, and to act rationally. The Turing test, therefore, is really about intelligent agents.
Rephrasing Turing’s question, “Can machines think?”, we can now ask: is it possible to build a machine, an artificial agent, able to think, to show understanding and rationality? Artificial Intelligence, therefore, aims at developing artificial intelligent entities. Your mobile phone is an entity. By developing artificial entities, we try to understand intelligence as a psychological construct; once we know this concept better, we try to develop artificial intelligent entities that support humans: it is a cycle. But let us now see whether machines are able to think.
I’d like to briefly describe the state of the art of Artificial Intelligence, using five classes to group artificial agents according to their abilities: sub-human, par-human, over-human and super-human agents, and then optimal ones. Let me explain them with some examples.
Optimal agents are the ones that perform better than everyone, and no one can possibly do better: for instance, agents that solve the Rubik’s Cube, or that play Connect Four or tic-tac-toe perfectly. Consider that some years ago a boy, given an initial state of the Rubik’s Cube, solved it in 4.73 seconds. A few months ago a robot was developed which can solve it in 0.63 seconds. Then we have super-human agents, which perform better than all humans, for instance at chess or Scrabble. Some years ago the Russian chess champion Garry Kasparov was defeated by an artificial agent.
We have over-human agents, which perform better than most humans, for instance at Texas hold ’em poker or at answering quiz-show questions. We have par-human agents, which perform roughly as well as most humans, for instance in cognitive activities such as crosswords or image classification.
Finally, we have sub-human agents, which still perform worse than humans. Examples include object classification, handwriting recognition, speech recognition and translation from one language into another. But if there is something artificial agents are not able to do today, it is, for instance, disambiguation: are we talking about the apple as a fruit, or about the brand of the Apple corporation? Another thing agents are not able to do is reason about the real world under conditions of uncertainty.
These are the main limitations of Artificial Intelligence, and because of them it is believed that we are still far from passing the Turing test. Now let’s try to understand whether machines will ever be able to think. Some years ago, in America, the concept of the “Technological Singularity” was made famous by Ray Kurzweil, a world-renowned expert in Artificial Intelligence. Imagine a timeline, and imagine a line representing human intelligence, increasing.
Now imagine a red line representing machine intelligence, with an exponential trend. This trend follows Moore’s law, whereby computational capacity, measured for instance by the number of transistors embedded in a chip, doubles roughly every two years. According to Ray Kurzweil, by 2010 we should have been able to use this computational capacity to emulate the human brain — I haven’t seen anything of the kind. By 2020, a thousand dollars will buy access to this computational capacity. By 2025, according to Ray Kurzweil, we will be able to scan our brain very accurately.
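Written as a rough growth law (the symbols here are only a convenient shorthand, not the speaker’s notation), Moore’s law says that the transistor count N grows exponentially with time t:

```latex
% Moore's law as an exponential growth model: N_0 is the current transistor
% count and T the doubling period, commonly taken to be about two years.
\[
  N(t) \;\approx\; N_0 \cdot 2^{\,t/T}, \qquad T \approx 2\ \text{years}
\]
```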
And eventually, in 2029, machines will pass the Turing test! Then 2045 is the point in time when, he says, the technological singularity will happen: when machine intelligence, following its exponential trend, will significantly surpass human intelligence. In his paper published in the journal “Mind”, Turing not only proposed his test; he also anticipated nine objections against his own test. These are nine objections against Artificial Intelligence.
Some years ago, when I was a student at the University of Varese, I attended a course on “Epistemology, Deontology and Ethics in Computer Science” taught by Prof. Gaetano Aurelio Lanzarone, who unfortunately passed away some years ago. One of our assignments was to propose a tenth objection against the Turing test, against Artificial Intelligence. I was the only one who proposed a tenth objection expressed as a mathematical equation, which I labelled “human stupidity”. I’d like to explain it in simple terms.
Let’s assume we take the intelligence of all humans and put it together: that is the sum symbol on the left-hand side of the equation. Then we transfer this intelligence, as a whole, to a machine, and we get an equality of intelligence. But somehow the machine ends up more intelligent than us. Yet, if it is true that it was we who transferred our intelligence to the machine and it became more intelligent than us, it is also true that it was we who let the machine become more intelligent than us.
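A minimal formalisation of this objection, assuming the sum runs over all N humans and I(·) denotes intelligence (the notation is illustrative, not the exact equation shown on stage), could read:

```latex
% One illustrative reading of the 'human stupidity' objection: transferring
% the combined intelligence of all N humans to a machine M should yield at
% most an equality, and yet the machine is allowed to end up strictly greater.
\[
  \sum_{i=1}^{N} I(\mathrm{human}_i) \;=\; I(M)
  \qquad\text{and yet}\qquad
  I(M) \;>\; \sum_{i=1}^{N} I(\mathrm{human}_i)
\]
```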
So, to conclude my story, and going back to the initial question, “Can machines think?”, I’d like to leave you with an open question: does it really make sense for us to let them think?