I’d like to briefly describe the state of the art of Artificial Intelligence, and I’d like to use five classes to rank artificial agents according to their abilities: we have sub-human and par-human agents, over-human and super-human agents, and then we have optimal ones. I want to explain them with some examples.
Optimal agents are the ones that act better than all humans, and nothing can do better than they do. For instance, agents that solve the Rubik’s Cube, or that play Connect Four or Tic-tac-toe perfectly. Consider that some years ago, a boy, given an initial state of the Rubik’s Cube, solved it in 4.73 seconds. Some months ago a robot was developed that can solve it in 0.63 seconds. We also have super-human agents, which act better than all humans, for instance at chess or Scrabble. Some years ago the Russian chess champion Kasparov was defeated by an artificial agent.
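To make “optimal” concrete: for a small game like Tic-tac-toe, an agent can search the entire game tree with minimax and provably never play worse than any opponent. Here is a minimal sketch (the function names are mine, not from the talk):

```python
# Minimal sketch of an optimal Tic-tac-toe agent via exhaustive minimax.
# The board is a flat list of 9 cells: 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value for 'X' under perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # full board, no winner: draw
    other = 'O' if player == 'X' else 'X'
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            values.append(minimax(board, other))
            board[i] = ' '
    return max(values) if player == 'X' else min(values)

def best_move(board, player):
    """Index of the move with the best minimax value for `player`."""
    other = 'O' if player == 'X' else 'X'
    best, best_val = None, None
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            val = minimax(board, other)
            board[i] = ' '
            if player == 'O':
                val = -val  # O prefers low values for X
            if best_val is None or val > best_val:
                best, best_val = i, val
    return best
```

With perfect play Tic-tac-toe is a draw, so `minimax` on an empty board returns 0; that is exactly what “you can’t do better than that” means for this class of agents.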
We have over-human agents, which act better than most humans, for instance at Texas hold ’em poker, or at answering quiz-show questions. We have par-human agents, which act roughly as well as most humans, for instance in cognitive activities such as crosswords or image classification.
Finally, we have sub-human agents, which act worse than most humans. Examples include object classification, handwriting recognition, speech recognition, and translation from one language to another. But there are things that artificial agents are still unable to do. One is disambiguation: are we talking about the apple as a fruit, or about the Apple brand? Another is reasoning about the real world under uncertainty.
These are the main limitations of Artificial Intelligence, and because of them it is believed that we are far from passing the Turing test. Now, let’s try to understand whether machines will ever be able to think. Some years ago, in America, the concept of “Technological Singularity” was popularized by Ray Kurzweil, a world-renowned expert in Artificial Intelligence. Let’s imagine a timeline, and on it a line indicating human intelligence, slowly increasing.
Let’s now imagine a red line indicating machine intelligence, with an exponential trend. This trend follows Moore’s law, whereby computational capacity, as measured for instance by the number of transistors embedded in a chip, doubles every two years, and therefore quadruples every four. According to Kurzweil, by 2010 we should have been able to use this computational capacity to emulate the human brain; I have seen nothing of the kind. By 2020, 1,000 dollars will buy access to this computational capacity. By 2025, according to Kurzweil, we will be able to scan our brain very accurately.
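As a quick arithmetic check of the doubling rule: capacity that doubles every two years grows by a factor of 2^(t/2) after t years, so it quadruples after four years, and after three years it has grown only by a factor of about 2.8. A tiny sketch (the function name is mine):

```python
# Growth factor implied by Moore's law as stated in the talk:
# capacity doubles every two years, i.e. factor = 2 ** (t / 2).

def moore_factor(years, doubling_period=2.0):
    """Multiplicative growth in capacity after `years` years."""
    return 2 ** (years / doubling_period)

print(moore_factor(2))  # doubled after two years
print(moore_factor(4))  # quadrupled after four years
print(moore_factor(3))  # about 2.83x after three years, not 4x
```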
And eventually, in 2029, machines will pass the Turing test! Then 2045 is the point in time when, he says, the technological singularity will happen: machine intelligence, following its exponential trend, will overtake human intelligence. In his paper, published in the journal “Mind,” Turing not only proposed his test; he also anticipated nine objections to it, and answered them. These are nine objections against Artificial Intelligence.
Some years ago, when I was a student at the University of Varese, I attended a course on “Epistemology, Deontology and Ethics in Computer Science” held by Prof. Gaetano Aurelio Lanzarone, who sadly passed away some years ago. One of our assignments was to propose a tenth objection against the Turing test, against Artificial Intelligence. I was the only one to express a tenth objection as a mathematical equation, which I labelled “human stupidity”. I’d like to explain it in simple terms.
Let’s assume we take the intelligence of all humans and put it together: that is the sum symbol on the left-hand side of the equation. We then transfer this intelligence as a whole to a machine, and we get an equality of intelligence. Yet in some way the machine becomes more intelligent than us. And if it is true that it was us who transferred our intelligence to the machine, and it became more intelligent than us, then it is also true that we let the machine become more intelligent than us.
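As I understand the description above, the equation could be written as follows (the notation here is my reconstruction, not the original):

```latex
\underbrace{\sum_{i=1}^{N} I_i}_{\text{intelligence of all $N$ humans}} \;=\; I_{\text{machine}}
```

where $I_i$ is the intelligence of the $i$-th human and $I_{\text{machine}}$ is the intelligence of the machine after the transfer. The paradox is that a strict equality is claimed on paper, while the machine ends up strictly more intelligent in practice, and we are the ones who made it so.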
So, to conclude my story, and referring back to the initial question, “Can machines think?”, I’d like to leave you with an open question: does it really make sense for us to let them think?