Alex Wissner-Gross – TRANSCRIPT
Intelligence, what is it? If we look back at the history of how intelligence has been viewed, one seminal example is Edsger Dijkstra’s famous remark that the question of whether a machine can think is about as interesting as the question of whether a submarine can swim. Now, Edsger Dijkstra, when he wrote this, intended it as a criticism of early pioneers of computer science like Alan Turing.
However, if you take a look back and think about what have been the most empowering innovations that enabled us to build artificial machines that swim and artificial machines that fly, you find that it was only through understanding the underlying physical mechanisms of swimming and flight that we were able to build these machines.
And so, several years ago, I undertook a program to try to understand the fundamental physical mechanisms underlying intelligence. Let’s take a step back. Let’s first begin with a thought experiment. Pretend that you’re an alien race that doesn’t know anything about Earth biology or Earth neuroscience or Earth intelligence, but you have amazing telescopes and you’re able to watch the Earth and you have amazingly long lives so you’re able to watch the Earth over millions, even billions of years.
And you observe a really strange effect: you observe that over the course of the millennia, Earth is continually bombarded with asteroids up until a certain point, corresponding roughly to our year 2000 AD. After that point, asteroids that are on a collision course with the Earth, and that otherwise would have collided, mysteriously get deflected or detonate before they can hit it.
Now, of course, as Earthlings, we know the reason would be that we’re trying to save ourselves, we’re trying to prevent an impact. But if you’re an alien race that doesn’t know any of this, that doesn’t have any concept of Earth intelligence, you’d be forced to put together a physical theory that explains how, up until a certain point in time, asteroids that would demolish the surface of the planet, mysteriously stop doing that. So, I claim that this is the same question as understanding the physical nature of intelligence.
So, in this program that I undertook years ago, I’ve looked at a variety of different threads of research across a variety of disciplines, pointing, I think, towards a single underlying mechanism for intelligence. In cosmology, for example, there have been a variety of different threads of evidence that our universe appears to be finely tuned for the development of intelligence, and in particular, for the development of states of the universe that maximize the diversity of possible futures.
In gameplay, for example in Go, everyone remembers in 1997 when IBM’s Deep Blue beat Garry Kasparov at chess. Fewer people are aware that in the past ten years or so, the game of Go, arguably a much more challenging game because it has a much higher branching factor, has also started to succumb to computer game players for the same reason. The best techniques right now for computers playing Go are techniques that try to maximize future options during gameplay.
Finally, in robotic motion planning, there has been a variety of recent techniques that have tried to take advantage of abilities of robots to maximize future freedom of action in order to accomplish complex tasks. And so, taking all of these different threads and putting them together, I asked, starting several years ago, is there an underlying mechanism for intelligence that we can factor out of all of these different threads? Is there, as it were, a single equation for intelligence? And the answer, I believe, is yes.
What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I certainly have ever seen. What you’re seeing here is a statement of correspondence: that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength, T, over the diversity of possible accessible futures, S, up to some future time horizon, τ. In short, intelligence doesn’t like to get trapped; intelligence tries to maximize future freedom of action and keep options open.
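The relation described in words above can be written compactly; this rendering follows the equation for causal entropic forces published by Wissner-Gross and Freer, with the same symbols the talk defines (F the force, T its strength, S the entropy over accessible futures, τ the time horizon):

```latex
F = T \, \nabla S_{\tau}
```

Read as: the intelligent force F points in the direction (the gradient ∇) that most steeply increases the entropy S of futures reachable within horizon τ, with overall strength T.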
And so, given this one equation it’s natural to ask: So, what can you do with this? How predictive is it? Does it predict human-level intelligence? Does it predict artificial intelligence? So, I’m going to show you now a video that will, I think, demonstrate some of the amazing applications of just this single equation.
(Video clip: Recent research in cosmology has suggested that universes that produce more disorder or “entropy” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it? To find out, we developed a software engine called ENTROPICA designed to maximize the production of long-term entropy of any system that it finds itself in. Amazingly, ENTROPICA was able to pass multiple animal intelligence tests, play human games and even earn money trading stocks; all without being instructed to do so. Here are some examples of ENTROPICA in action: just like a human standing upright without falling over, here we see ENTROPICA automatically balancing a pole using a cart. This behavior is remarkable, in part, because we never gave ENTROPICA a goal, it simply decided on its own to balance the pole.
This balancing ability would have applications for humanoid robotics and human assistive technologies. Just as some animals can use objects in their environments as tools to reach into narrow spaces, here we see that ENTROPICA, again on its own initiative, was able to move a large disk, representing an animal, around so as to cause a small disk, representing a tool, to reach into a confined space holding a third disk and release the third disk from its initially fixed position. This tool usability would have application for smart manufacturing and agriculture. In addition, just as some other animals are able to cooperate by pulling opposite ends of a rope at the same time to release food, here we see that ENTROPICA is able to accomplish a model version of that task. This cooperative ability has interesting implications for economic planning and a variety of other fields.
ENTROPICA is broadly applicable to a variety of domains. For example, here we see it successfully playing a game of pong against itself illustrating its potential for gaming. Here, we see ENTROPICA orchestrating new connections on a social network where friends are constantly falling out of touch and successfully keeping the network well connected. This same network orchestration ability also has applications in health care, energy and intelligence. Here we see ENTROPICA directing the paths of a fleet of ships successfully discovering and utilizing the Panama Canal to globally extend its reach from the Atlantic to the Pacific.
By the same token, ENTROPICA is broadly applicable to problems in autonomous defense, logistics and transportation. Finally, here we see ENTROPICA spontaneously discovering and executing a buy low, sell high strategy on a simulated range traded stock successfully growing assets under management exponentially. This risk management ability would have broad applications in finance and insurance. – Video Ends.)
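The video describes ENTROPICA as maximizing long-term entropy production, but the actual engine is not public. As a hedged illustration only, here is a minimal toy sketch of the idea: an agent on a bounded one-dimensional track that, for each candidate move, samples random future paths and picks the move whose reachable final states have the highest estimated entropy. Starting pressed against a wall, it drifts toward the middle of the track, where its future options are most diverse. All names and parameters here (`path_entropy`, `entropic_action`, the track size, the horizon) are invented for this sketch and are not from the talk.

```python
import random
from collections import Counter
from math import log

def path_entropy(start, horizon, n_states, samples=400):
    """Estimate the Shannon entropy of the distribution of final states
    reachable from `start` by random walks of length `horizon` on a
    bounded 1-D track with states 0 .. n_states-1."""
    finals = Counter()
    for _ in range(samples):
        pos = start
        for _ in range(horizon):
            pos = min(n_states - 1, max(0, pos + random.choice((-1, 0, 1))))
        finals[pos] += 1
    return -sum((c / samples) * log(c / samples) for c in finals.values())

def entropic_action(pos, horizon=8, n_states=11):
    """Choose the move that maximizes estimated future path entropy,
    a crude finite-sample stand-in for F = T * grad S_tau."""
    return max((-1, 0, 1), key=lambda a: path_entropy(
        min(n_states - 1, max(0, pos + a)), horizon, n_states))

random.seed(0)
pos = 0  # start pressed against the left wall
for _ in range(20):
    pos = min(10, max(0, pos + entropic_action(pos)))
print(pos)  # the agent moves away from the wall, keeping options open
```

This mirrors, in miniature, why the pole-balancing behavior in the video looks goal-directed: near a wall (or with a falling pole) the set of reachable futures collapses, so an entropy-maximizing controller is pushed back toward configurations with the most open futures.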
So, what you’ve just seen is that a variety of signature human intelligent cognitive behaviors, such as tool use, walking upright and social cooperation, all follow from a single equation which drives a system to maximize its future freedom of action. Now, there’s a profound irony here.
Going back to the beginning of the usage of the term robot, in the play R.U.R., there was always a concept that if we developed machine intelligence, there would be a cybernetic revolt, that machines would rise up against us. One major consequence of this work is that maybe, all of these decades, we’ve had the whole concept of cybernetic revolt in reverse. It’s not that machines first become intelligent and then megalomaniacal and try to take over the world. It’s quite the opposite: the urge to take control of all possible futures is a more fundamental principle than that of intelligence, and general intelligence may, in fact, emerge directly from this sort of control grabbing, rather than vice versa.
Another important consequence concerns goal seeking. I’m often asked how the ability to seek goals, for example trying to win a game of chess in order to secure worldly goods and accomplishments outside of that game, follows from this framework. The answer is that it follows directly, in the following sense: just as you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just as you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action.
Finally, the famous physicist Richard Feynman once wrote that if human civilization were destroyed and you could pass only a single concept on to our descendants to help them rebuild civilization, that concept should be that all matter around us is made out of tiny elements that attract each other when they’re far apart, but repel each other when they’re close together. My equivalent to that statement to pass on to descendants to help them build artificial intelligence, or to help them to understand human intelligence, is the following: Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Thank you very much.