Max Tegmark – TRANSCRIPT
Consciousness – we’ve all wondered about the mystery of consciousness. But there are really two separate mysteries, as the famous philosopher David Chalmers has emphasized. First, there is the mystery of how our brain processes information, which David calls, “the easy problems of consciousness,” and even though they’re actually very hard, we’ve made huge progress in recent years building computers that can play chess, that can process natural language, that can answer quiz show questions, that can drive cars, and so on.
But then, there is a second mystery of consciousness that David calls, “the hard problem of consciousness.” Why do we have subjective experience? If I’m driving a car, I’m having a subjective experience of colors, of sounds, emotions, thoughts. But why? Does a self-driving car have any subjective experience? Does it feel like anything at all to be a self-driving car? Raise your hand if you have any sort of background in physics.
Uh, some wolves in sheep’s clothing here tonight. I am a physicist, too. From my physics perspective, a conscious person is simply food, rearranged. Food is just a bunch of quarks and electrons arranged in a certain way, so why is one arrangement, like your brain, conscious, while another arrangement, like a bunch of carrots, is not? This physics perspective goes against the idea that philosophers call dualism: that consciousness is explained by adding something beyond physics, some extra ingredient, a life force, an élan vital, or a soul. This idea of dualism has gradually lost popularity among scientists, because if you were to measure what all the particles in your brain are doing and find that they perfectly obey the laws of physics, then that would mean that this purported soul is having absolutely no effect on what you’re doing.
Whereas, if you were to measure instead that the particles in your brain are not obeying the laws of physics, because they’re being pushed around somehow by the soul, then that brings the soul into the domain of physics. Because you can now just measure all these new forces the soul is exerting and study its properties physically, just as you would study the properties of a new field or a new particle like the Higgs boson.
From my physics perspective, a bunch of moving quarks and electrons are nothing but a mathematical pattern in space-time: a bunch of numbers specifying positions, motions, and various properties of these particles, like electric charge and the other numbers you can see in this table here. From this physics perspective, that hard question of consciousness that David Chalmers posed gets transformed into a form I like much better.
Because instead of starting by asking the hard question of why some arrangements of particles feel conscious, we can start with a hard fact: that some arrangements of particles, like your brains, are conscious, and others are not. We can ask, “What special physical properties do these arrangements have to have to be conscious?” Neuroscientists have made a lot of progress recently, including right here, in figuring out which subjective experiences correspond to different neuron firing patterns in your brain, which they call neural correlates of consciousness. I want to generalize this idea and ask which subjective experiences correspond to different kinds of particle motions, which you might call physical correlates of consciousness.
But before that, this whole physics perspective really raises the question: how can something as complicated as consciousness possibly be explained by something as simple as particles? I think it’s because consciousness is a phenomenon that has properties above and beyond the properties of its particles.
We physicists call phenomena that have properties above and beyond those of their parts emergent phenomena. Let me explain this with an example that’s simpler than consciousness: wetness. A water droplet is wet, but an ice crystal or a gas cloud is not wet, even though they are made of the exact same kind of water molecules. So it’s not the molecules, it’s not the particles that make the difference; it’s the pattern into which they are arranged. It makes no sense whatsoever to argue about whether a single water molecule is wet or not, because the phenomenon of wetness only emerges when you take a vast number of water molecules and arrange them in this special pattern we call liquid.
So solids, liquids, and gases are all emergent phenomena in that they have properties above and beyond those of their particles: properties that the particles themselves don’t have. I think that, just like solids, liquids, and gases, consciousness too is an emergent phenomenon, because if I drift off into sleep and my consciousness goes away, I’m still made out of the exact same particles. The only thing that changes is the pattern into which my particles are arranged.
And if I were to freeze to death, then my consciousness would definitely go away, but I would still consist of exactly the same particles. It’s just that they would now be rearranged into a rather unfortunate pattern. So we physicists love studying what happens when you take a lot of particles and put them together in different patterns. We love to study what properties emerge; and often, these properties are numbers that we can just go out and measure, like how viscous something is, how compressible it is, and so on. We can use these to classify stuff. For example, if some stuff is so viscous that it’s rigid, we call it a solid. Otherwise, we call it a fluid.
If the fluid is really hard to compress, we call it a liquid. Otherwise, we call it a gas or a plasma, depending on how it conducts electricity. So, could there be some other number like this that quantifies consciousness? That’s exactly what the neuroscientist Giulio Tononi thinks. He’s defined such a quantity that he calls integrated information, Phi, which is basically a measure of how much different parts of a system know about each other. He and his colleagues have managed to measure a simplified version of this quantity using EEG after magnetic stimulation, and this consciousness detector of theirs has worked really, really well, correctly identifying consciousness in patients who are awake or dreaming, but not in patients who are anesthetized or in deep sleep.
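The flavor of “how much different parts of a system know about each other” can be illustrated with a toy calculation. This is a heavily simplified stand-in for Tononi’s actual Phi, not his formula: it just computes the ordinary mutual information between two halves of a small system, which is zero when the halves are independent and large when each half determines the other.

```python
# Toy illustration (NOT Tononi's actual Phi): mutual information
# between two halves of a system as a crude measure of how much
# the parts "know about each other".
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency table."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def mutual_information(pairs):
    """I(A;B) = H(A) + H(B) - H(A,B), estimated from (a, b) samples."""
    a_counts = Counter(a for a, _ in pairs)
    b_counts = Counter(b for _, b in pairs)
    ab_counts = Counter(pairs)
    return entropy(a_counts) + entropy(b_counts) - entropy(ab_counts)

# Two halves that always agree: each bit fully determines the other.
integrated = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent halves: knowing one tells you nothing about the other.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(mutual_information(integrated))   # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

The real Phi goes further, searching over all ways of cutting the system in two and asking how much is lost by the worst cut, but the intuition is the same: integration means the parts carry information about one another.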
They even correctly identified consciousness in two patients with locked-in syndrome, who were paralyzed and totally unable to communicate in any way. So this is potentially very useful for doctors in the future. But I want to generalize this now to non-biological systems as well. For example, we can ask of some future super-intelligent computer: is it conscious or not? To do this, let’s look at states of matter with emergent phenomena that have something to do with information. For a system to store information, it has to have the physical property that some of its states are very long-lived.
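This long-lived-states idea can be made concrete with a toy simulation of my own (not from the talk): a bit stored redundantly in many noisy cells, with a majority vote restoring the pattern each step, persists essentially forever, which is exactly the kind of stable state that can serve as memory.

```python
# A minimal sketch of the "long-lived states" idea: a bit stored
# redundantly in many noisy cells survives, because after each round
# of noise the majority value is restored across all cells.
import random

def step(cells, flip_prob, rng):
    """One time step: each cell flips with probability flip_prob,
    then every cell is reset to the current majority value."""
    noisy = [c ^ (rng.random() < flip_prob) for c in cells]
    majority = int(sum(noisy) > len(noisy) / 2)
    return [majority] * len(noisy)

rng = random.Random(0)
cells = [1] * 101            # store the bit "1" in 101 redundant cells
for _ in range(1000):        # 1000 noisy time steps at 10% flip rate
    cells = step(cells, flip_prob=0.1, rng=rng)
print(cells[0])              # the stored bit, still intact
```

Flipping the stored bit would require more than half of the 101 cells to flip in a single step, which at a 10% flip rate is astronomically unlikely, so this arrangement of matter has a state that is very long-lived.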