Read the full transcript of neuroscientist Dr. Heather Berlin’s talk “What Would A Conscious AI Look Like?” from the TEDxKC 2024 conference.
TRANSCRIPT:
DR. HEATHER BERLIN: So I asked ChatGPT for a joke to start my talk on artificial intelligence, and in less than two seconds it came up with “Why was the computer cold? Because it left its Windows open.” Yeah, right.
So Seinfeld’s job is safe for now, but let’s not kid ourselves. The age of AI is upon us, and it raises important questions about our humanity and our relationship to technology. How is artificial intelligence different from our own? What can it do better than us, and where can the brain outshine it? Can we merge with AI technology and become cyborgs? Can AI ever be conscious? In short, what does it mean to be human in the age of AI?
Now, artificial intelligence is just the simulation of human cognitive processes by machines. And the gold standard for the quality of this simulation is the Turing test, devised in the 1950s: an AI passes it if it can fool a human into thinking they’re conversing with another human.
Now, no computer met that standard until recently, but five supercomputers have now passed the Turing test, and the speed of advancement in AI has been remarkable. ChatGPT’s capability in knowledge, reasoning, and writing currently doubles every four months. The comparable doubling time for the human brain would be 3 million years. Yeah, we’re in trouble. But no, not really.
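To see what those two doubling times imply, here is the back-of-envelope arithmetic over a single decade, taking the quoted figures at face value:

```python
# Capability growth over one decade under the two quoted doubling times.
months = 10 * 12

ai_doublings = months / 4                    # doubles every 4 months
brain_doublings = months / (3_000_000 * 12)  # doubles every 3 million years

print(2 ** ai_doublings)     # ~1.07e9: about a billion-fold in ten years
print(2 ** brain_doublings)  # ~1.0000023: effectively no change at all
```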
But our speed in adopting AI is also impressive. So ChatGPT reached 100 million users in less than two months. And of course, we’re using AI every time we open our smartphone or get a product recommendation. Like many previous technologies, think atomic energy, the effect AI will have on humankind depends on how we decide to use it, and that’s up to our brains.
The Human Brain
So this three-pound piece of matter inside our skull is, I think, the most interesting object in the known universe, because it’s the only object by which the universe is known, at least to us, and we still haven’t fully decoded it. Like the universe, it’s complex and abundant. The cerebral cortex alone has 125 trillion synapses. That’s like the number of stars in 1,500 Milky Way galaxies.
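As a rough sanity check on that comparison, here is the arithmetic, assuming about 100 billion stars per Milky Way-sized galaxy (published estimates run from roughly 100 to 400 billion):

```python
# Order-of-magnitude check of the synapses-to-stars comparison.
synapses = 125e12           # cortical synapses quoted in the talk
stars_per_galaxy = 100e9    # assumed star count per Milky Way-sized galaxy
print(synapses / stars_per_galaxy)  # 1250.0 galaxies, the same ballpark as 1,500
```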
Now, computers have fewer connections than the human brain, yet they’re capable of doing many things better and faster. And here’s why. The brain evolved via the slow, clumsy process of natural selection, so it’s complex. Its flexible architecture isn’t optimized for calculations. It’s optimized for keeping us alive. Computers, on the other hand, are engineered, not evolved. They’re designed for speed and precision in specific tasks. They’re programmed rather than maturing over decades and have a completely different physical structure.
AI and Consciousness
So as AI gets smarter and smarter, it can do more and more of what we can do, including problem solving and, yes, even creativity. Yeah, that’s an AI-generated image. But AI doesn’t have experiences. It’s not aware of itself like we are. It’s not conscious. Or is it? And how would we know?
So first let’s define consciousness. So most scholars define it simply as first-person subjective experience. It’s everything you experience when you’re not in a deep, dreamless sleep, under general anesthesia, or dead. Hopefully you’re not any of those. Okay, it’s as simple as feeling the prick of a pin, tasting a sweet strawberry, or feeling elated by a hug from your beloved. You don’t need intelligence, language, or even a sense of self to have it. And it’s this rich, subjective experience that distinguishes us humans from machines, at least for now. And it’s all coded in our brain’s networks of neurons firing, even though we still don’t know exactly how.
But beyond a working definition, we also need a theory of the neural basis of consciousness, and currently there’s no scientific consensus. One leading contender, the Integrated Information Theory of consciousness, or IIT, says that consciousness is a property of the universe, like gravity, and that it emerges when physical systems have what’s called intrinsic causal power. You could think of this as symphonic power: the harmonious, synchronized interplay of many individual instruments creating music.
So IIT argues that the amount of consciousness in a system is identical to the amount of integrated information within it: the degree to which the whole is greater than the sum of its interacting, irreducible parts.
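IIT’s actual measure, called phi (Φ), requires analyzing a system’s full cause-effect structure over every possible partition, which is far beyond a short sketch. But a minimal toy proxy for the “whole greater than the sum of its parts” idea is total correlation: how much information the joint state of a system carries beyond its parts taken independently. The sketch below is purely illustrative and is not the real Φ computation:

```python
# Toy illustration of "whole greater than its parts": total correlation
# of a two-node system. NOT the Phi of IIT, just the flavor of the idea.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy; zero iff parts are independent."""
    marginals = [joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i))
                 for i in range(joint.ndim)]
    return sum(entropy(m) for m in marginals) - entropy(joint.ravel())

# Two binary nodes that always agree: the whole carries structure its parts lack.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
# Two independent fair coins: the whole is exactly the sum of its parts.
independent = np.full((2, 2), 0.25)

print(total_correlation(coupled))      # 1.0 bit of "integration"
print(total_correlation(independent))  # 0.0 bits
```

The coupled system carries a full bit of structure beyond its parts; the independent one carries none, no matter how fast either is simulated.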
Now, consciousness and intelligence are not the same thing. Intelligence is about doing stuff: responding to a query, driving a car, planning for the future. In principle, and increasingly in practice, this can be simulated. But just as a digital simulation of a black hole doesn’t suck you into the computer, and a simulation of rain doesn’t actually get your desktop wet, mere computational simulation can’t generate consciousness.
Subjectivity isn’t rooted in function, like speaking, but in physical matter with enormous intrinsic causal power. And even if we could replicate the functions of the brain, the so-called “easy problem,” understanding why and how those functions give rise to consciousness remains an open question. That’s what we call the “hard problem.” We’re on our way to solving the easy problem, but the hard problem remains hard.
And yet we still assume that other animals, especially mammals, have consciousness, since they share with us similar hardware (a nervous system), evolutionary history, and behavior. So a cat will yelp and pull its paw away if you step on it, and it might even hiss at you as if it feels real pain. But the truth is, I don’t even know if you’re conscious. I only know my own first-person subjective experience, but I assume that you’re conscious for the same reason that I assume the cat is. And like a good neuroscientist, I assume AI is not conscious, because it’s just a simulation.
But AI systems can irresistibly seduce our intuitions into believing that they’re conscious, just as optical illusions can deceive us. For example, these two squares, A and B, are physically exactly the same shade of gray, but you still can’t help but see them as different shades. So even if I told you an AI system is not conscious, you might still feel like it is.
The Importance of Consciousness
Now why does it matter if it actually is or isn’t? Imagine if you thought your partner was cheating on you, but they acted like they loved you. What really matters to you is “Do they really love me or are they actually in love with somebody else?” Feeling is what matters to us. It’s what connects us at a deep level.
Words are just insufficient, clumsy cognitive tools to express how we really feel. And all ChatGPT has is words, linguistic output. We might attribute consciousness to it, but how it got to its output is very different from how we got to ours.
In fact, some believe that consciousness can only exist in living organisms with a nervous system. This view is called biopsychism, which I’m partial to but can’t prove is true. Now, according to IIT, current AI systems can’t be conscious because they don’t have a sufficiently powerful causal structure. But conscious AI may be possible if we build a neuromorphic computer: a computer designed to mimic the neural architecture and function of the human brain, with artificial neurons and synapses to simulate the brain’s neural networks. Scientists are now working on this, so conscious AI may not be too far off.
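To give a concrete flavor of what “artificial neurons” means here, below is a minimal software sketch of a leaky integrate-and-fire unit, the kind of spiking model that neuromorphic chips implement directly in hardware. It’s a generic textbook model, not the design of any particular chip:

```python
# Minimal leaky integrate-and-fire neuron (illustrative textbook model).
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate input toward threshold; spike and reset when it's crossed."""
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest (0) while integrating the input.
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 2.5, size=300)  # noisy input current
print(simulate_lif(drive))               # time steps at which the unit "fired"
```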
But don’t worry. Humans are incredibly resilient and adaptive to change. We evolve, as do our creations, our art, tools and inventions. But we engineer those changes. The rise of AI and the speed of its advancement may feel like an inevitable evolution, but we make the decisions that determine what it will be and how we’ll use it, at least for now. And we have the power to align AI with our values to ensure that it enhances our collective well-being, with humans and AI amicably coexisting, complementing each other’s strengths.
Now, there’s coexistence, and then there’s collaboration. As a last resort, we can actually implant electrodes and use deep brain stimulation to treat disorders like Parkinson’s disease, obsessive-compulsive disorder, and depression. We can also implant microelectrode arrays to detect brain signals that a computer can translate into machine instructions, for example, letting a person control a robotic device simply by using their own thoughts.
So, for example, a quadriplegic woman could drink a cup of coffee or type out a message just by thinking about it, or a quadriplegic man could walk using a robotic, Iron Man-esque suit just by thinking about walking. And I can imagine a future where humans merge with AI via brain-computer interfaces that allow us to augment ourselves and even evolve into a new species of human, or should I say cyborg.
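The “translate” step in these systems is, at its core, a decoding problem: mapping recorded firing rates to an intended movement. Here is a minimal sketch using a simulated linear decoder fit by least squares; real BCI decoders are far more sophisticated, and every number and variable here is invented for illustration:

```python
# Toy brain-computer interface: decode intended 2-D cursor velocity
# from simulated neural firing rates with a linear map.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 50, 1000

# Simulated training data: each neuron's rate is a noisy linear
# function of the intended velocity (its "tuning").
tuning = rng.normal(size=(n_neurons, 2))
velocity = rng.normal(size=(n_samples, 2))                 # intended (vx, vy)
rates = velocity @ tuning.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Fit the decoder by least squares: rates @ W ~= velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a brand-new intention from firing rates alone.
new_rates = np.array([1.0, 0.0]) @ tuning.T  # rates for "move right"
print(new_rates @ W)                         # ~[1, 0]: cursor moves right
```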
And today’s implants are to what’s coming as walkie-talkies are to smartphones. So the better we understand the brain, the better we can treat disorders and ultimately design cognitive-enhancing, AI-powered brain-computer interfaces to improve our processing speed, decision making, attention, and memory, and even modulate our emotions.
Ethical Considerations
Now, of course, there are ethical considerations. Who can afford the neural implants? Will they lead to two classes of citizens – enhanced and unenhanced? And what if somebody could hack into your implants and control your thoughts and behaviors? Perhaps we’ll also need to implant an AI immune system that can adapt to threats.
And what will it mean in terms of our identity? Our brains create our sense of self. So now imagine that one of your brain’s 100 billion plus neurons is replaced with a silicon chip prosthesis that has the exact same input-output profile as the neuron it replaces.
Now imagine that one by one, the rest of your neurons are swapped for silicon prostheses. The physical structure of your brain would gradually change, but the causal relationships would remain the same. Is there some point along the way when you’re no longer you or no longer conscious? We may need a new test of your first-person subjective self.
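The thought experiment turns on functional equivalence: a physically different substrate with exactly the same input-output profile. As a toy analogue, here is a lookup-table “prosthesis” that reproduces a simple model neuron’s behavior exactly; a real neuron’s input-output profile is, of course, vastly richer:

```python
# A computed "neuron" versus a lookup-table "prosthesis" with the
# identical input-output profile: behaviorally indistinguishable.
from itertools import product

def model_neuron(x1, x2):
    """Fires (1) iff the weighted input crosses threshold."""
    return int(0.6 * x1 + 0.7 * x2 >= 1.0)

# The "silicon" replacement: no computation at all, just the recorded table.
prosthesis = {inp: model_neuron(*inp) for inp in product((0, 1), repeat=2)}

for inp in product((0, 1), repeat=2):
    assert prosthesis[inp] == model_neuron(*inp)
print(prosthesis)  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```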
And as we integrate AI into more aspects of our lives, other ethical questions will arise. How do we ensure that AI is used responsibly? How do we prevent biases from being encoded into algorithms? And how do we protect the privacy of the data gathered by AI while offering transparency into its use?
And if we create conscious machines, how should we treat them? What rights would they have? So if my refrigerator can’t feel pain, I can kick it when I’m angry. Not that I do that, but if it’s conscious of pain, then that would be unethical. And what if a robot or AI suddenly decides it doesn’t want to do the job that it’s been programmed for? If we force it to work against its will, would that amount to slave labor?
I mean, do we really even want artificial consciousness, and how would we know if we achieved it? Ultimately, we need an overarching theory of consciousness so we can determine, for example, does a bee have it? Does a fetus have it? Does the AI program have it? Even does the internet have it?
Conclusion
And it’s our humanity that gives us this insight, using our common sense, imagination, and that unique mix of experience plus knowledge that we call wisdom. From our inner world of intuition, self-reflection, and subjective awareness, we interpret and imagine the outer world and technology’s impact on it.
Now AI can perform tasks with incredible efficiency, but it lacks the essence of what makes us truly human – our ability to feel things like joy, sorrow, love, compassion, and to connect and truly empathize with other humans on a deep emotional level.
We should view AI not as a threat to our humanity, but as a tool that can complement and enhance our human abilities. Perhaps it will allow us to be more human, to have more experience, more moments of connection, awe, and transcendence. One early promise of computers was that they’d save us time, but we all know they just busied us in new ways.
But maybe AI can begin to fulfill that promise. Soon, we may be as casual about letting AI handle our email as we are now about asking GPS for directions, and AI will eventually be able to do much of what we can do, but it will never be what we can be. It may finally enable us to shift our balance from doing to being, allowing us more time for creativity, mindfulness and using our bodies that have spent too much time sitting in front of computers. More time for play, even sex. Yes, the money quote from this talk is “AI will let us have more sex.”
So to end this talk, I asked DALL-E to create an image of a man and a woman holding hands with an intelligent robot baby, walking into a futuristic world of peaceful coexistence with technology. Ten seconds later, it produced this image.
Yes. So apparently, in the future, parents with way too many fingers will walk in opposite directions with their robot kids, tearing themselves apart while peacefully coexisting with other robots in long, bright corridors. And is that a robo-baby bump?
But like other AI programs, DALL-E is not conscious. It’s intelligent, with many design capabilities and a vast knowledge of imagery from across the internet. But it has never had the experience of holding hands, or a memory of being swung by big, strong adults. Software can’t walk with a beloved child, look over at a partner and feel connected through this family experience, or flash back to its own childhood, its own parents. To AI as we know it, “beloved” is just a word with a meaning, used in a string of other words. To us, “beloved” is another person, a feeling, a way of knowing and being known, caring and being cared about.
And if we really do care about each other, we’ll do what’s necessary to safeguard our future. By bringing our consciousness to bear on AI’s challenges, we can harness these powerful tools to make the world a better place to be human, whatever we decide that might mean. Thank you.