
TRANSCRIPT: What Would A Conscious AI Look Like? – Heather Berlin

Read the full transcript of neuroscientist Dr. Heather Berlin’s talk titled “What Would A Conscious AI Look Like?” delivered at the TEDxKC 2024 conference.


TRANSCRIPT:

DR. HEATHER BERLIN: So I asked ChatGPT for a joke to start my talk on artificial intelligence, and in less than two seconds it came up with “Why was the computer cold? Because it left its windows open.” Yeah, right.

So Seinfeld’s job is safe for now, but let’s not kid ourselves. The age of AI is upon us, and it raises important questions about our humanity and our relationship to technology. How is artificial intelligence different from our own? What can it do better than us, and where can the brain outshine it? Can we merge with AI technology and become cyborgs? Can AI ever be conscious? In short, what does it mean to be human in the age of AI?

Now, artificial intelligence is just the simulation of human cognitive processes by machines. And the gold standard for the quality of this simulation is the Turing test. Since the 1950s, passing it means the AI can fool a human into thinking it’s conversing with another human.

Now, no computer ever met that standard until recently. Today, five supercomputers have passed the Turing test, and the speed of advancement in AI has been remarkable. ChatGPT’s improvement in knowledge, reasoning, and writing currently doubles every four months. The comparable doubling time of the human brain would be 3 million years. Yeah, we’re in trouble. But no, not really.

But our speed in adopting AI is also impressive. So ChatGPT reached 100 million users in less than two months. And of course, we’re using AI every time we open our smartphone or get a product recommendation. Like many previous technologies, think atomic energy, the effect AI will have on humankind depends on how we decide to use it, and that’s up to our brains.

The Human Brain

So this 3-pound piece of matter inside our skull, I think, is the most interesting object in the known universe, because it’s the only object by which the universe is known, at least to us, and we still haven’t fully decoded it. Like the universe, it’s complex and vast. The cerebral cortex alone has 125 trillion synapses. That’s like the number of stars in 1500 Milky Way galaxies.
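The synapse-to-stars comparison can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a round figure of 100 billion stars per Milky-Way-like galaxy (a common low-end estimate, not a number from the talk; the talk’s figure of 1500 galaxies implies roughly 83 billion stars per galaxy, also within published estimates):

```python
# Back-of-the-envelope check of the synapse-to-star comparison.
# Assumption (not from the talk): ~100 billion stars per Milky-Way-like galaxy.
synapses = 125e12           # synapses in the cerebral cortex, per the talk
stars_per_galaxy = 100e9    # assumed star count for one Milky Way

galaxies = synapses / stars_per_galaxy
print(f"{galaxies:.0f} Milky Way galaxies' worth of stars")  # → 1250
```

With the lower assumed star count the same arithmetic gives about 1250 galaxies rather than 1500; the order of magnitude, which is the point of the comparison, is unchanged.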

Now, computers have fewer connections than the human brain, yet they’re capable of doing many things better and faster. And here’s why. The brain evolved via the slow, clumsy process of natural selection, so it’s complex. Its flexible architecture isn’t optimized for calculations. It’s optimized for keeping us alive. Computers, on the other hand, are engineered, not evolved. They’re designed for speed and precision in specific tasks. They’re programmed rather than maturing over decades and have a completely different physical structure.

AI and Consciousness

So as AI gets smarter and smarter, it can do more and more of what we can do, including problem solving and yes, even creativity. Yeah, that’s an AI generated image. But AI doesn’t have experiences. It’s not aware of itself like we are. It’s not conscious. Or is it? And how would we know?

So first let’s define consciousness. So most scholars define it simply as first-person subjective experience. It’s everything you experience when you’re not in a deep, dreamless sleep, under general anesthesia, or dead. Hopefully you’re not any of those. Okay, it’s as simple as feeling the prick of a pin, tasting a sweet strawberry, or feeling elated by a hug from your beloved. You don’t need intelligence, language, or even a sense of self to have it. And it’s this rich, subjective experience that distinguishes us humans from machines, at least for now. And it’s all coded in our brain’s networks of neurons firing, even though we still don’t know exactly how.


But beyond a working definition, we also need a theory of the neural basis of consciousness, and currently there’s no scientific consensus. One leading contender, the Integrated Information Theory of Consciousness, or IIT, says that consciousness is a property of the universe, like gravity, and it emerges when physical systems have what’s called intrinsic causal power. You could think of this like symphonic power: the harmonious, synchronized interplay of many individual instruments creating music.

So IIT argues that the amount of consciousness of a system is identical to the integrated information within it, where the whole is greater than the sum of its interacting irreducible parts.
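IIT’s actual measure of integrated information (Φ) is far more involved than anything shown here, but the “whole greater than the sum of its parts” intuition can be loosely illustrated with mutual information between two halves of a toy two-unit system. This is only a rough analogy under my own simplifying assumptions, not IIT’s formalism; the function name is mine:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two independent coin flips: the whole is exactly the sum of its parts.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

# Two perfectly coupled units: each part's state constrains the other's,
# so the joint state carries information the parts alone do not.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # → 0.0 bits: no integration
print(mutual_information(coupled))      # → 1.0 bit: integrated
```

The coupled system scores 1 bit because knowing either unit fully determines the other, a crude stand-in for the irreducible interdependence IIT identifies with consciousness.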

Now, consciousness and intelligence are not the same thing. Intelligence is about doing stuff: responding to a query, driving a car, planning for the future. In principle, and increasingly in practice, this can be simulated. But in the same way that a digital simulation of a black hole doesn’t suck you into the computer, and a simulation of rain doesn’t actually get your desktop wet, mere computational power can’t fully simulate consciousness.

Subjectivity isn’t rooted in function like speaking, but in physical matter with enormous intrinsic causal power. And even if we could replicate the functions of the brain, the so-called “easy problem,” understanding why and how those functions give rise to consciousness remains an open question. That’s what we call the “hard problem.” And we’re on our way to solving the easy problem. But the hard problem remains hard.

And yet we still assume that other animals, especially mammals, have consciousness, since they have similar hardware: a nervous system, an evolutionary history, and behavior like ours. So a cat will yelp and pull its paw away if you step on it, and it might even hiss at you as if it feels real pain. But the truth is, I don’t even know if you’re conscious. I only know my own firsthand subjective experience, but I assume that you’re conscious for the same reason that I assume the cat is. And like a good neuroscientist, I assume AI is not conscious, because it’s just a simulation.

But AI systems can irresistibly seduce our intuitions into believing that they’re conscious.