
Transcript: Can We Survive AI? John Lennox on Deepfakes, Death, and the Divine Upgrade

Read the full transcript of mathematician, bioethicist, and Christian apologist John Lennox’s lecture, “Can We Survive AI? John Lennox on Deepfakes, Death, and the Divine Upgrade,” delivered at the Confident Faith Conference, 2024.

John Lennox: Good morning, ladies and gentlemen. I’m delighted you’ve come to hear a geriatric individual.

Living with AI Today

Can we live with AI? We’re already living with AI, with narrow AI. And a narrow AI system is a computer, a huge database, and an algorithm that picks out something from that database.

It simulates intelligence. It’s not really intelligent. And another important thing: it decouples intelligence from consciousness. The narrow AI we live with, and with which we’re very familiar, includes digital assistants, online shopping, and medicine. And I’d just point out that it’s just been announced by Sam Altman, the CEO of OpenAI, that he’s going to transform American Medicare by introducing personal chatbots, which sounds like a very interesting possibility for the future.

Then there are autonomous vehicles, and also the use of AI in facial recognition to pick out criminals from a crowd. All of these things we take upon ourselves voluntarily, but there are people who are forced to live with AI, because the same facial recognition that’s used for picking out criminals in a crowd is being used to suppress minorities, in particular the minority Muslim population of Uighurs in Xinjiang in Northwest China.

That intensive surveillance is intrusive to a colossal extent and is being exported all over China and possibly eventually to the West. Autonomous weapons we’re familiar with, and also the threat to democracy from deep fakes.

Ken McCallum, who’s the director general of MI5 in the UK, says the fabric of society could be undermined by AIs impersonating real people so that it would no longer be possible to distinguish truth from falsehood. Deep fake technology is a threat to democracy and could be harnessed by hostile states to sow confusion and disinformation at the next general election.

The Moral Problem of AI

And when we think of the existing AI, one of its leaders, Yoshua Bengio, wrote about the morality of AI systems: people need to understand that current AI, and the AI we can foresee in the reasonable future, does not and will not have a moral sense or moral understanding of what is right and what is wrong.

And when we think about the question of what we can live with, my mind immediately goes back to two famous dystopias: George Orwell’s Nineteen Eighty-Four and Aldous Huxley’s Brave New World. And a brilliant analysis of those two books was given by Neil Postman, who said Orwell warns that we will be overcome by an externally imposed oppression.

But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity, and history. People will come to love their oppression and to adore the technologies that undo their capacities to think. Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.

And at the present time, we are in a love-hate relationship with our technologies. Both things appear to be happening on a grand scale simultaneously.

The Fundamental Questions

And some time ago, the brilliant entomologist E.O. Wilson wrote this: the real problem of humanity is the following. We have paleolithic emotions, medieval institutions, and godlike technology, and it’s terrifically dangerous. Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago (Where do we come from? Who are we? Where are we going?), rationally, we are on very thin ground.

Those are the three big questions that the philosopher Immanuel Kant asked. E.O. Wilson is wrong, of course. There are many serious philosophers dealing with precisely those questions. Where do we come from? Who are we? And where are we going?

The Path to Artificial General Intelligence

And where we’re going in the opinion of some is towards artificial general intelligence or AGI, which involves building an AI system that equals or exceeds human capacities. In other words, constructing a super intelligence. So, we’re in the realm of transhumanism.

Now, if that were simply a statement coming from the science fiction area, we would probably all ignore it. But this kind of thinking is part and parcel of some of the statements of the most brilliant scientists on our planet.

One of them from this country is Lord Rees, the UK Astronomer Royal. He wrote: we can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us, even though they may have an algorithmic understanding of the way we behaved.


The Spectrum of Expert Opinion

So if we ask the question, can we live with AI? — you will find a whole palette of scenarios, all of them coming from highly intelligent people. So here’s a sample.

Stephen Hawking, the late brilliant genius from Cambridge: the real risk with AI isn’t malice, but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Eliezer Yudkowsky of the Machine Intelligence Research Institute: if somebody builds a too-powerful AI under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

And that kind of view, which is at one extreme, has led to pressure from the Center for AI Safety to mitigate the risk of extinction. This, they write on their website, should be a global priority alongside other societal scale risks such as pandemics and nuclear war.

But on the other hand, there are highly intelligent voices saying something very different. Jobst Landgrebe and Barry Smith wrote a book, Why Machines Will Never Rule the World: Artificial Intelligence Without Fear. And they argue that, just as physics shows the impossibility of constructing a perpetual motion machine, so the mathematics of complex systems shows that it is not possible, nor will it ever be, to engineer AGI machines even at the cognitive level of a crow, and that therefore the singularity will never happen.

As a mathematician, I’m actually very sympathetic to that viewpoint, and it’s supported by one of the most brilliant mathematicians on the planet, who lives in this city: Sir Roger Penrose.

Current Risks vs.