
Transcript of John Lennox on 2084: Artificial Intelligence and the Future of Humanity

Read the full transcript of John Lennox’s lecture “2084: Artificial Intelligence and the Future of Humanity,” delivered on December 20, 2023. John Lennox is an Irish mathematician, bioethicist, and Christian apologist.

TRANSCRIPT:

Introduction

JOHN LENNOX: Are you sitting comfortably, ladies and gentlemen? Well, so am I. I learned to do this in Siberia, and then I discovered it was totally biblical. Rabbis sit to teach. And I’m so thrilled at this late stage of my life to have been allowed to come into contact with the Lanier Foundation. It’s a real high point for me to be invited to come amongst you to this facility, which has enormous potential for reaching the world for Christ.

The initiatives are mind blowing, but they’re real, and I believe they’re going to have a wonderful future. It’s been an honor for me to get to know the senior people, Mark and his wife, the two Davids, and they have welcomed me with a warmth that I think is really characteristic of this part of the world.

The Dangerous Intersection of Technology and Humanity

E.O. Wilson, a brilliant American entomologist, said that “the real problem of humanity is the following: We have paleolithic emotions, medieval institutions, and godlike technology, and it’s terrifically dangerous.” Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago – Where do we come from? Who are we? Where are we going? – rationally, we’re on very thin ground.

I disagree with him about the philosophers. We are fortunate, I think, in the Christian world to have some distinguished philosophers, some in this room tonight, who have not abandoned these big questions, which are the famous three questions of Immanuel Kant.

The late Lord Jonathan Sacks, the chief rabbi of the United Kingdom, brilliantly formulated it: “Science takes things apart to see how they work. Religion puts them together to see what they mean.” And tragically, the emphasis over the centuries on the natural sciences and their domination has, as the neuroscientist and polymath Iain McGilchrist has said, left us in a world where we understand how almost everything works and we know the meaning of nothing.

Dystopian Visions of the Future

These scary things aren’t just the product of overheated imagination of science fiction writers. They are coming from some of the most distinguished minds in our generation. Lord Martin Rees, our Astronomer Royal, says: “The abstract thinking by biological brains has underpinned the emergence of all culture and science, but this activity spanning tens of millennia at most will be a brief precursor to the more powerful intellects of the inorganic post human era. So in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos.”

We are all familiar with two famous dystopias written about the future: Aldous Huxley’s “Brave New World” and George Orwell’s “1984.” My title, “2084,” was given to me by a very famous atheist in Oxford, who when he discovered I was writing on artificial intelligence, said, “I’ve got a title for you. You should call it 2084.”

Neil Postman, in his fascinating book “Amusing Ourselves to Death,” contrasted these two analyses of the future: “Orwell warns that we will be overcome by an externally imposed oppression, Big Brother. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity, and history. People will come to love their oppression, to adore the technologies that undo their capacities to think.”

So Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us. And in our current culture, it seems to me that both things are happening simultaneously. We have a love-hate relationship with what is going on.

Contrasting Views on AI’s Dangers

The response to technologically driven oppression in terms of surveillance varies widely. Some people think, like this famous web engine developer: “AI doesn’t mean that the end of humanity is nigh. It’s maths, code, computers built by people, owned by people, controlled by people. The idea that it will, at some point, develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstition, a superstitious hand wave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you because it’s not alive. AI is a machine and it’s not going to come alive any more than your toaster.”

But here is a different view from Geoffrey Hinton, known as the “Godfather of AI,” who left Google recently to be free to speak about what he saw as the dangers: “As soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain. We designed the learning algorithm. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things. One of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that’s something we need to seriously worry about.”

What is Artificial Intelligence?

Artificial intelligence comes in two sorts. There’s narrow AI, and a narrow AI system typically does one thing and one thing only that would normally require an intelligent human being. Radiology gives us a very good example. The AI system for analyzing chest X-ray images uses an algorithm – simply a set of step-by-step instructions embedded in computer software – that selects a match for an X-ray of my lungs from a huge database of other people’s X-rays, labeled by doctors with the diseases they represent, and it then gives a diagnosis.
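The matching process described here – compare a new image against a database of doctor-labeled examples and return the label of the closest match – can be sketched as a simple nearest-neighbor classifier. This is a hypothetical toy illustration of the idea, not the actual radiology software; the feature vectors and diagnosis labels below are invented for the example:

```python
import math

# Toy labeled "database": short feature vectors standing in for X-ray
# images, each labeled with a diagnosis. All values and labels are
# hypothetical illustrations, not real medical data.
labeled_xrays = [
    ([0.9, 0.1, 0.3], "healthy"),
    ([0.2, 0.8, 0.7], "pneumonia"),
    ([0.4, 0.9, 0.2], "tuberculosis"),
]

def diagnose(new_xray):
    """Return the label of the closest labeled example (1-nearest neighbor)."""
    def distance(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(labeled_xrays, key=lambda item: distance(item[0], new_xray))
    return label

print(diagnose([0.85, 0.15, 0.25]))  # nearest to the "healthy" example
```

Real systems use deep neural networks rather than raw nearest-neighbor lookup, but the principle Lennox describes is the same: the diagnosis is drawn from human-labeled examples, not from any understanding the machine has of disease.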

That diagnosis these days will normally be better than what your local hospital can give you. There has been a recent phenomenal development in adaptive radiotherapy for tumors. Artificial intelligence reduces two weeks’ work to five minutes, and that kind of advance is spectacularly beneficial to human beings.

Like any technology, AI is like a sharp knife.