
AI, Man & God: Prof. John Lennox (Full Transcript)

John Anderson (Former Deputy Prime Minister of Australia) was joined on his podcast by mathematician, bioethicist and Christian apologist Professor John Lennox. This discussion centers on the current and future impacts of artificial intelligence technology. Below is the full transcript of the podcast:


JOHN ANDERSON: It’s an extraordinary privilege for me to be in Oxford and able to talk personally to Professor John Lennox, Emeritus Professor of Mathematics at the University of Oxford, for years a Professor of Mathematics at the University of Wales in Cardiff. He’s lectured extensively all over the world. He’s written widely. Interestingly, he’s spent a lot of time in Russia and Ukraine after the collapse of Communism and is deeply grieved to see what is happening there, and the idea that young men on both sides, whom he and others have taught and mentored, may now be fighting one another into the dust in these dangerous times in which we live.

But amongst his many writings, he’s gifted us a very useful book on artificial intelligence and the future of humanity called 2084, which he tells me he’s already updating. The title says a lot, in the sense that we all know about 1984. I think you’re telling us that there are some troubling things coming up.

John, thank you so much for your time.

PROF. JOHN LENNOX: It’s my pleasure to be with you.


JOHN ANDERSON: Can we begin — over the past two years during the COVID pandemic, but also with climate change, we hear this phrase a lot in Australia and it seems internationally, trust the science. Strikes me that in our allegedly secular age, trust and faith are still seen as pretty important. We haven’t walked away from them.

Do you think those who are accused of not trusting the science are frequently seen as somehow rationally and even morally deficient? In an age of crisis, is science becoming a new savior in inverted commas?

PROF. JOHN LENNOX: Well, trusting the science is fine if it’s kept to the things of which science is competent. But unfortunately, over the past few years, there has developed a trust in science that we now call scientism, where science is regarded essentially as the only way to truth, the only option for a rational thinking person, and everything else is fairy stories and all the rest of it.

And I take great exception to that because it’s plainly false. It’s false logically because the very statement science is the only way to truth is not a statement of science. And so if it’s true, it’s false. So it’s logically incoherent to start with.

But going a little bit more into it, it has had huge influence because of people like the late Stephen Hawking, for example, who wrote in one of his books, he said that philosophy is dead and it seems now as if scientists are holding the torch of truth. And that’s scientism.

The irony of it is, of course, that he wrote it in a book where it’s all about philosophy of science. And it’s pretty clear that Hawking, brilliant as he was as a mathematical physicist, really is a classic exemplar of what Albert Einstein once said, the scientist is a poor philosopher.

And my response to it would very much be couched in the kind of attitude that Sir Peter Medawar, a Nobel Prize winner here in Oxford, once expressed. He wrote that it is so very easy to see that science, meaning the natural sciences, is limited in that it cannot answer the simple questions of a child: Where do I come from? Where am I going to? And what is the meaning of life?

And it seems to be immensely important that we recover that. And what Medawar went on to say is we need literature, we need philosophy, and we need theology as well, in my view, in order to answer the bigger questions.

Now, the late Lord Sacks, brilliant philosopher, he was the chief rabbi of the UK and the Commonwealth and so on.

JOHN ANDERSON: And one of the guests on this series.

PROF. JOHN LENNOX: And one of the guests on this series, well, I’m delighted to hear it, he once wrote a very pithy statement that I found very helpful. He said, you know, science takes things apart to understand how they work, and I suppose to understand what they’re made of. Religion puts them together to see what they mean.

And I think that encapsulates the danger in which we’re standing. Science has spawned technology. We’ve become addicted to technology, particularly the more advanced forms of it, like AI in my book, like virtual reality, the metaverse, all this kind of stuff. We’ve become addicted to it, but we’ve lost a sense of real meaning.

And in particular, we’ve lost our moral compass. Einstein, again, to quote him, made the point long ago. He said, you can speak of the ethical foundations of science but you cannot speak of the scientific foundations of ethics. Science doesn’t tell you what you ought to do. It will tell you, of course, if you put strychnine in your granny’s tea, it will give her a very hard time, in fact, it would kill her. But it can’t tell you whether you ought to do it or not to get your hands on her property.

And so we’re left in a scientistic moral vacuum. And therefore, I feel very strongly that as a scientist of sorts, I need to challenge this. Science is marvelous, but it’s limited to the questions it can handle. And let’s realize it does not deal with the most important questions of life.

And they’re the questions: Who am I? What does life mean? And where do we get a moral compass?


JOHN ANDERSON: Before we come to artificial intelligence, then I’d just like to explore what you’ve been talking about a little bit with reference to Britain. I love history. I’ve always massively admired Britain. And I know Britain seems to be into self-flagellation on just about every issue you can think of at the moment, the decrying of its own cultural roots.

But to my way of thinking, I think in many ways, Britain has been a force for unbelievable good in the world. I really do. I mean, as an Australian, I would not live in a free country if it hadn’t been for the prime minister of this country standing up when no one else did in 1939, just one minor example.

But I come here now and I wonder just what the British people believe in. So massively shaped by Christian faith and arguments, sometimes very ugly, over a long period of time, but nonetheless profoundly shaped. The Times reported just a couple of years ago that we’ve reached the point where 27% of Britons believe in God, with an additional 16% believing in a higher power. Among the British as a whole, 41% say they believe there is neither a God nor a higher power.

Interestingly, among young people in the UK, the number who said they believe in God rose a little. Nonetheless, what you’ve got here is one of the most secular societies on earth, which not so very long ago was one of the more Christian. What’s responsible? Is it tied to a sort of false faith in science, amongst other things? Or is it just that it’s too hard? Or is it that the wars convinced people, who saw two Christian nations fighting, both praying to the same God for victory? How did the country you’ve lived in all your life morph so badly into a state of unbelief, do you think?

PROF. JOHN LENNOX: I find this a complex and difficult issue because I see different strands in it. If you pick up on the science side, you go back to Isaac Newton and he gave us a picture of the universe that was very much what’s called a clockwork picture, the universe running on fixed laws that were, according to Newton, originally set in place by God.

But it was a universe that essentially now ran on its own. And you can see that that, in the 18th century, particularly favored what’s called Deism. That is, there is a God, but He’s hands off. He started it running and now it runs and it runs very well. And you can see with that in the collective psyche, particularly in the academy, it very rapidly led to questions of, is God really necessary?

Now you add that to what was happening on the continent with the Enlightenment and the corrupt church professing Christianity, utterly corrupt, and the reaction against that, which was fuel to the fire, really, of a rising secularism and atheism.

And then you add to that what was happening in the days around the time of Charles Darwin, where you had Huxley, who was an atheist, and he resented these clergymen, who were actually some very good philosophers. Wilberforce, actually, was a much brighter man than many people think, as Darwin pointed out.

But Huxley, in the UK, wanted a church scientific. He wanted to turn the churches into temples to Sophia, the goddess of wisdom, that kind of idea. So you’ve got all of that, and then you add to it the vitriolic anti-God sentiments, not just atheism but anti-God feeling, led for quite a long time by Richard Dawkins and others, and that’s had huge influence on young people. It’s one of the reasons I entered the fray, actually. And then the media come into this.

It’s even more complicated because within the media, the dominant view, and I think the BBC actually stated this at one time, that they favored naturalism, the philosophy that nature’s all there is, and there’s no outside, there’s no transcendence, there’s no God. So you’ve got all of that, and against it, you have a group of people who are often cowed into letting their faith in God become private. This is the tragedy of secularism, and you get into that, the cancel culture, the woke culture, all this kind of stuff, where I’ve got to affirm everything.

Everything’s equally valid; you’ve got relativism and postmodernism, at least in things that people think don’t matter. You’ll never meet a postmodern business person who goes to a bank manager, says, I’ve got $5,000 in the bank, and, when the bank manager says, well, actually, you owe the bank 10,000, replies, no, that’s only your truth. No, that doesn’t work in the business world.

But still, you’ve got this pressure of relativism, and so you end up, as Michael Buerk put it a few years ago, talking about faith in God in Britain, with a first generation that doesn’t have a shared worldview. Now, there’s still a Christian influence, as even atheists recognize, but we’ve gone a long way in rejecting and abandoning God.

And then there’s the entertainment industry that will fill everybody’s vacuum with noise, and we entertain ourselves to death. So your question is extremely complex, and it would need a more observant person than me to give you a full answer. It’s a huge mix of things, and any individual person may be affected by them in completely different ways.


JOHN ANDERSON: The reason that it’s important, I think, to set that up is that we now come to what I really wanted to hear your views on, artificial intelligence, because science is giving us extraordinary capabilities. But will we simply be seduced by it, in the sense that artificial intelligence is rapidly creating things that are marvelous, that we want to enjoy, that may satiate us, may dull us, while aspects of the emergence of AI could be very dangerous?

But before we start to explore that, for ordinary people in the street like me who are not living with this — well, I am living with this stuff, but I don’t know where it might go. We need to define some terms. What is AI? What is, I think you call it, narrow AI, of the sort that we’re quite familiar with: limited intelligence, but highly focused on narrow areas.

What is artificial general intelligence, and where might that go? There are a number of issues there, and then there’s the whole issue of transhumanism. So can we start with, very broadly: AI is what, John? How would you explain it to a layman? We’ve all heard the term.

PROF. JOHN LENNOX: Oh, sure. Well, the first thing to realize is that the word ‘artificial’ in the phrase artificial intelligence is to be taken seriously. And that’s not due to me, it’s due to one of the pioneers of the subject, who happens to be a Christian.

And the point is this, and we’ll take a narrow AI system first, because it’s much easier to explain. A narrow AI system is a system involving a high-powered computer, a huge database, and an algorithm that does some picking and choosing, whose output is something that would normally require human intelligence to produce.

That is, if you look at the output, you would normally say that it has taken an intelligent person to do that. So let’s take an example that is very important these days in medicine, and that’s interpreting X-rays. So we have a database. Let’s say it has 1 million X-rays of lungs that are infected with various diseases, say related to COVID-19. They are then labeled in the database by the world’s top experts.

Then they take an X-ray of your lungs or my lungs, and the algorithm compares the X-ray of your lungs with the million very rapidly. And it produces an output which says, John Anderson has got that disease. Now at the moment, that kind of thing, which is being rolled out, not only in radiology, but all over the place, will generally give you a better result than your local hospital will. And that’s hugely important and hugely valuable.

But the point is, the machine is not intelligent. It’s only doing what it’s programmed to do. The database is not intelligent. The intelligence is the intelligence of the people that designed the computer, know about X-rays and know about medicine. But the output is what you would expect from an intelligent doctor. So it’s in that sense, artificial.

It’s a system that is narrow in the sense that it only deals with one thing. And endless kinds of systems are being rolled out around the world. Some of them, as you mentioned, are extremely beneficial. Narrow AI has been used in the development of vaccines, and the spinoff from that technology is enormous in drug development. And on and on it goes. I can give you dozens of examples from my book. So that’s where we start.

Now, we are familiar with it and it’s worth giving a second example of it. Because most of us voluntarily are wearing, first of all, a tracker, it’s called a smartphone. It knows where we are. It could be even recording what we’re saying. But what it does do, of which we’re all aware, is if we, for example, buy a book on Amazon, we very soon get little pop-ups that say people that bought that book are usually interested in this book.

And what’s happening there is the AI system is creating a database of your preferences, your interests, your likes, your purchases, and is using that to compare with its vast database of available things for sale so that it predicts what you might like. So this is of huge commercial value.

And it leads to something else which most of us don’t know about, and we can come to it later, but I’ll mention it now: it’s called surveillance capitalism. There’s a book of that name, The Age of Surveillance Capitalism, by an emerita professor at Harvard called Shoshana Zuboff, and it’s regarded as a very serious book, because the point she’s making is that global corporations are using your data and, without your permission, selling it off to third parties and making a lot of money out of it. And that raises deep privacy issues. So now you’re straight into the ethics. So that’s narrow AI.

JOHN ANDERSON: Okay, so let’s stay on narrow AI and extend our road a little bit further down towards broader use. You’ve just talked about us being unaware in a way of how we’re being surveilled.


And it was right here in Oxford, in a talk I heard (I think it may have been you who made the point, but I can’t remember), that what’s happening in China using artificial intelligence to surveil people is astonishing, but that in many ways all that information is being collected in the West as well. It’s just not collated in the same way.

PROF. JOHN LENNOX: That’s correct. And this is perhaps one of the scariest aspects of it. What we’re talking about here is facial recognition by closed circuit television. Well, it starts with facial recognition, but we’ve now got to the stage where in China in particular, they can recognize you from the back by your gait, by all kinds of things.

And what has happened is, and you can see the positive benefit, police want to arrest criminals or thugs or rowdies, even in a football crowd. And so, using facial recognition technology, they can pick a person out and arrest him or her. Well, okay.

But what can be used for good purposes in that sense, in keeping law and order, can also become, particularly in an autocratic state, an instrument of control. And here’s the huge dilemma which people try to solve: how much of your privacy are you prepared to sacrifice for security? There’s a tension between those two things.

Now, in China, you mentioned, and you’re probably thinking about Xinjiang, where you’ve got a minority, a Muslim minority of Uyghur people. The surveillance level on them is unbelievable. Every few hundred meters down the street, they have to stop, they have to hand in their smartphones, the smartphones are loaded with all kinds of stuff by the government. Their houses have QR codes outside them as to how many people live there and all this kind of thing.

And I don’t know how many, it’s way over a million, I believe, are being held in re-education centers as a result of what has been picked up by artificial intelligence systems. And the suspicion is that the culture is being destroyed and eradicated. That’s the one hand, that’s in one particular province.


But elsewhere in China, we now have the social credit system, which apparently will be rolled out across the entire country. We’re each given, say, you and I, to start with, let’s say 300 social credit points. And we’re being trailed. If we fail to put our rubbish trash can out at night, there’ll be marks against us. If we go somewhere dubious or mix with someone whose political loyalties are suspect, we’ll get more negative points.

On the other hand, if we pay our debts on time and go green, so to speak, and all this kind of thing, we will amass more credit points. And then if we are going negative, the penalties kick in. We’ll discover we can’t get into our favorite restaurant. We’ll discover we don’t get that promotion or don’t even get that job we apply for, or that we can’t travel, or that we can’t even have a credit card.

And this is being rolled out, and the list of penalties and things that have actually been recorded is just very serious.

Now, what amazed me when I first came across this was the fact that many people welcomed it. They think it’s wonderful. They boast, I got a thousand points. How many have you got? And they don’t realize that the whole of life is becoming controlled, ostensibly in the interest of having a healthy society.

So there it is, talk about 1984. Now, this is not futuristic speculation. This is already happening. George Orwell, you mentioned him, who wrote 1984, talked about Big Brother watching you, and the technology is now actually doing it. This is narrow AI. This is not futuristic in any way. It’s what’s actually happening at the moment.

And you mentioned briefly the fact that all this stuff exists in the West, except, and the point has been made forcibly, it’s not quite yet under one central authority and control, but it is coming. We have credit searches. We have all kinds of stuff that is beginning to creep in in the US and in the UK, and I presume also in Australia.

And we even have police forces here, I believe, who want the whole caboodle, who want to be able to exert a much more serious level of control. And it is frightening, because what it does for human rights is, well…

JOHN ANDERSON: So it occurs to me that, you know, I love history, as I’ve mentioned. Authoritarian regimes have collapsed under their own weight. Typically, the people have risen up one way or another, and there’s been an overturning. But we’ve never had autocratic regimes with this surveillance capacity. You know, there are an estimated 400 million closed-circuit television cameras in China. That’s one for about every three people, and it’s mind-boggling.

PROF. JOHN LENNOX: Oh, it is mind-boggling. And even here in the UK, what I’m told is that you’re on a closed-circuit TV camera every five minutes when you’re moving around. So it is very serious, and of course, the irony is as I hinted at earlier, here we are with our smartphones that have got all these capacities, certainly at the audio level, and we’re voluntarily wearing them. So we’re voluntarily ceding part of our autonomy and our rights, really, to these machines when we don’t really know what is being done with all the information.

So we have a huge problem, and someone has said we’re sleepwalking into all of this so that we’re captured by it, we’re imprisoned by it, and we wake up too late because the central authority has got so much control that we cannot escape anymore.


JOHN ANDERSON: So let’s go back to where I started. Science is blessing us, because a lot of these things are fantastic, you know, with the incredible technology and capabilities; you’ve alluded to some of the useful things. I mean, I love the way in which I can, in my car, say, hey, Siri, call my wife. I mean, that’s just fantastic.

But my question about what we now believe goes to the heart of who do we think we are, what is our status, on what basis will we be alert enough to recognise we need to make tough decisions, and then on what basis will we make the ethical decisions around how far this goes?

I know it’s a complicated question, but there’s another element to it because we haven’t even got into general artificial intelligence yet. We’re still talking, as I understand it, about narrow artificial intelligence, just masses of it.

PROF. JOHN LENNOX: Yes, we are.

JOHN ANDERSON: Those surveillance cameras and the people at their desks in Beijing collating the information and what have you, there might be a lot of information and a lot of capability, but those cameras can’t think of another task, you know, how to go and bring my boss a cup of coffee. It’s still narrow.

PROF. JOHN LENNOX: That’s absolutely right.

JOHN ANDERSON: And it’s before we’ve got to general intelligence.

PROF. JOHN LENNOX: Yes, and we’ve got to realise several things. First of all, the speed of technological development outpaces ethical underpinning by a huge factor, an exponential factor.

Secondly, some people are becoming acutely aware that they need to think about ethics. And some of the global players, to be fair, do think about this because they find the whole development scary. Is it going to get out of control?

And someone made a very interesting point. I think it was a mathematician who works in artificial intelligence. She was referring to the book of Genesis in the Bible. She said, God created something that got out of control: us. We are now concerned that our creations may get out of control.

And I suppose, in particular, one major concern is autonomous or self-guiding weapons. And that’s a huge ethical field. Here’s a man sitting in a trailer in the Nevada desert, and he’s controlling a drone in the Middle East, and it fires a rocket and destroys a group of people. And of course, he just sees a puff of smoke on his screen and that’s it done. There’s a huge distance between the operator and that lethal mechanism.

And we only go up one more step from that, where these lethal flying bombs, so to speak, control themselves. We’ve got swarming drones and we’ve got all kinds of stuff. Who’s going to police that? And of course, every country wants them, because they want a military advantage. So we’re trying to police that and to get international agreement, which some people are trying to do.

Now, I don’t think we should be too negative about this. And I’m cautious here, but we did manage, at least temporarily, who knows what’s going to happen now, to get nuclear weapons at least controlled and partly banned. So there’s been some success. But with what’s happening in Ukraine at the moment with Putin and so on, who knows whether he could fire a tactical nuclear weapon, or whether such a weapon could be controlled autonomously and make its own decision. And then where do we go from there?

And these things are exercising people at a much lower level, but it’s still the same. How do you write an ethical program for self-driving cars?

JOHN ANDERSON: Yeah. So that when there’s an accident that can’t be avoided…

PROF. JOHN LENNOX: Yes, whom do you knock down? You see, it’s the switch-tracks dilemma again, the one that’s put to students of ethics, and it’s very interesting to see how people respond. The switch-tracks dilemma is simply that you have a train hurtling down a track, and there are points that can direct it down the left-hand or the right-hand side.

Down the left-hand side, there’s a crowd of children stranded in a bus on the track. On the right-hand side, there’s an old man sitting in his cart with a donkey, and you are holding the lever. Do you direct the train to hit the children or the old man? That kind of thing. But we’re faced with that all the time, and it’s hugely difficult. Without going near AGI yet.


JOHN ANDERSON: Yet. And let’s come to AGI. What is AGI? Because up until now, we’ve been talking about intelligence that isn’t human. It can’t make judgements. It can’t switch tasks, it can’t multitask. It can just be built up to do one enormous thing, even though that might be massively intrusive, as we’ve talked about with surveillance technology.


JOHN ANDERSON: But now we’re talking about something different altogether. General.

PROF. JOHN LENNOX: Yes, we are.

JOHN ANDERSON: General intelligence means.

PROF. JOHN LENNOX: Well, there’ve been several things. The rough idea is to have a system that can do everything and more that human intelligence can do. Do it better, do it faster and so on. A kind of superhuman intelligence, which you could think of possibly as, at least in its initial stages, being built up out of a whole lot of separate, narrow AI systems, building them up. That will surely be done to a large extent.

But research on AGI, of course, is the stuff of dreams, the stuff of science fiction, so people absolutely love it. And interest in it moves in two very distinct directions. There’s, first of all, the attempt to build machines to do it, that is, machines based on silicon, computers, plastic, metal, all that kind of stuff.

And then there is the idea of taking existing human beings and enhancing them with bioengineering, drugs, all that kind of thing, even incorporating various aspects of technology so that you’re making a cyborg, cybernetic organism, a combination of biology and technology, to move into the future, so that we move beyond the human. And this is where the idea of transhumanism comes in, moving beyond the humans.

And of course the view is, of many people, that humans are just a stage in the gradual evolution of biological organisms that have developed according to no particular direction through the blind forces of nature. But now we have intelligence, so we can take that into our own hands and begin to reshape the generations to come and make them according to our specification.

Now that raises huge questions. The first one is, of course, as to identity. What are these things going to be? And who am I in that kind of a situation?

Now, AGI, I mentioned, is something that science fiction deals with a lot. The reason I take it seriously is it’s not only science fiction writers that take it seriously. For example, one of our top scientists, possibly the top scientist, who is our Astronomer Royal, Lord Martin Rees, he takes this very seriously. He says, in some generations hence, we might effectively merge with technology.

Now that idea of humans merging with technology is again very much in science fiction. But the fact that some scientists are taking it seriously means in the end that the general public are going to be filled with these ideas, speculative on the one hand, but serious scientists espousing them on the other, so that we need to be prepared and get people thinking about them, which is why I wrote my book.

And in particular, in that book, I engaged not with scientists, but with a historian, Yuval Noah Harari, an Israeli historian.

JOHN ANDERSON: Can I interrupt for a moment?

PROF. JOHN LENNOX: Yes, of course you can.

JOHN ANDERSON: Can I quote something that he said, just because it frames this so beautifully? I’m glad you’ve come to him. He actually said this: We humans should get used to the idea that we’re no longer mysterious souls, we’re now hackable animals. Everybody knows what being hacked means now.

And once you can hack something, you can usually also engineer it. I’ll just put that in for our listeners as you go on.


PROF. JOHN LENNOX: Yeah, well, sure. That’s a typical Harari remark. And he wrote two major bestselling books, one called Sapiens, that’s Homo Sapiens, and the other, Homo Deus. And it’s with that second book that I interact a great deal, because it has huge influence around the world.

And what he’s talking about in that book is re-engineering human beings, and producing Homo Deus spelt with a small d. He says, think of Greek gods turning humans into gods, something way beyond their current capacities and so on. Now, I’m very interested in that, from a philosophical and from a biblical perspective, because that idea of humans becoming gods is a very old idea. And it’s being revived in a very big way.

But to make it precise, or more precise, Harari sees the 21st century as having two major agendas, according to him. The first is, as he puts it, to solve the technical problem of physical death, so that people may live forever. They can still die, but they don’t have to. And he says, technical problems have technical solutions, and that’s where we are with physical death. That’s number one.

The second agenda item is to massively enhance human happiness. Humans want to be happy, so we’ve got to do that. How are we going to do that? Re-engineering them from ground up, genetically, every other way. Drugs, et cetera, et cetera. All kinds of different ways. Adding technology, implants, all kinds of things.

Until we move humans on from the animal stage, which he believes arose through no plan or guidance. We, with our superior brain power, will turn them into superhumans. We’ll turn them into little gods. And of course, then comes the massive range of speculation. If we do that, will they eventually take over? And so on and so forth.

So that is transhumanism connected with artificial intelligence, connected with the idea of the superhuman. And people love the idea. And you probably know there are people, particularly in the USA, who’ve had their brains frozen after death and hope that one day they’re going to be able to upload their contents onto some silicon-based thing that will endure forever. And that will give them some sense of immortality.


Now, notice those two things, John: solving the problem of physical death, and re-engineering humans to become little gods. That has all to do with wanting immortality.

And as a Christian, I’ve got a great deal to say about that, because I believe that the transhumanist desire for those things is a parody of what Christianity is actually all about.

JOHN ANDERSON: Doesn’t it, to some extent, though, reflect that I think the very great majority of us are conscious that deep down we don’t want to think we’ll come to an end?

PROF. JOHN LENNOX: Oh, no, we don’t.

JOHN ANDERSON: I’m an individual who actually has no great aspiration to live to an advanced old age.

PROF. JOHN LENNOX: Well, I’m the same.

JOHN ANDERSON: Frankly, I don’t want to —

PROF. JOHN LENNOX: Not in this situation, no.

JOHN ANDERSON: That's not to say I don't enjoy life; it doesn't mean that at all. It just means I don't aspire to great physical old age, frailty, and what have you.

And I have a different perspective on what happens after that. But deep down, I don’t want to think it ends with that physical death. And I think that’s pretty much hotwired into all of us.

PROF. JOHN LENNOX: I think it’s hardwired, and that’s important. This business of what’s hardwired into human beings, version 101, so to speak, I think is vastly important.

Many years ago, I came across that idea in the moral sense. C.S. Lewis talks about it in his book The Abolition of Man, and it's relevant to what we're discussing at the moment. There's an appendix at the end where he points out that if you look at every culture around the world, they may differ, but they've got certain moral rules in common. It looks as if morality is hardwired, and I believe it is, by a benevolent Creator.

But now we come up to this, and we see that there’s hardwiring again at this particular level. God has set eternity in the human heart. Now, of course, that’s a theistic perspective, but if you take the atheistic take on it, then you’ve got to explain where it comes from.

And again, I found C.S. Lewis, as always, right on the money, so to speak. He makes the point, and I'm paraphrasing slightly, that it would be very strange to find yourself in a world where you got thirsty and there was no such thing as water. That longing is a very powerful thing, and C.S. Lewis has written a great deal about it, including a brilliant essay called The Weight of Glory.

That longing for another world implies, and these are not his words, but they’re his sentiments, that we were actually made for another world.

Now, I feel that the transhuman quest is an expression of the fact that we’re hardwired with a longing for something transcendent, and it’s trying to fulfill it. And I have reasons for thinking it won’t do that, but you may want to ask about that later.


JOHN ANDERSON: Well, I think we're probably coming in to land. The thing that I want to explore with you for a moment is that a lot of people are at the point where they won't make that effort. It requires a lot of energy, and quite a bit of anguish, to say, I'm going to make some tough decisions about what I really believe.

And it seems to me that this whole area of artificial intelligence and the chance that we may reach the capacity to literally destroy ourselves, requires us to think long and hard and to make judgments that will have to be based, if you like, on faith. You can’t know exactly what’s going to happen.

So you see, if you want to say it requires a lot of faith to think through whether I believe in a God, I would have thought this whole area presents just as great a challenge. Who am I? How am I going to work this out? Do I put some ethical framework down? Or do I just sit in the pot and let the water gradually boil until it's too late?

PROF. JOHN LENNOX: Yes, I think this is a very important issue we've come to. There's such confusion in the world about what faith is. And that's mainly the fault of people like Dawkins and Hitchens, who actually didn't know what they were talking about, because they redefined faith as a religious word that means believing where there's no evidence.

And what they fail to see is that that's a definition of blind faith, which only a fool would get involved with. The word faith in English comes from the Latin fides, from which we get fidelity, and it conveys the whole idea of trustworthiness. And trustworthiness comes from having a backup in terms of evidence.

A bank manager will only have faith in you if you prove you’ve got the collateral. You have to bring the evidence. We’d be foolish to trust people without evidence.

So evidence-based faith is something everyone understands, but what they don't realize is that it's essential to science, and it's essential to genuine Christian faith in God. I get leery these days, John, of using the word faith on its own, because people think you're talking about religion.

Sometimes they say to me, will you give a talk on faith and science?

I say, do you want me to talk about God?

Oh, yes.

Well, I say, it’s not in your title. I can talk about faith in science without even mentioning God, because scientists have got a basic credo, things they believe. They’ve got to believe that the science can be done. They’ve got to believe that the universe is rationally intelligible. That is their faith, and no scientist could be imagined without it, as Einstein once said. So if you want to talk about faith as faith in God, please call it faith in God, or else we’re going to get very confused.

Now, coming back to this, you are absolutely right. This is going to force us, whether we like it or not, to do some hard thinking and to re-inspect and recalibrate our worldview, because our attitude to these things depends on our worldview, our set of answers to the big questions of life. What is reality? Who am I? What’s going to happen after death? And all those kinds of things, they’re coming out in this area. We’re being forced to think about them.

And as you say, we can sit like the frog in the kettle when the water's boiling and pretend that nothing's happening, but we can't afford that. That isn't a luxury. That's suicidal.

There's a book called The Suicide of the West, and the trouble is exactly that: we're just not thinking enough. And I feel called, and I know you're doing this too, to put issues out into the public space so that people can really see them, think about them, and come to conclusions about them.

And as you say, we're nearly landing this discussion. It seems to me that, focusing on what's going on, when I read Harari and other books like this, I say, you know, I can understand what you're looking for. You're looking for something that's very deep and hardwired in us.

But, and I make people smile sometimes when I meet these people, transhumanists, and I say, guys, I respect what you’re after, but you’re too late.

And they say, what?

Too late?

Of course we’re not too late.

I say, you actually are too late.

Take your two problems. One: physical death. I say, now, I believe there's powerful evidence that that was solved 20 centuries ago. It was actually solved before that, but 20 centuries ago there was a Resurrection in Jerusalem. We celebrate it at Easter, and we're just after Easter now.

And as a scientist, I believe it for various reasons that we can discuss. But the point is that if Jesus Christ broke the death barrier, that puts everything in a different light.

Why? Because it affects you and me. How does it affect you and me? Because if that is the case, then we need to recalibrate and take seriously His claim to be God become human.

I said, isn’t that interesting? What are you trying to do? You’re trying to turn humans into gods. The Christian message goes in the exact opposite direction. It tells us of a God who became human. Do you notice the difference? And of course, that actually gets people fascinated.

I say, you are actually taking seriously the idea that humans can turn themselves into gods by technology and so on. Why won’t you take seriously the idea that there is a God who became human? Is that any more difficult to do?

And once you've got that, then I think arguably you need to take seriously what Jesus says. And what He says is the Christian message: He is God become human in order to do what? To give us His life.

If you like, to turn us into what you want us to be. Because the amazing thing about this is that the central message of the Christian faith to you and me is the answer to the transhumanist dream. One, Christ promises eternal life, that is, life that will never cease. And it begins now, not in some mystical, uncertain transhuman future, but right now.

Secondly, because He rose from the dead, He promises that we will one day be raised from the dead to live with Him in another transcendent realm that's even more real than this one. And that's going to be the biggest uploading ever, you see.

So your hope for the future of humanity, changing human beings into something more desirable, living forever and happier, all of that is offered, but the difference between the two is radical. Because firstly, your idea is using human intelligence to turn humans into gods, bypassing the problem of moral evil. You’re never going to do it. No utopia has ever been built.

And of course, you're not thinking straight, because there have been attempts to re-engineer humanity. Crude ones, of course: the Nazi program of eugenics, the Soviet attempts to make a new man.

And what did they lead to? Rivers of blood. The 20th century was the bloodiest century in history. Mind you, what's happening now might make this one a very bloody century too.

But what I'm saying, John, is that I believe even more strongly than ever that we as Christians have a brilliant answer and a message to speak into this that ticks all the boxes. But it means facing moral reality, which is exactly at the heart of the scariness with which some people approach these issues.

JOHN ANDERSON: John, I think we should land the plane there. You couldn’t more clearly articulate the reality of the challenges before us and the need for people to get off the fence and not allow themselves to be satiated by false comforts. The world doesn’t give us that option anymore, in my view.

If we don’t make decisions now, individually and corporately, we’re sunk. I don’t want to subtract or add to that remarkable overview of what we’re facing. So I’ll land the plane and thank you very much indeed.

PROF. JOHN LENNOX: Happy landing.

For Further Reading:

Luca Longo: The Turing Test, Artificial Intelligence and the Human Stupidity (Transcript)

Artificial Intelligence: It Will Kill Us by Jay Tuck (Full Transcript)

Peter Haas: The Real Reason to be Afraid of Artificial Intelligence (Full Transcript)
