Transcript of John Lennox’s lecture “2084: Artificial Intelligence and the Future of Humanity”, Dec 20, 2023. John Lennox is an Irish mathematician, bioethicist, and Christian apologist.
TRANSCRIPT:
Introduction
JOHN LENNOX: Are you sitting comfortably, ladies and gentlemen? Well, so am I. I learned to do this in Siberia, and then I discovered it was totally biblical. Rabbis sit to teach. And I’m so thrilled at this late stage of my life to have been allowed to come into contact with the Lanier Foundation. It’s a real high point for me to be invited to come amongst you to this facility, which has enormous potential for reaching the world for Christ.
The initiatives are mind blowing, but they’re real, and I believe they’re going to have a wonderful future. It’s been an honor for me to get to know the senior people, Mark and his wife, the two Davids, and they have welcomed me with a warmth that I think is really characteristic of this part of the world.
The Dangerous Intersection of Technology and Humanity
E.O. Wilson, a brilliant American entomologist, said that “the real problem of humanity is the following: We have paleolithic emotions, medieval institutions, and godlike technology, and it’s terrifically dangerous.” Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago – Where do we come from? Who are we? Where are we going? – rationally, we’re on very thin ground.
I disagree with him about the philosophers. We are fortunate, I think, in the Christian world to have some distinguished philosophers, some in this room tonight, who have not abandoned these big questions, which are the famous three questions of Immanuel Kant.
The late Lord Jonathan Sacks, the chief rabbi of the United Kingdom, brilliantly formulated it: “Science takes things apart to see how they work. Religion puts things together to see what they mean.”
Dystopian Visions of the Future
These scary things aren’t just the product of overheated imagination of science fiction writers. They are coming from some of the most distinguished minds in our generation. Lord Martin Rees, our Astronomer Royal, says: “The abstract thinking by biological brains has underpinned the emergence of all culture and science, but this activity spanning tens of millennia at most will be a brief precursor to the more powerful intellects of the inorganic post human era. So in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos.”
We are all familiar with two famous dystopias written about the future: Aldous Huxley’s “Brave New World” and George Orwell’s “1984.” My title, “2084,” was given to me by a very famous atheist in Oxford, who, when he discovered I was writing on artificial intelligence, said, “I’ve got a title for you. You should call it 2084.”
Neil Postman, in his fascinating book “Amusing Ourselves to Death,” contrasted these two analyses of the future: “Orwell warns that we will be overcome by an externally imposed oppression, Big Brother. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity, and history. People will come to love their oppression, to adore the technologies that undo their capacities to think.”
So Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us. And in our current culture, it seems to me that both things are happening simultaneously. We have a love-hate relationship with what is going on.
Contrasting Views on AI’s Dangers
The response to technologically driven oppression in terms of surveillance varies widely. Some people think like this famous web browser developer: “AI doesn’t mean that the end of humanity is nigh. It’s maths, code, computers built by people, owned by people, controlled by people. The idea that it will, at some point, develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstition, a superstitious hand wave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. AI is a machine, and it’s not going to come alive any more than your toaster.”
But here is a different view by Geoffrey Hinton, known as the “Godfather of AI,” who left Google recently to be free to speak about what he saw as the dangers: “As soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain. We designed the learning algorithm. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things. One of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that’s something we need to seriously worry about.”
What is Artificial Intelligence?
Artificial intelligence comes in two sorts. There’s narrow AI, and a narrow AI system typically does one thing, and one thing only, that normally requires an intelligent human being. Radiology gives us a very good example. An AI system for analyzing chest X-ray images uses an algorithm, simply a set of step-by-step instructions embedded in computer software, that selects the closest match for my lung X-ray from a huge database of other people’s X-rays, labeled by doctors with the diseases they represent, and it then gives a diagnosis.
That diagnosis these days will normally be better than what your local hospital can give you. There has been a recent phenomenal development in adaptive radiotherapy for tumors. Artificial intelligence reduces two weeks’ work to five minutes, and that kind of advance is spectacularly beneficial to human beings.
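The matching procedure just described is, in spirit, a nearest-neighbour lookup against a labelled database. A toy sketch of that idea follows; the three-number feature vectors and the tiny database are purely illustrative assumptions, not a real radiology system, which would learn thousands of features from the images themselves.

```python
import math

# Toy labelled "database": each record pairs a feature vector with a
# doctor-supplied diagnosis. Real systems derive features from the image;
# these hand-picked triples are purely illustrative.
database = [
    ((0.9, 0.1, 0.2), "pneumonia"),
    ((0.2, 0.8, 0.1), "healthy"),
    ((0.8, 0.3, 0.7), "tuberculosis"),
]

def diagnose(features):
    """Return the label of the closest database entry (1-nearest-neighbour)."""
    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(database, key=lambda record: distance(record[0], features))
    return best[1]

print(diagnose((0.85, 0.15, 0.25)))  # closest to the pneumonia example
```

The essential point of the sketch matches the lecture’s description: the system recognises nothing; it mechanically selects the best fit from examples that humans have already labelled.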
Like any technology, AI is like a sharp knife. A sharp knife can be used to do surgery. It can be used to commit murder. And the more advanced the technology, the more it can be used for good and the more it can be used for evil.
Current Examples of Narrow AI
Let’s run through briefly some current examples of Narrow AI:
- Digital assistants: Siri, Alexa and so on
- Online shopping: I buy a book. A few minutes later, a pop-up tells me that people who bought my book often bought another book
- Protein folding research: Demis Hassabis, one of the geniuses of the AI world, whose AlphaFold system solved the problem of predicting how proteins fold, opening the way to designing new molecules for the development of medicine
- AI-based job interviews: Where people never come face to face with an individual but an artificial intelligence poses questions to them and decides whether they are suitable for the job
Of course, now the dangers start, because bias is built in. The programs are made by humans, and humans are biased, and some of these interviewing programs have been seen to be biased against women, against people of color, and so on. There’s a whole industry trying to deal with the increasingly intractable problem of the ethics of artificial intelligence, because the technology is rising at a vast pace and the ethics is chugging along very far behind.
Surveillance and Privacy Concerns
Then there’s crime prevention. This brings me to face recognition technology, which is incredibly sophisticated. It’s no longer just face recognition. The Chinese have developed cameras that can recognize you by your gait from any direction, the back, the front, anything. Police forces love to have this to pick out a terrorist in a crowd. But the flip side is that there is an entire population in Xinjiang in China that is being monitored, oppressed, and suppressed through the most intrusive and constant surveillance.
Now comes the huge problem. You want to be secure. We all want to be secure. How much of your privacy are you prepared to give up to be secure? That’s a very difficult problem to decide.
These technologies are being used, particularly in Xinjiang, to control a population and in fact to remove its ethnic roots and reeducate it. But as a famous article warned, all the technology that can do this is available in the West. The writer pointed out the only difference is it is not yet in the West under the control of a single authoritarian central authority.
ChatGPT and Language Models
Then we’ve got ChatGPT. Now these are all just narrow AI. All of this stuff is working. It’s in operation today. ChatGPT is a very advanced version of the thing that used to drive me mad – my first simple smart-ish phone kept telling me what I should write next.
Now with ChatGPT, you can type a question and ask it, “What does Mark Lanier think of atheism?” It has read all his books, but it never asked the authors’ permission under copyright. As many of you know, there’s a big class action being undertaken.
It’s useful – I know many people that are using it to do research, to find out things, to investigate, but all of them tell me it is very important to check out what you find. Interestingly, ChatGPT was asked a series of very penetrating questions, “Are you an atheist?” And after a whole sequence of questions, it concluded, “I am an atheist.” But the next question was, “Are you a Christian?” And after another series of penetrating questions, it ended up saying, “You may say, I am a Christian.” So it’s both a Christian and an atheist at the same time, apparently.
There are huge dangers with this and there is a very big debate among school teachers. I was in a seminar for directors of higher educational institutions. Half of them wanted it used. Half of them wanted it banned. Should we have all examinations in Faraday cages so that no electronic signal can come in or out? I tend to agree with that kind of approach. We need to test what people think.
Deepfakes and Deception
Then there are deepfakes, and this is now getting into really serious territory. A short video clip of you, a short audio clip, and they can make you say anything they want you to say. Deepfake technology has advanced to such an extent that you might get a phone call, I wonder if any of you have, that sounds like your daughter, and she is stuck in Paris, and she needs an air ticket, and it’s on sale right at the moment, and please, dad, send me one thousand dollars at once. What father who had one thousand dollars would reject that?
The Dangers of AI Today
JOHN LENNOX: But it’s not his daughter. It’s a voice cloned by a deepfake, and it could be on video. So many families are adopting the very sensible technique that if any putative family member telephones them to talk about finance, they’ve got code words and code questions so that they can be sure they’re talking to their children. And then there are, of course, autonomous vehicles and autonomous weapons. AI war is already being fought by the control of swarms of drones that are equipped with lethal weapons.
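The code-word defence described here is essentially a shared-secret challenge and response: the caller proves identity with knowledge a voice-cloner cannot have. In practice a family would simply memorize a word, but the principle can be sketched in code; the secret and all function names below are hypothetical.

```python
import hashlib
import hmac

# A secret agreed in person, never spoken over the phone or sent online.
FAMILY_SECRET = b"correct horse battery staple"

def response_for(challenge: str) -> str:
    """Compute the expected short answer to a spoken challenge word."""
    digest = hmac.new(FAMILY_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:6]

def verify(challenge: str, answer: str) -> bool:
    # compare_digest avoids timing leaks; over a phone call the real
    # protection is simply that the impostor lacks FAMILY_SECRET.
    return hmac.compare_digest(response_for(challenge), answer)
```

A deepfaked voice can imitate how your daughter sounds, but not what only she knows, which is why even a low-tech memorized code word defeats it.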
So that raises all kinds of questions. All of these are technologies that are being developed at a huge speed and they’re all in operation today. But artificial intelligence is not intelligent. The word artificial is there for a reason. It simulates intelligence.
In other words, it plays what’s called the imitation game, which is the name of a very interesting film about the late Alan Turing, who solved the problem of the German Enigma machine and invented the bombe. And it’s important to realize, and we’re coming closer to a very big question here, what a human being is, because that’s being questioned left, right, and center today.
Intelligence vs. Consciousness
We notice that in human beings God has coupled intelligence with consciousness. Now, no one, and I’ve talked to some of the world’s leading researchers, no one knows what consciousness is.
When people say that machines are conscious, they’re actually talking nonsense. They don’t even know what consciousness is. And many leading researchers are perfectly content to simulate intelligence without consciousness. In other words, that X-ray AI machine, the result looks as if it had been reached by a conscious, intelligent human being, but it has been reached by a machine which is neither intelligent nor conscious. It is simply doing what it has been programmed to do.
And here are Stuart Russell and Peter Norvig, authors of what’s regarded by many people as the AI Bible, “Artificial Intelligence: A Modern Approach,” third edition: “We are interested in creating programs that behave intelligently. The additional project of making them conscious is not one that we are equipped to take on, nor one whose success we would be able to determine.” That’s a very honest statement by two of the pioneers in the field. All of that is narrow AI: each system does one thing that normally requires an intelligent human.
The Quest for Artificial General Intelligence
But there is a second quest, and that’s the quest for artificial general intelligence. As the name suggests, the idea is to create technology that can simulate everything, and a great deal more, that normally takes an intelligent human being. And there are two directions of research. In order to build a superintelligent AI system that equals or exceeds all human capacities, you either take existing humans and enhance them by bioengineering, or you start with a non-biological base.
One of the problems of getting old is that bits of our bodies start to fall off, as I know, and the bones start to creak and all of that. Wouldn’t it be great to do without the biological stuff and just have the hard silicon mechanical stuff? Well, that’s one way people want to go.
Yuval Noah Harari and Homo Deus
Now because it is futuristic and it appeals to the imagination, there is a lot of literature about it. And in my book, I interact with various kinds of literature at a scientific and popular level as well as the literature in the subject itself. But one of the most influential writers these days is Yuval Noah Harari.
He is not a scientist. He is an Israeli historian. And he’s written a book, “Homo Deus: A Brief History of Tomorrow.” “Homo Deus,” translated from the Latin, is the god man, the man who is God. The reason he has called his book that is that he believes this century, the twenty-first century, is going to see two major developments.
First of all, there’s going to be a serious bid for immortality, by which he means we are going to solve the medical problem of physical death. For him, death is a technological problem, and he writes that technological problems always have technological solutions. So we shall solve the problem of physical death, which won’t mean that people may not die. It will mean they don’t have to die. And so you have people hoping that this will be solved, freezing their brains and their bodies and hoping that one day they’ll be wakened up, provided nobody switches off the power that’s keeping them cold in the meantime.
The second agenda item is intensification of the pursuit of happiness, making us happier people. Now here’s what needs to be done, according to Harari: we shall need to change our biochemistry and reengineer our bodies and our minds so that Homo sapiens can enjoy everlasting pleasure.
Now listen to this very carefully: “Having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods and turn Homo sapiens into Deus.” And he adds, “Now humankind is poised to replace natural selection with intelligent design and extend life from the organic realm into the inorganic.”
He has got many people on his side. There is a lot of money being put into it. And I would repeat, this is not simply the pipe dream of an obscure historian. There are many leading scientists, like Lord Rees, who feel that this will be achieved.
Transhumanism
Now moving humankind beyond the human means that they are going to become what is called transhuman. Now that’s a serious issue. God created human beings in his own image; now, says Harari, that’s only one stage. It parallels what’s going on in our society with gender: God created male and female, but now we are moving on from that, and it’s all becoming fluid.
Exactly the same thing, at a much higher level, is happening with human beings themselves. So what is transhumanism? A lot of it comes from my own university, I regret to say. Here is Nick Bostrom of Oxford’s Future of Humanity Institute: “The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical and psychological capacities.”
That’s transhumanism. So we’re going to enhance human beings. We’re going to eliminate aging and we’re going to become essentially supermen and women with bigger brains, more ability and all the rest of it. But now there has been a subtle change. Transhumanism will produce, in some people’s opinion, billions of entities.
Let’s not call them human because they won’t be human. They’ll be transhuman. They may be cyborgs. A cyborg is a combination of the biological and the mechanical in many science fiction novels. And in the future, Lord Rees says that the majority of the population is going to be those entities.
Longtermism and Its Moral Problems
So what should our attitude be today? And here we come to the fearful term and I say that advisedly of longtermism. Listen carefully to Bostrom’s priorities: “Priority number one, two, three and four should be to reduce existential risk. We mustn’t fritter away our finite resources on feel-good projects of suboptimal efficacy such as alleviating global poverty and reducing animal suffering since neither threatens our long-term potential and our long-term potential is what really matters.”
That’s devastating because it’s a complete denial of the fundamental altruistic moral principle, love your neighbor as yourself. Don’t bother about the two-thirds world but pour your money into the brains in the west who are going to be able to develop these beings because if you don’t pour the money there, you run the risk of these beings never being created.
Apparently, and I have this from several good sources, it’s not millions but billions of dollars that are going into these projects. We’ve got to think very hard, ladies and gentlemen, don’t we, as to how we are going to respond to this, because this is not merely amoral, it is immoral.
AI and Morality
And Yoshua Bengio, a very famous Canadian computer scientist and fellow of the Royal Society, a pioneer of neural networks and deep learning, was right in at the heart of it. And he says this: “People need to understand that current AI, and the AI we can foresee in the reasonable future, does not and will not have a moral sense or moral understanding of what is right and what is wrong.”
Not only that, there’s the question of truth. Ken McCallum, head of MI5 in the UK, was in California three days ago, and this was published two days ago in the London Times. The fabric of society, he says, could be undermined by AIs impersonating real people. That’s the deepfake: it would no longer be possible to distinguish truth from falsehood. Deepfake technology is a threat to democracy and could be harnessed by hostile states to sow confusion and disinformation at the next general election in any country.
Governments are scared. There are big meetings about this. What is going to happen to the democratic electoral process?
The Degradation of Humanity
Now one commentator in all these things who I love to read is Leon Kass of Chicago. He is a polymath. He is Jewish and he has written some phenomenally interesting commentaries on Genesis and Exodus from a philosophical perspective. But here’s what he says:
“We have paid some high prices for the technological conquest of nature, but none so high as the intellectual and spiritual costs of seeing nature as mere material for our manipulation, exploitation and transformation. With the powers of biological engineering gathering, there will be splendid new opportunities for similar degradation of our view of man. If we come to see ourselves as meat, then meat we shall become.”
And C.S. Lewis saw it in the nineteen forties. The first edition of this book is in the library behind me: “Man’s conquest of nature means the rule of a few hundreds of men over billions upon billions of men. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car. Man’s final conquest has proved to be the abolition of man.” The 1940s, and he saw it, and it’s coming true.
Three Major Threats of AI
So AI to many people is posing a major threat or perhaps three threats:
First, takeover: an AI war to end all wars. Second, mass unemployment. Not only will people be unemployed, they’ll be unemployable, because they haven’t got the skills to cope with this. That’s a huge problem in sub-Saharan Africa. I was speaking at a seminar there, and they said to me, it’s all very well telling us to reskill and learn to use artificial intelligence, but we don’t have the educational infrastructure to do that. And third, the bias that we talked about before.
The late Stephen Hawking was always interesting to read and he made a comment on what’s called the alignment problem thinking of this first thing, the takeover by AI: “The real risk of AI,” he wrote, “isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals and if those goals aren’t aligned with ours, we’re in trouble.”
The Regulation Problem
So you’ve got the alignment problem that leads to the regulation problem. “Regulation will be critical,” says Sam Altman, the head of OpenAI that developed ChatGPT. “It will take time to figure out. Although current generation AI tools aren’t very scary, I think we’re potentially not that far away from potentially scary ones.”
But you know in most of the western countries, at least before 2020, there was no ethical policy for dealing with this kind of thing whatsoever. Stuart Russell, whom I quoted earlier, one of the pioneers, he puts up three principles that he feels governments ought to adhere to:
1. Restrict the AI system’s goals solely to maximizing the realization of human goals.
2. Keep the AI uncertain about what those goals are, so it has to keep asking.
3. Insist that it tries to understand the nature of those goals by constant observation of human behavior.
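Russell’s three principles can be caricatured in a few lines of code. This is only a toy sketch under invented assumptions (two possible actions, a 0.9 confidence threshold): the machine’s sole objective is the human’s objective, it starts uncertain about what that is, and it reduces the uncertainty by deferring to the human.

```python
ACTIONS = ["make tea", "make coffee"]

def machine(belief, ask_human, steps=5):
    """belief maps each action to the probability the human prefers it.

    The machine acts only when confident about the human's goal;
    otherwise it asks, embodying principle 2 (stay uncertain) and
    principle 3 (learn the goal from the human).
    """
    for _ in range(steps):
        best = max(ACTIONS, key=lambda a: belief[a])
        if belief[best] < 0.9:  # too uncertain: defer to the human
            preferred = ask_human(ACTIONS)
            belief = {a: (0.95 if a == preferred else 0.05) for a in ACTIONS}
        else:
            return best  # confident enough to act on the human's goal
    return max(ACTIONS, key=lambda a: belief[a])

# The human's answers are the machine's only source of goal information
# (principle 1: it has no objective of its own).
choice = machine({"make tea": 0.5, "make coffee": 0.5},
                 ask_human=lambda actions: "make tea")
print(choice)  # make tea
```

The design choice worth noticing is that deference is the default: a machine built this way has a built-in incentive to let itself be corrected, which is exactly the property the alignment problem asks for.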
How realizable these things are is another huge question and it leads to very deep ethical problems that you can ask me about later.
Future AI Scenarios
Now with all that, people think about possible scenarios, as Orwell and Huxley did. And one of the prominent thinkers about this is a brilliant physicist, Max Tegmark of MIT, who has about a dozen AI scenarios that he sets up in his book “Life 3.0.” I’m only going to mention three of them.
The first one is that AI is developed and it becomes a kind of protector god. These are his words; it’s amazing the number of times God appears in these books. An essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control of our own destiny, and it hides well enough that many humans even doubt the AI’s existence.
Opposite to that, AI takes control. It decides that humans are a threat, nuisance or waste of resources and gets rid of them by a method they don’t understand and possibly even keeps a few as pets in a little zoo.
But what’s interesting about his book is that he gives a lot of space to what he calls the Omega Project, which he develops in detail. In the story, the AI first earns money doing tasks on Amazon Mechanical Turk, and the Omega Corporation grows into a world corporation and gathers all the wealth. He describes it in great detail until he reaches the climax of it:
“For the first time ever, our planet was run by a single power called Prometheus, amplified by an intelligence so vast that it could potentially enable life to flourish for billions of years throughout the cosmos. But what specifically was their plan?”
And one of the major characteristics of Prometheus was this, economic control with the excuse of fighting crime and terrorism and rescuing people suffering medical emergencies. Everybody could be required to wear a security bracelet that combined the functionality of an Apple Watch with continuous uploading of position, health status and conversations overheard. Unauthorized attempts to remove or disable it would cause it to inject a lethal toxin into the forearm.
The Future of AI and Religion
JOHN LENNOX: That is the scenario that he spends most time over. Now let’s bring in another realm, and that is the question of a new AI religion. The person writing these words is the director of the Centre for Professional and Applied Ethics at the University of Manitoba: “We’re about to witness the birth of a new kind of religion. In the next few years, or perhaps even months, we will see the emergence of sects devoted to the worship of artificial intelligence.”
It has already started, ladies and gentlemen. And why is that? Well, it’s obvious. Certain AI systems are beginning to exhibit the kinds of capacities usually ascribed to deity, such as immortality, omniscience and omnipresence. These super AIs have prayer-like connectivity via the Internet and the oracle-like capacity of ChatGPT to answer virtually any question, producing life advice and even scriptures almost instantaneously. Nor do they have needs or desires like humans; they need only electricity.
You can see it happen very easily, and people will be deceived. Coming back to Max Tegmark: after describing this futuristic scenario with Prometheus, he asks, are we really in an unavoidable battle with an AI monster to stay alive? Listen to much of the debate and you could be forgiven for thinking so. A monster? And that brings me directly to the biblical scenario for the future.
Biblical Parallels to AI Concerns
A series of monsters and using wild beasts as symbols of totalitarian governments is something that we meet in scripture both Old Testament and New. The book of Daniel is famous for it. But let me read to you Revelation thirteen:
“And I saw a beast rising out of the sea with ten horns and seven heads with ten diadems on its horns and blasphemous names on its heads and the whole earth marveled as they followed the beast. And they worshiped the dragon for he had given his authority to the beast. They worshiped the beast saying, who is like the beast and who can fight against it?”
It’s a Prometheus like Tegmark’s. And then there was another, a wild monster from the earth. “I saw another beast, two horns like a lamb speaking like a dragon.” And what did it do? “It deceives those who dwell on the earth telling them to make an image for the beast that was wounded by the sword and yet lived, and it was allowed to give breath to the image of the beast so that the image of the beast might even speak and might cause those who would not worship the image of the beast to be slain.”
“Also, it causes all to be marked in the right hand, so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name.” That’s almost identical to the Prometheus project of Tegmark. Now when you see that, a question arises: what is this? Is this scripture telling us that there is going to be an artificial general intelligence like the Omega monster? Or is it something even more sinister? Because it is an artifact, it’s a construct, they make the image, but then it’s given breath. How far is God going to allow people to go?
On the other hand, people may react and say, but look, this is imagery. This is symbolism. Goodness me, all these beasts with multiple heads and horns and all this kind of thing. It’s symbolism, metaphor.
Of course it is. But if you’ve read a lot of C.S. Lewis like I’ve done, you’ll not make the mistake of thinking that because there are symbols, there’s no underlying reality. And Lewis made very clear in his writings that symbols and metaphors always describe a reality. And what is the reality? Well, the theologians among you will know instantly that what is described in symbolic language in Daniel and the book of Revelation is described in plain down to earth text by Paul in his second letter to the Thessalonians.
Listen to it. “For that day will not come unless the rebellion comes first and the man of lawlessness is revealed, the son of destruction who opposes and exalts himself against every so-called god or object of worship so that he takes his seat in the temple of God proclaiming himself to be God, homo Deus.” And then, says Paul, “the lawless one will be revealed whom the Lord Jesus will kill with the breath of his mouth and bring to nothing by the appearance of his coming. The coming of the lawless one is by the activity of Satan with all power and false signs and wonders and deception.” That is straightforward language and Paul actually told this to people who had only been Christian for three weeks and he reminded them.
I wonder, do you teach this to young Christians in your church? Why did he do it? Why talk about the future to three-week-old Christians? Because of the mystery of this lawlessness. It is lawlessness, by the way, ladies and gentlemen, in the sense of spiritual lawlessness: a man claiming to be God and ruling the earth with a rod of iron, of course.
It’s not civil lawlessness. Why did he say it then? Because it was already operating in their society. He told them so. The Caesars were claiming to be God, some of them before their death, most of them after their death and Christians were being persecuted, tortured and killed because they wouldn’t bow down and acknowledge the power and the authority of the Caesar who is God.
I find this fascinating: twenty centuries ago there was already a scenario like the ones that people are bringing to scientists and leading thinkers today. And my argument is simple, which is why I wrote my book. It’s just this: if you’re going to take Tegmark and his scenario seriously, I would like to introduce you to a scenario that has far more credibility than Tegmark’s, because of the actual arguments for its truth, and I find that people do take it seriously. But now I am coming to a crunch here.
The Flawed AGI Project
The AGI project, the Homo Deus project is flawed. Now why is it flawed? Well, Harari wants to solve the problem of physical death. He also wants to upgrade humanity, but there is another person with a completely different agenda. When people talk to me about Harari’s hopes, I say you are too late.
The problem of physical death was solved twenty centuries ago, when God raised Christ from the dead. And as for upgrading humanity, what a marvelous upgrade Jesus Christ has done. It comes in two stages, of course, because when we encounter Christ, face the mess we have made of our own lives, and sadly sometimes of the lives of others, and trust him as our savior and Lord, we receive new life. It is eternal life, and that is true in several senses: it is the very life of God, so it will exist eternally, and it guarantees for the person who trusts the Lord Jesus that they will be raised from the dead.
That is by far the biggest upgrade I have ever read of. “To as many as received him, to them gave he the right to become children of God.” And phase two, it’s worth reading this, isn’t it? After all this gloomy stuff, just listen to this. “I tell you this, brothers: flesh and blood shall not inherit the kingdom of God, nor does the perishable inherit the imperishable.”
“Behold, I tell you a mystery. We shall not all sleep, but we shall all be changed, in a moment, in the twinkling of an eye, at the last trumpet. For the trumpet will sound, and the dead will be raised imperishable, and we shall be changed. This mortal must put on immortality.” You see, the true Homo Deus will return from heaven.
As he stood before his judges, “you shall see,” he said, “the Son of Man sitting on the right hand of God and coming on the clouds of heaven.” And privately to his disciples in Acts one, they were told “this same Jesus shall so come as you saw him go.” They saw him go literally and physically until a cloud received him out of their sight. He shall come literally and visibly. So we have got the contrast between Harari’s vision and the vision of Jesus Christ and the contrast is huge.
Contrasting Worldviews
Let’s take these two world views as we come to a conclusion. John Gray is a professor of the history of European thought, an atheist. Well worth reading usually. “Humans may well use science to turn themselves into something like gods as they have imagined them to be but no supreme being will appear on the scene. Instead, there will be many different gods, each of them a parody of human beings that once existed.”
“I will come again and will take you to myself that where I am, there you may be also.” And if we’re going to understand what’s going on, we need to take a worldview perspective. My late teacher of quantum mechanics was professor Sir John Polkinghorne in Cambridge. And he wrote, “If we are to understand the nature of reality, we have only two possible starting points, either the brute fact of the physical world or the brute fact of a divine will and purpose behind that physical world.” And the difference between those two worldviews is colossal.
Sean Carroll, physicist: “We humans are blobs of organized mud which, through the impersonal workings of nature’s patterns, have developed the capacity to contemplate and cherish and engage with the intimidating complexity of the world around us. The meaning we find in life is not transcendent.” That’s bottom-up reductionist atheism, but there is such a thing as top-down Christian theism. “In the beginning God created the heavens and the earth. God made humans in his own image.”
So there’s a choice to be made. In transhumanism, humans become gods by trusting technology. Christianity is the exact opposite. God becomes human in Jesus Christ. And through trusting him, humans become children of God.
Have you ever thought about this? Why are humans, the original model that God created, so special? I’ll tell you why. God became one. “The word became flesh and dwelt among us.”
This, ladies and gentlemen, is what it means to be made in God’s image. And the tragedy of many of these futuristic scenarios is the fatal flaw that they try to create utopia by bypassing the fundamental flaw in human beings, and that is sin. They have no cross. We have hope because there is a cross, and the word who became flesh gave his life for us, and God raised him from the dead. That’s what gives me hope. And at my age now, I want to try as best I can, in the time left, to get across to my fellow Christians, but to the world in general, the central hope of Christianity: the fact that Jesus Christ is actually going to return.
Thank you very much indeed.
Q&A Session
UNIDENTIFIED SPEAKER: Professor John Lennox is not only the distinguished gentleman that was introduced, but he’s also going to be the distinguished research fellow at the Lanier Theological Library and Learning Center in Yarnton Manor. So you’ve got every excuse to come over there. Thank you very much. Okay.
There are a boatload of questions, and we’ll just have to get through them as best as we can. If absolute truth were established and agreed upon, would we need to fear AI as much?
JOHN LENNOX: We live in a fallen world, and absolute truth is not going to be agreed. The whole problem with our world is a rebellion against God. And we are called upon to witness to the one who is truth.
And it’s a hypothetical question. Looking at the world with all its thousands of years of education, the world is in a bigger mess than it’s ever been, so I don’t think we’re going to have to face that one. But what we should do is deepen our conviction that when Jesus Christ said “I am the truth,” he wasn’t merely saying “I say the truth.” He is the ultimate answer to every question. What’s the truth about the universe? Well, there are galaxies and planets and stars.
What’s the truth about them? Well, they are made of chemicals and atoms and molecules. What’s the truth about them? Those questions do not go back forever. Ultimately, you’re going to find Christ standing behind them and saying, I’m the truth.
That’s a huge claim.
UNIDENTIFIED SPEAKER: There are a couple of questions here that fall into the same category. It’s a category of concern that I’ve got, so I’m going to start by framing my question, and then we’ll work through a couple of permutations of that here. I have read a number of people say we need to get a six month moratorium on developing AI. We need to come up with these ethical guidelines for AI.
We need to come up with ways we can agree that we will use AI, ways we will limit AI. And I’m all for the idea of getting control and keeping control of AI. But I have a concern that there are people in the world who will not stop nefarious development of AI, and we put a hold on what we’re doing for six months or we hedge what we’re doing in with guardrails of certain protections. And meanwhile, there’s a government in North Korea that’s going to do everything they can to circumvent all of that. They leapfrog us because all of a sudden, we’re six months behind.
Or there are people worth bazillions of dollars who are developing this in order to basically become that evil ruler that we’ve seen in all the James Bond movies. Do you have a concern about how we try to navigate in waters where there are some sharks swimming in the water?
JOHN LENNOX: I have huge concern for the same reason as you have. Point number one, Vladimir Putin said a few years ago, the state that controls AI will control the world. That’s number one.
Number two: ethics in general, particularly in the West where God has been rejected, has defaulted to an ethical view called preference utilitarianism. Utilitarianism, simply put, is the idea that you should always base your ethics on the maximum benefit for the maximum number of people. Now that’s flawed. Hitler thought that the maximum benefit for the maximum number of people was to eliminate the Jews. So that’s a huge problem.
But utilitarianism works if you are dividing ice cream among children. If you want to keep the kids happy, you have to divide it exactly equally among them. Now the problem that underlies what you are saying, and I don’t see an easy answer to this, is that utilitarianism only works if you have got a world consisting of a number of roughly equal centers of power. So suppose you are in control of one of them and I am in control of another. We set up ethics, and I say, well, if you do this, I’ll do that. And that kind of tit-for-tat analysis, which leads to agreements and treaties, sounds wonderful.
But if you go back to Hitler: when he was a political infant, he made treaties, but once he got the power he didn’t bother. Because, you see, if I get enough power and you say to me, well look, if you do this I’ll do that, then you’ll do what? I’ve got the power. And that’s exactly the problem you are expressing, and I don’t know any easy answer to it. I think it’s very important, though, to realize that we’re not in a total moral vacuum, even in situations where God has been rejected. Why do I believe that?
C.S. Lewis, in his book The Abolition of Man, has an appendix at the end that’s well worth reading. He researched, well, twenty or thirty or forty philosophies and religions all around the world, and he discovered that they all had a common moral base. In particular, he discovered that every single one of them, including Roman pagan religion, humanism, Christianity and so on, had a version of the golden rule: do unto others as you would be done by. Now that tells me as a Christian what Paul in Romans indicates, in modern terminology: that human beings are all hardwired with a built-in morality. They’ve got some moral compass, and if that weren’t there, the whole of humanity would just fall apart instantly.
So we can at least work with that, but because we have the ability to rebel against our inner moral convictions, you face this huge difficulty. So I am torn in two directions. One: yes, let’s work. Let’s have the best in the world, like yourself, write these rules, as people are trying to do, to at least save us from something. But let’s realize that rules are not enough. Every business executive I’ve ever talked to tells me it’s one thing to put the moral and ethical mission statement on the wall in the boardroom; it’s another thing to get it into the hearts of the executives.
So there is a lot of work, but this is why, finally, we need to have Christian believers working in this area. There are some wonderful examples. Rosalind Picard of MIT with her lab, a whole discipline invented by herself, and that smartwatch that can recognize if a child or an adult is about to have a seizure and save their life. That’s wonderful stuff to be working on. There are many possible things.
We need people in there with scientific qualifications but who are ethically strong. And of course, it’s the old problem. Socrates thought that education was enough to make people ethical, but Aristotle realized that the ethics were fine; people just hadn’t the power to live them. And that’s where the gospel comes in, because the gospel doesn’t simply give us ethics, as you know. It gives us the power to live it, and the transformation, and we need to be emphasizing that.
One of the most popular books of the Renaissance era was The Prince by Niccolò Machiavelli. He’s writing to instruct the young Medici who is going to become the ruler, and he’s telling him how to be a good ruler. And in chapter eighteen he says: you’re going to rule with vice.
Lie when you need to lie. Go back on your word when you need to go back on your word. No good ruler rules virtuously, but you must appear to be virtuous, especially religious, so that people won’t question you while you practice these vices.
On Transhumanism
AUDIENCE QUESTION: Friends of mine who endorse transhumanism seem to take it for granted that smarter people will make good decisions. What are the odds that transhumanism can overcome sinfully poor decision makers?
JOHN LENNOX: What are the odds? Zero. As you were speaking this evening, I was reminded of Thomas Malthus, an economist of the late 1700s, who argued that we would run out of food to feed the masses. And he postulated that you really need to let people starve to death, because if we try to feed the needy, they’ll just keep breeding and there’ll be more needy people. So actually, in utilitarian terms, he said, everyone you kill today will spare ten from dying in the future.
Yes, that’s exactly right. And that led to the eugenics programs all around the world. And you see what’s happened with it. Now we have it dressed up in Oxford, oddly, as effective altruism and longtermism. It is absolutely incredible. But we’ve now got the technology to do things we could never do before. And that’s frightening, because the Germans tried this: they tried to create an Übermensch, as you know, following Nietzsche.
The Russians tried to do it, create the new man. And as I think I said on another occasion, I’ll never forget a member of the Academy of Sciences in Siberia saying to me, “John, we thought we could get rid of God and retain a value for human beings. We discovered too late we couldn’t.”
Careers in AI
AUDIENCE QUESTION: What would be your advice to Christians considering careers in artificial intelligence?
JOHN LENNOX: Consider them very carefully before you go into something. You see, you’re most likely to be offered a career in narrow AI. And there you can make ethical decisions, because you’re not likely to go into something that you know is doing something morally wrong. There are huge fields of narrow AI, in medicine particularly but not only there, that are doing an awful lot of good. Or if you’re an ethicist, you can go in and contribute there. And that’s very, very important today.
I mean, you have here this weekend Gretchen Huizinga, who’s working on that very thing. And her work is very important, and others like her. Gretchen, where are you? There you are. Would you stand up so people know who you are? Because they may want to accost you with questions afterwards.
Some of these questions I chuckle at, and I’m not going to read. So the one that says, what is your cell number? I would like you to be my mentor. I will not read. You mean my cell number in prison?
AUDIENCE QUESTION: What advice would you give to high school students in this day and age?
JOHN LENNOX: I always say to high school students to play to their strengths first of all. And at any stage in life, ask yourself what can I do now to maximize my potential for the future? Now I notice a huge difference. As you heard, I’m very old.
When I was young, I hadn’t a notion, not a notion, of the kind of job opportunities that are available today: not only university education but apprenticeships and high-tech industry and all this kind of thing. And what is marvelous, and I’m sure they exist in your schools, are career advisors who really know about these possibilities. If you’re a believer, a Christian believer, then there’s an extra question to ask: how can I so serve God in my work that I can maximize my witness? And I’ve got a long answer to that question, and you yourself, sir, have been very kind in endorsing my latest book, which is called A Good Return.
You want to get a good return on your investment of life.
UNIDENTIFIED SPEAKER: I endorsed it because it was a good book.
JOHN LENNOX: Well, there we are. It’s subtitled Biblical Principles of Work, Wealth and Wisdom, and it’s already available, and I am encouraged to say that many young people have found it helpful. But talk to people, find out what their jobs are like, and inform yourself early. And if you want to go to university, go and visit the department and talk to the people, because universities like people who are interested.
So there are lots of little practical tips that you can follow.
On AI and Consciousness
AUDIENCE QUESTION: Do you believe that AI can develop a mind?
JOHN LENNOX: Well, since we don’t know what a mind is and we don’t know what consciousness is, I certainly believe it can simulate some of what the human mind does. But you see, we’re talking about a machine and machines at the theoretical level, all of them, past, present or future, can be simulated by what’s called a Turing machine. It’s a mathematical object.
And this is the subject of a very brilliant piece of mathematics called the Church–Turing thesis. And I tend to be sympathetic with Roger Penrose’s arguments in his book; he’s a brilliant mathematician who used to work with Stephen Hawking. There are certain things that the human mind can do that are not algorithmic, so you can never put them into a machine. And I think there’s a gap there.
But since we haven’t a notion of what consciousness is, and minds as we understand them are conscious, I think it’s hard to even make sense of what the question would mean.
AUDIENCE QUESTION: What are your thoughts on the sightings in many countries of UFOs? From another universe? A wormhole?
JOHN LENNOX: Well, I believe they are what they say. They’re unidentified, so we don’t know what they are.
AUDIENCE QUESTION: Do you view it against God’s will to modify human genes that remove the imperfections that sin caused in the first place?
JOHN LENNOX: Well, there’s an assumption there that by modifying genes we will do that. I take very seriously what I said in the last few minutes: that humans are so amazing that God could become one. That is utterly awesome. You know, the heavens declare the glory of God.
They weren’t made in his image. You were. And therefore, to take God’s specification and do something that’s going to reengineer it comes, I think, under Lewis’s criticism that what you now produce is not a human being but an artifact. It’s subhuman, and therefore trying to do that is essentially rebellion against God. Bless you.
And that’s nothing to sneeze at. It’s brilliant, isn’t it? I’m sorry. It’s alright. I’m sorry.
Beta minus. Yeah.
On the Soul
AUDIENCE QUESTION: How do you view the soul in all of this?
JOHN LENNOX: The soul is a complex thing, because looking at scripture, the Greek word psyche, from which we get “psychology” and which is usually translated “soul,” has a spectrum of meanings. I’ve always found this theologically fascinating, that the big words have a spectrum of meanings.
So the soul can stand for the whole person. In Noah’s Ark, eight souls were saved: not some disembodied spirits, but eight people. Or John can write to a lady and say, I hope you are well even as your soul is well. So he makes a distinction. And I’ve thought about this and I’ve asked myself, so what?
What are we meant to learn from scripture about the soul? And Peter comes to our help because he is the only one that talks to us about the salvation of the soul. And that is a concept that the Lord taught him. And I used to learn as a child, and I couldn’t understand it because I was told you have a soul to be saved. If you want to save it, you should lose it.
And then I lost it at that point; I couldn’t understand what that meant. But thinking of it in its context as an adult, I think it’s very powerful. You see, one meaning of the soul seems to me to be life in the sense of what makes life, life. And when Peter resisted the Lord as he was predicting that he was going to die on the cross, he said, that will never happen to you.
It was in that context that Jesus taught him what the salvation of the soul meant. Peter had spent three years investing his life, his time, his energy, his money in Christ. And now Jesus says, I am going to go to Jerusalem and be rejected. Peter thought he was losing his soul, his life; life was being drained out. And Jesus said, look, you think you are losing it, but if you invest it for me in my kingdom, you’re going to save it.
So the practical meaning of the soul for me, and this was perhaps not behind the original question, is that I find it a thrilling thing that we can invest life in Christ and we will never lose any of it. Now, you see, here is a practical example. You’ve invested how long? Nearly an hour and a half, wasting your time coming to this meeting in the opinion of thousands of people in Houston. Who would waste the time when they could be watching a ball game, for goodness’ sake?
They think you’re wasting your life, but it’s an investment for eternity. That’s what I think it means. That means something to me.
AUDIENCE QUESTION: And that’s something that AI will never have?
JOHN LENNOX: It looks like it, but it might help me to invest my life properly. Because if it saves me time when I go to get an x-ray, and I get cured faster, and it gives me a few more years to serve the Lord, then, you know, it’s a good thing.
UNIDENTIFIED SPEAKER: Yes. Exactly. Would you join me in thanking John Lennox?