Here is the full transcript of the Practical Wisdom podcast episode titled “John Lennox Unlocks the Truth about AI, Consciousness, and God.” In this episode, renowned apologist and Oxford mathematician John Lennox discusses Artificial Intelligence (AI), consciousness, and our understanding of God.
TRANSCRIPT:
SAMUEL MARUSCA: Welcome to Practical Wisdom. My name is Samuel Marusca, and today we’re going to be talking about artificial intelligence. I’m delighted to be joined by Professor John Lennox today. Hello and welcome to Practical Wisdom.
JOHN LENNOX: Thank you very much. Delighted to be with you.
SAMUEL MARUSCA: John Lennox is a professor of mathematics and the philosophy of science, emeritus professor at Oxford University. He’s a renowned author, having written numerous books published in several languages. He wrote “Can Science Explain Everything?”, a best-seller, “God’s Undertaker: Has Science Buried God?”, and now a very recent one, “A Good Return.” He’s also debated many atheists, including Richard Dawkins and famously Christopher Hitchens.
John, I’d like to start with artificial intelligence. First of all, I’d like to start with the definition of AI. What is, in your opinion, artificial intelligence? Because the wording seems to be very confusing. It seems to be that there’s a lot of artificial but probably not as much intelligence going on. Also, what is your understanding of weak or narrow AI and general AI?
JOHN LENNOX: Well, let’s start with the basic definition. There’s a marvelous paper that was written many years ago by one of the pioneers of AI, who rejoices in the name of Joseph McRae Mellichamp. He wrote a paper entitled ‘The Artificial in Artificial Intelligence is Real.’ In other words, the word artificial needs to be taken seriously.
Narrow AI typically is a system that involves a large database, a computer, and an algorithm for sorting that database. It does one thing, and one thing only, that normally requires human intelligence to do. So let’s have an example of that: x-rays of lungs, which sadly is a very relevant example in these days of COVID, or at least in its aftermath. Imagine a database consisting of, say, one million x-ray photographs of people’s diseased lungs, labeled by experts on lung disease around the world. That’s the database.
Then an x-ray is taken of my lungs, and the algorithm compares that picture with the million others very rapidly. It then outputs a diagnosis. At the present state of play, that diagnosis will probably be better in most cases than I would get at my local hospital, even here in Oxford. So that is a typical example of narrow AI, and there are many other examples which we can come to later.
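To make the “database plus algorithm” picture concrete, here is a minimal, purely illustrative Python sketch. The random feature vectors, the labels, and the nearest-neighbour comparison are all hypothetical stand-ins rather than how any real diagnostic system works; the point is only that narrow AI of this kind is a compare-and-look-up procedure over labelled data.

```python
# Purely illustrative sketch of narrow AI as "database + algorithm":
# compare a new case against a labelled database and report the closest match.
# The random vectors and the nearest-neighbour rule are hypothetical stand-ins,
# not the method any real diagnostic system uses.
import numpy as np

rng = np.random.default_rng(0)

# The "database": feature vectors for expert-labelled lung x-rays (toy scale).
database = rng.normal(size=(1000, 64))
labels = rng.choice(["healthy", "pneumonia", "fibrosis"], size=len(database))

def diagnose(new_xray_features: np.ndarray) -> str:
    """Compare the new x-ray's features with every labelled example
    and return the label of the closest match."""
    distances = np.linalg.norm(database - new_xray_features, axis=1)
    return str(labels[np.argmin(distances)])

print(diagnose(rng.normal(size=64)))
```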
Artificial general intelligence, on the other hand, is where a lot of hype and science fiction enters the scene. That’s the idea of creating, as the name suggests, or more accurately simulating, everything that normally requires human intelligence, only doing it much faster and more efficiently. So we’re getting now into the realms of the concept of a superintelligence.
Research is proceeding in two directions. Firstly, enhancing existing human intelligence by means of cybernetic implants, by the use of drugs, by genetic engineering. All sorts of possibilities have been suggested, and some of them have been attempted. The other line of research is starting from scratch with some inorganic base. Because one of the arguments is that the problem with human intelligence is it’s carried by an organic vehicle that degenerates and eventually dies. So why don’t we start with an inorganic base, like silicon or something like this, and build up the artificial intelligence from scratch?
Now, a number of huge issues come in here, because we’re nowhere near that. And we need to realize, and this to my mind is utterly fundamental, that in human beings, intelligence is coupled with consciousness. Now the leaders in AI research, people like Peter Norvig, who writes the basic Bible on AI, if you like to put it that way, have essentially given up, and this is really in line with what Alan Turing long ago suggested, given up the idea of producing artificial consciousness, whatever that would mean, and are simply doing the simulation, so that AI is disconnected from consciousness. So what the people looking to create some kind of artificial general intelligence are mostly content to do is to produce a simulation, but not an actually conscious machine or being or whatever it is.
SAMUEL MARUSCA: So you’re saying that artificial intelligence doesn’t have consciousness, and there’s a divide between artificial intelligence, robotics, and machines on the one hand, and human consciousness on the other?
JOHN LENNOX: I am.
SAMUEL MARUSCA: There’s no overlap between them.
JOHN LENNOX: And one of the reasons for that is very simple. No one has any idea what consciousness is.
SAMUEL MARUSCA: That’s actually true. It’s very, very difficult to define.
JOHN LENNOX: It’s impossible to define consciousness. It’s like something fundamental in the universe, and we’ve no idea how even to start. The whole notion of qualia, awareness, and all this kind of thing, no one has got anywhere near simulating it. And a lot of hype about some of the machines, the very rapid processing machines we’ve got, that look as if they’re intelligent. But the hype that they are actually conscious is nonsense, actually, and is regarded as such by experts.
SAMUEL MARUSCA: Now we have Bertrand Russell, philosophers of language like John Searle, and more recently, Roger Penrose, who all have some ideas and have written about what consciousness is. John Searle said that consciousness is a mystery. In the past, the hard problems of science were matter and movement; nowadays, consciousness seems to be becoming the hard problem for scientists to explain, and no one really understands what it is.
So, Roger Penrose is explaining consciousness from a quantum mechanics perspective, which is a very interesting one. He’s saying that if you want to know what it is, first we need to understand how we can switch it off and switch it back on again. John Searle, on the other hand, is looking at it philosophically. To argue that machines can’t think, which goes back to your point on machine consciousness, he devised the Chinese room argument.
What is the Chinese room argument, and do you agree with John Searle’s interpretation and take of weak AI?
JOHN LENNOX: Well, John Searle is very interesting, and of course, Roger Penrose is as well. It may be that consciousness involves quantum mechanical aspects. We just do not know, and any speculation is worth having, especially when it comes from a mind as bright as his because, in my book, he’s one of the brightest mathematicians around, so always worth taking very seriously.
But John Searle’s idea is that you could mimic consciousness in connection with the Chinese language. The idea is that you have this room containing all sorts of tablets on which instructions are written in Chinese characters. The person in the room is, for example, asked a question written in Chinese from outside. He doesn’t understand Chinese but has rules that deal with what he sees in front of him in terms of symbolism.
So, he goes to the appropriate box, takes out what the instructions say is related to that, and passes it out through the window. The person outside is satisfied with the response and thinks that the man inside understands Chinese, when in fact he doesn’t understand a single word of it. What Searle was doing there was, in a sense, suggesting that the Turing test is not sufficient to establish consciousness. In Turing’s idea, if a computer could convince a person interacting with it that it was human, then that was enough: it had passed the test and could therefore be regarded as conscious.
Searle showed that that is not true, and of course, people have then tried to ratchet up the Turing test and develop other versions of it, but there’s always this elusive ‘what is consciousness?’ question. But until we’ve got better answers to that, I think the quest is virtually hopeless, which is why, as I said earlier, many of the leaders in AI have given it up. For practical purposes, in terms of the world economy and doing things, you don’t need consciousness, provided what you’re doing is what normally takes a conscious person to do, and there have been huge developments in that direction, but none of them goes anywhere near consciousness.
SAMUEL MARUSCA: Now, going back to what you said about Alan Turing, he wrote this famous paper in 1950 about machines and whether machines can think. He also wrote about the imitation game in that paper. So, is artificial intelligence simply imitating the human brain?
JOHN LENNOX: I think the key there, if I understand your question correctly, is that so far, AI is based, as has been pointed out particularly by Penrose and others, on the concept of an algorithm. That is a set of rules, maybe a very sophisticated one. A lot of us in school learn the Euclidean algorithm for finding the greatest common factor of two numbers, that kind of thing.
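For readers who have not met it, the Euclidean algorithm he mentions is a fixed, mechanical set of rules, which is exactly what makes it a good illustration of the word “algorithm”. A minimal Python version:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: greatest common factor (divisor) of two positive integers."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a divided by b)
    return a

print(gcd(48, 36))  # -> 12
```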
Penrose, and I find his argument persuasive, says that there are things the human brain or mind can do, and whether the brain is the same as the mind is another debate within this, that are not algorithmic. If that’s the case, then algorithms, although they do a great deal and have produced what you mentioned, ChatGPT 4, very powerfully and impressively, will not solve these fundamental problems, because our minds can work non-algorithmically.
The kind of thing that Penrose refers to, and I would as a mathematician, are the results that Kurt Gödel brilliantly showed. Those kinds of things will not lend themselves to an algorithmic approach, and mathematicians, of course, are massively impressed by Gödel’s work. So it seems to me, if you bring that to bear, you are doing something other. And of course, there are experts in computer science who will tell you straight that computers with their algorithms and their AI systems work nothing like a human being’s mind or brain works. So we’re doing something that would normally take human intelligence to do, but we’re not doing it in the same way as human intelligence would do it, and I think this is fairly generally accepted.
SAMUEL MARUSCA: I think that’s a very valid point that you’re making there. The fact that AI doesn’t work in the same way as the human brain, as the human mind, there might be a debate there whether it’s the same thing or not. But you say it’s different because our human mind is not algorithmic. So, in that respect, then, it seems to me that most AI language uses words like ‘neural networks’ or ‘intelligence’ or ‘deep learning’ and so on.
Now, this seems to me to be more like describing a brain function, but it’s quite different; the brain works quite differently. In an artificial neural network, the word ‘nodes’ is used rather than ‘neurons’, and it works in different layers. And it’s all algorithmic, whereas the human brain works in a whole different way. It has around a hundred billion neurons, and each neuron has several thousand synapses, or connections.
In children’s brains, the neurons don’t come with those connections already in place; they build them, up to the age of two, at a rate of around 700 neural connections per second. So the human brain works very differently from an artificial neural network. What is a neural network, then?
JOHN LENNOX: Well, let’s talk about language, which is your speciality, I believe. I am concerned about this highly anthropomorphic language that’s used, because it’s very misleading. In fact, the term ‘artificial intelligence’ is misleading, and that’s why my friend, because I know him, Ray Mellichamp, entitled his paper that way. The artificial is real, but the language that’s used erodes the real distance between the AI machine and the actual human being who’s operating it, or whose intelligence is being simulated in playing the imitation game. The neural network is a mathematical construct and is nothing like what we’ve got in our brains.
But because there are certain similar features, connections and so on, they use the word ‘neural’, and people think it is a copy of the human brain. And of course, it’s nothing like that. We get, I’m afraid, bamboozled by language, and a great deal of the hype associated with AGI comes exactly from that. The language is driving the ultimate philosophy, and people think that they’re really getting to a genuine artificial general intelligence, if you don’t mind my putting it that way.
When in fact, the only thing we’re moving towards is more and more anthropomorphic language shaping us.
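A toy sketch makes the point about the borrowed vocabulary: a “neural network” layer is, at bottom, a matrix multiplication followed by a simple nonlinearity. The tiny, untrained network below is purely illustrative and is not a model of anything in the brain.

```python
# A "neural network" stripped of anthropomorphic language: arrays of numbers
# ("nodes"), matrix multiplications ("connections"), and a nonlinearity.
# Untrained, random weights; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of 'nodes': multiply by weights, add bias, apply ReLU."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.normal(size=4)                               # input features
h = layer(x, rng.normal(size=(4, 8)), np.zeros(8))   # hidden layer of 8 "nodes"
y = h @ rng.normal(size=(8, 1))                      # output "node"
print(y)
```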
SAMUEL MARUSCA: And you talk about the language of AI, and we know, of course, there are language models, and now famously we have ChatGPT: ChatGPT 3 and now ChatGPT 4. Now, this is a very powerful tool, and it produces, as we know, output that you can sometimes mistake for a human’s. So the language is very good and very accurate, and you can use it for various purposes. I can see the potential of this being used for commercial reasons and in many areas of life. So I think it’s a revolutionary technology.
But Noam Chomsky wrote an article in the New York Times recently about the false promise of ChatGPT, and he said that the human brain doesn’t understand and learn language in the same way as ChatGPT. He argues that children learn language quite differently. They have a very limited input, what we call the poverty of the stimulus in linguistics. So, very limited data, very limited language goes into the mind of a two-year-old. And yet a two- or three-year-old is able to produce amazingly rich language: grammatically and syntactically correct, well-formed sentences, based on that very limited input.
Whereas ChatGPT, for instance, works with vast amounts of data. Obviously, it’s all text from the internet, collated, and based on that it predicts what the next best word should be in a text. What is your thought on that?
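As a toy illustration of what “predicting the next best word” means at its very simplest, here is a hypothetical bigram counter in Python. Real systems such as ChatGPT are vastly more sophisticated, but the underlying task, predicting the next word from statistics over prior text, is of this kind.

```python
# Toy next-word prediction: count which word most often follows each word
# in a tiny corpus, then "predict" by looking up the most frequent successor.
# Purely illustrative; real language models work very differently in detail.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most frequently followed `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # e.g. "cat"
```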
JOHN LENNOX: Well, I have always been interested in Noam Chomsky and his ideas, because he rather drove a bulldozer through simplistic notions of the evolution of language. That’s another topic, of course; Darwin used the evolution of language to suggest what had possibly happened in biology. And Chomsky came to the viewpoint that there must have been something like the Cambrian explosion in language, because of this amazing facility that children have. And he’s always worth listening to.
SAMUEL MARUSCA: I had him on the show.
JOHN LENNOX: Did you really? That’s fascinating. I must listen to that; I’d love to, because he’s one of the people who’s influenced me in my thinking. As for what you say, I read the New York Times article, and I sympathize with it. ChatGPT 4 is impressive when it doesn’t have to be creative. And it’s very capable of making mistakes because, of course, it does not understand what it puts out. That seems to be the crucial distinction. There is no understanding, and it is known, even in its latest iteration, to make things up where it doesn’t have an answer, things which are completely fallacious.
And it is going to be very useful for writing things like sports reports or O-level essays, maybe even A-level essays. I think the latest is that ChatGPT 3 has passed the second-year medical exam in the United States for doctors. There’s a huge risk with it. And I think it was in yesterday’s Times, maybe today’s Times, they were talking about the huge potential for fake news coming out of ChatGPT 4.
If you couple that with the fact that AI systems can make you appear, visually, to say anything they want you to say, the potential for spreading misinformation is absolutely colossal. And of course, it will be weaponized. There are great dangers there. Very interesting, actually: I was doing a seminar with a number of senior teachers from schools and further education colleges in the UK, and I raised this question. I said, how many of you have stopped your students writing essays? And several hands went up. How many of you think students should be allowed to use this? And several hands went up again.
So it is very interesting that there’s a division of opinion. And the thing looks as if it’s here to stay. It’ll probably be embedded in something like Microsoft Word so that none of us will be able to avoid it. But I think we need to be aware of the fact that it has, to my mind, the risk of killing creativity.
SAMUEL MARUSCA: You mentioned Microsoft Word there. And of course, you wrote ‘2084’, a very well-known book about artificial intelligence. In your book, you say that physicists and cosmologists like Max Tegmark, President of the Future of Life Institute at MIT, have made rather grandiose statements, and I quote: “In creating AI, we’re building a new form of life with unlimited potential for good or ill.” And then you continue: “How much science lies behind this statement is another matter since, to date, all AI and machine learning algorithms are, to quote the neat phrase of Rosalind Picard, ‘no more alive than Microsoft Word.’” Is that right?
JOHN LENNOX: And I think Rosalind’s right. She’s someone very well worth listening to. And actually, it’s good that you mentioned her, because with her work in the affective computing lab she’s really developed a whole field of AI science on her own. And it’s being used to help children, for example. She’s developed a smartwatch that recognizes that the wearer is about to have a convulsive fit or something like that. It’s now saving lives.
But I think that’s absolutely right. She’s punctured the balloon very well indeed. Tegmark, of course, is one of these people who tends to make very grandiose statements. And he’s the one who produces these futuristic scenarios of how a superintelligence will be created and take over the world, which is something I mention in my book. And if you don’t mind me saying so, that book is about to be revised. So much has happened since I wrote it only three years ago that the publishers asked me two weeks ago whether I would produce a revision. But I’ve been working on that for the last three years anyway, so it should be ready fairly soon.
SAMUEL MARUSCA: I’m very much looking forward to reading the new edition, and I appreciate that this field is developing very quickly, so you will have lots of new material in there. I look forward to that.
Going back to your point on artificial intelligence, ChatGPT, and deepfake videos, you talked a little about the dangers of AI. You said this can be weaponized, from ChatGPT to artificial intelligence in general to robotics and so on. And there’s a very famous deepfake video of Ukrainian President Zelensky saying that soldiers should surrender.
So, people can use this, and I can understand the potential of this technology in a U.S. presidential election, for instance. So, in your opinion, what are some of the dangers of AI, especially ChatGPT?
JOHN LENNOX: Well, we can start much earlier, because much simpler AI technology has its problems. I often make a general point: we need to take the ethics of AI very seriously, because technology tends to develop much more rapidly than its ethical underpinnings. And that has led to the recent situation with the development of ChatGPT 4, where Elon Musk, one of the founders of Apple, and over a thousand other experts have said, look, we need to stop. We cannot afford to go beyond GPT 4 unless we’ve got a serious ethical underpinning, which just doesn’t exist.
Now, some years ago, what are called the Asilomar principles were set up at a conference in a town of that name, I believe, giving what are pretty obvious, common-sense ethical principles based on the morality that most sensible people would consider reasonable.
But the problem is, as every business executive knows, it’s one thing having the rules written up on your wall; it’s another thing getting them into the hearts of your senior executives and board. And that is a huge problem, the policing of this. Take, for instance, the example we started with, which uses recognition technology to read x-rays. Now, closed-circuit video recognition technology has moved on to recognize faces, and not only faces but gaits, so that the most recent Chinese iteration of this can recognize a person from the rear. They don’t have to see the face.
Now, you can see immediately that that kind of technology is very useful to a police force trying to pick a terrorist out of a football crowd. But on the other hand, as we sadly know, that knowledge and a huge state-controlled database in the hands of an authoritarian state can be used to suppress an entire minority culture, as is happening with the Uyghur population in Xinjiang, in China. And that is terrifying, because people who’ve written seriously about that particular case point out quite correctly that all this technology is available in this country, for example.
And the police would love to have it, and realize it can be used for control. The only difference, as has been pointed out by one of the lead writers on this topic, is that it is not yet, with emphasis on the word ‘yet’, under the control of a central authority in the West. But it could become so, and that doesn’t bode well for the future, sadly. So this technology, simple now, though it wasn’t simple at first, facial recognition, is ubiquitous around the world and leads to all kinds of things, like the social credit system in China, or potentially a social credit system in the West.
Now, there are very different views of exactly how far the Chinese have got with this. But it’s quite clear that people are finding that their social credit points can be gained or lost according to their behavior as observed on facial recognition cameras and so on. And then the consequences begin to bite. They cannot eat at their favorite restaurant, or get flights on a plane. They may even lose a job, and so on. So the risks and the violations of human rights are on the increase.
And the key problem for any technology like this is that it’s like a knife. A sharp knife can be used for surgery or can be used for murder. Now, what is offered to many people is: look, if you surrender your privacy, we can protect you. So, how much privacy do you surrender to these systems in order to have the protection that you are, at least, offered? Whether it will happen or not, I don’t know. And that’s one of the debates that is very much on the rise around the world. And I think it’s partly behind the cry: look, we’ve got to stop this development until we see where it’s going. Unless, unless they know something, these experts, that we don’t yet know.
SAMUEL MARUSCA: I think you raised a very significant point there on freedom and privacy. Now, we seem to have this blind faith in AI and technology. We sometimes trust AI blindly, and we know that privacy is a major issue in AI, in ChatGPT 4, for instance. ChatGPT has been banned in Italy because of data privacy issues, and there is also some talk at the moment of banning ChatGPT in Germany for the same reasons.
And in your book, in ‘2084’, you say that one of the major Orwellian aspects of AI is that certain forms of it present a serious threat to individual and corporate privacy. AI tracker programs are geared to harvesting as much data as possible that you generate about yourself: your lifestyle habits, where you go, what you buy, people you communicate with, books you read, jobs you do, political and social activities, your personal opinions, and a list that is being added to all the time.
Now, when we go online, for instance, many times we’re going to be asked to accept some terms and conditions and just click that we’ve read those before we are able to access a page. The question is, do we know what we consent to?
JOHN LENNOX: Absolutely not. Most of us don’t read them because we haven’t time to; if we want the information —
SAMUEL MARUSCA: Manufactured consent.
JOHN LENNOX: Oh, that’s right, and this is another can of worms, actually, that you’re opening by raising this, because we readily agree to wearing a tracker, most of us, myself included. I’ve got a smartphone, and I find it extremely useful, but it is recording where I am; it may even be recording all that I say.
Not only that, there’s what has been termed surveillance capitalism by Shoshana Zuboff, who makes the point with the simple example of purchasing a book, which I did two nights ago on Amazon. Very convenient, I’ve got the book already, which is super, but I know that within a couple of days a pop-up will appear saying people who bought that book are interested in this book. That information about me is harvested, but what most of us don’t realize is that it’s being sold to third parties without our permission and without our knowledge.
And this is a huge problem, and it’s a problem at many levels. In fact, I understand that one of the reasons Musk and his colleagues are calling for a halt to development beyond GPT-4 is that the people designing, inventing, and pioneering these systems cannot really understand what they’re doing. There’s a danger of losing control. There’s a marvelous essay, I think by a mathematician, who said that if you look at the story of creation, one of the interesting things about it, this is the way she puts it, is that God lost control of his creation, of the world, and it’s in such a mess because of that. And we’re in danger of not learning the lesson of Genesis, doing exactly the same thing, and losing control of our AI creations. I think she’s got a very real point.
SAMUEL MARUSCA: So, I think we’ve all experienced this. Sometimes people go online, or they have a conversation privately. I had a conversation, for instance, about France and trips to France, and the next thing I do, I pick up my phone, open social media, and I get ads for hotels in France. Are we being listened to?
JOHN LENNOX: Well, I suspect we are. There’s the marvelous story of the person who bought an AI-guided vacuum cleaner, and it roamed around the house like an automated vehicle, and a few weeks later they had a pop-up showing pictures of curtains that would look well in their sitting room, because there was a camera, of course.
It needs a camera in order to guide itself around the room, but the camera was taking pictures and sending them on to a commercial enterprise that the owner knew nothing about. And I suspect this could be multiplied endlessly. In the UK, I believe, with just ordinary CCTV, most of us appear on it every five minutes if we’re outdoors, and dear knows how often if we’re indoors. So, I think we are being listened to. And if you take the situation among the Uyghurs, where it’s ratcheted up to the nth degree, that is oppressive, and it’s very dangerous, and it could happen here.
It depends on the morality, on the ethical norms, that the controlling powers have. And this is where it gets really serious, because what a lot of this raises in my mind, and in the minds of others, is: what value and what status does a normal, intelligent human person have? Because the attempt to replicate as much as possible of what we can do, but to do it in an uncontrolled fashion, risks undermining our concept of the value of human beings. And we can see that devaluation coming, which is another side of the ethical questions. Take the simple example of applying for a job.
Now, very often you don’t have a person-to-person interview. You speak into a machine, and you can be rejected before you’ve even met anybody, and that makes people feel undervalued and devalued. To say nothing, of course, of the other side of that, which is that AI systems are going to take over a lot of jobs. And that raises questions, particularly in the developing world, where they don’t have the infrastructure or the educational structure to teach people how to take advantage of the jobs in the new technology. So it’s going to lead, some people have suggested, to a massive unemployable cohort of people, which would be terrifying.
SAMUEL MARUSCA: I think that is very important, and you also mentioned that there are some biases, and you talk about the morality of AI. Now, going back to ChatGPT, I wrote some input into ChatGPT and asked it to give me 10 positive and 10 negative reasons for visiting a small town in the United States.
In the negative list, I got a number of things: a low employment rate, not much of a nightlife in that town, conservative values, and others.
JOHN LENNOX: As negatives?
SAMUEL MARUSCA: As negatives. I was surprised to see conservative values given as a negative by ChatGPT. So, the question is, John, in terms of the morality of AI, how is it that someone’s values from Silicon Valley are to be imposed onto someone from the countryside, from a village in Bangladesh, and why are those values better than another person’s in another area of the world?
JOHN LENNOX: Really, they may be much worse. And I think that puts the finger on it: all of the AI systems, and the more sophisticated they are, the more likely this is, enter into ethical territory. Those systems have to be programmed, and inevitably there is ethical programming in them. A simple example is the self-driving car, which has to make the kind of choice that’s often put to students of ethics, the switch-tracks dilemma. If the car is coming down the road and the sensor picks up an old man with a donkey in the center of the road, then if it swerves to the right, it’s going to hit a line of children, and if it swerves to the left, it’s going to hit a bus full of old-age pensioners. It has to do one of those three things.
Now, the programming will reflect the ethical viewpoint of the programmer, and we live now in probably the second generation of people in the West who have no shared ethical worldview. And that’s why I say very often to people who are scared of AI, which is another topic, I say to the scientifically minded, look, we need people in there who are doing really good research, like Professor Picard, but who also have ethical roots themselves, who can sit at the table and contribute to the direction in which things are moving.
Now, there’s a knee-jerk reaction from Elon Musk and other people like him, and one can appreciate it; at least somebody’s saying stop. But on the other hand there’s the philosophy, and it’s the philosophy that worries me most, and it’s been around for a very long time: if it can be done, it must be done. That’s the scary philosophy, especially when vast sums of money are involved. Think of the billions that are now being put into AI, when it was just a trickle relatively few years ago. In fact, AI nearly went defunct for a while.
It shows that there are fortunes to be made and lost in this area, and money creates huge pressure. It’s not a moral pressure, but it is a pressure on people. So we could be in for a pretty scary time, actually. There are things to fear, which is one of the reasons why, in my book, I took the risk of opening up the Christian perspective on all of this, to see whether we actually have anything that indicates where we should go.
Now, of course, the big question is, what should we do about it? That depends entirely on who we are. The average person who’s using a smartphone has no input, or virtually no input, apart from refusing to buy the thing, into the general ethical underpinning of this. But there are influencers, serious people, and we need all the help we can from ethicists and philosophers and folks like this, who will stand back from the whole thing and say, like Musk and Co., ‘Hey, stop! We need to sort something out before we go further because we’re actually now, for probably the first time in history, on the edge of a technology that we don’t really understand, and it could run amok in some way or other.’
SAMUEL MARUSCA: It’s interesting you brought in Christianity there, and of course, you talk about this in your book ‘2084’. Now, in terms of ethics, John, can we have a robust ethics of AI outside of the idea of God?
JOHN LENNOX: Well, I would actually challenge the idea of having any real ethics at all outside the idea of God. One of the writers that has interested me to a certain extent, I spent a lot of time in Russia, is Fyodor Dostoevsky, and in his famous novel ‘The Brothers Karamazov‘, he has this famous statement, ‘If God does not exist, then everything is permissible.’
Now, of course, he didn’t mean that atheists can’t behave. Of course, they can. From where I sit as a Christian, every man and woman, whether they believe in God or not, is a moral being, and therefore they have a moral base, which is ultimately, whether they believe in God or not, attached to God.
What Dostoevsky meant was that there’s no rational base for morality if you don’t believe in God. Now, that brings us directly back to Nietzsche, I think, who was hugely influential. I think he stands behind all this because he was a serious atheist. And he saw that if people reject God, then to say that they are keeping a Christian morality just doesn’t make sense, because once you kill God, in the end, you kill morality.
And the more I read of that, the more sense it makes to me philosophically, and it’s hugely important. But people cannot see it, and they keep on saying, ‘Well, of course, I can be good without God,’ and all this kind of thing, and in a sense they can. But that’s only because, as creatures of the living God, they have an inbuilt moral compass. Of course, we can suppress that, and teach people that it doesn’t exist, until they begin to believe it. And then we see the world as it is today, where morality gets relativized.
SAMUEL MARUSCA: The UK government is investigating how best to regulate the AI industry and AI in general. There’s been a white paper, proposed at the end of March, titled ‘A Pro-Innovation Approach to AI Regulation’, in which the government proposes five main pillars for regulating AI, including fairness and making it accessible and safe for everybody. What aspects of AI do you think should be regulated?
JOHN LENNOX: Well, those are the obvious things, and you’ll find those in the Asilomar principles. What rather has taken me aback is that the UK is not taking regulation as seriously as some other countries, and there’s been quite a cry about that. Why aren’t we doing more about it? So, I haven’t actually read the details there.
Again, the difficulty is that any of us can sit down and think of basic moral principles, if we’ve got any ourselves. It’s the implementation that’s the problem. How do you get a rogue state to listen? Well, they won’t, and historically it’s interesting that, for example, Hitler made agreements and treaties in his political infancy, but when he got the power, he tore them up. And the problem, in a nutshell, and one of the reasons I did a degree in bioethics was to try to get down to this fundamental question, is that in all areas, in law, in business, in medicine, and so on, the question nobody wants to answer is, ‘Who said so?’ The question of a child at school: ‘Who said so, Miss?’ That is the key question. Whose authority?
And of course, the kind of utilitarian ethic that is trumpeted today by people like Peter Singer in Australia and so on is brilliant if you’ve got equal centers of power. If you have, for example, to divide ice cream among 20 children who are equal centers of power, you’d better do it equally, or you’ll be in trouble. But if you come up to Hitler and say, ‘Well, you ought to do that,’ he’ll say, ‘Well, what will you do if I don’t?’ He’s got all the power, and that is the problem.
If one center of power has it all in their hands, where will you get the ‘oughtness’ from? And as David Hume, much as I disagree with him on so many things, long ago said, you cannot get an ‘oughtness’ from an ‘is-ness’, and that is the problem, I think, philosophically, that many people are facing, though they don’t realize it.
SAMUEL MARUSCA: You mentioned children, John, and I want to ask you about AI and children. Now, when I grew up in Eastern Europe, I didn’t have any technology; there were no screens, of course. And we know that children learn language in a certain way. A lot of language is internal language, and what goes on in our minds when we produce it is not accessible to consciousness.
So, when I select, when I choose to say one sentence and not another, I do this unconsciously. And also, when I lift my finger, linguists and neuroscientists will say that we actually do it unconsciously first, and we only become conscious of the movement after we have performed the action.
JOHN LENNOX: Libet’s experiment.
SAMUEL MARUSCA: Indeed, and Sam Harris picks up on this. He says that there’s actually no free will, that this is a demonstration that we have no free will, although I disagree with that. So, we don’t know quite yet what happens in children’s minds and how AI affects and impacts them.
And we’ve been using AI for a while now, and you know, Google, YouTube suggestions, all of these algorithms, all of these suggestions and prompts that we get, guide us in a certain direction. And if you’re not careful enough, you may be tempted and guided to think and choose some things that you may not have chosen in the first place. So, how do you think AI will impact children’s development and education, especially in early years, going forward?
JOHN LENNOX: I fear for them, because even without AI, look at what I have lived to see. I grew up like you, in a world, here in the West, where there were no screens, and we didn’t have a television for a very long time, so I’ve lived to watch all of this develop.
What I fear is that our children at the moment, and it’s highly controversial, are being taught things about their natures as human beings, and their sexuality, that I feel are explosively dangerous and are confusing them. So, long before you get to AI, I would have a little question as to whether this stuff that’s being pushed at them is actually written by ChatGPT or something like that. Where is it coming from? Because there seems to be almost an undermining of the concept of humanity that I would find acceptable.
In other words, the concept of human beings that has been dominant in Europe for centuries, but which, of course, has been set aside. And going back to your earlier point, the European Constitution doesn’t even mention God, and that worries me, though I can’t say much more about it here. But I think there’s one other element that we possibly need to discuss, if you don’t mind, and that is the whole question of the transhumanist agenda.
You see, that will start with children: how can we modify children? And here Yuval Noah Harari is very influential, and that concerns me greatly because it’s such an age-old idea. His second book, ‘Homo Deus’, the man who is God, that’s a very ancient idea. And we see it throughout history, right up into modern times in Eastern Europe, with Albania, for example, whose leader claimed essentially to be God and had hymns sung to him. We get that megalomania, that self-deification, right through the ancient world and up through history.
And now, we get Harari bringing all this technology in and saying, well, what we’re going to do in the 21st century is solve two major problems. One, the problem of physical death, which is just a technical problem, so it has a technical solution. And secondly, to enhance human happiness. Now that’s the transhumanist agenda, going beyond the human. And keeping with your question about children, what concerns me is, when are people going to start these operations to change a person’s gender at younger and younger ages? This kind of genetic engineering is reaching further and further into what it means to be a human being.
Now, to cut a long story short, when people come to me and tell me about this transhuman agenda, how exciting it is, we’re going to solve the problem of physical death, that we’re going to enhance human happiness by bioengineering, genetic engineering, drugs, everything else, I simply look at them and say, ‘You’re too late.’ And they say, ‘Of course we’re not too late, we haven’t got there yet.’ Oh, I said, ‘You’re far too late. The problem of physical death was solved 20 centuries ago when Christ rose from the dead.’
Now, that’s a big shock to the system, because I bring the supernatural in, because I believe it to be true, even as a scientist, that Christ’s resurrection changes the whole game. And secondly, there’s the whole question of people’s fear of dying; they want some eternity built in; they even have their brains frozen in a cryogenic storehouse or something like this. I say, but look, the thing that you’ve missed in all of this is that Christ offers to people who trust him the most impressive uploading you’ll ever come across, and that is the resurrection when he returns.
Now, I discover that when I say that to people, they see I believe something. But it’s hugely important, because my attitude to all these futuristic scenarios is to say to people, ‘Great, if you’re prepared to think about those, then why won’t you think of a scenario that’s much older than any of them and has huge additional credibility, because it’s backed up by two major things: history and human experience. Let me tell you about it.’
And that is the fundamental Christian message. And at the heart of it, and this, to my mind, is hugely important, a little child is made in the image of God. And why is humanity so unique? Well, I would want to say that it’s unique because God became human. Human beings are of such a nature, let me put it this way, that God can become one. I find very few people have reflected on the absolutely staggering nature of that claim. Of course, it’s either true or false, but I believe there’s evidence to support that it’s true. And that colors my whole attitude to the field of artificial intelligence, in particular to AGI.
SAMUEL MARUSCA: I really think, in all of this debate about artificial intelligence, we need to rediscover what makes us human.
JOHN LENNOX: Exactly.
SAMUEL MARUSCA: John, thank you very much for your time.
JOHN LENNOX: It’s been a pleasure.
SAMUEL MARUSCA: Thank you.
JOHN LENNOX: I very much hope that you’ve enjoyed this discussion. It’s been very stimulating for me, and I hope it’s been stimulating for you. And so, therefore, I would encourage you to check on future podcasts and follow them because you can be absolutely sure they’ll be dealing in a serious way with the major issues that we all need to know about as we face the coming years.