
TRANSCRIPT: 2084 – Artificial Intelligence and the Future of Humanity: John C Lennox

Here is the full text and a summary of John C Lennox’s lecture titled “2084 – Artificial Intelligence and the Future of Humanity”.


TRANSCRIPT:

We humans are insatiably curious. We’ve been asking questions since the dawn of history. We’ve especially been asking the big questions about origin and destiny. Where do I come from and where am I going? The problem is that these are not easy questions. And with the rise of artificial intelligence, the questions become even more daunting.

Will Technology Change What It Means To Be Human?

How should we think about the artificial intelligence we encounter in everyday life? Will all this change the way people think about God?

I believe that there are real, credible answers to these questions, and that Christianity has some very serious, sensible, evidence-based things to say about the nature of our quest for superintelligence. It offers real answers, way beyond anything that the AI prophets can even dream of.

Every author has a biography, and I have found when reading books that the more I know about the author, the better I can understand why they write what they do. In my case, I come from Northern Ireland, a country sadly infamous for terrorism and violence, and I grew up in the city of Armagh, which was a particularly violent centre.

The violence was just beginning when I left to go to Cambridge. But what was very important was the way I was brought up by my parents. They were very unusual people, in the sense that we lived in a sectarian country, divided between Protestant and Catholic communities.

My parents were Christian, but they were not sectarian. That was demonstrated by the fact that my father, who ran a medium-sized country store, tried as best he could to employ people from both communities. The shop was bombed because of that, and my brother was nearly killed.

And I asked him once why he’d done that. And he said, well, I believe that the biblical account of human life is correct, that human beings, no matter what they believe, are made in the image of God and therefore of infinite value, and I try to treat them that way. And that has become a life principle for me.

And we’ll be looking at that a little bit later, the significance of human life.

The second thing I got from my parents was that they loved me enough to allow me to think. That, I discovered later, was unusual, because the country was full of religious bigotry. But my father in particular read widely, although he didn’t have the education I later got, and he encouraged me to read as widely as I could. When I was about 14, he gave me a copy of the Communist Manifesto. I said, have you read that? He said, no, but you should. I said, why? He said, because you need to know what other people think.

So with that background, I went up to Cambridge to read mathematics, although I was originally interested firstly in languages and secondly in electrical engineering. I ended up at Cambridge because my school headmaster thought I might have a chance of getting in. And I suppose one of the first interesting things about Cambridge was that C.S. Lewis was still there, and I was able to attend some of the very last lectures he ever gave.

But I was challenged when I got to Cambridge. Very early on, a fellow student said to me, do you believe in God? And he apologized, and he said, I shouldn’t have asked you that. You’re Irish. All you Irish believe in God, and you fight about it. And of course, I’d heard that many times.

But I thought, I’ve got a real opportunity now at Cambridge to meet people from different worldviews, to find out what makes them tick, and to befriend them. And so I searched for people who did not share my Christian worldview. And I’ve been befriending people like that ever since.

So what that explains about me is that I’ve always been, since childhood, interested in the big questions, the big questions of life, the big worldview questions. And being a mathematician, I’ve wondered, well, where does mathematics fit in science? And then, where does science fit in our view of reality?

Does Science Tell Us Everything?

Or is there more to be found? Is there a transcendent dimension? Which, of course, I believed in as a Christian. But I wanted to expose my faith in God and Christ to questioning. And so, for all of my life, I have opened myself, made myself vulnerable, if you like, to facing really big questions.

And I spend my time playing Socrates, asking the big questions. And I’m looking at some of the big questions with which developments in artificial intelligence confront us. Questions like this: will we be able one day to construct artificial life? Will we be able to re-engineer humans so that they become super-intelligent?

And what implications will advances in AI have for our worldviews in general, and indeed for the God question in particular? Now, I have never personally constructed an autonomous vehicle or weapon, and I’ve never designed a machine learning system. But you don’t have to be able to do either of those practical things in order to have an intelligent discussion about their implications.

My background is in pure mathematics and the philosophy of science. And that has given me a keen interest in the public understanding of science. So, let’s begin with human curiosity about the questions: where do I come from and where am I going? Our answers to the first shape our concepts of who we are.

Where Do I Come From And Where Am I Going?

A person who has lost their memory often loses their identity, and has to be given information about their past in order to reconstruct that identity. So the past determines our identity. But then there is the second question, the matter of the future, where we’re going. Our answers to that question give us our goals to live for.

The sad tragedy is that people who take their own lives often leave a note behind saying, ‘I have nothing to live for.’ So we live for the future, and our identity is shaped by the past. Both of those fill out our worldview and help to define it.

Now, what is our worldview? It’s the narrative within which we live our lives, the narrative that gives our lives their meaning. Over the years, many answers to these questions have of course been proposed by science, philosophy, religion, politics and so on. Two of the most famous futuristic scenarios are the novels Brave New World by Aldous Huxley and 1984 by George Orwell, which suggested my present title.

Both novels are dystopian. That is, according to the Oxford English Dictionary at least, they describe an imaginary place or condition that is as bad as possible. However, they are very different. The media critic Neil Postman put it this way: “Orwell warns that we will be overcome by an externally imposed oppression.”

But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacity to think. Orwell envisaged the arrival of a surveillance state. The development of facial recognition and tracking technology using AI has now made this possible, and that is reason enough in itself to write about AI, whose goal is to build computer technology that can do the sorts of things a human mind can do, in the hope of eventually producing superintelligence.

Billions of dollars are now being invested in artificial intelligence, and we wonder where it’s all going to lead, for good or ill. On the plus side, a better quality of life through digital assistants, medical innovation and human enhancement; on the minus side, fear of job losses and Orwellian surveillance societies. We need to separate reality from fantasy and hype.

So let’s be very clear at the start that most of the successes so far in artificial intelligence have to do with building systems that do one thing, and only one thing, that would normally take human intelligence to do: working out your buying preferences, for instance. This is called narrow AI.

By contrast, artificial general intelligence (AGI) is the very ambitious quest to build systems which some think will surpass human intelligence within a relatively short time, perhaps even by 2084. On that score, three contemporary best-selling books came to my attention. The first two are by Israeli historian Yuval Noah Harari: Sapiens, which deals, as its title suggests, with the first of our questions, the origins of humanity,

and Homo Deus: A Brief History Of Tomorrow, which deals with humanity’s future. The third book is, like Huxley’s and Orwell’s, a novel: Origin, by Dan Brown. It focuses on the use of AI to answer both of our questions in the form of a page-turning sci-fi thriller that is likely to be read by millions of people.

Brown also focuses on the question: will God survive science? That question, in various forms, has motivated me to write several of my books. The work has led me to the conclusion that God will more than survive science. But it has also led me seriously to question whether atheism will survive science. A very controversial viewpoint, I know.

One of Dan Brown’s main characters in Origin is, of course, a billionaire: the computer scientist and artificial intelligence expert Edmond Kirsch, who claims to have solved the questions of life’s origin and human destiny. He intends to use his results to fulfil his long-held goal of employing the truth of science to destroy the myth of religion, meaning in particular the three Abrahamic faiths: Judaism, Christianity and Islam.

Perhaps inevitably, he concentrates on Christianity. His solutions, when they are eventually revealed to the world, are a product of his expertise in artificial intelligence. His take on the future involves the technological modification of human beings. Now, it should be pointed out right away that it is not only historians and science fiction writers, but some of our most respected scientists, who are now suggesting that humanity itself may well be changed by future technology.

For example, the UK’s Astronomer Royal, Lord Martin Rees, says: “We can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us, even though they may have an algorithmic understanding of the way we behaved.”

The term AI, or artificial intelligence, was coined at a summer school held at Dartmouth College in 1956, a conference organized by John McCarthy, who said, ‘AI is the science and engineering of making intelligent machines.’ However, the idea of constructing machines that can simulate aspects of human, and indeed animal, behavior has a very long history, which includes in particular the brilliant work of Alan Turing during World War II to decode messages encrypted by the Germans on their Enigma machine.

More recent landmark achievements that attracted huge public attention were IBM’s Deep Blue computer, which beat world chess champion Garry Kasparov in 1997, and Google DeepMind’s AlphaGo program, which in 2016 used machine learning to beat one of the world’s strongest professional Go players, Lee Sedol, without handicap.

Early robots and AI systems did not involve learning. Key to the current machine learning process is the idea of an algorithm, which may be of various types: symbolic, mathematical, and so on. The word algorithm itself is derived from the name of a famous Persian mathematician, astronomer, and geographer, Muhammad ibn Musa al-Khwarizmi, who lived from around 780 to 850.

An algorithm is, according to the Oxford English Dictionary, ‘a precisely defined set of mathematical or logical operations for the performance of a particular task.’ The key feature of an algorithm is that once you know how it works, you can solve not just one problem, but a whole class of problems.

One of the most famous examples is the Euclidean algorithm, which is used to find the greatest common divisor of two positive integers, and which many of us learned at school. In a typical contemporary artificial intelligence system, the relevant algorithms are embedded in computing software that sorts, filters, and selects various pieces of data that are presented to it.
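To make the idea concrete, here is a minimal sketch in Python of the Euclidean algorithm just mentioned (the function name and the example numbers are ours, purely for illustration):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two positive integers, by Euclid's method."""
    while b != 0:
        # Replace (a, b) with (b, a mod b); the remainder keeps shrinking,
        # so the loop is guaranteed to terminate.
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21
```

The same few lines work for any pair of positive integers, which is exactly the ‘whole class of problems’ property described above.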

In general terms, such a system can use training data to ‘learn’ (hence the term machine learning) to recognize, identify, and interpret digital patterns such as images, sound, speech, text, and other data. In short, a machine learning system takes in information about the past and makes decisions or predictions when it is presented with new information.

In a lot of early work in artificial intelligence, algorithms were designed to solve a particular problem. In more recent AI, a general algorithm is designed which ‘learns’ a solution to the problem. Often the human developers don’t know an explicit algorithm for solving the problem and don’t know how the system arrives at its conclusions.
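To illustrate that difference, here is a toy sketch with invented data (not any production system): in the first style the developer hand-codes the rule; in the second, a trivial learner recovers an equivalent rule from labeled examples.

```python
# Style 1: an explicit algorithm, where the developer hand-codes the rule.
def classify_explicit(x: float) -> int:
    return 1 if x > 50 else 0

# Style 2: a "learned" algorithm, where the rule is recovered from data.
# Invented training data: (measurement, correct label) pairs.
training_data = [(10, 0), (30, 0), (45, 0), (55, 1), (70, 1), (90, 1)]

def learn_threshold(data):
    """Pick the cutoff that misclassifies the fewest training examples."""
    candidates = [x for x, _ in data]
    return min(candidates,
               key=lambda t: sum((x > t) != bool(y) for x, y in data))

threshold = learn_threshold(training_data)  # 45 for the data above

def classify_learned(x: float) -> int:
    return 1 if x > threshold else 0

print(classify_explicit(60), classify_learned(60))  # both print 1
```

Real systems learn millions of parameters rather than a single threshold, which is why their developers often cannot say how a particular conclusion was reached.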

Early chess-playing programs were of the first type; even Deep Blue was largely in this category, whereas the modern Go software is of the second type. But here are some examples of AI systems so that we can make this more precise and clear. For instance, Amazon uses algorithms that trace your online purchases, and mine too, and uses statistical methods to suggest new products you or I might like to buy.
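A toy version of that kind of statistical suggestion might look like the sketch below; the baskets and the simple co-purchase counting are invented for illustration and are certainly not Amazon’s actual method.

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories: each inner list is one customer's basket.
baskets = [
    ["book", "lamp"],
    ["book", "lamp", "desk"],
    ["book", "desk"],
    ["lamp", "desk"],
]

# Count how often each pair of products is bought together.
co_purchases = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_purchases[(a, b)] += 1

def suggest(product):
    """Suggest the item most often bought alongside the given product."""
    related = Counter()
    for (a, b), count in co_purchases.items():
        if a == product:
            related[b] += count
        elif b == product:
            related[a] += count
    return related.most_common(1)[0][0] if related else None

print(suggest("book"))  # e.g. "lamp"
```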

Algorithms have been devised to sort through job applications and suggest the applicant most suited to the job. AI systems are already up and running that work with a database consisting of many thousands of X-rays of lungs, for example, each labeled according to its state of health by top-level medical professionals.

In that sense, the system learns about the various diseases from the labels. The system then compares an X-ray of your lungs or mine with this database in order to check whether or not you have, say, a specific type of lung cancer. Such a system is an example of what is called supervised machine learning, and it has been very successful in recent years.
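As a hedged sketch of that supervised idea, here is a nearest-neighbour classifier over made-up feature vectors standing in for X-ray data (real medical systems use deep networks over actual images, not three-number vectors):

```python
import math

# Invented training database: (feature vector, expert label) pairs.
labeled_scans = [
    ((0.1, 0.2, 0.1), "healthy"),
    ((0.2, 0.1, 0.2), "healthy"),
    ((0.8, 0.9, 0.7), "disease"),
    ((0.9, 0.8, 0.9), "disease"),
]

def classify(scan):
    """Give a new scan the label of its nearest neighbour in the database."""
    _, label = min(labeled_scans,
                   key=lambda item: math.dist(scan, item[0]))
    return label

print(classify((0.85, 0.80, 0.80)))  # prints "disease"
```

The ‘supervision’ is carried entirely by the expert-provided labels: the system never learns what disease is, only which labeled examples a new case most resembles.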

A great deal of research is being done in another direction: to develop AI systems that can translate from one language to another, as in Google Translate. Facial recognition is now highly developed. One rather amusing application is to use AI facial recognition technology in a pub in order to recognize who is next in line to get a drink at the bar, and so avoid unfair queue-jumping.

Closed-circuit television cameras are now ubiquitous and are used by police to track criminal activity. However, of course, such surveillance systems can also be used for social control. We shall look later at the major ethical issues that arise from such applications.

It is pretty obvious from even this short list that many, if not all, of these developments raise ethical questions, from financial manipulation and crime to invasion of privacy and social control. The danger is that people are carried away with the idea that if it can be done, it should be done, without thinking carefully through the potential ethical problems.

You see, the big question to be faced is this: how can an ethical dimension be built into an algorithm that is itself devoid of heart, soul and mind? And it’s here that the language of AI can be confusing. For instance, its use of everyday words like learning, planning, reasoning and intelligence as technical terms to describe inanimate machinery can give the impression that AI systems are more capable than they actually are, since such terms are used in a much narrower way than in common usage.

As a result, media coverage of AI tends to over-dramatize results and to be over-optimistic or over-fearful. Professor Joseph Mellichamp of the University of Alabama, speaking at a conference at Yale in 1985, said: “It seems to me that a lot of needless debate could be avoided if AI researchers would admit that there are fundamental differences between machine intelligence and human intelligence, differences that cannot be overcome by any amount of research.”

In other words, to cite the succinct title of Mellichamp’s talk, ‘The Artificial in Artificial Intelligence is Real’, a brilliant formulation, to my mind, of the situation. Computer scientist Professor Danny Crookes of Queen’s University Belfast also stresses the need for realism here. He says: “We are still a long, long way from creating real human-like intelligence. People have been fooled by the impact of data-driven computing into thinking that we are approaching the level of human intelligence. But in my opinion, we are nowhere near it.”

There are, in fact, reasons to doubt whether we will ever get there. We need to bear this in mind as we look at Dan Brown’s use of narrow AI in our next session.

Want a summarized version of this talk? Here it is:

SUMMARY:

The lecture titled “2084 – Artificial Intelligence and the Future of Humanity” by John C. Lennox delves into the intricate landscape of artificial intelligence (AI) and its profound implications for humanity. Lennox begins by emphasizing human curiosity and the perennial pursuit of understanding fundamental questions about our origin and destiny. He acknowledges that these inquiries have become even more complex with the advent of AI.

Lennox reflects on the potential transformation of human identity due to technological advancements, prompting a query about the changing perceptions of God. He argues that Christianity offers compelling, evidence-based insights into these matters, surpassing the visionary predictions of AI enthusiasts. He then shares his personal background growing up in Northern Ireland, a place marked by sectarian strife. Despite the divisiveness, his parents practiced non-sectarianism, treating all individuals with respect and dignity, which became a life principle for him.

His upbringing encouraged him to think openly, leading him to study mathematics at Cambridge University. Lennox recalls encountering various worldviews and fostering relationships with people of differing perspectives. He identifies himself as someone who has always been fascinated by life’s grand questions and seeks to explore the interplay between science, mathematics, and a transcendent dimension.

The talk transitions to discussing the fundamental questions of human existence: “Where do I come from?” and “Where am I going?” Lennox argues that answers to these questions shape our identity and goals. The past defines who we are, while the future provides purpose and direction. He introduces the concept of a worldview, which serves as a narrative guiding our lives and conferring meaning upon them.

Lennox invokes dystopian novels such as Aldous Huxley’s “Brave New World” and George Orwell’s “1984,” both of which explore imagined futures with dire consequences. He highlights their differences: Huxley predicts voluntary subjugation to technology, whereas Orwell envisions oppressive surveillance. In the context of AI, he examines the potential to construct artificial life and enhance human intelligence, posing questions about the ethical and philosophical implications.

The talk subsequently delves into the history and concepts of AI. Lennox clarifies the terminology, noting that AI’s technical use of everyday words often differs from their common usage. He emphasizes that AI’s artificial nature sets it apart from human intelligence, a point underscored by experts and scientists. Despite AI’s successes, he contends that it still lacks the nuanced understanding and reasoning abilities of humans.

Lennox concludes by cautioning against overestimating AI’s capabilities and offers a preview of the next session, which will analyze Dan Brown’s use of narrow AI. He reminds readers that despite AI’s advances, it is essential to stay grounded in the reality of its current limitations.

In this talk, John C. Lennox navigates the intricacies of AI and human existence, drawing on his personal experiences, philosophical insights, and historical context to provide a comprehensive exploration of the profound implications posed by the rise of artificial intelligence.

For Further Reading:

Luca Longo: The Turing Test, Artificial Intelligence and the Human Stupidity (Transcript)

TRANSCRIPT: AI And The Future Of Humanity – Yuval Noah Harari

TRANSCRIPT: The Next Global Superpower Isn’t Who You Think – Ian Bremmer

TRANSCRIPT: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat
