
Why AI Is A Threat – And How To Use It For Good: John Tasioulas (Transcript)

Read the full transcript of John Tasioulas’s talk, “Why AI Is A Threat – And How To Use It For Good,” delivered at the TEDxAthens 2025 conference on Feb 16, 2025.


TRANSCRIPT:

JOHN TASIOULAS: Well, I’m a philosopher in Athens in 2024, talking to you about artificial intelligence. That is a great thrill and a great honor. But you might ask, what does the ancient discipline of philosophy have to say about the AI revolution?

I’ve got 15 minutes to make the case. Let’s go back to Socrates in the Republic. He says, the question we’re dealing with is not a trivial question. It is the question, how should one live? Now this question, how should we live, is a central philosophical question.

But it’s also a question each one of us has a responsibility to answer for ourselves. That’s why Socrates conducted his dialogues in the Agora in Athens with ordinary citizens. And AI makes it all the more urgent to ask this question again, because this technology is so revolutionary, it could transform our lives, both for the better and for the worse. So we need to ask this question again, in light of these new technological developments that Socrates could not have foreseen.

But not only does AI make this question, how should we live, more urgent, it creates problems for us in addressing this question, new problems, new threats to our ability to answer Socrates’ question. So today I’m going to talk about three of these threats.

Threat 1: Distortion of Self-Understanding

The first threat is that AI threatens to distort our self-understanding, our understanding of what it is to be human. All the big tech corporations say they have the same goal. Their goal is to create artificial general intelligence. Artificial general intelligence is intelligence that replicates the entire spectrum of human intelligence: able to write a poem, make a cancer diagnosis, make a hiring decision, do everything that a human can do.

But if we pursue AGI as our aim, there will be a temptation, an incentive, to imagine that AGI has been achieved because we’ve moved the goalposts, changed our own understanding of what humans are like, blurring the distinction between humans and machines. But we need to preserve this distinction.

So what are the differences? Well, one fundamental difference relates to understanding. An AI system like GPT-3, a large language model, operates on the basis of statistical correlations in data. These statistical correlations do not give it the genuine understanding a human has of what a cat is or what an electron is. Humans do have that understanding, partly because we’re embodied agents engaging with physical reality. And one way this difference shows up is in a quality that most humans have, common sense, which has proved very elusive for AI systems. In other words, AI systems make spectacular mistakes that no human would ever make.

They confuse a cat with a skateboard or a human with a gorilla. No human would ever make these mistakes because humans have a rooted understanding of the world in a way that a machine operating on correlations between data points does not. So understanding is a big difference.

But there’s another big difference, and that is humans have a capacity for rational autonomy. We have the capacity to choose our goals, to decide, do we want to be a doctor or an actress?

Do we want to be a lawyer or a musician? And to choose these goals in light of all the reasons, pro and con, for each option. An AI system cannot do this. It has a goal programmed into it, and it optimizes for the fulfillment of that goal. Now you might say, if we start to blur the distinction between human and machine, what difference does it make?

Why is this important? Why is it a threat? The reason it’s a threat, I think, is the reason given by Aristotle, which is if we want to know what a good life for a human being is, we need to know what the distinctive capabilities of a human are. These capabilities for reasoning, for communicating, for social engagement that manifest themselves in all sorts of valuable pursuits, like scientific research, or sustaining a friendship, or fighting for justice. If we lose our vivid sense of the capabilities we have as humans, the risk is we will have an impoverished ethics. We will have an impoverished sense of the values that matter.


What will that impoverished ethics look like? Well, one form it will take will be a kind of consumerism that says what matters is your gratification as a consumer, which can be just a passive matter of receiving pleasure rather than exercising your capabilities. Or there’s another kind of ethics that’s become very popular in Silicon Valley, which says the pathway to a better life is transhumanism, is to transcend our human nature, to live in virtual reality, to become cyborgs. Now in the Aristotelian view, according to which a fulfilling life is exercising your human capacities, transhumanism, which is taken seriously in these powerful circles, is not a path to utopia, it is a path to species suicide.

Threat 2: Distortion of Problem Understanding

Another threat: astronomical sums of money are being invested by the private sector in the development of artificial intelligence. In 2025, it is projected that $200 billion will be invested in developing AI technologies. And there will be a profound economic incentive for these companies to tell us that these AI systems can solve problems better than humans can. And that will create another distortion, whereby increasingly, maybe unconsciously, we start to change the nature of our problems, the way we understand our problems, to make them more suited to being handled by AI systems. So let me give you an example of this.

Risk assessment tools based on AI technology are already used in criminal justice.