Iyad Rahwan – Australian-Syrian scientist
Today I’m going to talk about technology and society. The US Department of Transportation estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there were a way we could eliminate 90% of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents: human error.
Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but it may swerve, hitting one bystander and killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.
Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point, because it takes the scenario too literally. Of course, no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if you swerve in one direction versus another, you might slightly increase the risk to passengers or other drivers versus pedestrians. It’s going to be a more complex calculation, but it’s still going to involve trade-offs, and trade-offs often require ethics.
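To make the idea of a probabilistic trade-off concrete, here is a minimal sketch of comparing two maneuvers by probability-weighted harm. Every number, and the `expected_harm` helper itself, is invented for illustration; nothing here comes from real vehicle software.

```python
# Hypothetical risk trade-off between two maneuvers.
# Each maneuver lists (probability of injury, severity) pairs,
# one pair per person potentially affected. All numbers are made up.

def expected_harm(option):
    """Sum probability-weighted harm over everyone affected by a maneuver."""
    return sum(p * severity for p, severity in option)

stay_course = [(0.7, 10), (0.7, 10), (0.05, 8)]   # likely hits two pedestrians
swerve      = [(0.3, 9), (0.2, 6)]                # risks one bystander and the passenger

for name, option in [("stay course", stay_course), ("swerve", swerve)]:
    print(name, expected_harm(option))
```

A utilitarian controller would pick whichever maneuver minimizes this sum; the point is that some weighting of lives and risks is baked into the comparison either way.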
We might say then, “Well, let’s not worry about this. Let’s wait until the technology is fully ready and 100% safe.” Suppose that we can indeed eliminate 90% of those accidents, or even 99%, in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? At the current worldwide rate of 1.2 million deaths a year, that’s 60 million people dead in car accidents over those 50 years. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.
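The arithmetic behind that figure follows directly from the talk’s own numbers:

```python
# Worldwide traffic deaths per year, as stated earlier in the talk.
deaths_per_year = 1_200_000
# Hypothetical extra research time before the last 1% is solved.
years_of_waiting = 50

print(deaths_per_year * years_of_waiting)  # 60,000,000 deaths at the current rate
```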
People on social media have been coming up with all sorts of ways not to think about this problem. One person suggested the car should just swerve somehow in between the pedestrians and the bystander. Of course, if that’s what the car can do, that’s what the car should do. We’re interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press just before the car self-destructs.
So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.
So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm — even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like “Thou shalt not kill.” So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that’s going to harm more people.
What do you think? Bentham or Kant? Here’s what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that’s what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, “Absolutely not.” They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm.
We’ve seen this problem before. It’s called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history. In 1833, the English economist William Forster Lloyd published a pamphlet describing the following scenario. You have a group of farmers — English farmers — who are sharing a common piece of land for their sheep to graze. Now, if each farmer brings a certain number of sheep — let’s say three sheep — the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good.
Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land would be overrun and depleted, to the detriment of all the farmers and, of course, of the sheep.
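A toy payoff model captures why the individually rational choice backfires when everyone makes it. The capacity and payoff numbers here are invented purely to illustrate Lloyd’s scenario:

```python
# Toy model of Lloyd's commons: the value of each sheep falls as
# total grazing approaches the land's capacity. Numbers are made up.

def payoff_per_farmer(sheep_per_farmer, farmers=10, capacity=40):
    total = sheep_per_farmer * farmers
    value_per_sheep = max(0.0, 1.0 - total / capacity)  # overgrazing lowers value
    return sheep_per_farmer * value_per_sheep

# Everyone restrains themselves to 3 sheep: 30 sheep total, land holds up.
everyone_restrained = payoff_per_farmer(3)

# One defector grazes a 4th sheep while the other 9 farmers keep 3:
total = 4 + 3 * 9                          # 31 sheep on the commons
defector = 4 * max(0.0, 1.0 - total / 40)  # the defector comes out ahead

# But if all 10 farmers graze 4 sheep, the commons is exhausted.
everyone_defects = payoff_per_farmer(4)

print(everyone_restrained, defector, everyone_defects)
```

The lone defector’s payoff beats the restrained payoff, yet when everyone defects the land yields nothing — the same structure as wanting everyone else to buy harm-minimizing cars while buying a self-protective one yourself.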