
Transcript: Why Do Conspiracy Theories Go Viral? – Adam Kucharski

Read the full transcript of Professor Adam Kucharski’s talk titled “Why Do Conspiracy Theories Go Viral?” at TEDxLondon [Mar 10, 2025].

TRANSCRIPT:

ADAM KUCHARSKI: It is not easy to explain why aeroplanes stay in the sky. A common explanation is that the curved shape of the wing makes air flow faster above and slower beneath, creating lift. But this doesn’t explain how planes can fly upside down. Another explanation is that the angle of the wing pushes air downwards, creating an equal and opposite upwards force. But this doesn’t explain why, as the angle gets slightly steeper, planes can suddenly stall.

The point is, aerodynamics is complex. It’s difficult to understand, let alone explain, in a simple, intuitive way. And yet, we trust it. And the same is true of so many other useful technologies in our lives.

The idea of heart defibrillation has been around since 1899, but researchers are still working to untangle the biology and physics that mean an electric shock can reset a heart. Then there’s general anaesthesia. We know what combination of drugs will make a patient unconscious, but it’s still not entirely clear exactly why they do. And yet, you’d probably still get that operation, just like you’d still take that flight.

For a long time, this lack of explanation didn’t really bother me. Throughout my career as a mathematician, I’ve worked to separate truth from fiction, whether investigating epidemics or designing new statistical methods. But the world is complicated, and that’s something I’d become comfortable with.

For example, if we want to know whether a new treatment is effective against a disease, we can run a clinical trial to get the answer. It won’t tell us why the treatment works, but it will give us the evidence we need to take action.

The Challenge of Unexplainable Technology

So I found it interesting that in other areas of life, a lack of explainability does visibly bother people. Take AI. One of the concerns about autonomous machines like self-driving cars is we don’t really understand why they make the decisions they do.

There will be some situations where we can get an idea of why they make mistakes. Last year, a self-driving car blocked off a firetruck as it was responding to an emergency in Las Vegas. The reason? The firetruck was yellow, and the car had been trained to recognise red ones. But even if the car had been trained to recognise yellow firetrucks, it wouldn’t go through the same thought process we do when we see an emergency vehicle.

Self-driving AI views the world as a series of shapes and probabilities. With sufficient training, it can convert this view into useful actions, but fundamentally it’s not seeing what we’re seeing.


The tension between the benefits that computers can bring and the understanding that humans have to relinquish isn’t new. In 1976, two mathematicians named Kenneth Appel and Wolfgang Haken announced the first ever computer-aided proof. Their discovery meant that for the first time in history, mathematicians had to accept a major theorem that they could not verify by hand.

The theorem in question is what’s known as the four-colour theorem. In short, this says that if you want to fill in a map with different colours so that no two bordering countries are the same colour, you’ll only ever need four colours to do this. The mathematicians had found that there were too many map configurations to check through by hand, even if they simplified things by looking for symmetries. So they used a computer to get over the finish line.
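As a toy illustration of what the theorem claims (not of the 1976 proof itself, which checked vastly more cases), here is a short backtracking search that four-colours a small hypothetical map, represented as a graph where each region lists its neighbours. The map data below is made up for illustration:

```python
def four_colour(adjacency):
    """Assign one of 4 colours (0-3) to each region so that no two
    neighbouring regions share a colour, via backtracking search."""
    regions = list(adjacency)
    colours = {}

    def backtrack(i):
        if i == len(regions):
            return True  # every region coloured successfully
        region = regions[i]
        for colour in range(4):
            # only try a colour no already-coloured neighbour uses
            if all(colours.get(n) != colour for n in adjacency[region]):
                colours[region] = colour
                if backtrack(i + 1):
                    return True
                del colours[region]  # undo and try the next colour
        return False

    return colours if backtrack(0) else None

# A small illustrative map: each region lists its bordering regions.
western_europe = {
    "France": ["Spain", "Germany", "Belgium", "Switzerland", "Italy"],
    "Spain": ["France"],
    "Germany": ["France", "Belgium", "Switzerland"],
    "Belgium": ["France", "Germany"],
    "Switzerland": ["France", "Germany", "Italy"],
    "Italy": ["France", "Switzerland"],
}

colouring = four_colour(western_europe)
# Every bordering pair ends up with different colours, using at most 4.
```

A search like this works for a handful of regions, but the theorem’s difficulty lies in proving that four colours suffice for every possible planar map, which is why Appel and Haken needed a computer.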

Not everyone believed the proof initially. Maybe the computer had made an error somewhere. Suddenly, mathematicians no longer had total intellectual control. They had to trust a machine. But then something curious happened. While older researchers had been sceptical, younger mathematicians took the opposite view. Why would they trust hundreds of pages of handwritten and hand-checked calculations? Surely a computer would be more accurate.

Is Understanding Always Necessary?

Whether we’re talking about anaesthesia, self-driving cars, or mathematical proofs, perhaps we don’t need to fully understand something as long as the accuracy is high enough for what we need.

Let’s go back to self-driving cars. A common thought experiment when it comes to AI is what’s known as the trolley problem. Suppose we have a heavy trolley or a big car and it’s going to hit a group of people. But you have the option of pulling a lever to divert the vehicle so it hits only one person. Would you pull that lever? And would it matter whether the people are old or young?

These kinds of decisions can sometimes crop up in real life with human drivers. In 2020, a car in Michigan swerved to avoid a truck and hit a young couple walking on the pavement, putting them in hospital for several months. Would AI have reacted differently? Well, it turned out that the car was also racing side-by-side with another vehicle at the time and the driver didn’t have a valid licence.

Before we get too deep into theoretical dilemmas, we should remember that humans often aren’t very good drivers. If we could ensure there were far fewer accidents on our roads, would you mind being unable to explain the ones that did happen?

In this complex world of ours, maybe we should just abandon the pursuit of explanation altogether. After all, many data-driven areas of science increasingly focus on prediction because it’s fundamentally an easier problem than explaining. Like anaesthesia, we can often make useful predictions about what something will do without fully understanding it.


When Explanation Matters

But explanation can sometimes really matter if we want a better world. The focus on prediction is particularly troubling in the field of justice. Increasingly, algorithms are used to decide whether to release people on bail or parole. The computer isn’t deciding whether they’ve committed a crime. In effect, it’s predicting whether they’ll commit one in future.

But ideally, we wouldn’t just try and predict future crimes using an opaque algorithm.