Read the full transcript of mathematician and TED Fellow Adam Kucharski's talk, "Why Does Uncertainty Bother Us So Much?", delivered at TEDxLondon on January 26, 2025.
The Mystery of Flight and Trust in Technology
ADAM KUCHARSKI: It’s not easy to explain why aeroplanes stay in the sky. A common explanation is that the curved shape of the wing makes air flow faster above and slower beneath, creating lift. But this doesn’t explain how planes can fly upside down. Another explanation is that the angle of the wing pushes air downwards, creating an equal and opposite upwards force. But this doesn’t explain why, as the angle gets slightly steeper, planes can suddenly stall. The point is, aerodynamics is complex. It’s difficult to understand, let alone explain in a simple, intuitive way. And yet, we trust it.
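For readers who want to see what trusting it anyway looks like in practice, here is the standard textbook lift relation (my addition, not part of the talk): engineers predict lift from an empirically measured coefficient rather than from either of the simple stories above.

```latex
% Standard lift equation: reliable prediction without a simple intuitive story.
\[
  L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_{L}(\alpha)
\]
% \rho: air density, v: airspeed, S: wing area, \alpha: angle of attack.
% C_L(\alpha) is measured in wind tunnels; it grows roughly linearly with
% \alpha, then drops sharply past a critical angle -- the stall.
```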
And the same is true of so many other useful technologies in our lives. The idea of heart defibrillation has been around since 1899, but researchers are still working to untangle the biology and physics that mean an electric shock can reset a heart. Then there’s general anaesthesia. We know what combination of drugs will make a patient unconscious, but it’s still not entirely clear exactly why they do. And yet, you’d probably still get that operation, just like you’d still take that flight.
Comfort with Complexity in Mathematics
For a long time, this lack of explanation didn’t really bother me. Throughout my career as a mathematician, I’ve worked to separate truth from fiction, whether investigating epidemics or designing new statistical methods. But the world is complicated, and that’s something I’d become comfortable with. For example, if we want to know whether a new treatment is effective against a disease, we can run a clinical trial to get the answer.
It won’t tell us why the treatment works, but it will give us the evidence we need to take action.
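As a minimal sketch of that idea (my own toy example, with invented recovery rates rather than data from any real trial), a randomised comparison can show that a treatment works without saying anything about why:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-arm trial; the recovery rates are invented for illustration.
rng = np.random.default_rng(1)
n = 1000                               # patients per arm
treated = rng.random(n) < 0.12        # treatment arm: 12% recovery rate
control = rng.random(n) < 0.08        # control arm: 8% recovery rate

# Two-proportion z-test: is the observed difference larger than chance alone?
p1, p0 = treated.mean(), control.mean()
pooled = (treated.sum() + control.sum()) / (2 * n)
se = np.sqrt(pooled * (1 - pooled) * 2 / n)
z = (p1 - p0) / se
p_value = 2 * norm.sf(abs(z))

print(f"effect: {p1 - p0:+.3f}, p-value: {p_value:.4f}")
# A small p-value is evidence that the treatment works; it tells us
# nothing about the mechanism -- which is exactly the point.
```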
The AI Explainability Problem
So I found it interesting that in other areas of life, a lack of explainability does visibly bother people. Take AI. One of the concerns about autonomous machines like self-driving cars is we don’t really understand why they make the decisions they do. There will be some situations where we can get an idea of why they make mistakes. Last year, a self-driving car blocked off a firetruck as it was responding to an emergency in Las Vegas. The reason? The firetruck was yellow, and the car had been trained to recognise red ones.
But even if the car had been trained to recognise yellow firetrucks, it wouldn’t go through the same thought process we do when we see an emergency vehicle. Self-driving AI views the world as a series of shapes and probabilities. With sufficient training, it can convert this view into useful actions, but fundamentally it’s not seeing what we’re seeing.
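To make "shapes and probabilities" concrete, here is a toy sketch (entirely my own illustration; real driving systems are vastly more complex): the model turns pixel features into a probability for each label and acts on the most likely one.

```python
import numpy as np

# Toy perception step: the labels, "image" features, and weights are invented.
LABELS = ["red fire truck", "yellow fire truck", "car", "cyclist"]

def softmax(scores: np.ndarray) -> np.ndarray:
    exp = np.exp(scores - scores.max())    # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
features = rng.random(64)                  # stand-in for pixel features
weights = rng.normal(size=(len(LABELS), 64))

probs = softmax(weights @ features)
decision = LABELS[int(np.argmax(probs))]
print(dict(zip(LABELS, probs.round(3))), "->", decision)
# The model never "sees an emergency": it ranks label probabilities and acts
# on the highest. In a real system, a class absent from the training data
# would simply score low -- the Las Vegas failure mode.
```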
The Four-Colour Theorem: When Computers Changed Mathematics
This tension between the benefits that computers can bring and the understanding that humans have to relinquish isn’t new. In 1976, two mathematicians named Kenneth Appel and Wolfgang Haken announced the first ever computer-aided proof. Their discovery meant that for the first time in history, mathematicians had to accept a major theorem that they could not verify by hand.
The theorem in question is what’s known as the four-colour theorem. In short, this says that if you want to fill in a map with different colours so that no two bordering countries are the same colour, you’ll only ever need four colours to do this. The mathematicians had found that there were too many map configurations to crunch through by hand, even if they simplified things by looking for symmetries. So they used a computer to get over the finish line.
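For readers who want the statement precisely, here it is in the standard graph-theoretic form (the usual modern phrasing, not the speaker's):

```latex
% Four-colour theorem, via the map's adjacency graph:
% regions become vertices, shared borders become edges.
\[
  \text{For every planar graph } G = (V, E),\;
  \exists\, c : V \to \{1,2,3,4\} \;\text{ such that }\;
  \{u,v\} \in E \Rightarrow c(u) \neq c(v).
\]
% Appel and Haken reduced the problem to a finite list of configurations,
% each checked by computer rather than by hand.
```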
Not everyone believed the proof initially. Maybe the computer had made an error somewhere. Suddenly, mathematicians no longer had total intellectual control. They had to trust a machine. But then something curious happened. While older researchers had been sceptical, younger mathematicians took the opposite view. Why would they trust hundreds of pages of handwritten and hand-checked calculations? Surely a computer would be more accurate.
The Trolley Problem and Real-World Driving
Whether we’re talking about anaesthesia, self-driving cars, or mathematical proofs, perhaps we don’t need to fully understand something as long as the accuracy is high enough for what we need. Let’s go back to self-driving cars. A common thought experiment when it comes to AI is what’s known as the trolley problem. Suppose we have a heavy trolley or a big car and it’s going to hit a group of people. But you have the option of pulling a lever to divert the vehicle so it hits only one person. Would you pull that lever? And would it matter whether the people are old or young?
These kinds of decisions can sometimes crop up in real life with human drivers. In 2020, a car in Michigan swerved to avoid a truck and hit a young couple walking on the pavement, putting them in hospital for several months. Would AI have reacted differently? Well, it turned out that the car was also racing side-by-side with another vehicle at the time and the driver didn’t have a valid licence. Before we get too deep into theoretical dilemmas, we should remember that humans often aren’t very good drivers. If we could ensure there were far fewer accidents on our roads, would you mind being unable to explain the ones that did happen?
When Explanation Matters: Justice and Prevention
In this complex world of ours, maybe we should just abandon the pursuit of explanation altogether. After all, many data-driven areas of science increasingly focus on prediction, because it’s fundamentally an easier problem than explanation. As with anaesthesia, we can often make useful predictions about what something will do without fully understanding it.
But explanation can sometimes really matter if we want a better world. The focus on prediction is particularly troubling in the field of justice. Increasingly, algorithms are used to decide whether to release people on bail or parole. The computer isn’t deciding whether they’ve committed a crime. In effect, it’s predicting whether they’ll commit one in future. But ideally, we wouldn’t just try and predict future crimes using an opaque algorithm. We’d try and prevent them. And that means understanding why people re-offend and what we can do to stop that happening. A lack of interest in explanation leaves a gap that in this situation creates room for injustice.
Conspiracy Theories and the Human Need for Explanation
But it’s not the only thing that can emerge in the gap between what is happening and why it’s happening. The desire for explanation can in some cases drive people to extremes, particularly if the science behind what they’re seeing is patchy or complex. Events must have a cause, goes their logic. Something or someone must be behind them. Karl Popper, who popularised the term “conspiracy theory,” once talked about conspiracy theories of society. Rather than events being random or unlinked, believers develop a narrative in which all of history is mapped out by shadowy influences. Nothing is a coincidence.
In some ways, conspiracy theorists are similar to scientists. They want to explain the patterns they see in the world. They want to share those explanations with others. And they’ll put a lot of effort into doing so. Because I work in health and I’ve appeared in the media, I’ve ended up interacting with quite a lot of conspiracy theorists. And one of the things you’ll notice if you try and debate a conspiracy theorist is they’ll usually have a mountain of scientific-looking data and papers ready to argue their point.
The key difference though is that science frequently requires that we update our beliefs about the world rather than just double down on them. The point of evidence is to get us closer to the truth, not just pull us further into a theory. You can always tell quite quickly in a discussion when someone’s trying to defend a position rather than actually discover the reality.
The Chemtrails Conspiracy and Simple Science
One of the most popular conspiracy theories currently is the idea of chemtrails. This is a false claim that aeroplane vapour trails are actually a deliberate attempt to drug populations or control the weather with chemicals. Unlike the science of aeroplane wings, it’s actually pretty straightforward to explain where vapour trails come from. Jet engines produce water vapour in their exhaust. When this hot vapour hits the very cold air outside, it freezes, creating a streak of tiny ice crystals in the sky.
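For readers curious about the detail, the standard condition for contrail formation (the Schmidt–Appleman criterion; a sketch summarised from the atmospheric-science literature, not from the talk) can be written down compactly:

```latex
% Schmidt-Appleman criterion (sketch): mixing hot, moist exhaust with cold
% ambient air traces a line in (temperature T, vapour pressure e) space
% with slope
\[
  G \;=\; \frac{c_p\, p\; \mathrm{EI}_{\mathrm{H_2O}}}
               {\varepsilon\, Q\, (1 - \eta)}
\]
% c_p: specific heat of air, p: ambient pressure,
% EI_H2O: water emitted per kg of fuel (about 1.25 kg/kg for kerosene),
% epsilon = 0.622 (molar-mass ratio of water to air), Q: fuel energy content,
% eta: propulsion efficiency. A contrail forms when this mixing line crosses
% the saturation curve -- that is, in air that is cold and humid enough.
```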
So why do claims like this persist? It’s partly down to trust. Unless you want to brush up on thermodynamics or buy a jet engine, at some point you’re going to have to take someone’s word that this is how the science works. But conspiracy theories are also about community. If people go against scientific consensus, it can make them feel like independent thinkers, part of a resistance. Then there’s that crucial element: the need for an explanation beyond simple coincidence.
Learning to Bridge the Gap Between Knowing and Understanding
Whether we want to push the boundaries of science or push back on conspiracy theories, we need to appreciate this very human desire to explain. I’ve sometimes made the mistake of neglecting this. I’ve given people an overly simplistic explanation for a complex process and created even more confusion than there was before. Or, in a situation with limited time, I’ve told people it’s not possible to properly untangle the complexity involved. And in doing so I’ve failed to acknowledge that very deep-rooted need to explain.
I now notice other scientists making the same mistake. They might say the evidence is clear when, to a lot of people, it isn’t. Or they might say it’s well established that this is true, without saying why it’s true. This matters because increasingly we have to navigate a world that most of us struggle to fully understand. From climate and health to finance and AI, there often isn’t a simple, intuitive logic behind what we’re seeing. But there are lots of catchy false explanations ready to lead us astray. As science becomes more advanced and more reliant on opaque or counter-intuitive technologies, these challenges will only grow.
I’ve got a PhD in maths, and I still don’t fully understand the details of every climate simulation or AI algorithm. So, like many others, I’ve had to find other ways to evaluate published claims. I’ve turned to experts with good track records. I’ve checked sources. I’ve looked for inconsistencies, and I’ve tried to explain as much as I can. In this changing world, we’re going to have to close this gap between knowing what is happening and wanting to know why it’s happening. That means finding better ways to trust the things we can’t explain, and better ways to explain the things that we don’t trust. Thank you.