
How Partial Truths Are A Threat To Democracy: De Kai (Transcript)

Here is the full transcript of author De Kai’s talk titled “How Partial Truths Are A Threat To Democracy” at TEDxKlagenfurt 2025 conference. In this compelling TEDx talk, AI professor De Kai explores how selective context omission—more than outright falsehoods—threatens democratic discourse and decision-making. Drawing on decades of experience in AI development, De Kai warns about the dangerous information disorder amplified by today’s algorithms.


The Misleading Power of Context Omission

DE KAI: So you’re scrolling your feed and you see a video showing a dozen people viciously beating and kicking a couple of other folks. How does that make you feel? For most of us, we instinctively recoil in horror. They’re bullying those poor defenseless victims. But what if we’re now also shown the preceding 20 seconds of the same video showing that the couple had been firing guns at an unarmed crowd who had been unsuccessfully trying to hide?

Context is everything. When crucial context is left out, the effect is even more misleading than outright lies. In spite of all the recent worrying about how AIs propagate misinformation, we still keep sweeping the biggest problem under the rug, which is the insidious effect of omitted context. Instead, we keep focusing on fakes and bad actors, which unfortunately sucks up all the oxygen in the room as soon as we start talking about misinformation problems.

This is a huge worry to me as a long-time AI professor who’s been working for decades on getting AIs to help us understand other humans outside our own groups. It’s actually why I invented key machine learning foundations that let me build the world’s first global-scale online language translators nearly three decades ago, which spawned the likes of Google Translate, Microsoft Translate, and Yahoo Translate.

Ten years ago, though, I became extremely concerned to see much of the same machine learning and natural language processing AI tech that researchers like me helped pioneer instead being used in social media, recommendation, search engines, and now chatbots in ways that do the opposite: preventing folks from understanding each other, unleashing our fear of the unknown, driving hate and dehumanization of other groups, and escalating and hardening polarization and divisiveness, both domestically and geopolitically.

For me, it was an Oppenheimer moment that drove me to focus on the societal impact of AI, seeing how AI was being perverted toward destroying societies and civilization by causing massive misinformation problems.

Understanding Information Disorder

Now the word misinformation gets used to mean too many different things these days, and so we use the term information disorder to discuss all the many ways that AI algorithms contribute to societal dysfunction. It’s actually hard to find any exact definition of information disorder, even though all of us are sensing the crisis of the chaos of credibility in the information that AI algorithms are propagating to all of us.

But formulating that problem is trickier than it appears at first glance. Nearly all sources today just more or less follow Claire Wardle and Hossein Derakhshan, who coined the term without explicitly defining it.

Misinformation is when false information is shared, but no harm is intended. Disinformation is again when false information is shared, but it’s knowingly shared to cause harm. And malinformation is when truths or partial truths are shared to cause harm, often by selectively omitting crucial context.

These three types are spread everywhere, in all kinds of graphic variations, from the UN to the Council of Europe to the OECD to the National Institutes of Health to Scientific American, thanks to the rule of threes, a cognitive bias that causes everyone to mischaracterize information disorder as having only three main types without even thinking about it.


Around the world, politicians, think tanks, regulatory agencies, and governments are issuing rallying cries and funding programs to tackle misinformation, disinformation, and malinformation. Fact-checking organizations are focusing on verifying whether controversial claims are true or false. And meanwhile, cybersecurity and computational propaganda organizations like the Stanford Internet Observatory heavily emphasize detecting and tracking whether information is being propagated by those whom they deem to be bad actors.

Beyond the Rule of Threes

Everyone keeps repeating the idea that fakes and bad actors lead to three types of information disorder without thinking about it, but that’s dangerous. We’ve been led unconsciously into accepting the idea of three types of information disorder, thanks to our own cognitive bias, to unconsciously assign greater credibility to things that come in threes.

Most of you probably had storytelling teachers, writing professors, and marketing and business mentors teach you that your slides should always have three bullet points. Two is not enough. Humans absorb concepts and ideas and entities more readily in groups of three, and so everyone just keeps repeating that fakes and bad actors lead to three kinds of information disorder without thinking about it.

But let’s think one more step. Falsehoods and bad actors are two independent dimensions, and the natural way to lay out two independent dimensions is a two-by-two grid. The horizontal axis distinguishes whether misleading information is being maliciously propagated by bad actors or just negligently propagated by decent, ordinary folks. The vertical axis distinguishes whether the misleading information is an outright falsehood or a partial truth that selectively omits crucial context.

Neg-Information: The Missing Piece

Suddenly, if we look at it this way, it’s immediately clear that there’s a glaring hole in the analysis of information disorder. What I call neg-information is by far the most existentially dangerous, insidious form of information disorder in democracies. But prior to my book and recent talks, neg-information didn’t even have a name.

Somehow, most portrayals of information disorder today simply sweep under the rug the massive problem that partial truths selectively omitting crucial context are being negligently propagated by decent, ordinary folks at a far greater rate. Partial truths are even more misleading than outright falsehoods because they’re much more believable. They have the ring of truth.
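The two-by-two grid De Kai describes can be made concrete in a few lines of code. This is my own illustration, not something from the talk: a small function mapping the two independent dimensions (malicious vs. negligent intent, outright falsehood vs. partial truth) to the four quadrants, showing how the familiar three types fill only three cells and leave the fourth, neg-information, as the gap.

```python
# A minimal sketch (an illustration, not from the talk) of the two-by-two
# grid of information disorder: intent crossed with form.

def classify(malicious: bool, outright_falsehood: bool) -> str:
    """Map the two independent dimensions onto the four quadrants."""
    if outright_falsehood:
        # Outright falsehoods: the intent distinguishes dis- from mis-.
        return "disinformation" if malicious else "misinformation"
    # Partial truths that selectively omit crucial context:
    return "malinformation" if malicious else "neg-information"

# The three familiar types occupy three quadrants...
assert classify(malicious=False, outright_falsehood=True) == "misinformation"
assert classify(malicious=True, outright_falsehood=True) == "disinformation"
assert classify(malicious=True, outright_falsehood=False) == "malinformation"
# ...leaving the quadrant the standard taxonomy overlooks:
assert classify(malicious=False, outright_falsehood=False) == "neg-information"
```

Laying the taxonomy out this way makes the "glaring hole" visual: the rule of threes names three of the four cells and simply skips the negligent-partial-truth quadrant.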

The volume of neg-information far outstrips the other types.