
How Partial Truths Are A Threat To Democracy: De Kai (Transcript)

Here is the full transcript of author De Kai’s talk titled “How Partial Truths Are A Threat To Democracy” at TEDxKlagenfurt 2025 conference. In this compelling TEDx talk, AI professor De Kai explores how selective context omission—more than outright falsehoods—threatens democratic discourse and decision-making. Drawing on decades of experience in AI development, De Kai warns about the dangerous information disorder amplified by today’s algorithms.

The Misleading Power of Context Omission

DE KAI: So you’re scrolling your feed and you see a video showing a dozen people viciously beating and kicking a couple of other folks. How does that make you feel? For most of us, we instinctively recoil in horror. They’re bullying those poor defenseless victims. But what if we’re now also shown the preceding 20 seconds of the same video showing that the couple had been firing guns at an unarmed crowd who had been unsuccessfully trying to hide?

Context is everything. When crucial context is left out, the effect is even more misleading than outright lies. In spite of all the recent worrying about how AIs propagate misinformation, we still keep sweeping the biggest problem under the rug, which is the insidious effect of omitted context. Instead, we keep focusing on fakes and bad actors, which unfortunately sucks up all the oxygen in the room as soon as we start talking about misinformation problems.

This is a huge worry to me as a long-time AI professor who’s been working for decades on getting AIs to help us understand other humans outside our own groups. It’s actually why I invented key machine learning foundations that let me build the world’s first global-scale online language translators nearly three decades ago, which spawned the likes of Google Translate, Microsoft Translate, and Yahoo Translate.

Ten years ago, though, I became extremely concerned at seeing much of the same machine learning and natural language processing AI tech that researchers like me helped to pioneer instead being used in social media, recommendation systems, search engines, and now chatbots in ways that have been doing the opposite: preventing folks from understanding each other, unleashing our fear of the unknown, driving hate and dehumanization of other groups, and escalating and hardening polarization and divisiveness, both domestically and geopolitically.

For me, it was an Oppenheimer moment that drove me to focus on the societal impact of AI, seeing how AI was being perverted toward destroying societies and civilization by causing massive misinformation problems.

Understanding Information Disorder

Now the word “misinformation” gets used to mean too many different things these days, so we use the term “information disorder” to discuss the many ways that AI algorithms contribute to societal dysfunction.