Here is the full transcript of author De Kai’s talk titled “How Partial Truths Are A Threat To Democracy” at the TEDxKlagenfurt 2025 conference. In this compelling TEDx talk, AI professor De Kai explores how selective context omission, more than outright falsehoods, threatens democratic discourse and decision-making. Drawing on decades of experience in AI development, De Kai warns about the dangerous information disorder amplified by today’s algorithms.
The Misleading Power of Context Omission
DE KAI: So you’re scrolling your feed and you see a video showing a dozen people viciously beating and kicking a couple of other folks. How does that make you feel? For most of us, we instinctively recoil in horror. They’re bullying those poor defenseless victims. But what if we’re now also shown the preceding 20 seconds of the same video showing that the couple had been firing guns at an unarmed crowd who had been unsuccessfully trying to hide?
Context is everything. When crucial context is left out, the effect is even more misleading than outright lies. In spite of all the recent worrying about how AIs propagate misinformation, we still keep sweeping the biggest problem under the rug, which is the insidious effect of omitted context. Instead, we keep focusing on fakes and bad actors, which unfortunately sucks up all the oxygen in the room as soon as we start talking about misinformation problems.
This is a huge worry to me as a long-time AI professor who’s been working for decades on getting AIs to help us understand other humans outside our own groups. It’s actually why I invented key machine learning foundations that let me build the world’s first global-scale online language translators nearly three decades ago, which spawned the likes of Google Translate, Microsoft Translate, and Yahoo Translate.
Ten years ago, though, I became extremely concerned to see much of the same machine learning and natural language processing technology that researchers like me helped pioneer instead being used in social media, recommendation systems, search engines, and now chatbots in ways that have been doing the opposite: preventing folks from understanding each other, unleashing our fear of the unknown, driving hate and dehumanization of other groups, and escalating and hardening polarization and divisiveness, both domestically and geopolitically.
For me, it was an Oppenheimer moment that drove me to focus on the societal impact of AI, seeing how AI was being perverted toward destroying societies and civilization by causing massive misinformation problems.
Understanding Information Disorder
Now the word misinformation gets used to mean too many different things these days, and so we use the term information disorder to discuss all the many ways that AI algorithms contribute to societal dysfunction.
It’s actually hard to find any exact definition of information disorder, even though all of us sense the crisis of credibility in the chaos of information that AI algorithms keep propagating to us.
But formulating that problem is trickier than it appears at first glance. Nearly all sources today just more or less follow Claire Wardle and Hossein Derakhshan, who coined the term without explicitly defining it.
Misinformation is when false information is shared, but no harm is intended. Disinformation is again when false information is shared, but it’s knowingly shared to cause harm. And malinformation is when truths or partial truths are shared to cause harm, often by selectively omitting crucial context.
These three types get spread everywhere, in all kinds of graphic variations, from the UN to the Council of Europe to the OECD to the National Institutes of Health to Scientific American, thanks to the rule of threes, a cognitive bias that causes everyone to mischaracterize information disorder as having only three main types without even thinking about it.
Around the world, politicians, think tanks, regulatory agencies, and governments are issuing rallying cries and funding programs to tackle misinformation, disinformation, and malinformation. Fact-checking organizations focus on verifying whether controversial claims are true or false. Meanwhile, cybersecurity and computational propaganda organizations like the Stanford Internet Observatory heavily emphasize detecting and tracking whether information is being propagated by those they deem to be bad actors.
Beyond the Rule of Threes
Everyone keeps repeating the idea that fakes and bad actors lead to three types of information disorder without thinking about it, but that’s dangerous. We’ve been led unconsciously into accepting the idea of three types of information disorder thanks to our own cognitive bias toward unconsciously assigning greater credibility to things that come in threes.
Most of you probably had storytelling teachers, writing professors, or marketing and business mentors teach you that your slides should always have three bullet points. Two is not enough. Humans absorb concepts, ideas, and entities more readily in groups of three, and so everyone just keeps repeating that fakes and bad actors lead to three kinds of information disorder without thinking about it.
But let’s think one more step. Falsehoods and bad actors are two independent dimensions, and the natural way to lay out something like that is a two-by-two grid. The horizontal axis distinguishes whether misleading information is being maliciously propagated by bad actors or just negligently propagated by decent, ordinary folks. And the vertical axis distinguishes whether the misleading information is an outright falsehood or a partial truth that selectively omits crucial context.
Neg-Information: The Missing Piece
Suddenly, if we look at it this way, it’s immediately clear that there’s a glaring hole in the analysis of information disorder. What I call neg-information is by far the most existentially dangerous, insidious form of information disorder in democracies. But prior to my book and recent talks, neg-information didn’t even have a name.
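To make that two-by-two grid concrete, here is a minimal sketch, purely illustrative and not anything presented in the talk itself; the function name and labels are assumptions layered on De Kai’s terminology:

```python
def classify_information_disorder(is_falsehood: bool, is_malicious: bool) -> str:
    """Map the talk's two independent dimensions onto its two-by-two grid."""
    if is_falsehood and is_malicious:
        return "disinformation"   # outright falsehood, knowingly shared to harm
    if is_falsehood:
        return "misinformation"   # outright falsehood, negligently shared
    if is_malicious:
        return "malinformation"   # partial truth, shared to cause harm
    return "neg-information"      # partial truth, negligently shared


# Example: true-but-context-stripped footage re-shared with no intent to harm.
print(classify_information_disorder(is_falsehood=False, is_malicious=False))
# -> neg-information
```

The fourth quadrant, a partial truth shared with no malicious intent, is the one the standard three-type picture leaves out.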
Somehow, most portrayals of information disorder today simply sweep under the rug the massive problem that partial truths selectively omitting crucial context are being negligently propagated by decent, ordinary folks at a far greater rate. Partial truths are even more misleading than outright falsehoods because they’re much more believable. They have the ring of truth.
The volume of neg-information far outstrips the other types. Far more partial truths are being propagated online than outright falsehoods, and far more ordinary folks are helping propagate them without any conscious intent to harm. They’re not truly malicious actors. Your Uncle Mike isn’t constantly re-sharing all that misleading information because he’s a bad actor. Failing to check the crucial context is negligence, not malice. But there are far more Uncle Mikes out there than either bad actors or liars and fakers.
Context omission is the hardest challenge by far in tackling AI-amplified information disorder. Simply getting AIs to downrank falsehoods according to fact-checking doesn’t solve neg-information, because partial truths are still true. And asking AIs to downrank items shared by bad actors doesn’t tackle neg-information either, because it’s not going to stop Uncle Mike.
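As a rough sketch of that gap, again hypothetical rather than a mechanism described in the talk, with the item fields and filter functions invented for illustration, a fact-checking filter and a bad-actor filter both let such an item pass untouched:

```python
# Hypothetical feed item: true footage with its crucial context stripped,
# re-shared by an ordinary user with no malicious intent.
item = {
    "claim_is_false": False,
    "shared_by_bad_actor": False,
    "omits_crucial_context": True,
}

def downrank_by_fact_check(post: dict) -> bool:
    # Flags only items whose claims are verifiably false.
    return post["claim_is_false"]

def downrank_by_bad_actor(post: dict) -> bool:
    # Flags only items propagated by known bad actors.
    return post["shared_by_bad_actor"]

# Neither filter flags the item, even though it is misleading:
print(downrank_by_fact_check(item), downrank_by_bad_actor(item))  # False False
```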
The Real Existential Threat
The sheer volume of negligent neg-information is the true existential threat to democratic decision-making. Continuing to ignore the bigger picture, the bigger problem of neg-information, while we stay distracted by only fakes and bad actors, is a recipe for letting AI drive disaster, especially because partial truths from which crucial context has been selectively omitted can be far more distracting, far more misleading than outright falsehoods. That’s partly because falsehoods are often easier to spot: they sound too extreme, they contradict stuff you already know, they’re easily refuted.
But it’s also because the omission of context gives our unconscious much greater room to fit the partial truths into all of our many cognitive biases. Most of us have seen plenty of examples like this on YouTube, on the news, or on our social media feeds. They come in lots of variations. The two parties fighting in that opening video could be criminals, police, terrorists, gangs, protesters, counter-protesters, political groups, or just drunk brawlers.
How Our Biases Are Exploited
Now take a moment, if you will, to imagine the poor purple folks getting beaten up here as being from some group you sympathize with. Think of a group that you sympathize with. Try hard. Visualize the full picture. It’s going to depend on where you live, what your sociopolitical leanings are, what recent events have been on your mind, and so on.
Did you find yourself unconsciously searching just a little bit harder for a reason the purple folks might have had for firing earlier at those horrible folks in black? Now imagine the reverse: the purple folks are from a group you dislike, and the folks in black are the group you sympathize with. Try again to visualize that full picture. Did you find yourself unconsciously searching just a little harder to demonize the purple folks who had been firing guns earlier?
When AIs decide to propagate neg-information, they manipulate our unconscious by exploiting our human Achilles heel: our hundreds of well-documented cognitive biases. Take our fundamental attribution error, the cognitive bias toward reflexively ascribing some other tribe’s bad behavior to the assumption that they’re just bad people.
Our human confirmation biases, unfortunately, let AIs drive us unconsciously to interpret, process, and remember information in ways that confirm what we already believe. For example, because of our selective perception bias, AIs can predict how our liking or disliking the purple group triggers expectations that influence how we perceive the different context omissions the AI is choosing from. Because of our Semmelweis reflex, AIs can choose partial truths that omit new evidence in order to fit our own likes and dislikes, so that we won’t reflexively reject them. And because of our subjective validation bias, AIs can choose partial truths that they predict we’ll see as compatible with our own identity. Our human anchoring and belief perseverance biases make it hard for us to change our minds in ways that AIs, unfortunately, can also depend on.
For example, because of our conservatism bias, AIs can predict that if they first show us a misleading partial truth, then we won’t revise our beliefs sufficiently when we’re shown new evidence. And even worse, because of our backfire effect, AIs can predict that if they first show us a misleading partial truth, then we’ll react to contradictory evidence by digging in our heels even harder.
Our human compassion fade bias means that AIs can predict how we’ll unconsciously have more compassion for a handful of identifiable victims, like the two purple folks, than for many anonymous ones, like the folks in black. And our own human egocentric biases let AIs easily drive us to hold too high an opinion of our own perspective.
For example, because of our illusion of validity, AIs can predict how we’ll overestimate how accurate our own judgments are, especially when we can find a way to fit our own beliefs to the available partial information that AIs have chosen to show us. And because of our overconfidence effect bias, AIs can predict how we’ll tend to maintain excessive confidence in our own answers. Even when folks rate things 99% certain, they are wrong about 40% of the time.
And the most insidious thing is that the AI algorithms aren’t even necessarily doing any of this prediction consciously. Even the AIs are making these decisions unconsciously, just like we humans do. The artificial neural nets in AIs operate largely unconsciously, just like the biological neural nets in our own human brains. By choosing what information to propagate, AIs unconsciously manipulate our unconscious, without triggering either fact-checking or bad actor detection.
Moving Beyond Traditional Solutions
So what do we do? First, we need to understand why the organizations devoted to countering information disorder have instead largely been emphasizing detecting bad actors and false information. It turns out both of these emphases themselves arise from our all-too-human cognitive biases. Ironically, even amongst the researchers tackling information disorder, we still have those unconscious biases, and we need to become conscious and mindful of them.
Our emphasis on detecting bad actors, well, that’s a form of our human reactive devaluation bias, which causes us to unconsciously devalue ideas only because they came from a perceived adversary. We all need to become much more mindful that, in reality, whether an idea is valid or not should not depend on who said it.
And secondly, our emphasis on detecting falsehoods rather than the omission of crucial context comes from our human omission bias: the tendency to judge the commission of harmful actions as somehow ethically worse than the omission of actions that would have been needed to prevent equal or even greater harm. We all need to become much more mindful that, in reality, a harmful falsehood is not as bad as omitting crucial information in a way that is even more harmfully misleading.
Conclusion
Humanity can no longer afford to obsess only over falsehoods and bad actors while ignoring our biggest problem: negligently propagated partial truths, whose selectively omitted crucial context makes them even more dangerously misleading, unconsciously manipulating all our hundreds of cognitive biases.
Humanity can no longer afford to pretend that the biggest AI misinformation threats are just misinformation, disinformation, and malinformation. Because if we keep getting distracted by only falsehoods and bad actors, then humanity will destroy humanity before even AI gets a chance to.