
Here is the full text and summary of Maarten Schenk’s talk titled “Why BS Goes Viral” at the TEDxEindhoven conference. In this talk, Maarten discusses the role of social media algorithms in promoting and spreading false information online. He explains how algorithms tailor content recommendations to users’ preferences and behaviors, ultimately prioritizing emotionally evocative content.
TRANSCRIPT:
If you are watching this presentation online, there is probably an algorithm that thinks you should see it. This is unlike you nice people in the audience here, who all came of your own free will, at least I hope so.
But for those people online, why is that algorithm there? What does it know about them? And can we influence it?
I’m Maarten Schenk, the co-founder of fact-checking website leadstories.com, and I’m here today to talk to you about social media recommendation algorithms, and also why they are so often blamed for the spread of false information online. More importantly, what can we do about it?
So what makes social media platforms such fertile breeding grounds for what we in the fact-checking business often call, with a technical term, complete bullshit? Did somebody maybe find a way to hack these social media platforms, perhaps by playing with people’s emotions?
Let me give you a recent example to show you what I mean. A few weeks ago, my colleague Sarah Thompson alerted me to a series of Facebook posts she found that all looked like this. They all had a picture of a cute dog and some text asking for help in reuniting this poor animal with its desperate owner.
And there were dozens of such posts, all with the same text. The only difference was that each text had a different name of a different town or city.
Now what I love about my job is that we get to investigate puzzles like these and figure out what’s going on here. Very often what we find is that somebody is manipulating people’s emotions in order to exploit a social media recommendation algorithm. And that is exactly what is going on here.
Because once these posts got enough likes and shares from concerned animal lovers (and who wouldn’t like these posts?), they would be edited. Instead of the dog, they would now show some kind of real estate scam that led you to a website where these people would steal your financial and personal information.
Now the Facebook algorithm was helping them do it. Because the more people liked these posts, the more people got to see these posts. So maybe I should take a step back here and talk a little bit about why Facebook needs an algorithm and what an algorithm is anyway.
An algorithm, we all know from school, it’s just a list of steps and rules that you can use to solve a particular problem. And some algorithms can be really simple and have really simple rules. But the right algorithm can be worth a lot of money.
Take for example Amazon.com. They could run an algorithm over their sales data and then notice, hey, people who buy toilet paper, very often they also buy air fresheners. So the next guy who puts a roll of toilet paper in their shopping basket, pop up, hey, would you like to buy some air fresheners? And billions of dollars in extra sales. All thanks to the algorithm.
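To make that idea concrete, here is a minimal sketch in Python of the kind of co-purchase counting described above; the product names and order data are invented for illustration, and real systems are of course far more elaborate:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Invented sales data: each past order is a set of product names.
past_orders = [
    {"toilet paper", "air freshener", "soap"},
    {"toilet paper", "air freshener"},
    {"toilet paper", "shampoo"},
    {"soap", "shampoo"},
]

# Count how often each pair of products was bought together.
co_purchases = defaultdict(Counter)
for order in past_orders:
    for a, b in combinations(order, 2):
        co_purchases[a][b] += 1
        co_purchases[b][a] += 1

def suggest(basket, top_n=2):
    """Recommend the products that most often co-occurred with the basket's items."""
    scores = Counter()
    for item in basket:
        scores.update(co_purchases[item])
    for item in basket:          # don't recommend what is already in the basket
        scores.pop(item, None)
    return [product for product, _ in scores.most_common(top_n)]

print(suggest({"toilet paper"}))  # -> ['air freshener', ...]
```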
The same goes for a video streaming service, for example, that wants to know which movies or which shows to recommend so users keep paying for their subscription fees. They could use an algorithm to analyze their viewing data and then divide their viewers into groups based on what shows and movies they like to watch. And then they can base recommendations to those users on what other people in those same groups watched. And again, this is far cheaper than hiring a movie critic for every individual user.
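The streaming example can be sketched the same way: group users by how much their viewing histories overlap, then recommend what similar viewers watched. The show titles and viewing data below are made up, and real services use much richer collaborative-filtering models, but the shape of the computation is the same:

```python
# Invented user-show viewing data: each user maps to the set of shows they watched.
viewing = {
    "alice": {"Space Drama", "Cooking Show"},
    "bob":   {"Space Drama", "Robot Documentary"},
    "carol": {"Cooking Show", "Baking Contest"},
}

def similarity(a, b):
    """Jaccard overlap between two users' watch histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, top_n=2):
    """Suggest shows watched by the users most similar to this one."""
    mine = viewing[user]
    scores = {}
    for other, theirs in viewing.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for show in theirs - mine:   # only shows the user has not seen yet
            scores[show] = scores.get(show, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # shows watched by users with similar taste
```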
So social media platforms essentially need to solve the same problem. What’s the next thing we are going to recommend to a user in their timeline so they stay engaged on our website or in our app, so we can show them more ads? That’s the business model. So they put some very smart engineers on it and they came up with an algorithm.
And what works best is to show people more of the same kind of stuff that they already liked and engaged with before. And yeah, it’s social media. So these engineers have a lot of data to work with. They literally know where you live, who all your friends are, and what groups you’re a part of. So they can tailor these recommendations to a very fine degree.
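A minimal sketch of that feedback loop, with invented topic labels and interaction weights (real feed-ranking systems use thousands of signals, but the basic mechanic is the one described here): tally what a user engaged with, then rank new posts so that “more of the same” comes out on top.

```python
from collections import Counter

# One user's engagement history, tallied by topic (invented data).
engagement = Counter()

def record_interaction(topic, action):
    """Heavier actions count more toward what the feed will show next."""
    weights = {"view": 1, "like": 3, "comment": 4, "share": 5}
    engagement[topic] += weights.get(action, 1)

def rank_feed(candidate_posts):
    """Order candidate posts by how much this user engaged with their topic before."""
    return sorted(candidate_posts, key=lambda post: engagement[post["topic"]], reverse=True)

record_interaction("cute animals", "like")
record_interaction("cute animals", "share")
record_interaction("politics", "view")

feed = rank_feed([
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "cute animals"},
    {"id": 3, "topic": "stand-up comedy"},
])
print([post["topic"] for post in feed])  # 'cute animals' comes first
```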
This might all sound very Orwellian, but it’s just dumb mathematics: counting what people like and then giving them more of the same to keep them hooked. It’s like a drug that changes its effect to become the one the user craves most. And just like drugs, of course, this too can be abused.
Take, for example, a couple of months ago when my colleagues and I were looking through a bunch of political Facebook groups, always searching for the next story to fact check. To our great surprise, we started seeing a lot of articles in these groups claiming that certain American celebrities had died.
For example, poor Bruce Willis here, who is, by the way, at the time of speaking, I just checked, still alive. But the people who clicked on these messages were taken to a website full of pop-ups, banner ads, advertising everywhere. And anywhere you clicked, the people behind these websites made a little bit of money. But who were these people? We decided to find out.
So it turns out these websites were all being run by a group of Cambodian IT students. We actually found their Facebook page with their picture and posts where they were bragging how much money they were making with this scheme. So these guys had successfully figured out that older Americans who like to argue about politics on Facebook generally also are fans of older American celebrities.
And by exploiting their emotional connection, they were making a nice chunk of money for themselves. So all of this by successfully figuring out which groups respond well to which emotional impulses. And social media platforms and their algorithms are really fast at figuring out what groups you belong to and what kind of stuff you like.
If you start from a blank profile on any of the major platforms and use it for just a few hours, clicking around, liking, sharing, swiping, whatever, these algorithms will already have figured out whether you like cute animals better, or maybe stand-up comedy, or maybe dancing girls, maybe with hula hoops, who knows. And they will start serving you more of it.
But they will also figure out some really personal things about you that you didn’t even explicitly tell them about. They might even figure out your sexual orientation or even your political views just based on your behavior.
So in effect, these algorithms, they become a mirror – a mirror that shows you what you like. And so I find it very funny every time there is some politician that says, hey, there is too much kinky porn on my timeline. Something should be done about it. Yeah, no, that’s not the algorithm’s fault.
But yeah, if your hair looks bad, you don’t blame the mirror for that. If you frown at the mirror, of course the mirror will frown back at you. And if you give it a wide smile, then, oh, yeah, hang on a second. There seems to be something wrong with my mirror. It’s making me look fat for some reason. I don’t know. I guess it must be broken or deformed or something. Let’s put it away.
No, just like mirrors, algorithms can also be deformed to make things look bigger than they really are or more important, and that’s called advertising. But in general, for these algorithms on social media, what works best is to show people stuff that makes them feel things.
For example, a cute animal video or an inspiring TED talk, right? Or maybe some news that makes your blood boil. It’s no wonder that the social media companies added buttons like these so they could better measure what piece of content gets an emotional reaction and how strong it is.
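As a rough illustration of that measurement (the reaction names and weights below are purely hypothetical, not any platform’s actual formula), a post’s engagement score can weight strong emotional reactions more heavily than a plain like, which is exactly what pushes blood-boiling content upward:

```python
# Hypothetical weights: stronger emotional signals count more than a plain like.
REACTION_WEIGHTS = {"like": 1, "love": 2, "haha": 2, "wow": 2, "sad": 3, "angry": 3}

def emotional_score(reaction_counts):
    """Score a post by how many reactions it got, weighted by emotional intensity."""
    return sum(REACTION_WEIGHTS.get(r, 1) * n for r, n in reaction_counts.items())

calm_news    = {"like": 120, "love": 5}
outrage_bait = {"like": 40, "angry": 60, "sad": 20}

print(emotional_score(calm_news))     # 130
print(emotional_score(outrage_bait))  # 280: the angrier post scores higher
```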
So, of course, fake news hooks into the same mechanism. If people are angry or scared or indignant, they stop thinking and they start writing an angry comment right away. You’ve all been there online. You’ve seen those comments. Don’t blame those people for that. It’s only natural. It’s how humans are. We evolved to be this way.
Imagine back in the early days of the human race on the savannah, if somebody yelled, Tiger! Then the annoying guy who stopped and asked for sources, please, or a fact check, please, was probably the guy that got eaten first. So we evolved to immediately respond to messages about dangers.
And if you have a piece of information that you believe will protect your friends, your family, your community from dangers, it’s literally inhumane not to tell them, right? So even if it turns out later that that piece of information that you have is false, maybe there was some kind of misunderstanding going on, or maybe somebody made it up for political, religious, or economic reasons. Who knows? There’s a lot of reasons to make up stuff.
A few years ago, Professor Peter Burger from the University of Leiden and I investigated a network of websites that were spamming Facebook and Twitter with scary stories about Muslims, migrants, and refugees doing terrible things all over Europe. There was just one problem: Most of these stories were either very old, just not true, or they didn’t even happen in Europe.
But people were liking and sharing these stories anyway, maybe to express their political viewpoint, but also because they felt they needed to warn their community against all these dangers. What the people liking and sharing these articles did not know, however, was that they were actually making a small group of friends from the town of Kumanovo in Macedonia very rich.
Because when we looked into it, these websites were actually being run by a policeman, a truck driver, a civil servant, a school teacher, even a soldier in the Macedonian army. We tracked them down, we called them on the phone, and at first they pretended they didn’t speak English, so we called back with an interpreter.
Then they said, we know nothing, we must have been hacked. But the hackers must have been listening, because half an hour later all these websites were suddenly offline. I don’t know how that works. Good hackers, I guess.
But these people, they weren’t doing any of this for any sort of political reasons, no, they just wanted to make some extra money to supplement their salary in Macedonia. And the best way they knew how to do that was to run a few websites, put a lot of ads and banners and pop-ups on them, and then make them go viral.
And in their experience, what worked best to go viral were these stories that made people angry and afraid. And these stories didn’t even have to be new. Any old story that worked in the past, they could just reuse it. So they did. Far cheaper, far easier for them.
So yeah, there we are as fact-checkers, there’s not much we can do here except saying, hey, that story is false, hey, that story came from a satire website, hey, that story didn’t actually happen in Europe. We can only put up warning messages like, hey, objects in the mirror may be faker than they appear. And we can’t delete these stories, we can’t censor them, and we don’t want to.
As fact-checkers, our job is to add more information and then hope that people will draw their own conclusions. So that’s our job as fact-checkers, but maybe as a society there is something more we can do about this problem. Some people have suggested we should make a law to ban fake news, or that we should ban these social media algorithms.
Well, let me do something unusual for a fact-checker, and that is give my opinion, because we never do that usually. We just say if it’s true or false. But in this case, I think those laws would be a really bad idea. Regulating speech is always very tricky and there’s always unforeseen consequences. So I don’t think we should go that way.
As fact-checkers, we always recommend adding more information when there is a problem. Nobody says that putting a warning label on a dangerous product or putting an ingredient list on food is censorship. And I think we should do the same with social media and their algorithms.
Warning: our algorithm has detected you are a 47-year-old Belgian with a weight problem. And here are some people trying to sell you pills. Or warning: our algorithm has detected that you like scary medical stories. You are 62% gullible and there is a 95% chance you will like and share this video. None of this involves banning or censoring anything, but it might hopefully open some eyes.
But yeah, that’s for the politicians to do. Is there nothing we can do as individuals in the face of these algorithms? Or are we completely powerless? Well I don’t think so.
When I look out into this auditorium, I see a crowd of individuals. And we know that the algorithms go where the crowd goes. We also know that the algorithms learn by watching what we do. So what if we were to watch what we feed to the algorithms?
So next time you’re scrolling through Facebook or Twitter or social media and you see a piece of content that gives you an emotional reaction, stop. Don’t go for that clickbait headline. Google that thing before you share it. Like and share a fact check now and then, please.
The plain truth is often boring, so it needs all the help it can get on social media. Like it, share it, bookmark it. Don’t just do it for the quality of your own timeline, because you know everything you do will be used to recommend things to others. Let’s use that knowledge for good, shall we?
And before I forget, please like and share this video, because now you know why it is so important.
Thank you.
SUMMARY OF THIS TALK:
Maarten Schenk’s talk, titled “Why BS Goes Viral,” delves into the world of social media recommendation algorithms and their role in the spread of false information online. Here are the key takeaway points from his presentation:
- Algorithmic Influence: Schenk highlights the role of algorithms in determining what content users see online and questions how much these algorithms know about users and whether they can be influenced.
- Exploiting Emotions: Schenk presents an example of manipulative content involving cute dog pictures used to gather likes and shares, only to later switch to promoting real estate scams. This demonstrates how people’s emotions can be exploited to trick recommendation algorithms.
- The Purpose of Algorithms: He explains that algorithms are used by platforms like Facebook to keep users engaged and show them more targeted content. Algorithms analyze user behavior and preferences to achieve this.
- Algorithmic Personalization: Social media algorithms tailor content recommendations based on users’ behaviors, even going as far as inferring personal information like political views or sexual orientation from their actions.
- Algorithm as a Mirror: Schenk compares algorithms to mirrors, reflecting users’ preferences and interests. He emphasizes that the algorithm isn’t to blame for content users see; it simply responds to user behavior.
- Emotional Content: He explains that algorithms prioritize content that elicits strong emotional reactions, whether positive or negative. This preference for emotionally charged content plays a role in the spread of fake news.
- Human Evolution and Reaction: Schenk points out that humans have evolved to respond quickly to information about potential dangers. This tendency to share alarming news contributes to the viral spread of false information.
- Exploiting Emotions for Profit: He shares an example of Macedonian individuals exploiting Americans’ emotional connections to celebrities and political issues to generate revenue through viral content.
- Fact-Checking: Schenk mentions the role of fact-checkers in addressing false information but underscores that their job is to provide more information and let individuals draw their own conclusions, rather than censoring content.
- Regulation and Censorship: He expresses concern about regulating speech and social media algorithms, advocating for adding warning labels or providing users with information about how algorithms work instead.
- Individual Responsibility: Schenk encourages individuals to take responsibility for the content they engage with online. He suggests fact-checking content before sharing it and supporting fact-checkers to counter the spread of misinformation.
- Collective Influence: He suggests that users collectively have the power to influence algorithms by being mindful of what they engage with and share on social media.
- Supporting Truth: Schenk calls for supporting and sharing fact-checked, truthful content to counterbalance the prevalence of emotionally charged fake news.
In conclusion, Maarten Schenk’s talk sheds light on the inner workings of social media algorithms, their susceptibility to manipulation, and the importance of individual and collective responsibility in countering the spread of false information online. He emphasizes the need for transparency and awareness regarding algorithmic influence and encourages users to make informed choices in their online interactions.