
What Can We Do About ‘Evil AI’? – Staffan Truvé (Transcript)

Here is the full transcript of Staffan Truvé’s talk titled “What Can We Do About ‘Evil AI’?” at TEDxGöteborg conference.

In his talk “What Can We Do About ‘Evil AI’?”, Staffan Truvé discusses the rapid advancement of machine intelligence and its potential to surpass human abilities in many tasks. He highlights the dual nature of AI, emphasizing its power to amplify both beneficial and malicious activities. Truvé reflects on the increasing prevalence of cyber threats, such as ransomware and deepfakes, facilitated by AI technologies, pointing out the commoditization of cybercrime tools on dark web markets.

He stresses the importance of developing AI as a tool against malicious uses, advocating for the creation of AI systems capable of detecting and countering threats, including deepfakes. Truvé also underscores the challenge of regulating AI development and use, arguing for a balance that ensures AI’s safety, fairness, and transparency without stifling technological progress.

He advocates for collaborative efforts among good actors to counteract the advantages held by malicious users of AI. Ultimately, Truvé calls for a comprehensive approach to AI that combines technological innovation with ethical considerations and regulatory measures to mitigate the risks posed by ‘Evil AI’.

TRANSCRIPT:

We live in extremely exciting times, in case you haven’t noticed. And why is that? Well, if you look back for thousands, hundreds of thousands of years, humans have been evolving very, very slowly. Every now and then, there is a good mutation, but essentially we’re pretty much the same.

Machines, on the other hand, are following an exponential curve. And we haven’t really noticed it for a long time, but now we’re starting to see it. So what’s happening with machines, and in particular machine intelligence, is that it’s coming to a point that for many tasks, if you have enough resources and a well-enough defined job to do, machines will outperform humans. So we’re getting closer and closer to this magic point where they will actually supersede us in many cases.

And essentially what we’ve done is we’ve built an amplifier of ourselves. Anything we do, we can now get machine assistance to do even better. I’ve been excited about this for a very long time. In fact, 10 years ago, I gave a TEDx talk, talking about the opportunities that were lying ahead, using the Internet as a knowledge source and AI and machine learning as a mechanism to change how humanity is perceiving and understanding and planning for the future.

The Dark Side of Technology

I’m still an optimist, but in all seriousness, if you look at the world right now, it’s definitely taken a turn for the worse. So the new world order is one where we, on a daily basis, get news about war, terror, climate-based natural disasters, pandemics.

On the cyber side, we’re seeing attacks against infrastructure, we’re seeing industries being shut down and financial crises caused by that, and maybe worst of all, we’re seeing attacks on our brains: influence operations, where malicious actors are attacking us, changing the storytelling, trying to attack our democracies by injecting false news into our systems.

And if you look at it, the war in Ukraine highlighted this in a whole different way. All of a sudden we saw things like cyber attacks against an entire nation’s infrastructure, we saw new kinds of influence operations, even suggesting that Astrid Lindgren was a Nazi, and of course we saw all the horrors of kinetic war. So it seems this has to keep going on. Or does it? Can we do anything about it?

Or is it even getting worse? Well, unfortunately, AI, as I just mentioned, is a great amplifier, but it amplifies not only the good things we do, it amplifies the bad things malicious actors can do as well. So I think we’ve come to a point now where we have to start thinking about how can we use AI as a tool against evil. We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.

The Misuse of AI

So, for instance, they could have me say things like, “I don’t know, Killmonger was right,” or “Ben Carson is in the sunken place,” or, how about this, simply, “President Trump is a total and complete dipshit.” So of course, this was not the real Barack Obama. This was a combination of an AI-generated deepfake and a good voice actor, delivering a message which many could perceive as true, but it’s not. This is one example of how AI is being used by the bad guys.

Another one, as you’ve probably all heard about, is ransomware and all kinds of cyber attacks, and these are becoming not only commonly used, they’re being commoditized. You can now go onto a dark web market and buy a license to something called WormGPT, which is sort of ChatGPT’s evil cousin: a tool built with the same technology, but used to produce fake business emails, luring people into sending money to the criminals, and things like that. And this is no longer something that requires a very sophisticated hacker.

Anyone can do this. So the threshold for becoming a bad guy, a successful bad guy, has been significantly lowered. And we’re seeing even worse things on the horizon. Not only do we get these kinds of tools, we’re now starting to see a whole new family of malware, which behaves like a natural virus.

The Evolution of Cyber Threats

It injects itself into a system in a seemingly unthreatening way, but once it’s in your system it starts to mutate. It starts to look at its environment and figure out how, from in there, it could actually attack the systems. And then it changes itself. It mutates, tries things out, until it finds an attack vector which can, for example, shut down a factory, or maybe even the infrastructure of a nation.

And the fact that every one of these will be unique, because it’s evolving locally, means that our traditional tools for stopping these kinds of attacks are essentially worthless.