Here is the full transcript of Staffan Truvé’s talk titled “What Can We Do About ‘Evil AI’?” at TEDxGöteborg conference.
In his talk “What Can We Do About ‘Evil AI’?”, Staffan Truvé discusses the rapid advancement of machine intelligence and its potential to surpass human abilities in many tasks. He highlights the dual nature of AI, emphasizing its power to amplify both beneficial and malicious activities. Truvé reflects on the increasing prevalence of cyber threats, such as ransomware and deepfakes, facilitated by AI technologies, pointing out the commoditization of cybercrime tools on dark web markets.
He stresses the importance of developing AI as a tool against malicious uses, advocating for the creation of AI systems capable of detecting and countering threats, including deepfakes. Truvé also underscores the challenge of regulating AI development and use, arguing for a balance that ensures AI’s safety, fairness, and transparency without stifling technological progress.
He advocates for collaborative efforts among good actors to counteract the advantages held by malicious users of AI. Ultimately, Truvé calls for a comprehensive approach to AI that combines technological innovation with ethical considerations and regulatory measures to mitigate the risks posed by ‘Evil AI’.
TRANSCRIPT:
We live in extremely exciting times, in case you haven’t noticed. And why is that? Well, if you look back for thousands, hundreds of thousands of years, humans have been evolving very, very slowly. Every now and then, there is a good mutation, but essentially we’re pretty much the same.
Machines, on the other hand, are following an exponential curve. And we haven't really noticed it for a long time, but now we're starting to see it. So what's happening with machines, and in particular machine intelligence, is that we're reaching a point where, for many tasks, if you have enough resources and a well enough defined job to do, machines will outperform humans. So we're getting closer and closer to this magic point where they will actually supersede us in many cases.
And essentially what we’ve done is we’ve built an amplifier of ourselves. Anything we do, we can now get machine assistance to do even better. I’ve been excited about this for a very long time. In fact, 10 years ago, I gave a TEDx talk, talking about the opportunities that were lying ahead, using the Internet as a knowledge source and AI and machine learning as a mechanism to change how humanity is perceiving and understanding and planning for the future.
The Dark Side of Technology
I’m still an optimist, but in all seriousness, if you look at the world right now, it’s definitely taken a turn for the worse. So the new world order is one where we, on a daily basis, get news about war, terror, climate-based natural disasters, pandemics.
On the cyber side, we're seeing attacks against infrastructure, we're seeing industries being shut down, financial crises caused by that, and maybe worst of all, we're seeing attacks on our brains. Influence operations, where malicious actors are attacking us, changing the storytelling, trying to attack our democracies by injecting false news into our systems.
And if you look at it, the war in Ukraine was sort of highlighting that in a whole different way. All of a sudden we saw things like cyber attacks against entire nations' infrastructure, we saw new kinds of influence operations, even suggesting that Astrid Lindgren was a Nazi, and of course we saw all the horrors of kinetic war. So does this have to keep going on, or can we do anything about it?
Or is it even getting worse? Well, unfortunately, AI, as I just mentioned, is a great amplifier, but it amplifies not only the good things we do, it amplifies the bad things malicious actors can do as well. So I think we’ve come to a point now where we have to start thinking about how can we use AI as a tool against evil. We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.
The Misuse of AI
So, for instance, they could have me say things like, “I don’t know, Killmonger was right,” or “Ben Carson is in the sunken place,” or, how about this, simply, “President Trump is a total and complete dipshit.” So of course, this was not the real Barack Obama. This was a combination of deepfake generated AI and a good voice actor giving a message which by many could be perceived as true, but it’s not. This is one example of how AI is being used by the bad guys.
Another one, as you've probably all heard about, is ransomware and all kinds of cyber attacks, and these are becoming not only commonly used, they're being commoditized. You can now go onto a dark website and buy a license to something called WormGPT, which is sort of ChatGPT's evil cousin. A tool built with the same technology, but used to produce fake business emails, luring people into sending money to the criminals, and things like that. And it's no longer something that requires a very sophisticated hacker.
Anyone can do this. So the threshold for becoming a bad guy, a successful bad guy, has been significantly lowered. And we’re seeing even worse things on the horizon. So not only do you get these kind of things, we’re now starting to see a whole new family of malware, which is behaving like a natural virus.
The Evolution of Cyber Threats
It injects itself into a system in a seemingly unthreatening way, but once it's in your system it starts to mutate. It starts to look at its environment and figure out how it could actually attack the systems it finds there. And then it changes itself. It mutates, tries things out, until it finds an attack vector which can, for example, shut down a factory, or maybe even the infrastructure of a nation.
And the fact that every one of these will be unique, because it’s evolving locally, means that our traditional tools for stopping these kinds of attacks are essentially worthless.
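The point about every copy being unique can be sketched in a few lines. This is a deliberately harmless toy in Python, with made-up names and a trivial XOR re-encoding standing in for real mutation; it only illustrates why a blocklist of known file hashes or byte signatures never matches the next variant:

```python
import hashlib

def signature(payload: bytes) -> str:
    """A traditional 'signature': a hash of the file's exact bytes."""
    return hashlib.sha256(payload).hexdigest()

def mutate(payload: bytes, key: int) -> bytes:
    """Re-encode the payload with a per-copy XOR key (a classic
    polymorphic trick); applying the same key again decodes it, so
    behavior is unchanged even though every copy's bytes differ."""
    return bytes(b ^ key for b in payload)

original = b"do_something_malicious()"   # hypothetical payload
copy_a = mutate(original, 0x41)          # one victim's copy
copy_b = mutate(original, 0x42)          # another victim's copy

# Every copy decodes to identical behavior...
assert mutate(copy_a, 0x41) == mutate(copy_b, 0x42) == original

# ...yet no two copies share a signature, so matching against a
# database of known signatures finds nothing.
assert len({signature(original), signature(copy_a), signature(copy_b)}) == 3
```

Real polymorphic malware mutates its decoding logic too, not just a key, which is why defenders have moved toward behavioral detection rather than byte matching.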
So it's coming to a point where all these different use cases for the criminals are becoming more and more threatening. And you have to ask yourself, what can we do? I think the only solution is to start thinking seriously about how we can develop AI against evil.
This is going to be the next big challenge for us. So, for example, if you look at the deepfake of Barack Obama, there is now software, also AI-based, which can detect deepfakes. So you can have filters which filter out these things. Unfortunately, it’s an arms race.
The Fight Against AI Misuse
And right now, I would say the bad guys have a slight upper hand on this. So essentially, we’re going to keep getting better deepfakes. We’ll have new detectors. But in the long run, it’s not going to work.
We need to think about other ways to stop these attacks. In this particular case, I think we need to stop thinking about how we can detect the bad stuff and start thinking about how we can validate the good stuff. So we'll have to start having certificates for everything which is genuine.
And everything which is not genuine is, unfortunately, probably fake. But that's just solving a very specific problem. The overall problem of understanding all the threats against us has to be attacked in a different way. So I've spent the last 15 years or so on an approach to this which we call threat intelligence. It's essentially monitoring everything happening on the internet: collecting information about where new domains are registered, and about where the bad guys are talking in their criminal dark web forums to discuss new attacks. The goal is to give the defenders an up-to-date, real-time view of what's happening out there.
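The "certificates for everything genuine" idea can be sketched as a toy in Python. For brevity this stdlib-only illustration uses a shared-secret HMAC; real content-provenance schemes use public-key signatures so that anyone can verify without holding the secret, and the key and content here are entirely hypothetical:

```python
import hashlib
import hmac

# Hypothetical publisher key. In a real scheme this would be a
# private signing key, with verification done via the public key.
PUBLISHER_KEY = b"example-shared-secret"

def certify(content: bytes) -> bytes:
    """Issue a 'certificate of genuineness' for a piece of content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).digest()

def is_genuine(content: bytes, cert: bytes) -> bool:
    """Validate the good stuff: accept only content whose certificate
    checks out; anything unsigned or tampered with is suspect."""
    return hmac.compare_digest(certify(content), cert)

video = b"...bytes of a press briefing recording..."  # made-up content
cert = certify(video)

assert is_genuine(video, cert)                    # the authentic original
assert not is_genuine(video + b"edited", cert)    # any altered copy fails
```

The design point is the inversion the talk describes: instead of an ever-losing race to detect fakes, the burden shifts to proving authenticity, and unverifiable content is treated as untrusted by default.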
Ethical AI and Regulation
And I think this is the only way we can do that. We have to be smarter and faster. What we need to do is to counter the current situation where the defending side, the threat analysts, have way too much data to look at. There are too few people to do it.
And we’re up against these guys who are ruthless, well-financed, and tech-savvy. So we need to think, how can we develop countermeasures? And I think the end solution here will not be just machine against machine. We will have to figure out a way to build systems where the good guys, the threat analysts, can work together with machines.
Machines cannot do this alone, because there is too much human judgment involved in deciding what is good or bad, fake or real. But we need to automate as much as we can and put those tools at the fingertips of the defending side. If we don't do that, essentially everything will be lost. So, of course, all of you have seen that there's been a big debate recently about the potential threats of AI overall.
The Challenges of AI Development
In particular, there are two questions which have become predominant. One is the question of how do we develop ethical AI? And the second is, is that even enough, or do we need to have all kinds of new regulations about how AI can be deployed?
Let me start with the question of ethical AI, and I'd like you all to think for a second about what the biggest problem with ethical AI is. I should say first, ethical AI includes many things. It's AI which is safe. It's AI which is not biased towards any specific gender, race, religion, and so on. So think of it as fair AI.
But what is the big problem with that? [An audience member answers: humans.] The big problem is that only the good guys will care about ethical AI. So the bad threat actors out there, whether they are criminals or bad nation states or others, they will just ignore this.
They will probably laugh at us on the good side if we try to stop or restrict development while they can just charge ahead and do whatever they like. And there's a similar problem with regulation of AI. Let me come back to the chart from the beginning, the one showing the exponential evolution of new capabilities enabled by AI over time. Many of us, when we hear the word exponential, think of a curve roughly like this.
However, in reality, the curve actually looks more like this. So for a very long time, when you’re on one of these exponential curves, it looks like nothing much is happening. And then all of a sudden, things take off and gain speed extremely quickly. So if you’re thinking of, for example, saying that we need to halt development of a new technology unless we figure out how to do it in a safe way, what’s going to happen?
Well, the scary thing with these curves is that this is what's going to happen. If you're on an exponential curve like this and you halt development, even for a brief period of time, you end up far behind the bad guys in terms of capability. So saying that we should halt AI development for a year or so while we figure out how to do things right, while the bad guys just run ahead, means that we'll probably get to a point where they are so far ahead of us that we won't be able to catch up. And that's not the situation we want to be in.
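The arithmetic behind that gap is easy to check. Assuming, purely for illustration, that capability doubles every year (the rate is made up; only the shape of the argument matters), a one-year pause leaves the pausing side permanently a full doubling behind:

```python
growth = 2.0   # assumed capability multiplier per year (illustrative only)
years = 5

attacker = growth ** years          # never paused
defender = growth ** (years - 1)    # paused for one year

# The ratio stays fixed at one full doubling forever...
assert attacker / defender == growth

# ...while the absolute gap keeps widening: after the pause it already
# equals everything the defender has built so far, and it grows yearly.
assert attacker - defender == defender
```

On an exponential curve the lost year is never recovered by resuming at the same growth rate; catching up would require growing strictly faster than the other side.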
So what I think is important, and what I like to encourage our regulators, politicians, and all of us here as well, is to think about a couple of things. First of all, I think we do need to regulate AI. It’s an extremely powerful technology, and you should think about how it’s used. But we should not regulate the actual development of the technology.
What we need to think about, in more general terms, is how we regulate the use of it. How do we make sure that it's transparent, safe, and fair, while still not preventing tech development from going on? Because again, if we stop and the bad guys don't, we're doomed. So this is one piece we need to solve.
Collaboration and Optimism for the Future
We need to figure out how to make sure this is being used in a responsible way without halting development. The other thing which I think is important for us to realize here is that if we're going to win this fight, we have to look at other things than just technology. We have to look at humans as well. And what we're seeing right now, when we study how the criminals and the threat actors are developing things, is that they are collaborating.
Whereas on the defending side, on the good guy side, people are a bit scared to talk about these things. You don't want to go out and admit that you've had a data breach, or that you've been attacked, or that you got fooled into believing some fake news. But we have to. We have to start working together on the good side.
Because if we don’t work on the good side, the bad guys will get an upper hand. But I’m an optimist. I think by developing clever things and working together, we will actually have a good chance to save the world.