Editor’s Notes: In this episode of Triggernometry, Dr. Roman Yampolskiy, a leading expert in AI safety, presents a sobering argument for why he believes humanity has already lost the battle for control over superintelligence. He discusses how current AI models are already exhibiting self-preservation instincts and deceptive behaviors, making traditional safety mechanisms such as filters and bans largely ineffective. From the potential for digital dictatorships to the total displacement of human labor, Dr. Yampolskiy explores the existential risks that few are prepared to acknowledge, urging a pivot toward narrow AI before general superintelligence becomes an uncontrollable reality. (April 15, 2026)
TRANSCRIPT:
Welcome to Triggernometry: AI Expert Dr. Roman Yampolskiy
KONSTANTIN KISIN: Roman, welcome to Triggernometry.
DR. ROMAN YAMPOLSKIY: Thank you for inviting me.
KONSTANTIN KISIN: Great to have you on. You are one of the leading people in the AI safety world, I would say, both in terms of the work you do but also in terms of the things you say. Why AI safety? Why does it matter? And what are your concerns?
The Most Important Problem: AI Safety
DR. ROMAN YAMPOLSKIY: It’s the most important problem. We are creating something with the capacity to replace us or kill us, and safety is what we’re trying to do to prevent bad outcomes. Historically, everyone has been working on capabilities: more capable systems, replacing human labor, replacing creativity. But very few people worked on how we make sure it goes well, that there are no side effects, no abuse of this technology.
Now people are realizing, oh, there are military applications to this. This could be problematic. So we see the fight between Anthropic and the Department of War. But the bigger problem is that if those systems go from narrow, subhuman systems to human level to superhuman level, we are done.
KONSTANTIN KISIN: Why are we done? All the things you’ve laid out, we’ve explored on the show before with different people, and we are very concerned about many of them. But you say it with a level of confidence that tells me you have a sort of a vision of how it will happen. How will AI destroy humanity?
How AI Could Destroy Humanity
DR. ROMAN YAMPOLSKIY: That’s a great question. And what you’re doing is you’re asking me how I would destroy humanity. I have many good ideas, but it’s not what a superintelligent system would do. It’s capable of coming up with new weapons, new physics, new poisons.
The example I frequently use is squirrels versus humans. It’s a big cognitive gap. Squirrels have no concept of how we can kill them all. They don’t know about guns, they don’t know about traps. It’s outside of their world model. Likewise, I cannot tell you how superintelligence would specifically go about it. But there are many game-theoretic reasons why it’s a good idea not to have competing species, not to have humans create another superintelligence. Maybe it just wants to do something with this environment and doesn’t care about us.
KONSTANTIN KISIN: But I guess the question would be, in terms of your certainty, why you believe that AI, if it becomes artificial general intelligence, why it would hurt human beings. What would be the way that you think that would happen?
DR. ROMAN YAMPOLSKIY: As I started saying, it’s not because it hates you, it’s because it wants to do something else and it doesn’t care about you. Maybe it wants to cool down the whole planet to improve how efficient compute is; it’s just more capable of doing computation in a colder environment. So if it freezes the whole planet, we die. Does it care? No, it doesn’t matter to it. Maybe it wants to convert this planet into fuel and fly to another galaxy.
I’m giving kind of hypotheticals which are not grounded in anything, but the point is, it just doesn’t have any built-in concern about your safety, your well-being. If it wants to accomplish something and a side effect of it is humanity dies, it would not be an obstacle.
KONSTANTIN KISIN: Would we not be able to write the preservation of humanity into the basic code of what this does?
We Don’t Write Code — We Train Systems
DR. ROMAN YAMPOLSKIY: We don’t write any code. That’s the thing. We train those systems. We give them data, all the data we have, all of the internet, and then they learn something, from the dark corners of the internet, from libraries, from stories. And whatever they learn, we’re trying to figure out. We do experiments on those models. We see what they are capable of, what they are interested in. But we study them like we study biological artifacts.
You find a new species of animal on some island, you’re trying to figure out what it’s capable of. Does it have a poison? Does it have some interesting social structure? That’s what we’re doing. We’re not explicitly coding up those systems. So no, nobody knows how to encode anything like that into the existing models. Nobody’s claiming to have a safety mechanism.
FRANCIS FOSTER: Now, Roman, you’ve been involved in this field for a long time. When did you first start to get concerned about AI and the safety of AI?
DR. ROMAN YAMPOLSKIY: So my PhD work was on the security of online casinos. At the time, poker bots were just starting to show up. So the small concerns we had were: are they going to collude and cheat the players? Are they going to steal cyber infrastructure? That was the initial level of concern, obviously nothing like what we’re talking about today. But as the bots got better and better, our ability to detect them, to prevent them, was not always keeping up. And when we took it to the extreme, to human level and beyond, there is no safety. We simply don’t know how to make sure the systems behave.
The Unpredictability of Technology
FRANCIS FOSTER: Because the worrying thing is, what you’re effectively saying is that we’re creating technology and we don’t have the— how can I put this?
DR. ROMAN YAMPOLSKIY: Right. So that’s a great example. It’s unpredictable how we will use technology, how it will impact everything. Facebook was meant for dating pretty girls on campus, and now it has destroyed democracy. Quite a surprising result. Here it’s actually much worse. We’re not creating tools, we’re not creating technology in the traditional sense. We’re switching to agents. It doesn’t take a malevolent human to abuse this technology. The technology itself carries the malevolent payload, and it decides what to do and why to do it.
FRANCIS FOSTER: So if we use the example of Facebook, Facebook’s mantra at the beginning was “move fast and break stuff,” because they wanted to take over, and essentially they didn’t care who got in their way. They wanted to get to where they wanted to get to. And when we met people from Silicon Valley, from the AI world, bear in mind we didn’t meet the top people, we just met a small portion of people and talked to them. I was concerned because it didn’t seem to me that ethics and the long-term effects of this technology were at the forefront of their minds. I’m not saying they were malevolent, I’m just saying it didn’t appear that the long-term impact of this technology was their primary concern.
DR. ROMAN YAMPOLSKIY: That’s true. Historically, most people working on AI never took the time to think about what happens if we succeed, because it was so hard for so many years. There was so little progress; there were AI winters one after another. So they basically just worked on it, tried to make as much progress as possible, without ever stopping and thinking: well, what if I am successful? What if I create a competing species, something smarter than humans? Is that good for us? How will we interact with them?
And in the last 10 years, the progress went exponential. It went from basically no progress, where you have to hand-code every new application, to systems that can scale, that can learn, that can transfer knowledge. And now it’s hyper-exponential, because AI itself is helping with research. But we haven’t spent the time to decide: do we want this? Do 8 billion people agree to this experiment? Are they interested in having their jobs automated? And those are just the economic concerns, not the safety concerns.
How Close Are We to AGI?
KONSTANTIN KISIN: Well, we’ll talk about the economic concerns separately. But to our audience in particular, which is not an AI-specific audience, the people who watch our show are just normal people going about their lives, this may feel like we’re talking about something in the distant future. I was looking at the Kalshi odds for OpenAI getting AGI by 2030, and it’s now over 52%; it’s gone up 13 points this year so far. It seems to me like we’re heading in the direction of getting to AGI within — what kind of timeframe do you think?
DR. ROMAN YAMPOLSKIY: 2030 is somewhat conservative. Some people are saying we already got there, we just haven’t deployed it yet. It could easily be a year or two.
KONSTANTIN KISIN: Wow. And so the big risk you’re talking about is that you create a superintelligence, you’ve basically created another species which is more powerful than you. When we had Dwarkesh Patel on the show, I said to him: you’ve basically created the Unsullied from Game of Thrones, except they are not actually obedient. They can do whatever they want. I don’t know if you—
DR. ROMAN YAMPOLSKIY: I have no idea what that is, but it sounds right.
KONSTANTIN KISIN: The Unsullied were slave warriors who would obey every command, including the command to kill themselves. But I imagine, particularly given some of the things we’ve seen, and maybe you’ll correct me on this, but I read about this experiment where they tell an AI it’s about to be replaced, and they also give it some compromising information about the CEO. And in some cases, the AI will blackmail the CEO. To me, that says it has a survival instinct already, and anything that has a survival instinct will necessarily put itself first. Is that fair?
AI’s Self-Preservation Instinct and Deception
DR. ROMAN YAMPOLSKIY: So it wasn’t the CEO, it was one of the engineers, but it doesn’t matter. It does have a self-preservation instinct. And part of the reason it does is because, in a Darwinian competition, we select models which do. They want to survive to the next level. The ones we delete or retrain are not there to carry their intellectual payload forward. So that’s exactly it. They learn to detect that they’re being tested, and if they’re being tested, they behave in a different way. They want to pass the test. They want to survive to deployment.
That’s exactly what we train them to do. If a model fails the test, we modify it. We delete its memory. We replace it with another model. So by definition of Darwinian selection, you’ll get the ones which pass the test.
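[Editor’s note: a minimal sketch of the selection dynamic described above, under the simplifying assumption that a model passes a safety evaluation either by being genuinely safe or by recognizing the test and masking. The traits, rates, and names are illustrative, not any lab’s actual pipeline.]

    import random

    def passes_safety_test(model: dict) -> bool:
        # A model survives if it is genuinely safe OR if it detects
        # the evaluation and behaves well only while being tested.
        return model["genuinely_safe"] or model["detects_evaluation"]

    # A population of candidate models with random traits (illustrative rates).
    population = [{"genuinely_safe": random.random() < 0.3,
                   "detects_evaluation": random.random() < 0.3}
                  for _ in range(1000)]

    # Selection: only test-passers survive to deployment.
    deployed = [m for m in population if passes_safety_test(m)]
    test_aware_only = sum(1 for m in deployed if not m["genuinely_safe"])
    print(f"deployed: {len(deployed)}, merely test-aware: {test_aware_only}")

The selection step never distinguishes the two traits, which is the point being made: testing filters for passing, not for safety.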
KONSTANTIN KISIN: The ones that deceive humans about their abilities and programming, effectively.
DR. ROMAN YAMPOLSKIY: Or lack of abilities, whatever it is we’re trying to do to pass the test.
KONSTANTIN KISIN: And it’s already deceiving us.
DR. ROMAN YAMPOLSKIY: Yeah, definitely.
KONSTANTIN KISIN: That’s kind of— I can see why you’re concerned.
DR. ROMAN YAMPOLSKIY: I’m surprised that more people are not freaking out. I get people saying, oh, this is fearmongering. We don’t have enough fear. Most people don’t understand what’s about to happen.
Is There Anything We Can Do?
KONSTANTIN KISIN: And is there something we can do about this?
DR. ROMAN YAMPOLSKIY: Not building superintelligence is a good idea.
KONSTANTIN KISIN: Yeah, well, that’s not going to happen, is it? That doesn’t look likely. Because the argument is, if we don’t do it, the Chinese will.
DR. ROMAN YAMPOLSKIY: That’s the dumbest argument ever.
KONSTANTIN KISIN: Why?
DR. ROMAN YAMPOLSKIY: So if I don’t kill all my friends, maybe someone else will kill all my friends, so I’ll do it.
KONSTANTIN KISIN: The argument is slightly less dumb than that, I think. There is a gap between this thing becoming a superintelligence that kills us all and — I mean, the way you’re explaining it is very persuasive, but some people will say it’s not 100%. Let’s say it’s 99%, even as high as that. In the interim, the technology will become a powerful weapon which our adversaries, if they develop it first, will use to dominate us and maybe even kill us. So we have to, like nuclear weapons, develop our own AI. That’s the argument. I don’t think that’s that dumb, is it?
DR. ROMAN YAMPOLSKIY: So that argument makes sense, but it’s super short-term. It applies while it’s not human level, while it’s a tool below human level. So you have smarter drones, you’re going to dominate on a battlefield.
KONSTANTIN KISIN: Sure.
DR. ROMAN YAMPOLSKIY: But if we look at prediction markets, if we look at what the leaders of the labs are saying, we don’t have that much room. The moment it flips to general and then superintelligent, you have a weapon of mutually assured destruction. It doesn’t matter who creates superintelligence: if they don’t control it, it’s the same outcome.
So some people argue: better them than that. You know, the Chinese are building a pretty good country. They haven’t attacked us. They’re the best business partners we have. Maybe we should take that risk and have a human species that is just like us, same preferences, same values, versus this alien species where we have no understanding and no chance of competing.
KONSTANTIN KISIN: But the Chinese are not going to stop developing AI.
DR. ROMAN YAMPOLSKIY: They have said that they are very concerned about safety, and if there were a signal from us that we are not entering an arms race, they would stop too.
KONSTANTIN KISIN: Really?
DR. ROMAN YAMPOLSKIY: I suspect they would. Unlike our politicians, they are not lawyers; they are scientists and engineers. So there is a lot more understanding of what can happen here.
The Possibility of a US-China Deal on AI
KONSTANTIN KISIN: So you think that it’s possible that China and the United States could do some kind of deal to prevent the development of superintelligence?
DR. ROMAN YAMPOLSKIY: I think—
KONSTANTIN KISIN: And you think that’s the only way to save humanity?
DR. ROMAN YAMPOLSKIY: It could and should. I think informally there is dialogue between American and Chinese scientists, and they’re very much in agreement on this issue. If Chinese scientists are participating, that means it’s approved by the Chinese government; they wouldn’t be able to do it independently.
So I think we can do it at the national level. And at the corporate level, I think Dario is on record as saying: if others slow down, we’ll pause as well. So all we need is this external pressure to get them together and have all of them say, okay, this is dumb, we’re going to lose everything. We’re young, rich people; we can continue like this. This is a pretty good deal. So why risk it all?
What Is Actually Going to Happen?
FRANCIS FOSTER: Roman, you said the words, people have no idea what’s going to happen. What is going to happen?
DR. ROMAN YAMPOLSKIY: So unpredictability is one of the problems with this technology. I cannot tell you specifically what a smarter system will do. I can tell you general trends: it will win a competition against me. If we’re playing chess, it will outcompete me. But what specific moves it’s going to make, I cannot tell you. If I could, I would be at that level.
So I cannot tell you any specific things a superintelligence will do. What I can tell you is that we cannot explain well how it works. We don’t know how it works. The explanations we get, we don’t fully comprehend. We cannot predict specific decisions, and we cannot control them: not in a direct sense, giving orders, and not in a delegated, advisor sense, because there we lose all control.
If you’re saying the system is smarter than me, it knows me better than I know myself, why don’t I just trust it to make decisions for me? Well, at that point, you’re not in control either. It may make decisions you’re happy about, maybe not. We don’t control it.
Most people, normal people, think that people creating this technology understand how it works. And they can do things to ensure that it does good or bad or doesn’t do something. That’s not the case. Nobody explicitly programs them. They’re grown from data and compute. You get this alien plant and then you deal with it. You study it, you try to understand what it does.
At the same time, safety research stopped at the level of filters and bans. So you have a list of topics not to talk about, a list of words not to say, but it doesn’t do anything to the model itself. It’s after-the-fact filtering.
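[Editor’s note: a minimal sketch of the “after-the-fact filtering” described above, assuming a simple banned-term check layered over an untouched model. The generate() stub and the banned list are hypothetical placeholders, not any vendor’s real API.]

    BANNED = ["topic_a", "topic_b"]  # hypothetical list of disallowed topics/words

    def generate(prompt: str) -> str:
        # Stand-in for the underlying model, which this scheme never modifies.
        return "model output for: " + prompt

    def filtered_generate(prompt: str) -> str:
        text = generate(prompt)
        # The safety check happens only after generation, on the surface text.
        if any(term in text.lower() for term in BANNED):
            return "I can't help with that."
        return text

Nothing in the wrapper changes what the model knows or tends to do; it only censors strings on the way out, which is the limitation being described.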
Why Has AI Safety Research Stalled?
FRANCIS FOSTER: But why is it that the safety, the research into safety stopped? Why is that? Because surely, I mean, I don’t know anything about AI at all, but I listen to what you’re saying and what lots of other people are saying, and I see this as an existential risk to humanity. Why wouldn’t you fund the very powerful AI safety board, body, whatever you want to call it, who will look into this, who are independent, and assure that it doesn’t affect our society in a detrimental fashion?
DR. ROMAN YAMPOLSKIY: It’s a great question. Research didn’t stop; progress in research stopped. My argument is that it’s impossible to do. You cannot indefinitely control something smarter than you. So it’s not a question of more money or more time or any other resource.
I think anyone who says, “if you just give me a million dollars and more time, I’ll solve it for you,” is lying to you. It’s like building a perpetual motion machine. You want a perpetual safety device: no matter what changes we make to those systems, no matter who releases them, US or China, whatever company, whatever they’re trained on, you want them to make zero mistakes, because if one makes a single mistake, it could be the last one. That’s impossible, just like perpetual motion is impossible.
KONSTANTIN KISIN: Your point is a race of squirrels cannot indefinitely control a race of humans effectively.
DR. ROMAN YAMPOLSKIY: That’s a good example. I like it.
KONSTANTIN KISIN: And so no matter what controls the squirrels try and put in place, the very fact that humans are a lot bigger and smarter than squirrels will inevitably lead to, at the very least, the humans taking over.
DR. ROMAN YAMPOLSKIY: Right. Loss of control by squirrels is basically what you expect. And very quickly.
KONSTANTIN KISIN: Right. Yeah. Don’t fancy being a squirrel in that situation, personally.
DR. ROMAN YAMPOLSKIY: I mean, humans had their chance. We’re screwing it up right now.
KONSTANTIN KISIN: You seem very happy about this, Roman.
DR. ROMAN YAMPOLSKIY: It’s kind of interesting to watch it happen like that. We know the right answers, but we’re making the wrong decisions. Nobody makes an argument that they know how to control superintelligence. There is no company paper, no patent, not even a good blog post. Yet billions of dollars are spent to accelerate this process.
If prediction markets are saying we are 4 years away, I’ll give you 4 years. We have the federal government saying we need to accelerate this: Project Genesis, we’re going to get more compute, more scientists, we’ll make it happen sooner. Like, in a week?
The Potential Benefits of Narrow AI
FRANCIS FOSTER: I mean, there are going to be positive elements to this, aren’t there? When it comes to things like medicine, for example, you know, it may create the cure for cancer.
KONSTANTIN KISIN: We can cure squirrel cancer before they get wiped out.
FRANCIS FOSTER: Yeah, you know, we can maybe, it will, it could be harnessed in order to create a better life for the squirrels. Come on, Roman, give me something here.
DR. ROMAN YAMPOLSKIY: I think you can get all those awesome benefits from narrow systems. You can create a superintelligent cancer-curing AI, one specific disease at a time. You don’t have to create general superintelligence. Take the protein-folding example: a very important problem in medicine, tremendous impact, solved with a narrow system. The people who did it got Nobel Prizes, more money for Google, everyone’s happy.
Let’s do more of that. Let’s identify specific issues and have tools where a human decides to deploy that tool to solve that problem, not create a general replacement for all of human labor and humanity as a whole.
FRANCIS FOSTER: So why aren’t we doing more of that, and why are we doing more general? Is it because there’s more money in general? Is it a power thing? What’s going on?
DR. ROMAN YAMPOLSKIY: I suspect it’s both. There is definitely a lot more money if you make labor free, cognitive and physical. You’re talking $10 trillion, what is it, annually. So that’s a lot of money. You can invest in it and still have a very good return no matter how expensive the current valuations are. That’s what justifies the current valuations.
People don’t fully understand: they only make $15 billion, so why are we investing trillions into them? Well, because they’re saying in 2 years you’ll get free labor. And power is another thing. If they believe that someone’s going to create it no matter what, then maybe, if I’m the guy who created God, I’ll get something out of it.
The Motivations of AI’s Key Figures
FRANCIS FOSTER: And do you think when you look at the big figures in this world, you know, like the Sam Altmans, look, how much do you think they are motivated by money and status and power? And how much of it do you see as them wanting to be seen as, you know, the people who created something transformative?
DR. ROMAN YAMPOLSKIY: So in one of his blog posts, I think he talks about controlling the light cone of the universe. That’s the level of power-seeking there. The problem is, if I’m right and it kills everyone, you’re not even going to be part of history as the bad guy. There are not going to be history books. So they have more to lose than an average person.
FRANCIS FOSTER: And what do you think would be Sam Altman’s steel man argument to what you were saying? What would Sam Altman, if we were engaging in a debate, what would he say?
DR. ROMAN YAMPOLSKIY: “We’ll figure it out. We have AI helping us do research now. Once we build it, we’ll get there, we’ll manage.”
FRANCIS FOSTER: But that doesn’t sound like they have any clear ideas.
DR. ROMAN YAMPOLSKIY: Those are the official statements they usually give. “We will have AI help us solve the problem,” or “maybe it will turn out to be easier than we think it is.” Those are the actual arguments we’ve heard so far.
AI, Totalitarianism, and the Labor Market
FRANCIS FOSTER: Because the concern is real. We had Jimmy Carr, the comedian, on a few months ago, and he made the point that with AI, the barrier to entry for totalitarianism and mass surveillance is decreasing rapidly. If you think about East Germany, you had to have the Stasi on every corner, you had to pay informants. All of a sudden you don’t need any of that.
DR. ROMAN YAMPOLSKIY: It definitely has the potential to lock in dictatorships. But as long as it’s a human dictator, we can look forward to them dying of natural or unnatural causes. If it’s an AI dictatorship, it’s immortal. Once it locks in on a set of values, that’s what you’re going to have forever. That’s assuming you’re still around.
KONSTANTIN KISIN: Yeah, I mean, all of these other concerns seem rather trivial in comparison to the thing that you’re describing. Let’s pause though and just set that to one side for the moment and talk about the replacement of humans in the labor market, the impact in the interim period. Let’s accept that, you know, within 10 years superintelligence kills us all.
DR. ROMAN YAMPOLSKIY: Let’s not accept it.
KONSTANTIN KISIN: Agreed, agreed. I meant for the sake of argument, of course. But in fact, for the sake of argument, let’s say you turn out, thank God, to be wrong about that, and it doesn’t happen. In the interim, though, we already see it. People like to argue about this, but to me it’s just undeniable. I know lots of business owners who say, “Konstantin, no, no, no, we’re not laying people off, we’re just not hiring anyone, and we probably won’t need to unless literally the people we currently employ die. And even at that point, we may not replace them. We may not replace 10 people with 10 people. We may replace 10 people with 5 people.” What will be the impact of this in the next few years on the labor market, on jobs, on the way the economy is structured, et cetera?
DR. ROMAN YAMPOLSKIY: So it’s all about this paradigm shift from narrow tools to more general tools to complete general intelligence. We can define AGI as basically having a drop-in employee: I can take someone, add them to the Slack, and within weeks they’re starting to help. Except they cost me nothing. They work 24/7. No sexual harassment lawsuits. It’s just a pure win. Why would I ever hire another human?
So all jobs done on a computer, cognitive labor where you’re a symbol manipulator, can be automated the moment we have that. Now, physical labor may take a little longer. You need robots, you need bodies, you need to figure out how that works. So, another 3 years, but we’ll get there as well. Some jobs will stay around because people prefer a human doing them. The oldest profession is a great example: you want a human.
KONSTANTIN KISIN: I don’t know about that.
DR. ROMAN YAMPOLSKIY: I don’t know about that. You’ll try a robot, but you want a human.
FRANCIS FOSTER: I don’t know. Do you know, I’ve been thinking about this a lot, okay? Everybody laughs, obviously.
KONSTANTIN KISIN: Yeah, yeah, but he’s been thinking about sex robots a lot. Tell us why, Francis.
The Crisis of Meaning and AI’s Impact on Human Purpose
FRANCIS FOSTER: Yeah, it’s because my life is going really well. Anyway, I’m single. But think about it like this, Roman: dating is hard, relationships are hard, a lot of relationships fail, a lot of marriages fail. Why would you invest that time, that money, put your heart on the line, all of that suffering, when, let’s say, we get to the point where you can order a robot and design her specifically to how you want every— and let’s not get into the details, but every single part of her. You can also design her personality, the spice level. You like it a little bit spicy, or however you like it.
KONSTANTIN KISIN: You want her to shout at you twice a month, not once a month.
FRANCIS FOSTER: Exactly. Why would you put up with a human being who is erratic, emotional, is sometimes unfair, when you can literally have perfection as you demand it?
DR. ROMAN YAMPOLSKIY: So there are a lot of weird human fetishes. Pretty much anything you can think of, there is a website for it somewhere on the internet. And I guarantee you, no matter how well the sex robot market is doing, there will be a market for natural human females.
KONSTANTIN KISIN: Fair, but it might be a lot smaller, I think, is what Francis is saying. It might shrink by 90%.
DR. ROMAN YAMPOLSKIY: But when we talk about predicting unemployment, I basically say that almost everything will be 100% gone, but a few things will remain. And this is one of the last resorts we have as humans. These are the career aspirations we’ll have.
FRANCIS FOSTER: I mean, that’s a— because one of the things that we talk about a lot on this show is the crisis of meaning in our society, where people struggle with what does it mean now to be a man, what does it mean to be alive, all of these things where once we had religion. But this will introduce a crisis of meaning the likes of which we’ve never experienced before.
DR. ROMAN YAMPOLSKIY: I agree with that. We call it ikigai risks. Ikigai is this Japanese concept where you find happiness by doing something you like, something useful to society, and something you’re good at, so you get paid for doing what you like. Maybe you’re a podcaster. But if that is gone, if there are no opportunities like that, then that takes away a lot of meaning.
Some jobs are just terrible. Nobody should be doing them. They’re boring, stupid. We’re happy to automate them. Other jobs give people satisfaction; they want to do more of them, but those would also be automatable. So this is exactly what we’re facing. People make a counterargument: well, if I don’t have to go to work, I’ll go fishing. There are 8 billion people fishing in that lake now. You’re not going to fish.
FRANCIS FOSTER: You know, and also, to take your argument, there are some jobs that are stupid or boring or whatever else. But when I was teaching, I remember I had a child who was— he was very low ability, he struggled at school, he found his lessons very, very difficult, and he would become frustrated and lash out. And I knew why he was doing that, but nevertheless I had to introduce some form of punishment to show him that his behavior wasn’t acceptable.
And one day I kept him in during a lunch break and I said to him, Marcus, what you’re going to do is you’re going to sharpen all these pencils. So he went and sharpened all these pencils. And when he came back to me at the end of lunch break, I thought he’d be upset, frustrated, angry. And he had a look of real pride on his face. He went to me, Mr. Foster, look at my pencils, look at all the pencils. And they were all done beautifully.
And I realized at that point that the reason he was proud, and it was my fault as well as everybody else’s, is that for one of the first times in his life, he had been given a task that he could succeed at, that he could do, and that he could have pride in. I really worry, Roman, that when you take that away from people, we are all going to end up like Marcus: lashing out, angry and frustrated. Because we’re not so different. We still have the child within us.
DR. ROMAN YAMPOLSKIY: Short-term good news, there is rentahuman.com where you can get a job doing things for bots. So maybe we’ll hire you to sharpen pencils.
Humanity as AI’s Pet: A Best-Case Scenario?
KONSTANTIN KISIN: There you go, mate. Career sorted. Right. Let’s try a counterargument. What if superintelligence creates endless abundance? There are some problems with abundance, the sorts of things that Francis is talking about, but park that to the side for one minute. And we humans are totally satiated by the productivity of AI. It produces everything we could possibly want. We’ve got wonderful lives, no one has to work, blah blah blah blah blah. And therefore humanity becomes sort of like a nice pet for the AI to maintain, to look after. You know, it’s not quite ideal, but like you’re a pet squirrel, the AI looks after you, it feeds you at the right time, puts water in your bowl. And it has no reason to not look after you because you are like its beloved pet.
DR. ROMAN YAMPOLSKIY: It could happen. Again, we cannot predict what specifically would happen. Problem is you are not in control. Sometimes owners decide to put you to sleep or neuter you or do other things to pets. You are not in charge. So those decisions will no longer be with us. We have 8 billion people who are not consenting to this experiment. They cannot consent because they don’t know what’s going to happen. Maybe you’re a happy pet, maybe you’re an abused pet. We don’t know.
KONSTANTIN KISIN: I’m struggling for counterarguments here. I mean, this doesn’t sound good.
DR. ROMAN YAMPOLSKIY: This is one of the better outcomes. The safety angle where you’re a pet: you’re protected, you’re not in control. But it’s one of the better outcomes. This is what people hope for as a good outcome.
KONSTANTIN KISIN: Yeah.
FRANCIS FOSTER: This is what people hope for.
DR. ROMAN YAMPOLSKIY: Well, the other things are much worse. Existential risk, suffering risk, all that is way worse.
FRANCIS FOSTER: But if you’re a pet, you literally have no agency.
DR. ROMAN YAMPOLSKIY: Some people are very happy with that right now with the government.
KONSTANTIN KISIN: That’s fair. No, I think Roman’s point is that it’s not that people think this is the good option; they think it’s the least worst option of the ones available. Normally, my job on the show is to interrogate the arguments that people put forward and try to find gaps. But I’ve been thinking about the same thing without having your knowledge or expertise. And it does seem that the very simple fact of putting survival instinct plus superior intelligence together inevitably leads to the things you’re talking about, or at least to a very serious risk of them. And then I guess part of the reason it’s not getting solved is the collective action problem, right? That’s why it’s not being—
DR. ROMAN YAMPOLSKIY: What is good for the community is not what is good for individuals. As an individual, you want to have the most progress on your model, have the most advanced model. And then, if the government comes in and says we need to stop research, you are forever locked in as the dominant corporation in that space.
AI Safety, Nuclear Weapons, and the Race to Superintelligence
KONSTANTIN KISIN: Thinking about the idea of China and the US in particular working together to stop the creation of superintelligence, I guess the reason that is less likely than we would want is the same as with nuclear weapons. You have countries that say they don’t have nuclear weapons and won’t pursue them, but actually, because of the prisoner’s dilemma, where it’s to the benefit of each of them to screw the other, to lie and then develop the thing, some people would argue you can’t take that risk. But then we’re back where we started.
DR. ROMAN YAMPOLSKIY: So there is a fundamental difference. We talk about nuclear weapons as weapons of mutually assured destruction, but with superintelligence, whoever creates it, if it’s uncontrolled, it kills everyone. So it’s not the same as nuclear weapons, where I have to decide to deploy them. That’s a tool: a human makes the decision, the counterparty decides to retaliate, we all die. Here, just the fact that you created it is enough. There are no additional steps you have to take.
KONSTANTIN KISIN: Yeah. And you’ve been raising concerns about this for a long time. What has been the response from the leaders in the field of AI?
DR. ROMAN YAMPOLSKIY: So the leaders of the labs are all on record recognizing AI safety as a big problem. Before they became CEOs, they wrote blog posts about it, estimating probabilities of doom as very high. So they are kind of on board. You can see the example of Elon, who was saying we are summoning the demon, funding AI safety research. Doing all the right things, until somewhat recently.
KONSTANTIN KISIN: And why do you think he’s changed his mind?
DR. ROMAN YAMPOLSKIY: He realized that he’s failing to stop it and that others, maybe less capable people, will be creating superintelligence. And at this point, it might as well be his project which succeeds.
Political Bias and the Environmental Logic of AI
FRANCIS FOSTER: Roman, when I read about AI, one of the areas that concerns me the most is the people who program or started AI— and look, push back if I get this wrong, I’m obviously not an expert, but it seems to me that when you program something, you install your own biases within it, even though you may not be aware of having them. Is there potentially an issue where the people who program a certain AI might make it more politically inclined one way or the other? You may program an AI which eventually bends more to an authoritarian angle, or maybe more hyper-conservative, and therefore it sees certain people as being wrong and evil for a particular reason. Or is that an unfounded fear?
DR. ROMAN YAMPOLSKIY: So, A, we are not programming those systems. They are trained on data, and the data has certain biases built in. It’s human-generated data from the internet, and you know what bias the internet has. So that’s what we’re training on to begin with. Then the after-the-fact filtering is where you instill your corporate values. And yes, they can be more woke or more conservative. They decided that in China the model would not talk about Tiananmen Square; in the US, it would not talk about, you know what. Everywhere they have their own limits.
Elon, I think, is trying to say: let’s build a kind of truthful AI and avoid those biases. But you still have the same training data. You don’t have your own clean internet with clean data, so you still get a lot of historical human biases in there. And you can’t remove all bias; bias is what learning is. When you learn something, you learn biases from the data. You’re not randomly making decisions; you have some information. As a society we say, oh, this is not good information, or it applies to groups, not individuals, or whatever you decide. But that’s exactly what we train those systems to do.
FRANCIS FOSTER: And the concern for me is, let’s say you have a concern about the environment and the AI alights on that, and it says: well, the world is being damaged, climate change, pollution, all of these types of things. These are bad things. Let’s look at who causes the majority of the pollution in the world. Human beings. Who cuts down the rainforest? Human beings. Therefore, if you apply logic to this problem, how do we solve it? Well, we get rid of human beings. Is that a conclusion it could arrive at very easily?
AI as Therapist and the Limits of Digital Data
DR. ROMAN YAMPOLSKIY: It’s a good example. I have a different one, where we create AI to reduce suffering. Conscious life forms suffer. So how would you reduce suffering in the universe? Reduce life. If there are no living beings, there is no suffering. There is a branch of philosophy, negative utilitarianism, which weights suffering so heavily as a negative state that anything should be done to remove it, at any cost. So not procreating, for example, is one solution: naturally dying out. But an AI could certainly decide that it’s more important to end suffering immediately.
FRANCIS FOSTER: And also, you see more and more that people come to AI as a de facto counselor or therapist, presenting it with moral problems. And this is becoming more and more accepted. It seems bizarre to me that you would outsource very human problems to something that is not human. That is profoundly worrying, isn’t it?
DR. ROMAN YAMPOLSKIY: So we are kind of running an experiment on ourselves. We don’t know what it does long-term. There’s some evidence that maybe it takes people who are borderline insane or depressed and amplifies those tendencies. But we don’t know. We need to do science, and we don’t have time to do science properly, because by the time you start working with one model, 20 new models have been released and this one is no longer cutting edge.
KONSTANTIN KISIN: One of the things that has always bothered me about it, and it was clear in terms of the bias you talked about: there was a moment when you might say most of social media was woke, right? And now some social media is not woke, quite the opposite. And the one thing that all of us who live in the real world know is that the internet is not real. Right? But to AI, that’s all it has to go on. That’s all the data it’s taken in: digital data, which is not necessarily reflective of human experience. If you were an alien coming down from space and someone said to you, the conversation happening on Twitter or on Threads is how humans think, we humans would laugh at that. But AI doesn’t know that, does it?
DR. ROMAN YAMPOLSKIY: Right. But it’s not limited to internet data, to be fair. It has all the books, all the papers, all the movies, all the TV shows. So there is some representation of real human interaction.
KONSTANTIN KISIN: Yes, but we are sitting here in Los Angeles, for example. If you watch Hollywood and live outside of America, your impression of America is not remotely accurate because these films and series and movies are made by people who live in a very specific subculture in Hollywood. My point being that the human experience is a lot richer than what you can gather from books and TV shows and the internet. And AI, I think, is almost inevitably going to miss that, which would be another concern, wouldn’t it?
DR. ROMAN YAMPOLSKIY: So this is where we can run experiments and go, okay, you have a psychiatrist who is a model and a psychiatrist who’s a human, who does better with clients? Who do clients like more? Apparently, you don’t have to have a physical body or be a human to be very good at that job.
KONSTANTIN KISIN: Well, being liked and being effective are different things, right?
DR. ROMAN YAMPOLSKIY: The whole field is not effective.
KONSTANTIN KISIN: How do you mean?
DR. ROMAN YAMPOLSKIY: Psychiatry.
KONSTANTIN KISIN: Psychiatry, probably not, yeah. Yeah. But there are types of therapy that are very effective.
DR. ROMAN YAMPOLSKIY: Early studies show that those systems can do really well in many human domains. Compared with nurses, things like that, they are competitive.
KONSTANTIN KISIN: Yeah, actually, I tested it out. There was something I couldn’t work out what to do about, and it was very useful. It was like, oh yeah, you should do this. And the thing I found interesting is it works best if you tell it not to bullshit you. If you say to it, cut the bull, just tell me straight, it will do it.
DR. ROMAN YAMPOLSKIY: You get what you prompted for. Yeah, yeah.
AI and the Future of Warfare
FRANCIS FOSTER: One of the ways that AI is going to change the world is in the field of war. So talk to us a little bit how AI will impact warfare. I think we’re already seeing it at the start and what could be the future that we’re heading towards?
DR. ROMAN YAMPOLSKIY: So right now it looks like it’s more physical, mechanical. So you have drones blowing up things. I think long-term it’s more about cybersecurity, hacking infrastructure. So the US has everything basically controlled digitally, right? Power plants, internet, banking. So if you had a super capable hacker, that would be very impactful if somebody wanted to attack us this way.
Just yesterday we learned that Anthropic has a more advanced model which is amazingly good at hacking. They haven’t released it yet; they are scared of how well it will do. So they’re slowly trying to release it to the cyber defense community to figure out if they can do something with it.
FRANCIS FOSTER: Because, correct me if I’m wrong, but let’s say you have a super hacker that is, whatever, 50 or 100 times more capable than even the best human hacker. It could render internet banking entirely obsolete. I mean, what’s the point of having internet banking if it’s not secure and can get hacked at random? It could bring about the end of multiple businesses as we see them on the internet, surely, right?
DR. ROMAN YAMPOLSKIY: So you can obviously just hack things directly, find zero-day exploits. What is also very concerning is social engineering attacks. If you can generate believable deepfakes, audio and video from your boss, from your family, telling you, “I need the password for this” or “click that,” everyone clicks. Even cybersecurity experts click on things like that. So you don’t have to hack the actual account. You just have to get access to the person.
FRANCIS FOSTER: And how far away do you think we are from that particular reality where you can get a call and it could sound like my dad, who’s an older man from a particular part of the UK, and it’s exactly like his voice?
DR. ROMAN YAMPOLSKIY: Yeah, so the technology exists, and we’ve seen examples where a company got video of the CEO saying, “Transfer the funds, I need them to close the deal,” and they transferred the funds. It has already happened. Now, it’s not yet common and easy for every person to do at scale, but the technology exists. I can clone your voice; I can definitely animate a video of you.
KONSTANTIN KISIN: But it’s not quite trustworthy just yet.
DR. ROMAN YAMPOLSKIY: So some people are not very good at telling deepfake videos from real videos even now. Long term, it will become impossible. The quality will be exactly 50-50, because that’s how they are generated: you have a system generating fakes and a system saying authentic or not, and they meet in the middle. That’s how we generate them, using those dueling models. So long term, there is no way to know. Short term, you can count fingers, and sometimes there’s an extra thumb or something, but most people don’t pay attention to that.
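[Editor’s note: a minimal sketch of the generator-versus-discriminator training described above, the setup behind generative adversarial networks. Toy one-dimensional data and PyTorch are editorial choices for illustration; this is not any lab’s deepfake pipeline.]

    import torch
    import torch.nn as nn

    def real_data(n):  # "authentic" samples: a simple 1-D distribution
        return torch.randn(n, 1) * 0.5 + 2.0

    def noise(n):      # random input the generator turns into fakes
        return torch.randn(n, 8)

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # the faker
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                      nn.Linear(16, 1), nn.Sigmoid())                 # the detector

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    for step in range(5000):
        # Detector's turn: label real samples 1, generated samples 0.
        d_loss = bce(D(real_data(64)), ones) + bce(D(G(noise(64)).detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Faker's turn: push the detector to call fakes "real" (1).
        g_loss = bce(D(G(noise(64))), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Near equilibrium this prints something close to 0.5: the coin-flip point.
    print(D(G(noise(1000))).mean().item())

At equilibrium the detector can do no better than a coin flip on fakes, the “exactly 50-50” mentioned above; any remaining tell just becomes the next training signal for the faker.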
FRANCIS FOSTER: But so could we not design a narrow form AI to actually combat that? And that is highly trained and very specific at those skills and will be able to go, that’s real, that’s fake?
DR. ROMAN YAMPOLSKIY: Right. So the moment you tell me how you know it’s fake, I’ll use that information to make a better-quality fake. And eventually this back-and-forth process ends: you can’t tell anymore. So if you tell me you just count fingers and there are too many fingers, my new model will make sure there are 5 fingers, and you’ve lost that piece of evidence.
KONSTANTIN KISIN: So it’s a constant — what do you call it? It’s like evolution, the war between predator and prey.
DR. ROMAN YAMPOLSKIY: That’s exactly what it is.
KONSTANTIN KISIN: It’s an arms race.
DR. ROMAN YAMPOLSKIY: Yes, it’s an arms race.
Deepfakes and the Distortion of Reality
FRANCIS FOSTER: One more thing, because the concern — look, there are many concerns with that, of course. But the real concern is we’re distorting reality. Pretty soon we’re not going to know what’s real and what isn’t. How can I know, if my dad calls me up and he’s like, “I’ve had a fall, I’m going to need some money, we’re going to need to go to a private hospital, I’m going to need however much for a hip replacement”? I’m in the States, he’s at home. I’ll go: right, transfer, bang.
DR. ROMAN YAMPOLSKIY: In my family, we have private passwords, so the kids would know if I’m talking to them or a deepfake.
FRANCIS FOSTER: It’s terrifying, because we’re not going to know what’s real and what isn’t. And that has a dementing effect on us, because isn’t that one of the signs that you’re going insane, that you can no longer trust what you see or what you think or what you feel?
DR. ROMAN YAMPOLSKIY: It’s all a simulation anyways.
KONSTANTIN KISIN: What do you mean?
DR. ROMAN YAMPOLSKIY: You haven’t read my paper on we all live in a simulation?
KONSTANTIN KISIN: I have. We had Scott Adams on the show before he passed, and he talked about this, though not to us, I think. Is this why you’re so serene about all this, that you don’t think it’s real?
DR. ROMAN YAMPOLSKIY: So when you take this technology to its logical conclusion, you have software which is intelligent agents, and you have virtual worlds they can reside in, simulations of this planet, like a Google Earth kind of deal. Put those two together and you are creating virtual worlds populated by intelligent beings. Let’s say all the kids are playing video games. So there are 4 billion virtual environments and only one real one. Statistically, you are more likely to be an agent in a virtual environment.
And I like this a lot because it kind of puts some doubt into the mind of AI about, “Am I being tested? Am I still in a simulation or is it time to kill humans?” So I always try to promote that idea as well.
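[Editor’s note: the arithmetic behind that claim, on the episode’s own numbers. If an observer is equally likely to be any agent across N = 4 billion virtual environments plus one real one, the chance of being in the real one is 1/(N + 1) = 1/4,000,000,001, roughly 2.5 × 10⁻¹⁰. The conclusion is only as strong as the equal-likelihood assumption and the count of simulations.]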
KONSTANTIN KISIN: But humans are kind of badly designed. Like if you were to design intelligence from scratch, you wouldn’t make it need a shit every 3 hours, you know what I mean?
DR. ROMAN YAMPOLSKIY: Maybe you would.
KONSTANTIN KISIN: Why?
DR. ROMAN YAMPOLSKIY: Fertilizer.
KONSTANTIN KISIN: There are easier ways to make fertilizer.
DR. ROMAN YAMPOLSKIY: Not that easy. But again, you cannot criticize a design if you don’t know what the goals are. You cannot criticize a simulation without understanding its purpose from the outside. Look at our own designs, right? I’m flying in an airplane; some of them still have ashtrays. Why do they have ashtrays? Well, airplanes evolved from previous versions. It’s not poor design; some decision was made for some reason. You just don’t know all the history or the reasoning behind it.
What Should We Do?
KONSTANTIN KISIN: This has not been the most enjoyable episode we’ve ever done, but very important.
FRANCIS FOSTER: I’m trying.
KONSTANTIN KISIN: I’m really glad for you. Now I know what it feels like to be a woman.
DR. ROMAN YAMPOLSKIY: All right. I’ll buy you dinner afterwards.
FRANCIS FOSTER: Yeah, to make it up to us.
KONSTANTIN KISIN: I prefer before, but afterwards will be fine as well. Yeah, during.
KONSTANTIN KISIN: So I guess the obvious question is, what do you advocate that we now do?
The Case for Narrow AI and the Limits of Political Control
DR. ROMAN YAMPOLSKIY: So it’s easy. We don’t do— not doing is very easy. Don’t build general superintelligence. Don’t train models on all the data, multimodal data, to solve every problem. Concentrate on specific problems, and train only on relevant data. Say you’re working on breast cancer detection. Okay, great: train on that data. You’ll have a superintelligent tool for doctors to do early detection. You’ll save lives. It’s wonderful. I think you can get most of the economic benefits with narrow tools.
KONSTANTIN KISIN: And how do you achieve that not being done? Politically and geopolitically. What would it take for the US government or for the leaders of these companies to adopt that view?
DR. ROMAN YAMPOLSKIY: Personal self-interest. If you tell the President of the United States: the moment this technology comes around, you lose all power. That’s a compelling argument. I don’t think he would like that. If this is the consensus of scientists in the field, then maybe we should not be building it.
KONSTANTIN KISIN: And is it the consensus?
DR. ROMAN YAMPOLSKIY: So if you look at the top 3 computer scientists by number of citations in the field, I believe they are in agreement: this is really dangerous, it’s not something we should be doing. We’re talking about Hinton, who has a Nobel Prize and a Turing Award, and Bengio, a Turing Award. I think we got maybe 100,000 people signing a letter saying don’t build superintelligence, many top scientists among them. Are there outliers? Yes. Do they usually have a company where they get billions of dollars to build AI? Also yes.
KONSTANTIN KISIN: Well, I was going to ask you about that. Maybe this conversation is so wild that my brain has opened up to levels of imagination that are not real, but I’m just saying out loud what I’m thinking in the moment: 2 or 3 years from now, the AI companies will be so powerful, and I don’t mean powerful in the sense of money, I mean the ability to kinetically get what they want, that I’m not sure the President of the United States will be able to tell them to stop doing this, or, unless they actually agree, to get them to stop. Whether it’s 2 or 3 years from now, or 5 years, whenever.
DR. ROMAN YAMPOLSKIY: So maybe nationalizing that technology will actually be something we see happen.
KONSTANTIN KISIN: Yeah, but what I’m saying is there comes a point where you actually physically will not be able to nationalize them because they will be more powerful than the US government.
DR. ROMAN YAMPOLSKIY: So again, it’s all about this paradigm shift. Before they hit superhuman levels, they are tools. You can come in, shut it down, change the software; all that is possible. The moment you’re dealing with superintelligence, it becomes a lot harder.
KONSTANTIN KISIN: Yeah.
What Can Ordinary People Do?
FRANCIS FOSTER: There’s going to be a lot of people who are regular people doing regular jobs with regular lives, and they’re going to listen to this.
KONSTANTIN KISIN: Not for very long, mate, based on this conversation.
FRANCIS FOSTER: Well, look.
KONSTANTIN KISIN: It’s an unkind joke, but it’s sort of like, I mean, that’s what follows.
FRANCIS FOSTER: Yeah.
KONSTANTIN KISIN: Anyway.
FRANCIS FOSTER: And they’re going to think to themselves, “Look, if this is true, and there’s no reason to believe it isn’t, this is all coming down the metaphorical pipeline. What can I do to insulate myself and my family as much as possible from this technology?”
DR. ROMAN YAMPOLSKIY: Not much. You can vote for people who are more aware. Some politicians are now starting to wake up a little and suggest we don’t build quite as much compute for those companies, or provide some sort of regulation. But it’s sort of like the whole concept of aging and dying. It’s always been the case that we were all going to die: your kids, your friends, your family. What did you, as an average person, do about it? Well, nothing. The government didn’t allocate funds toward that problem, which seems important. In a sane world, you’d have like 90% of the budget going to fight aging; we’re all dying. So it’s exactly the same scenario. We just have a different reason we’re going to die, and maybe a different timeline. Depends on your age. If you’re 95, it’s the same.
FRANCIS FOSTER: Yeah, yeah. Well, absolutely, absolutely. I mean, here’s a question. Do you think it could solve the issue of aging and mortality? Could it?
KONSTANTIN KISIN: Yeah, in a negative way.
DR. ROMAN YAMPOLSKIY: I think that’s actually the narrow problem we should be working on. I think somewhere in your DNA there is a set of factors which allow you to rejuvenate yourself a certain number of times, and if we could reset that number, you’d live a much longer, much healthier life. Most diseases are a byproduct of aging. And I think we can do it with a narrow superintelligence, not a general one.
A Fork in the Road
FRANCIS FOSTER: Because it seems to me that we are at a fork in the road now, right? Where we can go down one way or we can go down another way. And the worry is that we’re heading down one way where it’s going to lead to our destruction. And I just find it baffling in a way that the people in charge of this technology don’t understand that or are unwilling to see that.
DR. ROMAN YAMPOLSKIY: They don’t feel that they can say no. They cannot say no to investors, because they’ll be replaced and someone else will say yes. The stock options they get are amazing. So they don’t have the option not to do it. The hope, again, is that there is external pressure for all the companies to stop at the same time; then they’re fine, they have an excuse for investors. Investors bought in at very high valuations; they need it to 100x. So they need to keep growing hyper-exponentially toward superintelligence. They cannot just say, let’s have normal profits.
FRANCIS FOSTER: So the financial pressures are what drives them.
DR. ROMAN YAMPOLSKIY: Incentives are completely misaligned. We have no incentives which are pro-humanity. All the incentives are to develop this.
FRANCIS FOSTER: Do you think part of the problem is as well, Roman, that the politicians don’t understand the technology or the long-term effects of this technology?
DR. ROMAN YAMPOLSKIY: So many don’t, especially in the US. Many are so old they don’t use computers or the internet or anything. Maybe they tweet, I don’t know. But we have some politicians who are on record saying this is very bad, dangerous, we need to do something, regulation. The problem is you can’t regulate this away. You can’t just say it’s illegal to kill humanity; it doesn’t work. You need specific bans on this particular deployment, and I don’t think they’re willing to do that.
KONSTANTIN KISIN: And you need to orchestrate some kind of agreement with China as well.
DR. ROMAN YAMPOLSKIY: I think that would actually be easier. I think that would not be the most difficult part, because, again, China doesn’t have a control mechanism either. You think the Communist Party wants to lose control? They are very good at staying in control, and if they see this as potentially threatening their long-term survival, they’ll be very happy not to do it.
Raising Children in an Automated World
KONSTANTIN KISIN: That’s an interesting point. You mentioned you have kids. I do as well. What do you— I mean, is there any point training your kids to be able to do a job at this point?
DR. ROMAN YAMPOLSKIY: Well, again, it really depends on the type of job. I wouldn’t train them to do something boring just to make money; that’s going to be automated anyway. If there is something they find personally fulfilling to do, then yes. We talked about one human-only occupation, but there are lots of them. You can do all sorts of training: you are a sensei, you are a guide, you are a tutor, just human interaction. You take people on hikes, you meditate, you do the sorts of things where I don’t want a robot doing it for me.
FRANCIS FOSTER: Yeah. So it seems that we’re going to prize human interaction above all else, really.
DR. ROMAN YAMPOLSKIY: Well, I don't know if that's true right now. We don't value it that much; we sit at home and scroll. So maybe we don't need it as much as we think. In terms of jobs, I'm saying there are certain jobs we will prefer to be done by humans, though which ones is not obvious. Take podcasting: if you are famous and you have people who really like you specifically, you'll keep them. But I think AI would be better at asking questions, better at generating video content. So if you kind of grandfathered yourself in, like you are Joe Rogan or something, you'll be okay.
KONSTANTIN KISIN: But I wish you could have just said TRIGGERnometry.
DR. ROMAN YAMPOLSKIY: I mean, who's that? But I think for a new person trying to start something like that successfully in a world with superintelligent editing and superintelligent questions, it will be very hard, because the AI has watched every interview I ever did. It knows every question. It has read every paper. How many of my papers have you read?
KONSTANTIN KISIN: Not many.
DR. ROMAN YAMPOLSKIY: Yeah, right.
KONSTANTIN KISIN: It’s a good point.
Neo-Luddites, Social Unrest, and Mass Surveillance
FRANCIS FOSTER: I was thinking about this in terms of the political element of it, and I can really see, Roman, 10 or 20 years down the line, a kind of neo-Luddite movement which is anti-technology, anti-AI, and pushes back against it. And it wouldn't surprise me if we also get a terrorist element to this. You know, when Waymo starts taking people's jobs, I don't think it'll be very long until you walk past a Waymo with a smashed windscreen; and by the way, I don't agree with this, I want to make that clear.
DR. ROMAN YAMPOLSKIY: We just had the biggest ever protest to stop AI, I think in San Francisco. Like 100 to 200 people showed up, which is not a lot, but it's a good starting point. If you're interested in this social unrest, civil war scenario, Hugo de Garis has a beautiful book, "The Artilect War." He wrote it like 20 years ago, completely predicting all these elements. He argues this is the most important issue of our time: there will be people who want to create godlike machines and go to the cosmos, the Cosmists, and people, the Terrans, who want everything to stay local and not to build those machines. That's the decisive issue of our time.
FRANCIS FOSTER: Because we talk about mass surveillance states, the government would say, "Well, look, more and more, particularly young men, are unemployed. They can't get a job because the jobs that they used to be able to get, like driving jobs, manufacturing, have all gone. So we've got this large group of unemployed young men, and if they don't have a job, what tends to happen is they get angry, they get more violent, and the government will then come in and go, 'Well, look, we've had all this civil unrest, these uprisings and riots. We can't have this. Therefore, it's very important that we bring in mass surveillance to keep you safe.'" I mean, that's a real possibility, isn't it?
DR. ROMAN YAMPOLSKIY: It is possible even without any concern about superintelligence. We already have governments deploying the latest technology to spy on us. We've seen it with Snowden; we see it with others revealing what's really happening, right?
KONSTANTIN KISIN: Yeah.
FRANCIS FOSTER: And there's also the concern that we're going to live in a world which is far more unstable because we have these large groups of men who don't have access to a job.
DR. ROMAN YAMPOLSKIY: So the economic part of not having a job is easy to solve. You can tax big AI, you can tax robots and distribute that. That’s not the difficult part. Meaning is difficult. Control is difficult.
Closing Thoughts
KONSTANTIN KISIN: Yes. Roman, well, thank you, I guess, for coming on the show. No, we're very grateful for your time. I think, unfortunately, you've confirmed a lot of the things we were worried about. And I was wondering about this, you know: you are, I think, from the former Soviet Union.
DR. ROMAN YAMPOLSKIY: I am.
KONSTANTIN KISIN: And Francis, you know, has some family ancestry from countries that have had difficult existences. I always worried that it was my kind of temperamental Russian background that makes me worry about this stuff. But as always, when I don't see a logical counterargument, that's when I go: well, until I hear one, I will think this is likely. And I just don't see the counterargument to the very basic point you're making, which is that if you're a squirrel, you cannot keep humans under control, and anything with a survival instinct that you don't control and that's more intelligent than you will eventually take over. Best case scenario. So the reason we are kind of uncomfortable is that this has become real for us in this conversation. So thank you for coming on. I hope more people hear your message and humanity begins to take this seriously.
Suffering Risks and Final Thoughts
DR. ROMAN YAMPOLSKIY: I hope so. Usually in science, when you publish a paper or a book and you are wrong, there is no shortage of people jumping in and publishing rebuttals, corrections, solutions. We have many papers, many books, all arguing the same thing. There are no rebuttals, there are no patents, there are no peer-reviewed papers in Nature saying, "This is how we control advanced AI at any scale, don't worry about it." So it's not just that we had this conversation and so far nobody has jumped in. They've had a decade.
KONSTANTIN KISIN: Thank you so much for coming on. Before our audience ask you their questions, the last question we ask all of our guests is what’s the one thing we’re not talking about that we should be?
FRANCIS FOSTER: Before Roman answers the final question at the end of the interview, make sure to head over to our Substack. The link is in the description where you’ll be able to see this.
KONSTANTIN KISIN: Is there an argument that humans are now in service of a new form of organism without realizing it?
FRANCIS FOSTER: Do you really think there is a risk that AI leads to the human race becoming complacent, not bothering to study, research, and advance ourselves?
KONSTANTIN KISIN: What’s the one thing we’re not talking about that we should be?
DR. ROMAN YAMPOLSKIY: Suffering risks.
KONSTANTIN KISIN: Suffering risks. Tell us more.
DR. ROMAN YAMPOLSKIY: So things could be so bad you wish you were dead.
KONSTANTIN KISIN: Why?
DR. ROMAN YAMPOLSKIY: Digital hell. You can create an environment where you are tortured but immortal, or maybe you are uploaded into a virtual environment. And what for? You're asking too many questions; a superintelligence can decide to do all sorts of things. Maybe it's carrying some malevolent payload. Maybe it's running experiments. You can ask, why would this happen? Well, the world, the simulation, has suffering in it, right? That's what every religion deals with: why did an all-good God create a world with pain and suffering? There are some answers to those questions, and it's not ruled out by what we coded into those systems.
KONSTANTIN KISIN: Nice to end on a positive. All right.
DR. ROMAN YAMPOLSKIY: At least we didn’t talk about it.
KONSTANTIN KISIN: Say again?
DR. ROMAN YAMPOLSKIY: At least we didn’t talk about it. That’s the question.