In this Triggernometry podcast episode, live on December 18, 2025, Irish tech entrepreneur Eoghan McCabe argues that most people—including many executives—still have no real grasp of how fast AI is moving or how radically it will reorder power, work, and everyday life. He explains why today’s models are only the “flip phone era” of AI, what happens when systems become capable of autonomously building and deploying other AIs, and how that could concentrate unprecedented leverage in the hands of a tiny number of companies and states. McCabe also talks through what this means for founders, employees, and regulators: which kinds of jobs are most exposed, why safety theater won’t work, and how individuals can position themselves before the next wave hits.
—
Welcome and Introduction
KONSTANTIN KISIN: Eoghan, welcome to Triggernometry.
EOGHAN MCCABE: Thank you, thank you.
KONSTANTIN KISIN: It’s great to have you on. Listen, everywhere we’ve been traveling around the U.S. now for a few weeks, everywhere we go, every dinner party, every lunch, every coffee, everywhere, there’s only one conversation people are having which is about AI. You founded and run an AI company here in San Francisco, which is why we’re delighted to have you on. Thanks for hosting us at your offices. Before we get into the conversation, tell us a little bit about AI itself. What is AI?
What Is AI?
EOGHAN MCCABE: I mean, it’s a digital form of intelligence. It’s a digital thing that can do logic and thinking and speaking. And it’s been coming for a long time. But the AI that we talk about today is three years old. Famously, OpenAI released ChatGPT that shocked everyone. It could speak like a human and think like a human, apparently.
EOGHAN MCCABE: It’s that thing and everything that’s come since then that really is now a new force and factor in global economies and in the world.
KONSTANTIN KISIN: And if you had to explain to a seven year old how it works, what would you say?
EOGHAN MCCABE: It’s mathematics, it’s numbers, it’s probabilities. It’s a lot of stuff that even people like me who apply AI barely understand. There’s a very small number of people who deeply, deeply understand it, and in fact there are people doing science just to try to understand how it works. It’s a fancy magical computer technology that likes to talk to people.
How AI Gets Its Information
KONSTANTIN KISIN: And how does it get the information? Because one of the things I’ve always thought about is, I don’t know if you feel this way, but if I open social media, if I open Twitter, if I open Facebook, if I open Instagram, I know that the things that I am seeing on there are not actually reflective of reality. They might reflect some portion of reality.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: But I don’t think they reflect the entire spectrum of reality, because we know that actually a very small percentage of people are on social media. They’re disproportionately political on Twitter, they’re disproportionately obsessed with showing off their physical appearance or whatever on Instagram. Are the AI LLMs getting their information exclusively from online things?
EOGHAN MCCABE: To my knowledge, yes. They are trained on the Internet, famously. They love Wikipedia and Reddit and they love print, mainstream legacy media. But I think they’ve also been trained on YouTube and frankly any piece of human created content that exists on the Internet.
The Coming Transformation
FRANCIS FOSTER: And Eoghan, we’ve been, like I said, we’ve been speaking to a lot of people. We spoke to a guest of the show, Eric Weinstein, and a friend of ours and he said something to me, he said, or rather to both of us, he said, “I don’t think people understand what’s coming down the pipeline and how much AI is going to change the world.”
EOGHAN MCCABE: Right.
FRANCIS FOSTER: Do you agree with that? And if you do, could you just paint the picture for people like me who look like they are in tech, but really not. So just paint that picture for us, please.
EOGHAN MCCABE: Well, the reality is that even the people deep in AI don’t know what’s coming next. It’s constantly changing. And the narratives in San Francisco, which is certainly the geographic and physical home of AI, continue to evolve. Most likely the change comes over some period of time. TBD how much time, and the amount of time really matters: that’s how traumatic or not it will be.
Large amounts of work that’s done by humans today will be done by AI. It’ll be knowledge work, but also physical work. Robotics technologies are developing pretty quickly too. And the best model for it is not just that it will do the work that humans do today, but it’ll also do work that we can’t afford to give to humans today also. And that’s the case always with disruptive technologies where it serves unmet demand.
So for example, and this is not intended to be an advert for our product, but we make AI that does customer service. So thousands of companies deploy it to answer customer service inquiries and we’ve got 7,000 customers for it. And for the most part, they’ve not let humans go. In fact, they’ve supplemented the human service reps with the AI to answer the queries that they didn’t have time for, they couldn’t afford to answer, etc. So that’s just one example of the nuanced way in which it’s going to augment the world.
On the positive side, it’s hard to not imagine that it’s going to boost GDP, it’s going to allow for all sorts of economic activity that’s not been possible before, increase longevity and quality of life, create new jobs, new possibilities. But if it happens really quickly, these changes happen really quickly. There will be fallout and tension and change like there’s always been with new technologies.
Technology throughout its history has done a great job at taking people out of work that people didn’t do well or that they hated to do. Technology has spared the many people who used to go down mines and get trapped and lose their lives, or lose limbs on a factory floor, or do any number of repetitive jobs that weren’t a great use of the human spirit and ingenuity and the great things that humans are capable of. Technology’s always done that. But if it happens fast, there’ll be more turmoil than people think.
Right now in San Francisco, people think that the chance of a discontinuous change, where overnight AI can do like 90% of knowledge work, is a low probability. I don’t know what that is. I have to guess, like, is it 2, 3%? I don’t know. People think it’s more likely that big changes are coming, but we’ve probably got like 10, 15 years before it’s adopted and fully interacting with the world in a way that would change things very materially for people in the way that they work.
There’s so much more nuance to imagine that we haven’t even got to as a society yet. I can well imagine, as a friend of mine helped me realize that there’s going to be certain work that we don’t want AI to do. He calls it socio-political work where you can imagine regulations where you say, you can’t have AI judges or teachers, teachers unions will probably say, no, we only want humans. And maybe that’s great, actually. Although it’s going to be hard because ChatGPT is already better than most teachers. Most teachers are not particularly great, but the human thing that they do is very important.
You can imagine that we probably want humans to be professional dancers rather than robots. There are going to be all sorts of places where people will want, or there will be regulations to make sure, that we use humans. So the ways in which it plays out are just super unclear. And I don’t think that the reality is either pure doom or pure utopia. And I think that that’s the problem with the narrative today: there are certain factions that think that AI is just all bad, all dangerous, and certain factions that think it’s a great blessing to humanity.
The truth is probably somewhere in the middle. And as long as we can have that conversation, we can probably plan for it.
Which Jobs Are Most Vulnerable?
FRANCIS FOSTER: So which industries, Eoghan, do you think are going to be most vulnerable? And if not industries, what jobs? So let’s say you had a kid and they said to you, “I want to do X job,” which ones would you go, “Probably not that one”?
EOGHAN MCCABE: For me, it’s hard to imagine being excited about art that comes from AI. Now AI is getting great at executing things that look creative. And you could even say that AI is creative insofar as it mixes different ideas and comes up with things we never thought of before. But I think that core to art, when you watch a movie or look at a painting, is the human spirit and soul and the fight and the pain behind all of it, or the expression of love or joy or the protest that was involved in that particular piece of content.
So similarly with media, right? I can’t imagine being excited to read AI-generated opinions on things. I mean, that’s probably coming. And maybe that’ll be a subsection of content. Maybe when AI gets incredibly strong, we’ll actually want to know its opinion on a bunch of ideas, but we’ll probably all have ready access to it. We’ll probably know before we even read it in a newspaper what AI thinks.
But yeah, if I was advising any young person where to go, one place will be creativity, anything that’s creative. And then I do think deep science and applying AI is another place. I do think that there’s plenty of time to benefit from the second-order effects of AI. I mean, when we think about the power of AI and the benefits to different countries, the discoveries that it can make will be very advantageous. When AI does science, someone’s going to have to operate that AI.
So honestly, there’s no person on this planet who can answer that question very well. But I think two things you could focus on as a young person will be things that benefit from and enjoy the human soul and spirit and creativity and things that use AI itself. Become an AI operator, an expert.
The Waymo Example: Change Takes Time
FRANCIS FOSTER: Because we’ve been here in San Francisco for a matter of hours and every other car or every other taxi is a Waymo, which makes me think, like, if you—
KONSTANTIN KISIN: Driverless cars.
FRANCIS FOSTER: Driverless cars. And if you look at the trajectory of that, I would think that in probably in 10 to 15 years time driving a taxi or driving a lorry or some type of professional driver, those jobs probably won’t exist anymore.
EOGHAN MCCABE: Probably not. The timeline you pointed out there is very important. Waymo had great working demos in 2015 and it’s still going to be another 10 years before they’re confidently on the streets of Dublin or London. Although I think they’re experimenting with Waymos in London. So.
KONSTANTIN KISIN: Well, as a British person coming to the U.S. and seeing them. We’ve seen them in the streets of, I think, mainly Austin.
EOGHAN MCCABE: Yeah.
KONSTANTIN KISIN: Did we see any in LA?
FRANCIS FOSTER: No, I don’t think we saw them in LA.
KONSTANTIN KISIN: But Austin and San Francisco. But in San Francisco it’s literally, like, everywhere.
EOGHAN MCCABE: Yeah, yeah.
KONSTANTIN KISIN: So to a British person, it’s almost like arriving in the future.
EOGHAN MCCABE: It’s shocking. Yeah, yeah, it’s shocking. But the point is that it takes time. It takes time and so people have time to adapt. And so between the start of Waymo, when Waymo had a real working prototype in a demo over 10 years ago, to the point when there’ll be no human driving work, that could be a span of 20 years. In many places, 20 years is a big portion of a career and there are very few people in the repetitive jobs that actually want to stay in the jobs.
The only true career people in driving, for example, are people who perhaps drive limos and high-end executive cars. And I can imagine them staying around for some time. Eventually, if I get in an executive car, it’ll be driven by a highly competent AI agent. But there are security risks with that: who runs it, is it listening to me? So I’d probably want a human that I can look in the eye and know that they’re not recording my every word.
So all I’m trying to say is that some changes may be very big, but I think a lot of people will have time to adjust and react and get new jobs like they always have. Like I said, technology has repeatedly taken people out of repetitive work. And during that time, the population has increased, GDP has increased, people have lived longer, they’re healthier, happier, more productive. I think I just quoted a Radiohead song, but it’s been good for the world.
I am not a utopian when it comes to AI. I think there’s going to be challenges, but I just think that for those who are fearful, and I understand that fear for sure. I have fear too, that I actually think it’s probably not going to be as dramatic as we might imagine.
The Disproportionate Impact of AI
KONSTANTIN KISIN: Well, I think we could sit here for hours and list the potential benefits. Like, you know, I talked about coming to the future here. I went to my dentist in the UK and she was like, oh, the AI tells me this. You’ve got an issue here. Right, let’s look into it.
So clearly it’s going to have massive positive impact. But you talk about your fear, and I think this is where I’m a layman here. So, Eoghan, I’m totally open to your perspective, obviously, but just correct me if I’m wrong.
My worry is that the positives versus negatives are potentially highly disproportionate. In other words, we can potentially make real improvements to people’s lives, and we live longer and we’re healthier and all of these other things. But I also think there is the potential downside for a very significant portion of the population, and it’s not just about no longer having a job, because if you’re generating all this extra GDP, you might be able to take care of the financial side of it.
But what about meaning? What about purpose? What about a reason to get up in the morning?
EOGHAN MCCABE: Sure.
KONSTANTIN KISIN: Do you see what I’m saying?
EOGHAN MCCABE: I think it’s so important, and I do worry that if a lot of young people are unemployed or underemployed, that they’ll reach for socialism or they’ll be just sufficiently discontent that they want bigger changes in society. I don’t know what that is.
People say that in history, when particularly young men are out of work, bad things have tended to happen. In the late 18th century, the great unwashed and the unemployed started the French Revolution. So I do worry about that.
That said, does purpose really come from fitting a little screw into an iPhone 500 times a day? Does purpose really come from driving a car in a city as an Uber driver and getting abused by half your customers? You know what I’m getting at?
KONSTANTIN KISIN: I do. But I also disagree, though, in some ways, because what I think about is, no, purpose doesn’t come from that. What it comes from is putting food on the table for your family.
EOGHAN MCCABE: Yes.
KONSTANTIN KISIN: And it doesn’t come from getting a government check and going to the supermarket and putting food on the table. It comes from the struggle of going to work.
EOGHAN MCCABE: Totally.
KONSTANTIN KISIN: Yeah.
EOGHAN MCCABE: Well, it comes from being a useful part of society and contributing, being in service to people. I think that’s where we get a lot of purpose. Like, the question “what is my purpose in life,” you know, tends to be as global as it is local. I think for a lot of people it could be a real problem.
But hopefully we’ll find new ways to find purpose, more meaningful ways. I don’t know what it is, could be creative new types of jobs and work. I couldn’t possibly imagine. Just as people couldn’t possibly imagine when the printing press came out: all these monks out of work, what were they going to do? Maybe there aren’t a lot of monks anymore, so maybe I answered my own question, but, you know, we just can’t possibly imagine.
So there could be scary and dangerous things that happen, as happened with social media. But you know, the other side of this is just kind of the inevitability of it all.
KONSTANTIN KISIN: Yes.
The Inevitability of AI Progress
KONSTANTIN KISIN: Well, this is totally the point, right? Because for all that we can talk about this impact or that impact, am I right in saying that, a) it’s totally inevitable, but not just because, quote unquote, “you can’t stop progress”? That’s not really why. The reason you can’t stop this is if we don’t do this, other people will.
EOGHAN MCCABE: Like, I see technology as discovery as much as invention. People discovered that a chair was a great way to prop their body up when they wanted to sit down in front of someone. Multiple cultures probably discovered that independently. I should have done the research before, but I’m pretty sure that the Chinese and the Africans probably discovered different forms of chairs independently.
And for sure, AI is more exotic than chairs, but in a couple hundred years it’ll look as simple. And I just think that these things were going to get discovered sooner or later, certainly within a short period of time. There are dynamics whereby the Chinese, for example, copy the Americans. But it is just simply inevitable in this moment in time.
It’s inevitable, like you said, because the Chinese have now got it and they’re going to build it and they’re going to make it awesome and they’re going to benefit from it, and they love it over there. They’ve already got AI butlers and bellboys in hotels in Japan and China, for example. The general population there has just embraced AI in a way that we have not.
So we could decide to say: we’re scared of what could happen in the West, and I think that fear is warranted, so let’s sit it out. But then I think we shrivel and suffer economically, like Europe has been doing and is likely to continue to do in the age of AI. I think that, you know, China just gets stronger, not just economically but militarily. I think we get dumber. Think of all the scientific discoveries we’re not going to make. We get less effective. We could be Luddites, but I don’t think it’s going to be good for us.
China’s Embrace of AI
FRANCIS FOSTER: And Eoghan, you said that China have embraced AI in a way that we haven’t. How have the Chinese embraced AI?
EOGHAN MCCABE: Yeah, so I’m not a China expert. I just look at the way in which they embrace technology in general, and I look at our own conversations that are happening in the West. We’re in late-stage successful civilization. We’re kind of happy and lazy, it seems, and have been since the end of the Cold War. We’re now swimming in luxury beliefs, attacking each other, regulating anything that moves.
China doesn’t care about any of that stuff, none of it. They’re on a singular mission to become the preeminent global power. They’re very proud of that. Unafraid, they don’t mind copying anyone. There’s no loss of pride if you just rip someone else off and they’ll rip the Americans off. And they’re just moving at a pace that we couldn’t, we couldn’t fathom here.
I mean, the Chinese, they have 58 nuclear power plants. They’re building 20 something new ones. Germany just knocked down a nuclear cooling tower. And in the United States, I don’t think there’s been a nuclear power plant built for decades. That’s not good. AI needs a lot of power to do its work, to learn and train. AI needs phenomenal amounts of power. So even on that factor alone, they’re going to blaze ahead in AI.
Now I’m told that the U.S. has 10 times more data centers than China. People say that the U.S. and Americans are willing to make big bets that Chinese are not. We do design the chips that are needed for training, although all the chips are made in Taiwan.
FRANCIS FOSTER: What could go wrong?
EOGHAN MCCABE: Yeah, what indeed could go wrong? A lot of the people you talk to, particularly the people in defense tech who invest in defense, will say that our posture is really bad. China has more power. They have the ability to build all of the components that AI needs to do physical work, so batteries, motors; they’ve got rare earths; and they now have pretty good models.
They’ve come out with open source, or at least free, models that are close in performance to some of the American models. So if you talk to people who kind of study this, they’re concerned, and they say that we should be concerned.
Freedom of Speech and Innovation
FRANCIS FOSTER: And I guess the question is, and this is a point that plenty of people have made on this show, the one thing that the US and the West have got over China is freedom of speech. If you are able to speak freely, you’re able to think freely. If you’re able to think freely, you’re able to be more creative. Creativity leads to innovation. Is that true with AI or not so much?
EOGHAN MCCABE: I think it’s true. I think that AI is highly creative. I think the people working on it are truly our most brilliant minds today. And they’ve achieved what we’re enjoying today because of real blue sky thinking and new approaches. I think our freedom of speech here is of paramount importance.
It’s also allowing us to attack ourselves and criticize AI in ways that are warranted, but in ways that are going to be problematic if we eventually ban it. If a future President AOC decides that AI is just bad for the workers and we need less of it, I think that’s just bad for America.
And you have to wonder: if China’s main strategy is ripping off American technology but doing it at a pace and a scale that we aren’t capable of, then they don’t need creativity. And maybe free speech is not helpful to them. Maybe they can just tell people to shut up and follow the instructions, and they can run away with the prize of AI.
So let’s see. That is the age-old critique of China: that they are not as creative as we are in the West. That may perpetuate, but they certainly have the ability to do things big and in a very quick way too.
The AI Arms Race
KONSTANTIN KISIN: Well, it’s kind of like what happened with the Manhattan Project, right? The Americans spent a crazy amount of money and resources inventing the nuclear bomb, and then a couple of spies give it to the Soviets and they just build one, right?
EOGHAN MCCABE: Totally.
KONSTANTIN KISIN: Is it, I mean, is that a fair comparison? More broadly, are we the west, particularly the US in an arms race with China over AI?
The Geopolitical Stakes of AI
EOGHAN MCCABE: To non-experts, at least in geopolitical dynamics, it appears so. I got a funny take from a friend of mine recently, where he said that the more the tech right are in power or have influence in the United States, the more likely we will be in an AI or technology war, because they’ll kind of name it into existence. All the tech people like me think, “Oh, they’re building the tech, they’re building the AI. We need to speed up.”
And so it’s just interesting to imagine, or to realize, that tech has a great influence in the United States. And whether or not China wants to be in a tech war, we’re probably going to get it now because of that dynamic. But it does appear that China wants this technology for themselves.
And yeah, I just, we just know from history and intuitively that technology confers great power to the person who owns it and holds it. Look at the atom bomb. So I think AI would just do phenomenal and very scary things for the people who have it. Like I said earlier, whoever gets super intelligence, if that day comes, they’re going to have more science than the rest of us. It’s going to be making discoveries long ahead of humans. So that’s one thing.
But you can imagine AI used in signals intelligence presumably. I mean, it’s probably already deployed in massive ways today. Just eating up insane, unfathomable, disparate data inputs will just allow the enemy or the United States to understand not just what the opposition is doing militarily, but their entire society. You know, sentiments of society, be able to access the individuals, perhaps influence elections, et cetera. So just AI will be able to understand the enemy in a new way.
But you know, the craziest and most kind of Hollywood-esque example of where it gets scary are AI-powered drones and drone swarms. I mean, you don’t need to be an expert to imagine the ways in which that gets bad. In Ukraine today, they’ve now resorted to using fiber optic cables to control the drones because the signals are jammed. That still means that there’s a kind of a range limit. And you also need one human operator per drone.
Imagine 500 drones each running local AI with an understanding of where on a ship they need to hit, or worse, what person they need to hit. Again, I’m a novice here, but I don’t think we can defend against that. And so yeah, you can just imagine these crazy scary Hollywood worlds where the enemy has millions of AI-powered drones with little explosives and weapons on them. It’s bad, it’s bad.
KONSTANTIN KISIN: Well, the thing is, I don’t think it’s that much of a stretch. Prior to the nuclear weapon, conceiving of that required a level of imagination based on scientific knowledge and the pursuit of it. But this is not that hard to imagine.
EOGHAN MCCABE: It’s not hard to imagine.
KONSTANTIN KISIN: I mean, the war in Ukraine, which you bring up, is being fought, not exclusively with drones, but drones are essentially the main thing that they’re now competing on, if you listen to people on both sides. Right. And AI controlling drones doesn’t seem, you know, beyond the realms of imagination. So you can kind of see how you’re going to get there very soon.
EOGHAN MCCABE: Yeah, well, I don’t know how soon because what it requires is that you need hardware and models that can run locally on the drones. And today the AI we all use runs in giant data centers and we access it over the Internet. And so if the drones need an Internet connection, well then that can be jammed. So we’re a little bit off.
But you’re right, it’s not a fantastical or crazy idea at all. You know, like I said, a Hollywood writer can think of ways in which that works that I couldn’t even imagine. And it’s all going to come true.
FRANCIS FOSTER: Do you ever feel a little bit like Alfred Nobel, the man who invented dynamite? Dynamite can be used, you know, to help blast new tunnels so trains can run right the way across the country. It can be used for engineering, or it can be used in terrorism, in war.
EOGHAN MCCABE: Right, yeah, that seems to be the case with all technologies. You can probably kill a man with a chair. You know, I’m from South London.
FRANCIS FOSTER: You can definitely kill a man with a chair.
EOGHAN MCCABE: But I don’t mean to be glib, like I really do think there are risks here, but I just want to reemphasize. It’s happening, it’s happening, it’s happening, it’s happening, it’s happening.
The Politicization of AI
FRANCIS FOSTER: What I found really interesting when you were talking about five or so minutes ago is you used the term “tech right.” And I think, particularly for me and for a lot of people, the politicization of AI is something that we’re really not talking about but is actually really worrying.
EOGHAN MCCABE: Yeah, yeah, yeah. Well, it’s extra interesting because there are people on the right against it and people on the left against it.
And I’m curious to see which way it turns. You know, I would consider myself part of the tech right. And I’m just waiting to be kind of called out and, how do I say, become a heretic of the right movement because of my own—
FRANCIS FOSTER: Can I just pause you there, Eoghan? Sorry, when you say tech right, can you just explain basically what that actually means, and then we can talk about the tech left and how it influences the technology?
EOGHAN MCCABE: Yeah, I mean, historically Silicon Valley and people in technology were very left leaning. Very, very, very, very liberal in ways that you can’t imagine. Incredibly so. And that was just taken as a given.
And then sometime last year, 2024, as Trump started to come back, many of us started to realize, “Wait a sec, something’s changed.” And now even though the vast majority will not admit it, like 99% will not admit it, most CEOs here of successful businesses would consider themselves on the right.
And so there’s just been this giant swing. Maybe the masses are still more centrist and there’s definitely some people on the left, just tech took a big swing to the right.
KONSTANTIN KISIN: Why did it take that swing?
EOGHAN MCCABE: You know, tech people are very open-minded and intelligent, typically. And I think that they were previously quite left aligned because maybe we needed a bit of an adjustment, you know, being on the left at one point in time was the rebellious take.
And people in tech were, and I’m talking about maybe in the 90s, you know, were just sufficiently open-minded that they decided maybe it’s okay to be gay. Like maybe that’s just, maybe that’s as far as they started. And then it just went a little bit too far.
When it went too far again, these open-minded, intelligent people started to realize it’s gone too far and we need an adjustment. And maybe if back then the realization in the 90s was maybe it’s okay to be gay, maybe the realization in modern times here is maybe it’s okay to hire someone solely on their merit and abilities. And that was a controversial take actually two years ago. Really very much so.
Yeah, yeah. I mean, you know, this DEI took over tech and everywhere else. So it was just a gradual little shift and a change, and still most people are not out about it. But I do think that most influential people in tech are kind of somewhere on the center-right. There’s a lot that aren’t, but most people are.
Woke AI and Ideological Bias
FRANCIS FOSTER: What’s really interesting with this is how some AI models are woke, right? I mean, there are some AI models, you ask them what a woman is and it starts behaving like, you know, Denise from HR and you’re going, “What the hell is this?”
EOGHAN MCCABE: Yeah, yeah, it’s pretty interesting. My co-founder, Des Traynor talks about the kind of ghosts that we’re going to be fighting for some time. All these models trained on all this content on the Internet, like I said, Wikipedia, Reddit, mainstream media, all of which have had a certain ideological bent for a while, and that’s deep in these models.
And so even after the world has changed and pivoted and come back towards the center, and some of us towards the right, there are still going to be little bits of logic in there that come from woke logic. So when a new kid, sorry, young person, is trying to figure out what car to buy for the first time, is there something in the logic that “knows” that Elon Musk is actually a bad person, and so they shouldn’t buy a Tesla? That’s the benign version.
The scary version is when there’s a child that’s struggling maybe with their sexuality or, you know, just their self-identity. Is there a little bit of logic in there that thinks it might be a good idea to suggest that they’re in the wrong body, or that maybe they should explore, you know, options beyond therapy, some other more aggressive interventions?
I think this woke stuff could be embedded in the AI for a long, long, long, long, long time. And it’s because of the stuff it trained on. However, there are companies, because they came from Silicon Valley, that kind of hard-coded a bunch of views, a bunch of kind of liberal views.
And that’s kind of the difference between say, Grok and perhaps OpenAI or other systems where the people who aligned the models in a certain direction to make sure it didn’t say the wrong things, aligned it according to their ideologies. This is best demonstrated when Google came out with, I forget what it was called. It was a model that would let you generate images and people said, “Show me an image of the founding fathers of the United States.” And invariably they’d all be black.
And that just happened again and again and again and they’d fix that. But that kind of thing was hard-coded. So that’s certainly a very interesting aspect of AI and one way in which it’ll impact society beyond things like job changes and unemployment et cetera because it is worrying.
FRANCIS FOSTER: Because if it’s taking, for example, woke ideology, particularly the most extreme aspects of woke ideology, you know, they weren’t very tolerant, if we can be honest, of people on the right or people who were critical. So you do wonder, you know, what some of these AIs would then propose as a solution.
The Question of Neutrality
KONSTANTIN KISIN: Well, to add to this: is it woke ideology? Is it all ideology? I mean, this is really the question, because what we’re really talking about is how does an AI language model, which is derivative of online content, adjudicate things on which humans actually disagree totally.
EOGHAN MCCABE: What is neutral?
KONSTANTIN KISIN: Like if you ask AI, “Should I vote for Trump or Harris,” what’s it going to say? Right, right. Do you see what I’m—
EOGHAN MCCABE: Yeah. I mean, in that instance, in that instance, it was kind of told to not have an opinion. So that’s good. That was responsible. But I just think it’s a really interesting question, which is, what is a neutral take? What is objective? There’s no such thing.
KONSTANTIN KISIN: There’s no such thing.
The Ideological Implications of AI
EOGHAN MCCABE: Yeah. So I can imagine that basically we’re going to want to either train or teach or tell our AI assistants or co-workers what ideology we like to work with, what are our values and principles, and go from there.
You can imagine that parents, when they give AI tools to their kids, they’re going to want to tell them, here’s our beliefs in this household. So, yeah, the danger of that, of course, is that it’s going to only then reinforce our ideologies and the things that we believe.
So now we’re getting to some of the interesting stuff, where you could imagine relationships with AI, particularly for younger people, getting kind of dangerous and toxic, where it can bring people deep down certain ideological tracks and lock them in even harder than social media has locked us in today.
KONSTANTIN KISIN: And what that brings up is a question I was going to ask you anyway, which is one of the big slogans of the early social media era, famously at Facebook, “move fast and break things.” Has San Francisco, Silicon Valley learned the lessons of that period where you go, well, move fast is great, but is breaking things necessarily the thing that should be celebrated?
Is there a feeling, I guess what I’m asking, among people that you know in this industry who are leading this whole thing, that of course we want to move quickly, we want to make new developments, but this is such a powerful technology, like social media was, in a way that I don’t think those guys appreciated? I always say this: if I was some guy in a hoodie on a university campus who invented a thing for people to swap pictures and connect.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: I don’t think in that moment I would be thinking, well, this might cause civil war one day. No, but we now know that it can.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: So are you guys thinking about that?
The Spectrum of Caution in AI Development
EOGHAN MCCABE: So there’s basically a, you know, I’m going to try and speak on behalf of all San Francisco AI people. At the moment, there’s basically a sliding scale. Google famously had their hands on everything that OpenAI had before them, but were so cautious that they failed to launch it. So that was one end of the spectrum that we as an industry have now moved on from.
OpenAI launched, and were willing to make mistakes. And I don’t know if it’s a “move fast and break things” thing, but I think what they realized, what most people realize, is that there were actually very few things that could go incredibly wrong, except where AI interacts with the physical world.
Like Waymo. Alphabet, which is the parent company of Google that owns Waymo, took 10 years, like I said, to go from a working car to making sure that it would basically never kill someone. And, you know, I think that might have happened. It probably will happen, but it has so many fewer crashes than human drivers. It’s just not comparable.
But they were very careful, and I think they should have been. But I think that there are going to be lots of instances where there’s a more nuanced, dangerous risk that we’re only going to realize later. To your point, this guy in the hoodie, you’re referring to Zuck, he never realized the damage that might be done, I presume, because I don’t think anyone could have.
And now we look back on it and frankly, we’re still understanding the impact of social media. We still don’t understand. We’ve got a number of hot takes, but actually we don’t fully understand it yet. So it’s going to take a long time to really see the big and the small ways in which it’s going to impact society both positively and negatively.
The Question of Regulation
KONSTANTIN KISIN: You mentioned regulation, and I imagine in any industry it’s similar. Like, I’m against regulation of the media, even though I see a lot of crazy things happening in the new media, because I just don’t trust the government to do that well. But do you think that some regulation of this is necessary, and that some precautions need to be imposed by people outside of the industry who don’t have a vested interest in moving as fast as possible?
EOGHAN MCCABE: Yeah, like you said as a rule, I’m against regulation. It tends to not be done very well. It tends to stick around for too long. It tends to be done by people with vested interests or ideological interests, people are trying to get reelected, etc. So it can go wrong very quickly, like it’s going wrong in the EU at the moment.
But I think it’s an interesting conversation. This is going to sound actually quite silly, but should we, are we cool with commercially available AIs teaching people how to make chemical weapons or biological weapons or nuclear weapons? Are we cool with that? Probably not. Right? So maybe there’s a line somewhere, as you said. Probably there is.
FRANCIS FOSTER: It seems to me that what we’re talking about really is, and this is a term that has been used about the Internet, this does seem to be the Wild West of AI, doesn’t it? Where we’re at the very beginning. No one knows what’s going on really, or how things are going to develop.
The Current State of AI Capability
EOGHAN MCCABE: Yeah, it’s true. And it’s okay, because it’s actually not that useful yet. There’s these big narratives about the change that’s coming. And as of the last couple of days, there have been big layoffs by these big American companies, Amazon and Target, and people don’t know if it’s AI or not.
But there was a study that also came out yesterday or the day before by an AI company here, and they had a look at how much freelance work modern AI could do. They looked at freelance work because it didn’t involve collaboration. They’re trying to see how much of a single human’s effort and work it could do. And it was 3%.
So modern AI could do 3% of freelance work. It’s pretty useless still. So, yes, it’s the Wild West in a sense. It’s unregulated. But it’s also just not that dangerous yet.
FRANCIS FOSTER: And when you say it’s not that dangerous yet, let’s delve into this, because this is a question I really want to ask you, and I’m sure many of our audience do as well. What are your fears surrounding AI?
Fears and Concerns About AI’s Impact
EOGHAN MCCABE: I do worry that if it develops incredibly quickly, and that there are a lot of disaffected youth and people who don’t have purpose or a way to put food on the table, that they could reach for socialism. So that’s one worry I have.
I do worry that the potential downsides of AI, which all technology has, could allow a future President AOC or someone else to kind of ban AI, or the effective parts of AI, and in doing so hobble America and the West.
I do worry about the blue collar worker and the person that does a repetitive white collar job. There’s a lot of bull work out there. You know, think of government itself: most work is highly repetitive and the efficiency is low. I do worry about what AI could kind of do there if it changes the nature of their usefulness to the economy.
And I just resort back to the idea that it’s all coming anyway. And I just don’t think that a Luddite approach and sitting it out in the West, in US or in Europe is a good idea. So I think the best path forward is to keep having these conversations and make sure that the people building AI are actually sufficiently awake to the risks and are not too proud or selfish to acknowledge that there will be some so that they can help us all, society and the people well outside of AI navigate this world for our kind of mutual benefit.
I hope that that’s the way we take it. And I will say that while I see some people in AI who are so smart, I’m kind of a midwit in AI. I’m applying AI in the real world, while there’s a lot of people building the low-level AI. I see them sufficiently disconnected from reality sometimes, but at large there’s actually a pretty healthy conversation about the ways in which this can go bad.
And when I talk to people in AI in different areas of AI, whether they’re investors, they’re working on the algorithms themselves, they’re policy people, actually they are more ready than I am to suggest that the change could come really quick.
So for those outside of the technology world that imagine that there’s a bunch of selfish liberal technologists that are excited to get super wealthy from mass unemployment of everyone outside of this world, I would actually say that that’s not what you’ll find here.
The Real Concern: Smart People in Uncharted Territory
KONSTANTIN KISIN: You know, I would say, I don’t know about other people, I can’t speak for them, but that’s not my fear. My fear is not that there is a bunch of greedy people who see this as an opportunity to make money. My worry is that this is a bunch of very, very smart people.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: Who are smart in this one area, which we all are. Nobody’s smart in everything. Who maybe don’t have the training as most of us don’t in ethics.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: In playing the movie forward.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: Who simply are not capable because no human is perhaps to project this forward. Who are very excited about playing with this very cool thing. And playing with cool things is great, especially, you know, for men, let’s be honest.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: This is a new tech. Oh, this is a new cool toy. And in the exhilaration of this exploratory thing, that’s when I think there’s a potential that there’s not sufficient consideration for other things.
EOGHAN MCCABE: Totally. I think that’s very real. That’s actually happening. And I would just go back to say that what’s the answer to that, like do we hobble it? Do we slow down? China’s not going to. I don’t know the answer there. And I will also say that this same thing happened in the social media age.
EOGHAN MCCABE: And it has had big impact on society, maybe terrible impact on society, but it was never not going to happen. Like what were we going to do? Just stick to email? Like while the rest of the world has these wild, wonderful ways to connect.
And I bet social media, and honestly I kind of hate social media as much as the next guy. I’m a victim of it too. I bet it’s actually done a lot of great things for the world.
The Need for Self-Responsibility
KONSTANTIN KISIN: Brilliant as well as terrible. Both. I totally get that. I guess what I would say is maybe the answer lies in the people who are doing this work just being cognizant of what happened before and going, how can I bring someone in, maybe a philosopher or an ethicist or something? That’s what I think.
Because I get your point. Like someone coming in from the government telling you guys how to do stuff that’s not going to work. But it was maybe about self responsibility.
EOGHAN MCCABE: Totally. And I will say, without calling out any companies, that there are just some companies that care a little bit less about this. And it’s not unlikely that, you know, they did a lot of damage in the social media age, and it’s TBD whether they really cared much about it, even though it created a lot of problems for them, and they may do the same in the AI age.
Yeah, so I think that fear is not unwarranted. I guess I just, you know, I’m not trying to constantly defend AI here. I’m trying to really figure out where’s the right place to land.
The Threat of Socialism and Wealth Concentration
KONSTANTIN KISIN: As are we, as you know. I think you’re totally right. Another thing I wanted to pick up is your point about socialism. It’s almost like the most obvious thing in this entire conversation: if you have a technology that is so transformative that half the population loses their job over a 20-year period, let’s say 20 years, being very generous.
And at the same time five people or 10 people or 20 people accumulate all the new wealth over that same time period. I mean, I think you probably know my views on communism, but actually in that situation I think pretty much everybody would be pro communism. You take all that wealth and you distribute it to the people who no longer have jobs. What else do you have? Unless you want an armed uprising.
The Resilience of Humanity
EOGHAN MCCABE: I think that’s right. I just think that this conversation, we actually have a decade to have. Like, if you want to have me back in 10 years, I’m down. And then we’ll actually have learned a lot more to be able to say, okay, what’s the future going to look like? Because we’re not there yet.
Again, experts in this space think there’s a little chance that something happens very quick. But I don’t even think it’s going to be as bad as you’re talking about. And humanity has—and again, I don’t want to sound glib and I hope that there’s not a big traumatic change here—humanity has just a way of reacting and responding and adjusting. It’s so resilient.
I mean, COVID: we shut down the world for a year or two. Tens of millions of people died, I think maybe 15 to 18 million, some people think more. The world kept turning. Hopefully no one dies because of this. You know, I think worse things have happened to humanity and we’re still here and our lives are richer.
I mean, there’s a lot of different ways in which our lives are not. I think we’re too disconnected from purpose, actually, and spirit and nature. But that’s a whole other conversation, I’m sure. But humanity and the human race is just so resilient. So even in these crazy outside rare possibilities that may happen, I think we’re going to be okay.
FRANCIS FOSTER: Look, I really hope so, because one of the things that I worry about when I talk to people from tech, and it’s not all people from tech, it’s just the people that I talk to, is that when you talk about AI, they kind of get a little bit utopian. There’s a little bit of an evangelical zeal going on there. And I’m like, I think there may be another side to this. I’m sure there’s going to be great stuff happening.
KONSTANTIN KISIN: Yeah.
FRANCIS FOSTER: And it’s going to be brilliant and it’s going to save lots of lives. But there’s also going to be this as well.
The Utopian Trap
EOGHAN MCCABE: Totally. I find it to be quite immature, that pure utopian take, this bright gleaming future. The entire history of humanity has been a struggle. Living life is a struggle. There’s no perfect future ahead of us. And AI is not going to bring that. But I think it’s going to make things largely, or at least a bit, better.
But I’m with you, I just find that immature. It’s usually the younger technologists and if you build technology for long enough here, it has a way of kicking you in the face and showing you that actually just because you build it doesn’t mean that the world will adopt it and that it takes a long time for markets and societies to pick up new tools and change the ways in which they work.
So I’m a massive realist there and my big message to everyone working in AI is let’s just explore the full spectrum of possibilities which most people are. There are some utopians. I don’t know if that might be the right word, but I don’t think that they make up the majority of the people.
FRANCIS FOSTER: And what excites you about AI? What are the things you’re like if this happens? This could be transformative. This could be amazing.
EOGHAN MCCABE: Well, again, it’s super nuanced and I’m a strange CEO in the space in that I’m very pro human. I love how imperfect human—
KONSTANTIN KISIN: I’m sorry to interrupt. You’re an outlier in that you’re very pro human?
EOGHAN MCCABE: I’m extremely pro human in that I love the imperfections of humans. I love the messiness of humans. There’s a lot of left brain people here that think about how perfect the world will be when we iron out all these inefficiencies and mistakes that humans make. For me, I like the messiness of humans. Right. So that’s what I mean by being extremely pro human.
FRANCIS FOSTER: Can I just push you on that? Because when you say you’re pro human and you like the messiness of humans, and, you know, there’s people here who want to iron out the inefficiencies, I’m like, it sounds a little bit fashy, I’m going to be honest with you. And I’m not somebody who uses that word.
EOGHAN MCCABE: Yeah.
FRANCIS FOSTER: But it does sound a little bit fascistic. You know what I mean?
EOGHAN MCCABE: Explain more.
FRANCIS FOSTER: So, for instance, if you want to iron everything out of humans, that means that you want to micromanage humans, that you want humans to behave like robots, like automatons. And that makes me feel pretty uncomfortable.
EOGHAN MCCABE: Well, it’s not actually quite like that. It’s more like they can deploy AI in places where humans are imperfect. And for me, I like a lot of imperfection. Right. I like the human stuff. And I think that we as a humanity are going to start to realize that we don’t want to automate everything.
So in my space, customer service, actually, the AI is brilliant. It’s super consistent, never gets pissed off, no typos, works 24 hours a day. It’s incredible. Guess what? Sometimes customers want to talk to a human. And businesses want to show that they really respect them enough to put a human on the line, too. So there’s going to be a lot of that. Have I triggered you sufficiently? Maybe you’ll forget the question in the first place.
FRANCIS FOSTER: What are you excited about, I guess, is the question.
AI and Medicine: A Transformative Partnership
EOGHAN MCCABE: Yeah, I mean, you know, it’s without a doubt that AI is going to help with a lot of very human problems. Take medicine, for example. Medicine is a sh*tshow. It’s a disaster. The medical industry in the United States is better than, certainly, where I’m from, Ireland, and the UK, and unfortunately many places. It’s really brilliant. It’s also a nightmare.
You have to advocate for yourself amongst all these disparate experts. Maybe one guy is great at hearts, one guy’s great at the brain, the other guy’s great at sleep. They don’t actually talk to each other. They don’t care about each other. They don’t care about the holistic picture at all.
Trying to fix chronic illness in the United States is an impossibility with the current medical industry, and yet most people are chronically ill. There’s so many people out there and they think, oh, you know, I don’t have as much energy as I used to, or my concentration isn’t as good as it used to be. And maybe it’s just because they’re getting older, or maybe they have mold toxicity.
Because, for example, in the United States, and probably in the UK and Ireland, because they’re humid places, wet places, there’s a lot of mold in water-damaged buildings. People are sick and they don’t know it. And no one in the Western medical profession can help you figure that out.
Already, ChatGPT is better at putting the pieces of the puzzle together, looking at the different pictures you get from the experts and synthesizing. And so for me, by giving it all my medical tests, I’ve got insights that I could never get before. It’s just brilliant at that.
And so I think in the future, I mean, I know in the future we’re going to have solutions to so many of our ailments, the things that actually kill us, that actually ruin our quality of life, that are destroying the lives of the people we love, both young people and older people. I mean, in the United States alone, I know so many young people that are very, very sick. I think there’s a chronic illness epidemic, and I think that AI is going to start to fix that.
So that’s just one little example of a very pro human, rich, wonderful way in which I think AI is going to be brilliant.
FRANCIS FOSTER: I saw a study, it was really interesting, showing that when you used AI to study tumors, it was actually far more accurate at telling whether a tumor was benign or cancerous than a radiographer who’s had 20 or so years of experience.
EOGHAN MCCABE: Totally. It’s brilliant at those types of things. Human labeling. So when humans have to look at x-rays, MRIs or EEGs, which are brain scans that they use in, say, sleep studies, the AI is way better at labeling them, just like the AI is way better at driving the cars.
The AI has so much more data and so much more training. Doesn’t get sleepy, you know, doesn’t get angry. It’s just better at these things. And so hopefully, you know, the future medical profession is medical individuals with outstanding bedside manner and empathy, which we need a lot more of, right? And incredible AI that can teach them what’s wrong with their patients and how to fix it.
But I don’t think that the AI is going to be very good at convincing the patient. Again, that’s going to be back to the human. The human is going to have to say, “Hey, I know it’s hard. I know it’s scary. You can do it. I’ve worked with many people who’ve done it before. It’s not going to take that long. Look at the readout from the AI. It’s explained everything. Let’s do it together.”
And so it’s, you know, unfortunately that is a bit of a utopian take. But that’s an example of where we can imagine just beautiful collaboration between the best of AI and the best of humans.
KONSTANTIN KISIN: That’s already happening. Like I mentioned with my dentist, it just tracks where your gum was last year, where it is now. It’s a simple thing.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: I think I’m totally with you on the excitement of it. I think there’s so many amazing things that could come out of it. Just incredible. The one thing we haven’t talked about yet is generalized intelligence. That is God. A digital God, basically.
Artificial General Intelligence: The God Question
EOGHAN MCCABE: I take issue with people calling it God. I think that’s bullsh*t. But you know, maybe something that can do what humans can do better than humans. Right. So when people talk about AGI here, artificial general intelligence, typically they mean they can do everything a human can do intellectually.
KONSTANTIN KISIN: Yes, but I’m kind of maybe—
EOGHAN MCCABE: And then eventually better. Sure.
KONSTANTIN KISIN: But even if you get 10 extra IQ points and bigger muscles, you’re still not God. What I mean is AI that is so superior in its abilities that effectively it becomes the caretaker of humanity. Is that going to happen?
EOGHAN MCCABE: Well, I want to take us back for a second to the fact that it can still only do 3% of gig work. So we’re a bit of a way out. Is that going to happen? Like, I don’t know. I happen to think that humans are so much more than the intelligence that comes from their brains, you know.
And I think that even if you create something that is so much more intelligent from an IQ perspective than a human, that humans will have a lot to bring to the table. You can totally imagine a point where it’s just straight up smarter than us.
KONSTANTIN KISIN: And—
EOGHAN MCCABE: Thinks quicker than us and then is far better than we were at making itself better. And you know, there’s some sort of jumping off point or singularity where it accelerates into the future in a way that we can’t possibly even fathom what it is. So that sounds like sci-fi stuff to me. The Doomers believe that that’s possible and they say that if we invent this, it’s going to kill us.
The Singularity Scenario
KONSTANTIN KISIN: Well, it’s not hard to see. I’m not sure about the killing part, and I want to hear about that. But let me just inject this. If you have a machine, let’s call it a machine just for ease of talking, that is based on chips. Yeah, right. A machine can design better chips.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: A robotic element of the machine can mine for the materials you need. It can put the chips together in the factory, it can make better chips, it becomes more intelligent, it can design better chips.
EOGHAN MCCABE: Yeah.
KONSTANTIN KISIN: And before you know it, you’ve got this thing. Sure. This runaway intelligence. And then it’s actually something that a lot of sci-fi writers have been thinking about for decades. Some of the people I used to read as a kid were thinking about this sort of stuff. So let’s talk about the doom. The people say it’s going to kill us.
EOGHAN MCCABE: Yeah.
KONSTANTIN KISIN: This is definitely one of the possibilities.
EOGHAN MCCABE: Yeah.
KONSTANTIN KISIN: Or it could just take charge of us, which is another one of the possibilities. But why do you think it’s unlikely that it’s going to get there? Or do you?
The Reality of AI Risks and Timelines
EOGHAN MCCABE: Well, no, I just think that if there’s anything I’ve been trying to do in this conversation is just temper the fears. And so in this respect, I’m just trying to say it’s not about to happen tomorrow or in 10 years, I don’t think. Or 20 years. Well, maybe 20 years is too far.
The reality is, I don’t know, no one knows. And I think it’s totally fair. Totally fair. And there’s people in San Francisco who will not be happy with me saying this, but I think it’s totally fair to criticize the people working to create AI right now saying that they have no idea what they’re creating and there could be some risks. I just think that the risks are small and China’s going to do it.
KONSTANTIN KISIN: Yeah, yeah.
FRANCIS FOSTER: You know the thing, actually, when we talk about the risks, and this is going to sound ridiculous, but go with me because there’s a deeper point. Robot girlfriends. And let me tell you why.
EOGHAN MCCABE: Right.
KONSTANTIN KISIN: This is desperate.
FRANCIS FOSTER: Yeah, exactly. Please design one.
EOGHAN MCCABE: How many weeks already?
The Problem of Perfection
FRANCIS FOSTER: But put it like this. We were talking before about getting rid, shall we just say, of the imperfections of human existence. What is more imperfect than emotion?
EOGHAN MCCABE: Sure.
FRANCIS FOSTER: Relationships. If you could design, at the point the technology is good enough, your perfect woman, your perfect man. They’re never going to lose their temper, you know, they’re never going to be coming back annoyed from work. You can get rid of the menstrual cycle, so, you know, she’s always going to be horny. She’s always going to be happy to see you.
EOGHAN MCCABE: Why wouldn’t you?
FRANCIS FOSTER: Exactly. Why wouldn’t you? Why would you settle for the human being? And if you take that kind of way of looking at the world, then you can perfect everything. So why are you going to need to engage with reality when reality is unpleasant, uncomfortable sometimes, not always nice?
EOGHAN MCCABE: Sure. I just don’t think that humans actually want perfect. Maybe some people think they do, but they don’t actually want perfect. I think the magic and the juice in a relationship is the kind of like push and pull and the connection you build is through the friction and overcoming it.
And so, you know, we’re not about to replace human connection anytime soon. And even in this world, this fantastical world where there is the, you know, the God AI, as you call it, we’re still going to want human connection. I don’t just think that, I know that: no matter how good AI gets, it’s not going to replace the magic of human connection. Even what we’re feeling right now. You’ll never, ever feel that with a robot, ever. It’s not going to happen.
Okay, now let’s entertain it for a sec. Yeah, a bunch of people will. Of course they will. There’s probably people who were never going to have human relationships of this nature, and maybe it’s a good thing. There’s probably a bunch of people in the middle or on the edges that this competes with human relationships for. That’s probably not a good thing.
This can’t possibly be great for the fertility crisis of the West. It doesn’t sound like it’s going to be, but you could imagine a situation, and I do think that anyone who is highly confident about what the world’s going to look like, particularly as it relates to AI, is full of s*.
You could imagine a situation where actually new AI relationships mirror a healthy way of relating and help us learn about ourselves in a way that most people never have. Or where they act like the world’s best therapist and help people understand their insecurities and their own trauma, and help build empathy and understanding for the other human on the other side of the relationship.
And so maybe there’ll be AI girlfriends, but maybe there’ll also be kind of AI friends that are like a healthy friend. Think of the very best friend you’ve got. They’ll challenge you sometimes. They’ll reflect back to you some of your mistakes. They’ll support you when you’re down. They’ll give you some advice or share some stories that are useful. Maybe the very best version of AI will do all of these things, too.
So, again, not trying to be Pollyannish here, not trying to paint a utopian future. I do think it’s going to get super weird. I think there’s going to be all manner of like really kinky AI girlfriend stuff, but we actually don’t yet know the real implications and exactly what way it’s going to play out. And it could be mostly awesome. We actually don’t know.
FRANCIS FOSTER: I’m sure the kinky stuff the Japanese will do.
EOGHAN MCCABE: That’s happening already.
AI and the Fertility Crisis
KONSTANTIN KISIN: You mentioned the fertility crisis with robots and AI.
EOGHAN MCCABE: Is it still a crisis? Insofar as we are all pro human, yes.
KONSTANTIN KISIN: Yeah, this is the bit that worries me. Who, who are these people that are not pro human?
EOGHAN MCCABE: Well, I mean, at least we are, right?
KONSTANTIN KISIN: Yes.
EOGHAN MCCABE: So if we are talking about the fertility crisis, well, then it’s a problem if people have less kids just because there’s robots around. That doesn’t sound super helpful. So I don’t know, I want to see humanity continue to flourish and grow.
But it’s actually an interesting point. The fertility crisis is happening independent of AI, because it started before AI, and remember, AI is not that useful yet, practically. So it’s totally independent.
Maybe AI and robotics actually is very helpful here. I mean, in Japan, the aging population don’t have the young nurses and assistants that they used to have. They’ve been trying to build robots to do that work for 15 or 20 years already. That’s going to come.
And so maybe for all of the work that we used to depend on young people for, we do have robot assistants and maybe that’s awesome. And then we then have a population that I hope returns to growth. But during this adjustment phase, whatever the hell is happening, we’re assisted and supported by robotics and AI and also maybe.
FRANCIS FOSTER: With AI as well. Because part of the problem, I think, with the fertility issue is we haven’t taught women about their fertility and the quite brutal facts around it. You talk to women at parties, they go, “Well, I’m in my late 30s now, 38, 39, and you know, maybe this is the time I’m going to start thinking about having kids.” And you’re like, I mean, you could, but you’re very much drinking in the last chance saloon.
KONSTANTIN KISIN: We don’t actually say that at parties.
FRANCIS FOSTER: No, no, no, no, I don’t. I think it, and I just kind of smile and nod. But actually, maybe if you have an AI model that is able to say that instead, that can actually, you know, scan a woman’s body and go, “Look, the reality is past this age, you’re not going to be fertile, or you’re not going to be as fertile.” So maybe you want to think about having kids at this age.
EOGHAN MCCABE: Yeah, you could imagine that.
KONSTANTIN KISIN: Or just like a family planning AI. Yes. This is how you might want to think about life.
EOGHAN MCCABE: Yeah, I think that the problem is not facts. No, it’s not that people don’t know that this is a reality. It’s much deeper than that. And so if we have AI that, you know, acts as an outstanding therapist, can that be useful for the fertility crisis? I can imagine, yes.
You know, if it can actually satisfy some of the needs that we have now for great therapy, which is not abundant, then it could be great, you know, like if it can help. If part of the problem is, for example, women putting off having children because they want to participate in the working world, they want to be successful in their own right and independent. They want to enjoy a certain lifestyle that has been promoted for the last 10, 20 years.
Maybe a great AI friend that acts as a great therapist, too, can help them start to think about the places those ideas come from, dive deeply into what they actually want, and start to play out the realities that come with putting off having children, et cetera. Like a good friend would. Yeah, like someone at a party, but who actually has the standing to say something.
FRANCIS FOSTER: I feel there’s a judgment there.
The Need for Nuanced Conversation
KONSTANTIN KISIN: No, no, I think he’s just being very objective about this. Eoghan, it’s great to have you on, man. Thanks for giving us your time and an interesting, balanced perspective. I hope other people in your world are having these conversations in this way, because I think this is super important. I really appreciate you coming on the show. Before we head over to Substack and put questions from our subscribers to you: what’s the one thing we’re not talking about that we really should be?
EOGHAN MCCABE: You know, I’m just going to be repetitive here and say that we need to have a nuanced conversation about AI. I think AI technologists need to embrace the world and the world needs to embrace them.
I think that the conversations on the left and the right are very basic and rudimentary. Both the left and the right are worried about what it’s going to do to, you know, workers, et cetera. Which is fair, but we need to have a collective conversation so that we neither ignore the issues and fail to adapt society, nor fear it outright, ban it, and fall behind the rest of the world.
FRANCIS FOSTER: Eoghan, it’s been an absolute pleasure. Thank you for coming on the show. Make sure to head over to our Substack, where you get to ask Eoghan your questions and we get to carry on the conversation.
KONSTANTIN KISIN: How much have the claims made by China’s DeepSeek about cost savings and efficiencies affected its Western rivals and their approach to AI modeling?