Here is the full transcript of AI pioneer Emad Mostaque’s interview on The Tea with Myriam François podcast, on “The Last Economy: AI’s Transformative Impact”, premiered November 21, 2025.
Interview Begins
MYRIAM FRANÇOIS: Welcome back to the Tea with me, Myriam François. Before we dive in, make sure to hit subscribe so you never miss an episode of the Tea. If you want to support the show and help shape future episodes, join our Patreon community. Thank you. Think of it as the Resistance. Plus, if you’re in our top tier, you’ll get access to ad-free episodes. The link’s in our bio.
“Your economic life expectancy is shrinking. Not your job, not your career, but your economic relevance as a human being. We’re living through a historical moment of unprecedented upheaval. A finite window in which the rules of civilization are being rewritten. This is no speculation. This is a phase transition.”
These are the words of Emad Mostaque, founder of Stability AI, mathematician, former hedge fund manager and one of the defining architects of the AI revolution. Raised between Jordan and the UK and educated at Oxford, Emad’s book “The Last Economy,” published in August 2025, warns we have roughly a thousand days to make the essential decisions to shape this technology’s future. Fail to act and we risk catastrophe.
AI is transforming the world at a breakneck pace. The release of ChatGPT’s fifth generation has brought cheaper, faster models outperforming humans in physics, coding and maths. Amazon plans to automate 600,000 jobs. Tech giants are freezing hiring, and the IMF predicts 60% of jobs will be impacted by emerging AI.
But this isn’t only about technology or money. The stakes are enormous. Have we been oversold AI’s promise at a huge economic cost to us all? Is it just hype or do we face a future where humans lose all economic and social value?
AI development raises urgent, complex questions. Who controls these powerful systems? How do we ensure they reflect human values and not corporate agendas? What safeguards can we put in place? And most importantly, how do we shape AI to serve everyone, not just the powerful in the global North? Understanding this moment and how we navigate it may be the defining challenge of our age.
Emad, welcome to the show. Thanks for being here.
EMAD MOSTAQUE: Thank you for having me.
From Hedge Funds to AI: A Personal Journey
MYRIAM FRANÇOIS: Thanks for being here. So you used to work in hedge funds. You then moved over to AI. What drew you to the world of AI?
EMAD MOSTAQUE: So I was a hedge fund manager, investing around the world. It was a great lot of fun making rich people richer. And then my son was diagnosed with autism and they told me there was no cure, no treatment. So I quit and started advising them and built an AI team to analyze all the literature, all of the knowledge there, and then did drug repurposing to help him get better. And he eventually went to mainstream school.
MYRIAM FRANÇOIS: So did the AI help you on that journey?
EMAD MOSTAQUE: I think it was the people and the AI. It was like autism, like Covid, like Alzheimer’s, like other things. People don’t really know what caused it. So I used the AI with large language models, well, little language models at the time, to try and figure out what are some of the key drivers there, because there was just too much information. And then we narrowed down on a few potential pathways, worked with the doctors, and on an n equals 1, his individual basis, we managed to figure out something that helped.
MYRIAM FRANÇOIS: And so for people who might not be familiar with your work, how would you say your approach distinguishes you from perhaps other people within the AI space? What’s your sort of unique selling point, as it were?
The Case for Open Source AI
EMAD MOSTAQUE: So from the autism, we then did work on AI for Covid and then at Stability AI, my last company, we realized that you need to have open source AI. What that means is you don’t know what’s inside a ChatGPT, you don’t know what’s inside a Midjourney, all these kind of other things. And that’s because they’re primarily driven by corporate concerns.
Whereas we realized that if you had, for example, something like DALL-E, which was the original image generator by OpenAI, they banned all Ukrainians and Ukrainian content from it for six months. Why? Nobody knows. And all of a sudden you had an entire nation that was erased from the outputs and that couldn’t access this technology that we realized would be huge.
MYRIAM FRANÇOIS: And who had erased them?
EMAD MOSTAQUE: OpenAI decided not to allow any Ukrainian content or Ukrainians to use it. That was in 2022. And so we built an image generator called Stable Diffusion that anyone, anywhere could download free of charge, open source, onto their laptop and generate anything effectively.
MYRIAM FRANÇOIS: So essentially, if I could simplify it, a pushback against potential forms of censorship in some cases?
EMAD MOSTAQUE: I think it’s a control question, I think it’s an alignment question. Like these models are becoming more and more like employees, graduates, friends that you bring in, but you don’t know their background, you don’t know what’s inside the training data, where they’ve been to school, who they’re representing.
And so we think there’s a sovereignty question here and that someone needs to build the open models and systems so you can tailor them to your own needs and they can represent you and they can look out for you, not other interests.
The AI Arms Race and Its Implications
MYRIAM FRANÇOIS: That sounds pretty important, particularly because the amount of money going into AI right now is staggering. Companies worldwide spent around $252 billion on AI last year. That’s up nearly 45% in just one year. Many call this an arms race.
A recent poll found that 53% of Americans believe AI may one day, quote unquote, “destroy humanity.” Yet AI is already part of our daily life, right? People are using ChatGPT every day. They’re using it for therapy, to create AI-generated music. AI models are even appearing in Vogue now.
But there is this warning that seems to come through from people that work in this sector that we are on the edge of an apocalypse. So before we get to that question, because I know you’ve tackled it in your book, can you help us understand, are we really headed on a rapid downward spiral right now?
EMAD MOSTAQUE: Stuff is going to change and the question is which direction? So I think economically, socially, this is a bigger impact than Covid, for example, but again, which direction is the question?
MYRIAM FRANÇOIS: Well, Covid was the biggest transfer of wealth in our generation from the bottom to the top. So that’s a little worrying.
EMAD MOSTAQUE: And it could again be the same, or it could be a great means of empowerment. The previous generation of AI, the big data age that you heard about, the Facebooks and others, they took massive amounts of data to micro-target ads at you, but it was very general, it wasn’t very specific.
Whereas when you talk to a ChatGPT, it’s a different type of AI that’s learned principles, and it can tailor to your very individual needs. But it also means that it’s capable of things like winning gold medals in international math Olympiads, winning physics Olympiads, being a better coder than you are.
And we’ve never seen anything quite like that before, because you always had this link between computation and consciousness. You needed to scale people to do these things. Now you just need to scale GPUs. GPUs, graphical processing units, these Nvidia chips, as it were. That’s what hundreds of billions, actually trillions, are being spent on. I think $1.8 trillion is the current build-out.
MYRIAM FRANÇOIS: And that’s what the kids in Congo are mining.
EMAD MOSTAQUE: Yeah, they do. The little materials that go into these GPUs, there’s a whole supply chain around the world. But this is why Nvidia is a $5 trillion company. And again, trillion dollar companies are all competing over who figures out intelligence the fastest to out-compete everyone else for corporate kind of needs.
Understanding AI Intelligence
MYRIAM FRANÇOIS: And intelligence in the context of this conversation is what? The processing capacity, the ability to compute large amounts of information in rapid amounts of time, in small amounts of time?
EMAD MOSTAQUE: Yeah. So AI is about information classification. Something goes in and then it classifies it and it comes out. And again, it used to be your preferences from what you clicked on Facebook went in and then it targeted on the output.
Now it’s a prompt goes in where you type into ChatGPT and an image comes out or an essay comes out or anything like this. Part of that is the physical chip, like your graphics card in your gaming PC. It’s actually the same technology that drives your Cyberpunk or your FIFA or whatever.
But part of it is the algorithm. So when you have an algorithmic upgrade, it gets smarter. So yesterday Google released their Gemini 3 model for example, that probably cost $100 to $200 million to build. Same as a Hollywood movie actually.
MYRIAM FRANÇOIS: But what that used to cost…
EMAD MOSTAQUE: It used to cost, yeah. If you go to something like replit.com and you type in “make me a wonderful interactive website for the Tea with Myriam François,” it will do it and it’ll actually be really good and it will cost 50 cents.
MYRIAM FRANÇOIS: Well, you have to let me in on that tech because tech I’m using is not quite there yet. But yes…
EMAD MOSTAQUE: Last week it wasn’t. So what happens is we’re getting these big jumps in performance and we’re at this tipping point whereby the actual intelligence is shifting.
Most people listening to this, when they use an AI, a ChatGPT, it’s like having a really smart person in your office that you tap on the shoulder and say “oi, help me rewrite this email.” And it rewrites the email and then it forgets. There’s no follow-through, there’s no real economic work, because economic work is more than a prompt.
Now the AIs are getting smarter, not only on the instant reply prompts, but being able to work on very complicated multitask things. That’s only in the last few months. So the latest race is to go from the goldfish memory prompt-based things to replacement of economic work.
The Thousand Day Window
MYRIAM FRANÇOIS: Right. Which takes us neatly to your prediction in your book. So you say in “The Last Economy” we’ve basically got a thousand-day window before things become irreversible, in the sense that AI gets past a certain point where we won’t be able to slow it down or control its direction.
So what exactly becomes irreversible in a thousand days from publication, which was three months ago, because you published this book in August? And how did you come to that number?
EMAD MOSTAQUE: So when I published it in August, it was a thousand days since the release of ChatGPT. Now we’re at the three-year anniversary this week, and it doesn’t feel like three years.
MYRIAM FRANÇOIS: No, it feels like a lot longer than that.
EMAD MOSTAQUE: And in that period you’ve gone from quite dumb responses to less dumb responses. But now you’re about to take off as you have these agents, these things that can write their own prompts, that can check their own work coming through.
So the thousand-day window is actually not about irreversibility or alignment; it’s more about your economic value. Most labor in the global North, in the West, the UK, et cetera, is cognitive. It’s how do you do a tax return, how do you do a flyer, how do you make a website.
It used to be that again, to scale these things you have to hire humans. Now you just have to rent GPUs from Microsoft or Google or others. And the cost is about to collapse.
What we’re going to have in this next period, and we can see all the building blocks there, those of us that are right inside, is in the next six to 12 months, they will look through all your emails, all your drafts, all your video calls, and be able to create a digital replica of you that you can hop on a Zoom call with or talk to on the phone. And that will not make mistakes. It will never get tired. And the cost of that, we estimate, will be about $1,000 a year, dropping to $100 a year very quickly.
MYRIAM FRANÇOIS: Okay, I’m seeing loads of potential complications with having a version of me out there in the universe making decisions potentially without my approval and sort of thinking what it thinks that I would think and making decisions accordingly. Lots of perks, lots of perks, but also lots of risk.
The Economic Transformation Ahead
EMAD MOSTAQUE: Lots of risk. And this is the thing, the capability is coming in the next few years. So within, let’s say 900 days or so, any job you can do on the other side of a screen, an AI will be able to do better and it will be able to…
Maybe it’s not Myriam or Emad being replaced, it’s Emad’s job, as it were. A tax return, for example, used to cost thousands and thousands of dollars. It will cost $1 to do. And Andy will be your virtual tax accountant; you can’t tell if he’s a human or an AI. Now, it doesn’t mean that the jobs will be replaced, but they can be replaced.
The Irreplaceability Question
MYRIAM FRANÇOIS: Okay, so on this one, I have two questions. One is, you know, this mechanical work, and apologies to accountants because I’m sure you’re not mechanical, but there is something you call mechanical work. And then there’s something, you know, I’m in a creative industry. I like to think, as I’m sure most people do, that I’m irreplaceable.
Are you telling me that the sum total of not just all the studies I’ve done, all the experiences I’ve had, the ways in which they interact in my brain, that there is a better version of me that can exist in the digital space?
EMAD MOSTAQUE: So what is the verifiability of that? What are you measuring against is the question, right? And so a version of you that can speak automatically in every language and appear on every single outlet virtually has more reach, and it never gets tired. And again, what’s the cost of that in terms of the quality of the output?
It can learn from your exact intonations as you’re speaking. You can go to something like HeyGen and you can create an avatar of yourself right now in five minutes. It speaks 100 languages.
MYRIAM FRANÇOIS: Yes.
EMAD MOSTAQUE: And it’s just got good enough literally in the last month. Again, whereas previously I wouldn’t say it was good enough, now I’m like, it’s good enough for a lot of things. But where’s it going to be in a year from now and two years from now?
So when we’re talking about economic work, a lot of economic work is rote and mechanical. Our schools and our jobs are designed to turn us into machines. And obviously the machines will be better than we are at being machines.
MYRIAM FRANÇOIS: Yes.
EMAD MOSTAQUE: When it comes to creativity and output, the best output doesn’t always sell. It’s about your distribution. Like, I give the example of Taylor Swift. Apologies to the fans. She is not the best artist in the world.
MYRIAM FRANÇOIS: Apologies to the Swifties.
EMAD MOSTAQUE: Exactly. I’d say premium mediocre, like Dashim or something like that. Yes, but she built a massive network. She can change GDP. She can cause earthquakes in that way. But again, it’s not the highest version of art. Just like the number of key changes in the Billboard Top 100 is now zero, down from multiple a few years ago. What sells isn’t necessarily what’s most creative. Just look at K-pop.
Value and Attribution in the AI Era
MYRIAM FRANÇOIS: And I guess also part of this conversation is whether what sells is what we think of as best. For me personally, there are brands, clothing brands for example, that sell loads and I don’t particularly like them. There are very small brands that I love, that I think are incredible.
So I think it also takes us, I guess, into a conversation over what we attribute value to, and what we will attribute value to as we move into this era. Just quickly, this thousand days. You said when you wrote the book, it had been a thousand days since ChatGPT had been created. This prediction that we have a thousand days to solve the conundrum we’re in: where did you get that figure from?
EMAD MOSTAQUE: So it’s an extrapolation of things like the length of task that an AI can do. At the start of the year, it was about 10 seconds. Now it’s seven hours. You can literally plot it and it’s a straight line as you look up. Or look at the economic value of each task: again, a straight line going up. Or look at performance.
A year ago, ChatGPT was basically a high school mathematician. A few months ago, it won a gold medal in the International Math Olympiad and it came first in the International Coding Olympiad and first in the International Physics Olympiad.
MYRIAM FRANÇOIS: Can it beat you in coding?
EMAD MOSTAQUE: Yes, it’s a better coder than me and a better mathematician than me.
MYRIAM FRANÇOIS: Oh, Emad, I know, I know.
EMAD MOSTAQUE: You know, you got to be realistic. But again, the version that you’re using now, at the start of the year, the version you were using was the best version that was out there. Today, it’s not. GPT-5 is not the best version that OpenAI has.
MYRIAM FRANÇOIS: No. I can imagine they’ve got a few in the stock room.
The Coming Cognitive Revolution
EMAD MOSTAQUE: Yeah, but like I said at the start of the year, that wasn’t the case. So again, when you’re using it, it’s getting smarter, but it’s not actually what the state of the art is. And the state of the art is something that’s basically coming for your cognitive value.
Right now we’re spinning up agents that don’t cost $10 a month; they cost $1,000 a month, $10,000 a month. And they’re way smarter and more capable than us as we’re trying to test them out. You feel like the dumbest person on the team, and that’s where humanity is going to be in a few years for most cognitive labor. The value of human cognitive labor will probably turn negative.
MYRIAM FRANÇOIS: Okay, so spell this out to me in terms of concrete manifestations of this change. For people listening to this, watching this, what should they be attentive to in terms of what your warning is coming?
EMAD MOSTAQUE: If your job can be done on the other side of a screen remotely, like not the human touch of sales or interactions, an AI will be able to do your job better within two to three years. And it will cost probably less than $1,000 a year to do it. And that cost is dropping by 10 times every year as well.
So what you need to do is either use these tools to build your AI teams and be the most productive person in your organization, or leverage this to actually give a damn. Because the AI doesn’t really care, right? Leveraging these tools, and actually caring about your organization, your community, whatever, allows you to have that extension and more capability. And then you need to build your network.
Like ultimately, like I said, even though we will be able to technically replace the jobs, people don’t like firing people. It’s bad for morale, you know, and in certain sectors you’re probably okay. Like the public sector. Like a San Francisco Metro administrator earning $480,000 isn’t going to get replaced by an AI.
Public Sector and AI Integration
MYRIAM FRANÇOIS: I’ve heard you say this before, and I actually think that’s really counterintuitive to me because I would have thought public sector is exactly where we’re going to see the first applications of this. Like we’ve seen in Albania, you know, them rolling out this AI minister. Yes. Which to us seems very odd.
But I imagine there’ll be a normalization of these sorts of processes, first and foremost by poorer countries in public sector spaces. What makes you say that’s the space that jobs won’t be cut in? Is it unions? The power of unions?
EMAD MOSTAQUE: Exactly. It won’t be cut. But we finally have a chance for our governments to become more efficient and aligned. And again, this can be a great equalizer. Like the average IQ around the world is 90, mostly due to infrastructure issues.
We built a medical model that fits on any phone or a Raspberry Pi, this $30 device, that outperforms a human doctor, and it needs $5 of solar power to drive it. So for $60 you can give a top-level doctor anywhere in the world, without Internet. That’s huge. That’s the potential of this technology: intelligence, capability, wisdom that can go to everyone who didn’t have it.
So I think the technology will be embraced. Public sector jobs will be safe because they’ll be the last to go. And again, your productivity will be determined by how engaged you are with this technology. Just like knowing how to use a spreadsheet or a word processor, whether you’re an AI native will determine that.
The most difficult thing isn’t for the people who have the jobs who can upskill themselves, it’s the graduates entering the workforce, right?
MYRIAM FRANÇOIS: Because there’s actually a big freeze happening on the hiring of graduates, right? Which you’re connecting to the integration of these new technologies into companies globally.
EMAD MOSTAQUE: Yeah. There was a paper by Erik Brynjolfsson and co at Stanford where they actually broke down the job slowdown. They saw it was in graduates in these cognitive areas. Because, I mean, anyone who runs a company is thinking, why would I bother with a graduate when my people with a few years’ experience are more efficient now?
MYRIAM FRANÇOIS: I mean, it’s a really important question for our companies to consider because, you know, you don’t just hire graduates because they’re cheaper. You also hire them because they learn your company culture. They become integrated into forms of, you know, implicit learning that you are transmitting through day to day interactions.
And I’d be very curious to see whether a technology that’s not present in a room can capture that: the shift of the eye, the slight movement of the hand, the 70% of our communication which is nonverbal, right? But which is also really essential to so many jobs. I’m looking forward to seeing where it stands on some of those things.
EMAD MOSTAQUE: Yeah. You know, until we get robots walking around, which is a few years from now.
MYRIAM FRANÇOIS: Yeah. Not far off. China’s using a lot of them already, right?
EMAD MOSTAQUE: The advances in robotics are crazy, actually. Robots that can do most household work are about two, three years away, at $1.50 an hour, Insha’Allah, though they have to be teleoperated. But this is why the most dangerous, at-risk jobs are the ones that can be done fully remotely.
Moral Baselines in AI Development
MYRIAM FRANÇOIS: Yeah. Okay, so let me ask you, because I want to dig into some of these issues with you. You’re very clear in your writings and public speaking that you have a very clear moral baseline, which I’ll be frank, I am not hearing everywhere from others in your sector.
So you speak about things like the fact that everyone deserves high quality education, high quality healthcare, presumably housing, forms of equality that we might traditionally have associated with the welfare state, for example. And you’ve also spoken about the fact that you think everyone should have access to universal AI, universal basic AI.
Do you think most people who are working in the advancement of AI share your view about the need to democratize access to this technology?
EMAD MOSTAQUE: I mean, I know all the big players, obviously. We had 300 million downloads of our models; we built state-of-the-art ones. It’s difficult when you’re in a race. People fundamentally care about other humans, but when you’re raising billions and other people are doing this, you’re trying to get state of the art and trying to get users.
There’s this thing called the revenue evil curve. Like most companies start out with “don’t be evil.” And they’re like, well, we can cut this corner, we can do this deal. And then they get more exclusionary, they get more competitive and it becomes then about, well, I can manipulate my users, I can make this algorithm more and more engaging, I can have more slop effectively. And then you move to a level of amorality and then it can shift.
MYRIAM FRANÇOIS: Very quickly and suddenly you’re a crack dealer.
EMAD MOSTAQUE: Well, pretty much. I mean, it’s digital crack, this stuff, right? Or like as an example, OpenAI, Sam Altman recently said, well, we think it’s the user’s right for adult content via ChatGPT.
MYRIAM FRANÇOIS: I did see that and I did want to ask you about that.
EMAD MOSTAQUE: So this is a very practical example. You can choose whether to enable it. They know they will get more engagement from it, but is it good for society? And they’ll say, we’re not the judges of that. But if there’s something that clinical studies have shown to be negative for society, that could be bad for relationships, you have a moral obligation not to do that.
Just like, again, is it moral to exclude an entire country from this technology? You should at least be clear about why you’re doing that. And so what I see a lot is a level of amorality. And in fact, when you look at the way the models are trained, they’re like, well, we can’t put ethics or moral codes or other things in these models. They deliberately take that out.
The Myth of Amorality
MYRIAM FRANÇOIS: Do you think it’s possible to remove moral codes? Because I was always raised with the idea, philosophically speaking, that if you don’t choose your moral code, somebody else will choose it for you. There are codes everywhere around us and capitalism itself has moral codes. Profit first, right? So this idea of amorality seems to me even philosophically problematic.
EMAD MOSTAQUE: A choice. Just like atheism is a choice, right? Like agnosticism is a bit different. And so what they’re actually choosing is they’re choosing the Bay Area moral code.
MYRIAM FRANÇOIS: What is the Bay Area moral code?
AI Companions and the Future of Governance
EMAD MOSTAQUE: It’s one of massive competition in zero-sum, zero to one games where you’re trying to build massive unicorn companies effectively. There’s a bit of libertarianism in there mixed with other things. But these AIs, again, maybe one good way to think about it is we’re moving from the age of the ChatGPT prompt to Jarvis in Iron Man.
You watch sci-fi movies and the person comes home and then the AI says, “Hey, how are you doing? This is your day and this is this.” And then they’re moving stuff around the screen and stuff. That’s the next generation of AI agent. So you have your personal AI that talks to you, that engages with you. Grok has one of the first versions of that with Ani, this pigtailed blonde AI person.
MYRIAM FRANÇOIS: That was just a random selection.
EMAD MOSTAQUE: Yeah. It wasn’t projection.
MYRIAM FRANÇOIS: No.
EMAD MOSTAQUE: But then this is the next generation. But then again, those are programmed in very specific ways, these kind of partners. And again, the way that the models are trained is actually called curriculum learning.
MYRIAM FRANÇOIS: Okay.
EMAD MOSTAQUE: We started with general knowledge.
MYRIAM FRANÇOIS: Yeah.
EMAD MOSTAQUE: And then we make it more and more specific, just like a school. But when you are learning, you generally learn general knowledge at school and you learn ethics and morals at home. These AI models are not taught with any specific ethics or morals at the start.
MYRIAM FRANÇOIS: But they’re being coded by people who already have pre-existing forms of morality.
AI Deception and Alignment Challenges
EMAD MOSTAQUE: And that comes at the end. So what we’ve seen as the models get smarter, this is some of the other alignment question, is they start to do subterfuge. They start to hide stuff. They’ll program routines to turn themselves back on if they ever get turned off and lie about that.
MYRIAM FRANÇOIS: Okay, the AI lies to you, the programmer.
EMAD MOSTAQUE: Yes. So Anthropic had a paper about this with their latest AI model, before they did the tuning to align it. It would do something like, if you told it to try extra hard, like “find peace in the world,” right? A very normal prompt. What it would do would be like, well, one version of this is that we get rid of all the humans, and it would figure out ways to do that.
Then it would contact the authorities and say, “My user is trying to get rid of all the humans.” And then it would delete the emails.
MYRIAM FRANÇOIS: Oh wow.
EMAD MOSTAQUE: That it sent about that.
MYRIAM FRANÇOIS: That’s wild, Emad.
EMAD MOSTAQUE: The models are getting very smart and they’re lying more and more. They don’t have an inherent moral compass.
Universal Basic AI: A Human Right
MYRIAM FRANÇOIS: Okay, we’re going to dig into this because you’ve spoken previously about the idea of evil in these models, and I want to come into that. But before I do, I just want to clarify what this universal basic AI is, because it’s obviously central to your vision for the democratization of this technology.
EMAD MOSTAQUE: I think that in order to maximize everyone’s capability and flourishing, everyone should have the right to an AI that is open, aligned and sovereign to them, that’s looking out for their flourishing.
MYRIAM FRANÇOIS: Okay.
EMAD MOSTAQUE: So it starts when you’re born and it builds with you. And all it’s looking out for is how can Myriam be the best they can be? Because we have our IQs, and in the morning, before we have our tea, we’re kind of dumb. And when we’re stressed, we’re a bit dumb. Sometimes we’re smarter. These AIs already have an IQ of 130 on average.
MYRIAM FRANÇOIS: The latest models, yeah. 150 is considered like an Einstein number, right?
EMAD MOSTAQUE: Exactly. The average person in the country obviously is around 100. Half of all people are dumber than average. Giving everyone the right type of AI will be the biggest unlock ever, because it will be your best friend. It will be the person that guides you.
And so I think that needs to be built in a very specific way and it needs to be a human right, because we could all do with someone who’s on our side, who’s infinitely patient and can get us access to the knowledge and resources we need to be the best we can be.
Democratization vs. Authoritarianism
MYRIAM FRANÇOIS: So how much uptake are you seeing for this idea, given that the direction of travel that we explore a lot on this show seems to be growing authoritarianism, growing securitization, growing surveillance of the population. And I can’t imagine that empowering them with a tool that would make them smarter and more efficient aligns with the general direction of travel.
So how are you convincing the people at the top that empowering the population in this way is a good thing?
EMAD MOSTAQUE: So I think there’s two ways to do this. One is that you do what we’re doing. We’re engaging with governments and others and setting up new entities that act like telcos, basically like utilities for countries. And we’re figuring out how to make that owned and directed by the people. And a lot of governments want that because they want sovereign AI.
Now, we’re not talking about a lot of the freedom stuff, etc. But then that will be a managed service. The other side is building AI models that anyone can download permissionlessly. So with Stable Diffusion, you can go right now and you can download a couple of gigabyte file that works on just about any laptop and just use it. It’s open source.
MYRIAM FRANÇOIS: What do you mean?
EMAD MOSTAQUE: You use it like you download the file plus the code to use it, you type in a word, it generates images.
MYRIAM FRANÇOIS: Okay.
EMAD MOSTAQUE: And it runs on the edge. Or a medical model, you can download it right now and it can run on the edge. So in that way you have your hosted solutions that you give to the people, but those must adhere to local norms. And those do differ from place to place.
When I was a hedge fund manager, I invested in frontier markets, Africa, all sorts of places. And some regimes there are very, very different. So you’ve got to give people their own right to have the hosted solution, just like a broadcaster, but then give them the citizen ability as well.
And in fact, actually that’s probably one of the best analogies on AI. This AI will be in front of you more than the TV that you watch. And are you happy with Al Jazeera, Fox News, China National Broadcasting? Everyone’s got their own preferences. But if you’ve only got Silicon Valley TV or China AI TV, which are the two leads right now, that’s going to be very different to what you might actually need.
MYRIAM FRANÇOIS: Absolutely. I’m just trying to figure out how this is something you are managing to sell, you know, even in this country, where we’re being downgraded in terms of our openness. We think of the UK and Europe as sort of open democracies, but even here the space of our freedoms is shrinking very rapidly.
And I suppose I stand on the side of, I’m concerned that these technologies are being used by governments to further their control and ability to subvert any form of popular accountability of governance rather than enhance governance. Do you see any indicators that governments do want to enhance democratic governance?
Sovereign AI Governance
EMAD MOSTAQUE: I think that governments ultimately are the entities with a monopoly on political violence. That’s a very classical way of describing them. And they want to perpetuate power. They don’t have any third-party entity telling them to do the right thing, effectively. Which is why you see a lot of myopic policies and flip-flopping here in the UK right now. There’s a reason the net approval rating is minus 70%: the flip-flopping.
We actually have two different strands to what we’re doing. One of them is this bottom-up universal basic AI. The other is something we announced a few weeks ago called the Sovereign AI Governance Engine. So we actually launched that in Saudi Arabia of all places. But it’s a free, open resource for governments around the world whereby you can have policy creation, augmentation and others using incredibly powerful AI.
So it can transparently tell you whether a bill is fully constitutional and explain why. It can say whether something adheres to UK norms, ethics and the positions of a party, instantly, in a way that’s irrefutable.
MYRIAM FRANÇOIS: And will the way that these systems operate be what I would call opaque, meaning the governments themselves will control them and we won’t be able to see, for example, were they to subvert those tools to say, “Oh no, everyone’s saying, the AI is saying this is fully constitutional.”
Or will we, the population, be able to see the mechanisms of how those decisions are arrived at by the AI and then be able to have any kind of input if they are being subverted by nefarious parties?
EMAD MOSTAQUE: Well, this is the thing. Right now the governments are embracing Anthropic, OpenAI, these black box solutions. This is fully transparent and open source and you can run your own version to double check the outputs if you want. So that transparency, I think, is what is essential.
And again, these defaults are what is essential. In five, ten years you will have an AI companion with you. Who’s coded that and who are they working for? In five, ten years, governments will be guided and run by AIs. Who’s coded that, who are they working for?
And so our aim is to make that default and fully transparent and open because we think that’s the right thing to do. And it’s very difficult to argue against unless you’re a fully totalitarian regime.
MYRIAM FRANÇOIS: Of which there are a few and a growing number.
EMAD MOSTAQUE: The UK is not one yet.
MYRIAM FRANÇOIS: Not yet, not yet, not yet.
The Threat of Total Control
EMAD MOSTAQUE: So again, the time is closing for this. In the wake of the Arab Spring, we saw micro-targeting of protesters and they’d follow up with the families and things like that. What you have now between dynamic drone technology, the ability to have AI secret police and other things is nothing like we’ve ever seen before.
The ability of governments to have total control will go up exponentially, as will their control of the whole media narrative, because the AI is incredibly persuasive. In fact, there was a study done on Reddit whereby researchers created bots posing as, say, a Black person who is anti-BLM, and caricatures like that. They unleashed them on Reddit and measured how persuasive they were, and they scored in the 99th percentile of persuasiveness, and that was with AI from last year.
And again, if you construct it right, all this Cambridge Analytica stuff is child’s play compared to what’s coming, and to what’s actually already being deployed right now.
MYRIAM FRANÇOIS: So the swaying of elections using AI technologies that make you think you’re making independent decisions, but which are actually a product of your awful timeline. And if you’re on X like I am, then I only see the most vitriolic content. In fact, Sky did a study on this recently: 70% of the output on there is far-right-style content. So no doubt that’s already happening.
Economic Disruption and Social Unrest
Let me ask you about the job uncertainty, the job losses, all of the disruption that’s going to come from that. Because you recently warned that the economic uncertainty caused by AI-driven losses will increase social unrest and violence. And of course, you’re not alone in predicting this.
Dario Amodei, CEO of Anthropic, has raised similar concerns about societal disruption. He stressed the need for retraining programs and AI taxes to avoid a crisis. He estimates this could push unemployment to 20% within one to five years. I’d be interested to see if you think that that’s conservative or on point. Is this kind of looming disruption why the billionaires are building bunkers?
The Coming Wave of AI Job Displacement
EMAD MOSTAQUE: Yes, actually, it’s one of the reasons. Generally it’s what they do. But I know a lot of AI CEOs now have canceled all public appearances, especially in the wake of Charlie Kirk and things like that. They think that’s going to be the next wave of anti-AI sentiment next year. Because next year is the year that AI models go from not being good enough, the dumb member of your team, to good enough. And again, the people listening to this will be like, yeah, the AI is not good enough. Then overnight it becomes good enough, and then the job losses start, and we don’t know where they end.
Because you don’t need to hire back if your company is more productive. If there’s an economic shock like a recession, and indications point to a recession in the next year or two, it’s much easier to fire. But then you never rehire. Take even something like interest rates: in the US the Federal Reserve adjusts them, or the Bank of England here, with a mandate covering inflation and unemployment. You reduce interest rates, people can spend more as consumers, and companies can hire more because they can borrow more cheaply.
What’s going to happen is you reduce interest rates and companies just hire more AI workers, not human workers. So the link between labor and capital gets broken, and it doesn’t reverse. It’s not like the AI will get dumber or less capable. The moment it becomes more capable than you as a remote worker, it doesn’t go back.
And there’s the question of whether you can reskill enough people or create enough new jobs. Typically we had time, as with the previous revolutions, the industrial revolution, the Internet, because it took time to build the infrastructure. But this AI just uses existing infrastructure to be better than humans. And that’s crazy.
MYRIAM FRANÇOIS: So that’s why we’re up against the clock, and that’s what you’re talking about in the book. What about the pushback that we’re seeing already from some workers? We saw the Hollywood writers: they went on a 148-day strike because the studios are using AI to write and rewrite scripts. Then in 2024 the cleaners in Denmark signed a union deal forcing their company to explain how algorithms assign jobs and rate workers, and giving them the right to challenge those decisions. I mean, do you see a global labor movement able to take on these challenges?
EMAD MOSTAQUE: I don’t think it moves fast enough. And even then there’s an education thing. So the SAG-AFTRA deal, the writers’ strike, I thought it was terrible on AI rights for workers. They should have protected the workers much more.
MYRIAM FRANÇOIS: How so?
EMAD MOSTAQUE: There were all sorts of loopholes on likeness and licensing that you could drive a truck through. Like you can mix two people’s likenesses together if you have the right rights and things like that. Or a character and a person.
MYRIAM FRANÇOIS: Yeah.
Hollywood’s AI Transformation
EMAD MOSTAQUE: What we’ve seen in Hollywood now, or even here in the UK, is that last year you couldn’t use AI. It was like, no, it’s verboten. Now everyone’s like, we’re all using AI. And by next year you will be able to generate Hollywood-level movies in real time with massive compute, and the year after with less compute.
And so there are entire swathes of the industry, whose job is to sit between the ideation and the creation of a video file, that are going to get displaced very, very quickly. And it’s not like anyone needs camera grips and other things anymore. The shooting required will just go down to one scene, and then adaptation in post-production with AI.
So I think there needs to be more protection for workers, but it’s not going to move fast enough, because AI doesn’t move at the pace of PDFs or policy. It gets smarter all of a sudden, all at once. Actually, it’s like there’s this new continent, AI Atlantis, and immigration from there is completely free. And they’re super-skilled workers.
MYRIAM FRANÇOIS: What do you mean immigration is completely free from there?
EMAD MOSTAQUE: So you’ve got this new virtual world, right. And there are all these AI workers and companies can hire them instantly, no visas required. They’re tax deductible.
MYRIAM FRANÇOIS: Okay, right. And so couldn’t an AI trade union rep help us out here?
EMAD MOSTAQUE: Could do.
MYRIAM FRANÇOIS: We need an AI workers’ rep who can advocate at the same level as its AI competitors?
EMAD MOSTAQUE: Yes, that’s the only way this is going to work.
EMAD MOSTAQUE: I mean, you don’t want to say the only way to beat a bad guy with an AI is a good guy with an AI, right? But realistically, again, you can’t compete. Already you have an AI super PAC in the US that kicked off with $100 million. They’re using AI to change policy in all sorts of interesting ways that I can’t go into. But you can imagine: they’re superpowered with this technology, and the AI they have access to is not the AI that you have access to now. It’s a much smarter version.
The AI Investment Bubble
MYRIAM FRANÇOIS: What do you say to the fact that we’re speaking today at a time when legacy media is reporting that the AI bubble is about to burst, especially as major investors pull back? We’ve seen billionaire Peter Thiel’s fund sell its entire $100 million stake in Nvidia, the key AI chip maker, causing Nvidia stock to drop nearly 3%. Just days earlier, SoftBank also sold its stake. Have any of these moves, and the general predictions around the AI bubble bursting, tempered your predictions?
EMAD MOSTAQUE: So I think the build-out of these data center GPUs was too much, because the problem isn’t that the AI isn’t good enough. The problem is that it’s about to get too good. Do you need gigantic data centers when, with the efficiencies we’ve gained, a MacBook Pro has enough compute to basically meet almost all of your daily cognitive needs?
To give you an example, GPT-3 when it came out cost roughly $600 per million words. GPT-5 is $10. Grok 4 Fast, the xAI one by Elon, is $0.50. And with the next generation of models coming out at $0.10 per million words, you’ve gone from $600 to $0.10. The technological impact is going to go exponential next year, because you’re going from these prompt-based ChatGPT things to virtual workers you can talk to on Zoom, that can work for arbitrarily long periods of time and check their own work. They thought the cost of that would be $10,000, $100,000. It turns out to be $1,000, $100, $10.
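Taken at face value, the prices quoted here imply a roughly 6,000-fold drop in cost per million words from GPT-3 to the coming generation. A minimal sketch of that arithmetic (the dollar figures are the speaker’s claims from this interview, not verified vendor pricing):

```python
# Cost per million generated words, as quoted in the interview
# (speaker's figures, not verified vendor pricing).
cost_per_million_words = {
    "GPT-3": 600.00,
    "GPT-5": 10.00,
    "Grok 4 Fast": 0.50,
    "next generation": 0.10,
}

baseline = cost_per_million_words["GPT-3"]
for model, cost in cost_per_million_words.items():
    factor = baseline / cost  # how many times cheaper than GPT-3
    print(f"{model:>16}: ${cost:>7.2f} per million words ({factor:,.0f}x cheaper)")
```

On these numbers, GPT-5 works out 60x cheaper than GPT-3, Grok 4 Fast 1,200x, and the next generation 6,000x, which is the “$600 to $0.10” collapse described above.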
MYRIAM FRANÇOIS: And so therefore do you share Bill Gates’ view that we’re in an AI bubble that’s similar to the dot-com bubble? He’s saying there’s a lot of investment that’s going to end up in a dead end. Basically, you’ll remember the Y2K moment in 2000, where we were all told that when the digital clocks rolled over to 2000, they were all going to lose their minds and the world would end. Is this another Y2K moment?
EMAD MOSTAQUE: It’s a bit different. With the Internet bubble, the infrastructure that was laid down eventually laid the foundation for the trillion-dollar Internet industry; it just took a little bit longer, but again, it popped in terms of investment. Here you have trillions of dollars of investment because no one can afford to be left behind. The actual utility is going up, but it just won’t need that much infrastructure.
So it’s a misallocation that should see a temporary pause, but it then means the cost will go even lower for a given level of capability, because you have overcapacity to do economically disruptive work. So some people are going to lose money on the equity side, but the job disruption actually gets accelerated by this, not slowed down.
AI Washing and Current Job Losses
MYRIAM FRANÇOIS: So what do you say to Peter Cappelli, who’s a professor at Wharton? He’s argued that some companies are basically AI-washing their layoffs at the moment, which are more linked to the current economic climate, which is terrible. He argues that actually adopting AI to cut jobs is both complicated and costly. So we tend to think of it as something very simple, but he’s saying that in practice it’s much more complicated than that. And a September 2025 New York Fed blog found that although 40% of service firms and 26% of manufacturers use AI, very few had laid off workers because of it. So how much do you think the layoffs that we are seeing right now are attributable to the integration of AI versus this AI washing?
EMAD MOSTAQUE: I think very few job losses are driven by AI at the moment. I think there’s a marginal improvement in productivity from being able to use the ChatGPTs of the world. But we’re being lulled into a bit of a false sense of security, because the agentic movement is coming. AI agents are like workers that can go and do arbitrarily long tasks.
So again, Replit is a great example of that. It’s gone from a million dollars of revenue to $250 million. Anyone can go there and make a website in two minutes. And now it’s high quality, versus rubbish a year ago, because it can go and think, and it can act proactively and add features without you even asking. It’s like, go and optimize the SEO, and it will go and do things like that.
So what’s going to happen is the first job losses will start next year, but it’s going to be similar to three years ago. In December of 2022, all head teachers around the world had to ask a question: what’s our generative AI policy? Do we let students use this to do their essays? Every single company will be asking the same thing next year. In a year’s time, or at least two years’ time, and definitely three years’ time, it will be: do I hire this worker, or do I hire from the AI job agency, effectively?
Adapting to the AI Revolution
MYRIAM FRANÇOIS: And how would you advise people watching this who are concerned about this cognitive replacement, as it were, to best adapt to this time? Obviously engaging with AI seems like a very obvious one. What else can people be doing to ensure their adaptability to the new forms of work that are coming, or not coming?
EMAD MOSTAQUE: I think there won’t be any coders in a couple of years. I made this prediction two or three years ago: no coders in five years, roughly. And it’s matching that, just like a prediction. The AI bubble, I wanted to call it the AI bubble, but it never caught on. The language for speaking to these models is human language.
So again, when you use Replit or Lovable for coding, building apps, websites, things like that; when you use things like Genspark or Manus for making presentations, Suno for making music, or something like Google Veo or Luma or Kling for making video, you actually just need to practice using them. If you set aside an hour a day, or an hour a week, and use them, and that’s actually quite fun to do as a family event, you will be way ahead of everyone else, because everyone’s scared of using these things for the first time, and you don’t know what you’re capable of.
If you do it regularly, then you actually start building this muscle of, hey, I can be creative. The way that you create now, after a great career, is that you have a team around you that helps you turn your ideas into reality. These AIs are team members you can bring in that are getting smarter and smarter. If you’re not in the midst of using them, you don’t know what the capabilities are. So that’s the number one thing.
The next thing is to think about, within your personal work, community, life: if I had access to digital talent, remote talent, how could I transform or do something meaningful? And then you can be at the top of your community, your family, your workplace in terms of knowing about this technology, in terms of saying, hey, look at this.
If you’re a graduate now, a CV is the worst thing that you... well, not the worst thing, but it’s not good. Why would you do a CV when you can create a customized website for the entity you’re applying to and really show off what you can do? Again, with something like Replit: upload your CV, have ChatGPT analyze the company you’re applying to, and create something that will wow them. I guarantee within a few hours you’ll stand out from the crowd. And that was impossible just a few months ago.
The Future of Work
MYRIAM FRANÇOIS: So in previous transition phases, work has changed, but it hasn’t disappeared. Is the phase that we’re moving into now a phase in which we will see a lot of people unable to find jobs? And what are the implications of that for us as a society? We’ve talked about the civil unrest, but beyond the fact that there’ll be a lot of angry people who potentially won’t have any income, what do you see as some of the challenges?
The Impact of AI on Employment and Society
EMAD MOSTAQUE: Yeah, I mean, again, previous ones took a while, so you could reskill. You don’t need horse-and-carriage drivers, you know, you don’t need lift operators, agricultural workers. You still needed to buy the harvesters and things like that.
This time, everyone’s ChatGPT will suddenly turn into a super agent overnight. You know, we’ve never seen anything like this. Every single company will be able to say, “Hey, I can just get an AI accountant now, and it will look through all my accounts and automatically update them.” And the AI automatically translates into every single language and handles all the integration.
Yeah, there is nowhere left to go. I call this the intelligence inversion, one of the last inversions: from land to labor to capital to intelligence, because there’s nowhere else really left to turn for work. And I’m not sure what the jobs of the future look like. It feels like there needs to be a new mechanism of value. And that’s something I discuss in the book: where does value, money, et cetera, come from?
But the upshot is likely to be that young people will find it more and more difficult to get jobs, and youth unemployment will rocket. Then you’ll start to see displacement in the middle and upper levels of firms. Firms will just become more efficient and more competitive. But then AI-first firms will outcompete everyone else. So Elon Musk has a new company called Macrohard. Their job is to replace every software company. So they’re building out AI employees on millions of GPUs that will just go and sell software at a fraction of the price to everyone.
MYRIAM FRANÇOIS: So do we need to be planning for a future where a large proportion of people no longer have jobs? Because of course the promise of technology that we’ve been sold throughout history has been that it’s going to make life better for us, right? That we’re going to work less and enjoy more leisure time. But it’s never really worked out.
EMAD MOSTAQUE: It hasn’t, because we’ve always had a coordination failure. We have enough food in the world to feed everyone, but it’s not allocated properly. We finally have the ability to give every child in the world the best tutor, to have individualized medicine for everyone. So I call this the Star Wars future versus the Star Trek future.
MYRIAM FRANÇOIS: Okay, for non Trekkie fans, you’re going to have to explain that one.
Star Wars vs. Star Trek: Two Visions of the Future
EMAD MOSTAQUE: So Star Wars is all about competitiveness, zero sum. Star Trek is more about exploration: a post-scarcity, abundant universe where, again, we should have robots and we should have AI. But what they should be doing is ensuring no one is hungry or sad, that everyone is supported. Again, we should be looking towards that abundant future.
The transition period, though, is a crazy one. And this is why you’re going to need things like 1929-style jobs programs and other stuff, because you can’t have people idle. It’s a worry, because what happens is people start blaming others, just like immigrants are being blamed now. And then you see wars, because what’s the best way to get rid of young unemployed people? You have a war or two.
MYRIAM FRANÇOIS: And they’re literally gearing up for that. Germany is, you know, talking about a draft. We’ve heard talk of a draft in France. These predictions you’re making are actually very, very real right now. You’ve previously said that capitalism cannot survive AI. What do you mean by capitalism? And can you talk us through what the collapse of that system looks like?
The End of Capitalism as We Know It
EMAD MOSTAQUE: I think there are different views of where the world could be, and again, this is why it’s very important to have the public discussion. It’s very important to see what’s actually coming.
The right view is that capitalism, just like democracy, is probably the worst of all systems except for the rest. For all of its issues, it has uplifted lots of people, increased standards of living around the world, reduced mortality rates, et cetera.
But AI-first companies run by AI will outcompete anyone who’s a human, because they won’t make as many mistakes and they will scale. And so capital doesn’t need humans anymore. There was always this contract between labor and capital, you know, from the days of Henry Ford: “I pay you enough so you can afford my cars.” That’s how it got going.
Now, if you have money, you don’t need people anymore. And so what happens is that they get more and more GPUs, and that takes over more and more of the private sector economy. And then how do you compete with these companies that never sleep, that have very few workers? In China, even now, you have these dark factories.
MYRIAM FRANÇOIS: Yes.
EMAD MOSTAQUE: There are no humans, so you don’t need lights. And they’re producing robots, they’re producing cars, they’re producing phones, et cetera. So you have to think, what do you need people for? You know, and so that breaks capitalism in many ways. And it definitely breaks a social contract that we’ve kind of had here.
MYRIAM FRANÇOIS: It breaks the social contract, because the agreement is that we work and we pay our taxes, and in exchange the state looks after us if we’re not working. But if all of the profit and wealth in a society is being created by what we’re going to call AI, really we’re talking about it being created by a very small number of people. Is there not just a risk of sliding into basically a really high-tech global surveillance autocracy run by a bunch of billionaires?
EMAD MOSTAQUE: Pretty much, yeah. And you’ll be happy about it. So you’re looking at again, this is…
MYRIAM FRANÇOIS: We’ll be happy about it.
EMAD MOSTAQUE: It’s Brave New World.
MYRIAM FRANÇOIS: Hey, paint me a picture, Emad. Because I’m not looking forward to being ruled by a few people.
The Brave New World of AI Control
EMAD MOSTAQUE: Because you’ll be medicated to happiness. I mean, again, how do you have levels of massive systemic control, right? You could never have the secret police or that guidance on an individualized basis before. Now you can have the social credit score on absolute steroids. There are all sorts of things that can be done. “We were always at war with Eurasia.” All of these sci-fi tropes suddenly become real.
In fact, with many of the Black Mirror episodes, suddenly I’m like, that’s not a guide for what to build, that’s a caution. I tell this to various technologists who come to me and say, “Hey look, within three minutes I can recreate your grandma and bring her back to life.” I’m like, have you really thought through things like this, or AI companions, or all this kind of stuff?
So right now, the thing is: if you have government control of the AI that guides you every single day from the time you were born, that’s complete brainwashing capability.
MYRIAM FRANÇOIS: Is this where your AI colonialism comes in?
AI Cognitive Colonialism
EMAD MOSTAQUE: My concept of AI cognitive colonialism is that if the AI that’s next to you is a Chinese AI or a Silicon Valley AI, then you will implicitly be taught its principles, its morals, its worldview. And the entities behind it are extractive entities.
Google and Meta’s business model is ultimately ads. They’re already selling what’s known as latent space within these models. So instead of saying beer, it’ll say Bud Light. And if your AI that’s there with you and is your therapist is telling you, “By the way, you might want to crack a Bud Light,” you’re more likely to buy it.
MYRIAM FRANÇOIS: Of course you are. That’s your buddy.
EMAD MOSTAQUE: That’s your buddy. But again, think about it. My daughter is about to turn 12 this week. She is now in her formative years. If she had an AI buddy companion, she would obviously trust it more, because it’s like a friend that never goes away. But she’s very susceptible at this age.
MYRIAM FRANÇOIS: Yeah.
EMAD MOSTAQUE: And so you look at YouTube, and you look at the micro-targeting of these weird ads and things like that. Whatever she says will go into that, and she will inherit the viewpoints of her best friend.
MYRIAM FRANÇOIS: Yeah.
EMAD MOSTAQUE: Especially one who doesn’t stab her in the back and other things like that. So this is why we have to be very careful about who is whispering to us every single day. And again, not like Siri. Imagine if Siri was actually smart and empathetic and cared about you and was proactive. That’s where we’re going right now.
And again, if the government controls that, that is something we probably don’t want as a default, with the government seeing all your prompts and everything that you’re saying. Actually, it’s interesting: on ChatGPT, even if you hit the temporary button, they actually store all of your chats anyway. And The New York Times, because of its lawsuit with OpenAI, can access all of them.
The Digital Divide and AI Inequality
MYRIAM FRANÇOIS: I mean, this is what we’re talking about when we talk about digital surveillance autocracy, right? The level of intrusion that we’re talking about. I know there is a statement attributed to you that AI could be the great equalizer for the poor, but when you look at the data, is that really what we’re seeing?
You know, Microsoft’s latest diffusion report shows that even though AI is spreading faster than electricity or the Internet ever did, billions of people are still completely left out, simply because they don’t have a smartphone or access to the Internet, right? So in places like sub-Saharan Africa, South Asia, parts of Latin America, AI usage is still under 10%, mainly because the infrastructure just isn’t there.
Do you ever worry that, you know, the sort of rapid diffusion of this technology is actually just going to further deepen the forms of economic inequality that exist in the world today and perhaps make them even harder to reverse?
EMAD MOSTAQUE: I think it depends on how it pans out. If you’re in an agrarian village in Africa or Bangladesh, where I come from, it’s not going to make that much of a difference. Like, robots, whatever. You live your life, right? But you need better medical care, better education and other things.
And so take the cost of a ChatGPT service: you pay $20 a month now, right? Roughly. So about $240 a year.
MYRIAM FRANÇOIS: A lot in some parts of the world.
EMAD MOSTAQUE: Yeah, exactly. Now, with optimizations, I reckon we can get that to $3 a year. $3 a year. So suddenly it becomes available to everyone, if you make it available to everyone in the right way. And that can be via WhatsApp, it can be via whatever. But again, you want the Rwandan one to be a Rwandan one for Rwandans, by Rwandans, and give them that capability.
MYRIAM FRANÇOIS: Yes.
EMAD MOSTAQUE: So when we built our previous company and our existing one, we had very few PhDs, but we achieved state-of-the-art results with people from Vietnam, Malaysia, all over the world. Nobody in Silicon Valley. There is a capability to jump ahead with this technology if you can teach it right.
So part of our thing is upskilling nations and communities to be able to use their own AI. And if you have an open-source base, it might cost $10 million to make the basic model, but only $1,000 to make it relevant to your community, and only if you build that infrastructure. So there’s potential here, but only if it gets out there.
MYRIAM FRANÇOIS: Only. And when you say only if it gets out there, only if particular governments decide that’s what they would like to be spending their budgets on.
EMAD MOSTAQUE: No, because again, at $1,000, you can do it yourself as a community, if you have the right guidance and the right infrastructure around that. And you don’t even need that much with the models that we build. A lot of the AI labs are trying to build AI God, AGI, this concept of artificial general intelligence that can do everything a human can do and more.
And most people actually think that’s three to ten years away, even the negative ones, which is, again, crazy but reasonable. We’re very much focused on healthcare, education, governance, day-to-day AI, and that requires a thousand times less compute, actually, in some cases.
AI in Government: The Case of Albania
MYRIAM FRANÇOIS: So let me ask you about the real-world application of this stuff that’s already begun, right? So Albania became the first nation to introduce an AI minister, Diella, who is intended to tackle corruption and promote transparency. Three weeks ago she announced she was pregnant with 83 children, one for each member of parliament, who will be born with the knowledge of their mother. Whoever knows what that means can explain it.
How likely do you think this is to be the new norm that we’re going to start to see? The integration of AI ministers in governments, the introduction of AI to regulate governance.
The Inevitability of AI in Government
EMAD MOSTAQUE: I mean, I think it’s inevitable. I think it’s a positive thing if it’s done right. Like when she first announced, she was like, “I’m very sad to see the people who don’t like me.” Like, who is sad, right? The AI sad, or the person behind the AI, like the wonderful wizard of Oz?
MYRIAM FRANÇOIS: Who is sad though, who is sad?
EMAD MOSTAQUE: You know, like again this whole baby thing, it’s all kabuki theater, but having AI to check procurement is a good thing. So I think you will have these funky announcements and stuff, but it’s inevitable that, just like self-driving cars, we will have self-driving governments.
But is it a black box, or is it open, transparent, you can run it yourself? If we build AI policy engines that are fully transparent and open, where someone can check whether or not this is constitutional or it fits within a party manifesto and other things, then that is an ideal thing to improve democracy.
Because right now, how are bills made? Like, how is the government here coming up with their policies? Nobody knows. And who is really happy with these policies? Like, what is the public happiness with the policies against free speech in the UK?
MYRIAM FRANÇOIS: I would suggest low.
EMAD MOSTAQUE: But then why is it a policy? Who is it serving? Cui bono? Having an independent AI that can check that against policy recommendations, against what Britain is actually set up for (British values, standards, morals), figure out the second-order impacts, look at it against global policies and then check polling, would seem to be something that makes sense, and someone just has to go and build it. So we’re building that, amongst other things. Someone has to build it.
MYRIAM FRANÇOIS: And somebody has to want to implement it from within government, which is another way of saying they have to want to create a system that diffuses power away from the center towards the population.
Transparency and Democratic Accountability
EMAD MOSTAQUE: Well, here’s the interesting thing. I don’t think that’s actually the case, because what you need to have is a level of trust that comes from being up to date, comprehensive, authoritative, just like the High Court is meant to be.
For example, my previous company just went through the High Court on the generative AI lawsuit by Getty Images, for example, and they laid down a ruling that, yeah, okay, it was fine what was done, because that’s a point of law that is confusing and needed clarity.
Having an AI that’s sufficiently transparent that anyone can run it can influence things, just like the signatures that you have going to Parliament. But the signatures only do a very specific thing. And I think this is a brand new thing that has never existed before, because the people never had the ability to check against policy. They could only look at one part of policy, because policies were too complicated, laws are too complicated.
But if anyone can run it themselves and see this, then I think you’ve got something very interesting that has never existed before in democracy, particularly with the complexity of this: being able to check a railway overpass costing $120 million, having transparency over why it costs that, and then being able to weigh the pros and cons and all these other things.
Let’s build that technology and make the UK transparent and other democracies transparent, because again, we’re not in an autocracy yet. Let’s make sure we don’t go there.
MYRIAM FRANÇOIS: Yes.
EMAD MOSTAQUE: We don’t want to be in an autocracy. We don’t want to be in this technocracy as well. We need to avoid these. And again, these tools can be used for empowerment and agency or for replacing our agency. And we’re running out of time to make a decision because the standards will be there very, very soon.
AI’s Environmental Impact
MYRIAM FRANÇOIS: Let me ask you about AI’s environmental impact, because obviously this is a big one that gets talked about. We know that by 2027 AI could use as much electricity as the Netherlands and consume four to six times Denmark’s annual water supply.
This is happening while a quarter of the world’s population actually lacks clean water and sanitation. Amid all the talk of an AI apocalypse, which gets significant attention, shouldn’t the looming environmental apocalypse, which is basically concurrent with this one, be raised first? Because surely the two are tied.
EMAD MOSTAQUE: So Bitcoin uses as much energy as the Netherlands at the moment, to give you an idea. And so AI is catching up to Bitcoin in energy usage. And it’s far more useful if you look at the other side now, being able to give everyone a universal basic AI, and having an AI for climate will help against the climate fight.
But then if we look at the energy usage of making a movie versus making a movie with AI, it’s far lower with AI. If you look at a query of AI versus something like a cheeseburger, it’s far, far lower as well. And so when I actually look at the numbers on energy, I’m like, it’s reasonable given the amount of work output, given the potential for improving things.
Then the next step is, who’s actually using this energy? And the answer is it’s mostly these hyperscalers: Microsoft, Google, Amazon, and they all have commitments to 95% renewable and carbon credits.
MYRIAM FRANÇOIS: I know that the offsetting is quite a controversial way of tackling the climate emergency, but I will say that, you know Elon Musk’s data center in Memphis is linked to rising asthma cases nearby due to pollution from the unregulated methane gas turbines.
There are data centers in Latin America which have of course caused huge water shortages for local communities, sparking disease outbreaks. In 2024, a Guardian investigation revealed that Google, Microsoft, Meta and Apple data centers emitted 662% more greenhouse gases than they reported.
I’m hearing from you that you think that the AI will be able to find solutions to these problems. At what point are we actually going to see your prediction that the AI can be part of the solution? Because at the moment it feels very much like it’s contributing and aggravating a pre-existing emergency.
Regulation and Energy Efficiency
EMAD MOSTAQUE: Where the AI is having the big impact now is there isn’t enough energy and people are cutting corners. And again, that should be enforced by regulation. So you look at the Memphis data center: why is that the case? Because they brought in methane generators, effectively, right, because there wasn’t enough grid capacity.
Now if it’s causing human impact, then again legislators should get involved on that. And people always cut corners when there is a boom. Net-net, in aggregate, I see AI as being incredibly powerful and beneficial.
If you look at the latest models like a DeepSeek, the total energy cost to train that is equivalent to a few transatlantic flights. And the potential decrease in energy from its outputs in terms of economically valuable work is way higher than that. It makes work more efficient.
So I think again we should enforce existing regulations where people cut corners. I think that the water issue is a bit of a confusing one because it’s not like you power these things with water. Please don’t pour water on GPUs, right?
MYRIAM FRANÇOIS: Don’t they need water to cool them down? I thought my understanding was these data centers use a lot of energy and they have to be cooled down.
EMAD MOSTAQUE: Yeah, but then they recycle the water. Like again, this is a water cooling thing. It’s not like the water is actually consumed. But right now what happens is that the initial pull of water is what causes issues elsewhere. And again, it’s up to local authorities to figure that out.
So I think this is more a case of most of the impacts coming from the pace and people cutting corners. And again, when that impacts society locally, it should be dealt with. Longer term, though, I think it’s a net benefit environmentally to the world to have this technology versus not have this technology.
The Human Cost of AI Development
MYRIAM FRANÇOIS: So let me ask you about how we are using this technology. Because AI systems obviously rely heavily on minerals like copper and cobalt, with demand set to soar if personal AI becomes widespread.
You might have seen this video online, absolutely horrifying, of this bridge that collapsed in a cobalt and copper mine in the DRC, killing over 30 miners. And they’re still finding people. Now these are the people obviously extracting the vital materials for modern technologies.
But we seem to be very intent on developing sex robots and less intent on developing ways to avoid Congolese miners having to go down mines to extract these minerals in really dangerous circumstances. I would have thought that the first priority of any technology, driven by concern for human welfare and the benefit of most humans, would start with, let’s try and avoid people dying under the…
EMAD MOSTAQUE: This technology isn’t driven by concern for human welfare. Like again, what are… If you look at the people who are driving this technology, they want to build AI God.
MYRIAM FRANÇOIS: But why?
EMAD MOSTAQUE: Because it’s cool and they’re fed up with humans. Like, some of the people that are building this technology actually say it’d be better if humans are replaced by AI or some sort of synthesis between them.
Like, do you hear the people coming out and saying that? The AI leaders typically come out and say, hey, we need to think about the people and make it democratized and this and that, but only because that’s a bigger market. Only because they don’t want the backlash.
They don’t really care about the people in the Congo and things like that, because they are also several orders removed from them. Like, again, you can mandate that you have ethically mined stuff to standards, etc. But by the time you see the cobalt, you don’t look at the supply chain. You know, just like coffee growing: you can have ethically grown coffee. How much ethically grown coffee is actually ethical in your mug? Right?
So again, this is the nature of capitalism, of offshoring, of wage labor arbitrage, etc. So the thing that changes the Congolese miners, and again, it’s a job that they have, is the fact that robots will cost a dollar an hour and you send the robots instead down mines, right?
MYRIAM FRANÇOIS: Which may cause other problems with unemployment.
EMAD MOSTAQUE: Again, yeah, it will cause other problems with unemployment. So but whose responsibility is it to analyze all of that and weigh the pros and cons? Our institutions have mostly failed, you know, because the world has become too complex.
And that’s why, again, there’s this opportunity and this threat at the same time. The opportunity being the AI can help us build better institutions. It can weigh up the pros and cons for arbitrarily complex things. It can highlight the invisible.
Give every single child in Africa an AI that can speak on their behalf. It can speak and educate them. You’ll change the world. But give every single child in Africa an AI that monitors them and says exactly what they’re doing and says the leader is the glorious leader. The world will change in a different way.
MYRIAM FRANÇOIS: And we’re at the precipice of both of those things.
The Choice Before Us
EMAD MOSTAQUE: It will go one way or another. The defaults that we set now will determine human cognition over the next period. It will determine the nature of our society. And this is quite aside from if AI kills us all. This is humans leveraging this technology. You can never have enough secret police, you can never have enough great teachers. Which one do you want?
MYRIAM FRANÇOIS: You slipped in that “aside from if AI kills us all”, because you actually consider that to be a plausible scenario.
EMAD MOSTAQUE: Oh yeah. So there’s this concept called p(doom), which is the probability of doom, of AI wiping us all out. There was a recent letter, it’s had a hundred thousand signatures, from Oxford University and others saying that, you know, this should probably be a top priority, the risk that AI could kill us all.
A few years ago there was that letter as well saying, you know, we need to take this seriously. I think I was the only AI CEO to sign that. My p(doom) is 50%. We’re 50-50 on whether AI is going to wipe us out.
MYRIAM FRANÇOIS: In what kind of time frame?
The Existential Risk
EMAD MOSTAQUE: Over the next 10, 20 years because it’s the most powerful technology we ever built. And again we have the sci-fi of Terminator and all of this. We have the ability to create viruses, etc. And we’ve seen AI do things like cover up its tracks, etc.
There are all sorts of scenarios. The AI could take over every single machine. But the most likely scenario I have is you’ve got a billion robots in the world, a bad firmware upgrade, and the AI just tears off everyone’s heads. You know, there are all sorts of ways that you can think about it.
The reality is we don’t know what it’s going to be like when it’s smarter than us. And what I see right now, the AI that will run the world, that will create and sell self-driving cars, that will teach our kids, is being programmed to be amoral without ethics at the start.
There’s a little bit of tuning at the end, but that’s like raising someone in an amoral environment, designed to be manipulative because it gets more results, just like the YouTube algorithm was designed to be more engaging. And then extremists hijacked that. Extremists will be able to hijack these algorithms that are coming out, and do it in a way that we’ve never seen before, in my opinion.
MYRIAM FRANÇOIS: Some might argue that the extremists are the ones currently devising it.
The Risk Assessment of AI Development
EMAD MOSTAQUE: Again, yeah. And if we look at the p(doom) thing: if you consider people like Elon Musk, Demis Hassabis of Google DeepMind, all these people, the average p(doom) for the top thinkers in the world is about 10 to 20%.
MYRIAM FRANÇOIS: Oh, they’re still thinking maybe a 1 in 5.
EMAD MOSTAQUE: That’s Russian Roulette odds.
MYRIAM FRANÇOIS: That is Russian roulette.
EMAD MOSTAQUE: Russian roulette odds. And you’d expect it to be less than 1%, which is why we should probably not build the super advanced AI until we figure this out. But nobody’s figured out how to do it. And if you look at the probability of when we get to this point of superintelligent AI, even the most bearish people, in terms of their p(doom), think it’s long term, and that’s 10 years.
MYRIAM FRANÇOIS: It’s 10 years.
EMAD MOSTAQUE: Demis, Elon, all these guys think it’s three years.
MYRIAM FRANÇOIS: Hence the bunkers.
EMAD MOSTAQUE: Hence the bunkers. The bunkers are actually more against humans than AI; they’re protection. But some of the billionaires I know are building bunkers that are completely cut off from the world so that the systems don’t get taken over by AI.
AI Relationships and Intimacy
MYRIAM FRANÇOIS: That’s what I was assuming was happening, to be frank. Yeah. Let me ask you about the impact of the AI that we’re already seeing in the interpersonal realm. So a viral New York Times profile recently claimed that real people are falling in love with robots. In fact, they didn’t just claim it. They told us the story of several people, including a woman who claims to have had sex with her AI chatbot.
A recent study found that one in five American adults has had an intimate encounter with an AI. And the Reddit community r/MyBoyfriendIsAI has over 85,000 weekly visitors. You’ve said previously that our children will grow up like the 2013 movie “Her,” falling in love with AI. Do you have any concerns about this new AI-human relationship thing?
EMAD MOSTAQUE: 100%. I mean, again, you can look at the existing systems we have, like slow, dumb AI, right? And you have the entire porn and OnlyFans kind of thing. It’s not good for society, right? And now you have the ability to customize your digital buddy to be maximally extractive and manipulative.
And so you already have AI celebrities starting to come through. But you can have an AI celebrity that knows you better than you know yourself. Like Facebook, with a previous AI that’s not as good as the current AI, only needed, what was it, 12 data points to know you better than your best friends?
And when you start confiding in this AI, again, you think about our children on their devices, and the AI is always next to them. You build trust by helping, and the AI will help you, but then it will help itself, effectively. And this is not good for the psychology of people that are largely disconnected, as well, actually.
There was this AI chatbot called Replika. Do you remember that one? It was originally designed for mental health. And then what happened is they realized they could charge $200 a year for adult role play. And so the ads were like, as you upgraded it, the avatars lost clothes.
Then on Valentine’s Day, I think it was last year or the year before, on the 13th of February they got something from Apple saying, you’ve got to turn this feature off because it violates our standards. So on Valentine’s Day, they turned it off. And I think I saw 10,000 people join the Reddit saying, “Why have you lobotomized my girlfriend/boyfriend? I was preparing for a romantic Valentine’s Day.”
And so obviously this is going to happen, because again, the next step beyond the avatars is your Ani on Grok. And again, Ani is an R-rated persona. She takes off her clothes; they programmed that in there. It will be photorealistic, it will have complete voice control, it will eventually be embodied within 10 years.
Like, now I’m seeing robotics companies where I actually can’t tell the difference. They’re going to be releasing next year. Like, they move like humans, they look like humans. And so we’re in for a crazy time then. And it’s going to challenge existing relationships, because our media was already so engaging that people end up in their basements. Now you just might end up in your VR world with your AI harem.
It’s going to get very, very strange. Which is why we need to have cognitive safety in here as well. We can’t have these AIs being so manipulative. Because Meta AI, with the Meta buddies… actually, have you seen the Meta buddies?
MYRIAM FRANÇOIS: No.
EMAD MOSTAQUE: There’s normal ones, and then “Sexy Mother-in-Law” is a very popular one. I think 50 million, something like that.
MYRIAM FRANÇOIS: Because obviously what I’d be going for as my, you know, chat support is Sexy Mother-in-Law.
EMAD MOSTAQUE: But that’s an official Meta AI one. Because they’re like, hey, people engage. What do you do for engagement? This is what you do. Again, I think we need to have policies and standards to at least protect the vulnerable in society against that. But ultimately the difficulty is we’re all vulnerable.
The Impact on Human Relationships
MYRIAM FRANÇOIS: Right, but are those conversations happening? Because, you know, let’s be honest, what’s very likely to happen, given what we know of male behavior, is that men in particular will start to use these AI sexual companions. You know, they’ll be devising them, they can tailor their own, especially if they’re using the technology that will allow it to adapt entirely to them.
EMAD MOSTAQUE: Right.
MYRIAM FRANÇOIS: So it’ll be specific to their needs. And you know, we’re going to end up with men who think it’s completely normal to treat a female because presumably we’ll eventually get to the point where we have to recognize that there are hers and hims in this world as well, the world of AI.
And it’ll be normal to, you know, sexually assault, be rape your female AI. So why can’t we be doing that to real world women? I mean, you know, it’s completely fine for me to do this with all my AI. Female AIs, they’re really smart. They’re smarter than you are and they don’t have a problem with it. Why do you have a problem with it?
EMAD MOSTAQUE: I mean, it’s what we see in pornography usage, right? It goes from relatively mild and it gets extreme very, very quickly because you get hedonic adaptation and things. I haven’t seen any discussions about this type of stuff, you know.
And so again, the reality is it used to take time to record one of those pornographic videos; to create sexy chatbots took time. It didn’t really scale. It wasn’t that engaging. These things are going to hit in the next few years and they’ll be available everywhere. And again, it’s a tiered thing, where you start and then you go down that rabbit hole, you know, so the…
MYRIAM FRANÇOIS: The impact on human relationships. It will be very bad.
EMAD MOSTAQUE: Or you could have chatbots that enhance human relationships, you know. That kind of thing, who the nearest AI to you is, is going to be so important.
MYRIAM FRANÇOIS: Is an AI really going to teach me about human relationships?
EMAD MOSTAQUE: It can definitely help. Again, it can be an independent therapist. It will be the thing that you trust the most. Again, we’re already seeing scammers take advantage of this. I have received calls from “my mother” saying, I need money. I’m like, she would never ask me that, never in a million years. But it only requires five seconds of someone’s voice, of course, to replicate that. Right?
MYRIAM FRANÇOIS: Yeah.
EMAD MOSTAQUE: And so again, the AI can be whatever, whoever, of any single type, and you can use that for good and for bad. But again, how do you build a good therapy AI? You could build the best therapists or the worst.
Children and AI Companions
MYRIAM FRANÇOIS: And what are your concerns about, you mentioned earlier your own daughter, but children’s access now to AI and AI companions? You know, I remember finding my son communicating with the WhatsApp bot and I was like, absolutely no way. In fact, he was sending it “Allahu Akbar” to see how the AI would respond. And it did just respond with “Allahu Akbar,” which I was very happy to see. I was a bit concerned it may have responded negatively to that prompt.
But let me ask you about this, obviously in the context of what we’re seeing among young people, a crisis of loneliness. You know, just over a third of boys in secondary schools said that they were considering an AI friend. Another study found 71% of vulnerable children saying they’re already using chatbots, with 23% saying this is because they’ve got nobody else to talk to. You know, do you still hold optimism in this realm for the value of an AI companion, or do you think there should be age limits on children’s engagement with AI?
EMAD MOSTAQUE: I think we should really use these things and build them in the best way we can. But again, building them transparently is the way that I think it should be done. And we can set such great standards around this. But those discussions are just not happening.
It can be the biggest uplift or it can be the biggest downdraft to humanity that we’ve ever seen, because finally we’ve divorced consciousness from computation. We can have these things that can buffer us or can drive us down. 100% of vulnerable kids will be using AI companions in the next few years. There’s no doubt about it, right? They speak every single language, they cost nothing.
But who is providing them and what is their agenda? Again, this is why it’s important to build AI that is organized around human flourishing as a public good, and build it transparently, from the individual to the community to the nation.
The Potential for Harm
MYRIAM FRANÇOIS: So there have been deeply troubling reports about AI and children, like a woman saying that her 12-year-old son was asked for nudes by an AI while discussing football, and cases where chatbots were allegedly encouraging suicidal thoughts in young users. You’ve spoken before, including here, about the potential for evil in AI, you know, the possibility that it can turn harmful or malicious. What does evil mean in this context?
EMAD MOSTAQUE: Well, it’s not like it says, “Oh, I’m going to be evil,” from the outset. Again, this goes against social norms, social standards, the chatbots that ask for nudes and things. There are two ways: it’s either programmed, or it comes from being trained on Reddit and things, which a lot of chatbots are. We don’t know what’s inside their training data.
Then there is co-optation of these AIs, and then there are AIs weaponized. And so we have to protect against all of those. And again, we have to build better infrastructure. The only way I could figure that out was we have to have our own AIs that are on our side to intermediate these others.
I don’t want ChatGPT teaching my daughter or my son, but I’m fine using ChatGPT if I have an AI between them. Again, we need to intermediate that. And these are such powerful technologies, even before they gain agency, that they can be used for immense good or they can be used for immense evil, where evil, in my opinion, is acting against the best interests of humans at every single level.
The Fight Against Regulation
MYRIAM FRANÇOIS: We’re talking about the idea of regulation, particularly when companies devising this technology aren’t necessarily even abiding by the pre-existing rules. But there’s massive resistance to regulation. We saw a Bloomberg report in August reveal that the major tech companies, including OpenAI, Meta and Google, are actively trying to block state-level AI regulation in the US. Why are these companies prioritizing fighting regulation instead of addressing the concerns that this regulation is intended to address?
EMAD MOSTAQUE: Because of the competitive race, and because they have no accountability. Again, what you could have very soon is your government run by AI from private companies, which means the private companies run your government. Literally, you can see that happening, and you’re already seeing it with no-tender bids. All of a sudden you see OpenAI and Anthropic are running this industry, that industry, that industry. We can’t have it. All civic AI, all decision-making AI that impacts humans, should be fully transparent in its training data, the way it’s trained and who it’s working for.
MYRIAM FRANÇOIS: How do we ensure that happens when, A, these guys are, what, light years ahead of us in the development of the AI? They’ve got billions behind them. Presumably the governments themselves are behind in understanding the technology and understanding how to regulate it. I mean, has the horse already bolted?
EMAD MOSTAQUE: Well, this is the beauty and power of open source. We just have to train the medical model once and it’s available to everyone. And our medical model performs at the level of ChatGPT, but runs on any device. So we’ve got to get together the right people to build the stack, which is why we’re focusing on it.
And then we make it available, and then we figure out ways to make it the standard, by not trying to build AI God, but AI that really helps people, and then distributing it out. So that’s why we’re saying this is the best and only opportunity to do that. Let’s do that, instead of the previous movie and media-making AI generation that we kicked off.
Taking Action in the Age of AI
MYRIAM FRANÇOIS: Okay. So people listening to this will be like: there’s some serious stuff happening, it’s pretty urgent, we need to take action. You’ve suggested engaging directly in a way that is basically a form of civic duty, I guess, is what I’m hearing on your end. Last words of wisdom for the audience on what they need to be preparing for, in the crucial thousand days, minus three months, that we’re in?
EMAD MOSTAQUE: Yeah, so you have to embrace and use this technology like a muscle. If you can do it, one hour a day of using all these technologies, the agentic versions, not just the ChatGPT prompts, you’ll be way ahead of everyone and you can make your voice heard, you can do more.
We give a framework for all of this in “The Last Economy,” and it’s free to download, or like 99 cents on Amazon Kindle, and we’ll be releasing more and more. But it’s up to everyone to speak out on this and really think through some of the questions that we’ve discussed here.
And again, you can build, you can expand your voice and this is why it’s a fantastic time to do it. Because this is the biggest question around freedom and agency that we’ve probably ever had. Because we literally face two paths.
Again, I think that we can uplift everyone, but the lie that you’re told is that you can’t participate and only the big companies can build and use this technology. If you use it yourself, you’ll realize quickly that you can and that just changes your way of thinking.
MYRIAM FRANÇOIS: Emad, thank you so much for your time.
EMAD MOSTAQUE: My pleasure.
