Transcript: Karen Hao on Sam Altman, OpenAI & the Quasi-Religious Push for AI

Read the full transcript of journalist Karen Hao’s interview on Democracy Now! with host Amy Goodman, “Sam Altman, OpenAI & the ‘Quasi-Religious’ Push for Artificial Intelligence,” July 4, 2025. Karen Hao is the author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

AMY GOODMAN: This is Democracy Now!, democracynow.org, the War and Peace Report. I’m Amy Goodman. In this holiday special, we continue with the journalist Karen Hao, author of the new book “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” She came into our studio in May. She talked about how AI will impact workers.

The Impact of AI on Jobs and Workers

KAREN HAO: One of the things that we have seen is this technology is already having a huge impact on jobs. Not necessarily because the technology itself is really capable of replacing jobs, but it is perceived as capable enough that executives are laying off workers.

We need more guardrails to actually prevent these companies from continuing to develop labor-automating technologies, and to shift them toward producing labor-assistive technologies.

AMY GOODMAN: What do you mean?

KAREN HAO: So OpenAI, their definition of what they call artificial general intelligence is “highly autonomous systems that outperform humans in most economically valuable work.” So they explicitly state that they are trying to automate jobs away. I mean, what is economically valuable work but the things that people do to get paid?

But there’s this really great book called “Power and Progress” by MIT economists Daron Acemoglu and Simon Johnson, who point out that technology revolutions take a labor-automating approach not because of inevitability, but because the people at the top choose to automate those jobs away. They choose to design the technology so that they can sell it to executives and say, “You can shrink your costs by laying off all these workers and using our AI services instead.”

But in the past we’ve seen studies that, for example, suggest that if you develop an AI tool that a doctor uses rather than replacing the doctor, you will actually get better health care for patients. You will get better cancer diagnoses. If you develop an AI tool that teachers can use rather than just an AI tutor that replaces the teacher, your kids will get better educational outcomes. And so that’s what I mean by labor-assistive rather than labor-automating.

Understanding AI Replacement

AMY GOODMAN: And explain what you mean, because I think a lot of people don’t even understand artificial intelligence. And when you say replace, what are you talking about?

KAREN HAO: Right? So these companies, they try to develop a technology that they position as an everything machine that can do anything. And so they will try to say, “You can use this, you can talk to ChatGPT for therapy.” No, you cannot. ChatGPT is not a licensed therapist.

And in fact, these models actually spew lots of medical misinformation. And there have been lots of examples of actually users being psychologically harmed by the model because the model will continue to reinforce self-harming behaviors. We’ve even had cases where children who speak to chatbots and develop huge emotional relationships with these chatbots have actually killed themselves after using these chatbot systems.

But that’s what I mean. When these companies are trying to develop labor automating tools, they’re positioning it as, “You can now hire this tool instead of hire a worker.”

Sam Altman and the Formation of OpenAI

AMY GOODMAN: So you’ve talked about Sam Altman, and in part one, we touched on who he is. But I’d like you to go more deeply into who Sam Altman is, how he exploded onto the US scene, testifying before Congress, actually warning about the dangers of AI. So that really protected him in a way. People seeing him as a prophet. That’s a P-R-O-P-H-E-T. But now we can talk about the other kind of prophet, P-R-O-F-I-T and how OpenAI was formed. How is OpenAI different from AI?

KAREN HAO: OpenAI is… I mean, it was originally founded as a nonprofit, as I mentioned. And Altman specifically, when he was thinking about, “How do I make a fundamental AI research lab that is going to make a big splash,” he chose to make it a nonprofit, because he identified that he could not compete on capital, and he was relatively late to the game: Google already had a monopoly on a lot of top AI research talent at the time.

If he could not compete on capital and he could not compete in terms of being a first mover, he needed some other kind of ingredient there to really recruit talent, recruit public goodwill, and establish a name for OpenAI. So he identified a mission. He identified, “Let me make this a nonprofit and let me give it a really compelling mission.”

So the mission of OpenAI is to ensure artificial general intelligence benefits all of humanity. And one of the quotes that I open my book with is this quote that Sam Altman cited himself in 2013 in his blog. He was an avid blogger back in the day, talking about his learnings on business and strategy and Silicon Valley startup life. And the quote is, “Successful people build companies, more successful people build countries. The most successful people build religions.” And then he reflects on that quote in his blog saying, “It appears to me that the best way to build a religion is actually to build a company.”

The Quasi-Religious Nature of AGI

AMY GOODMAN: And so talk about how Altman was then forced out of the company and then came back. And also, I just found it so fascinating that you were able to speak with so many OpenAI workers. You thought there was a kind of total ban.

KAREN HAO: Yeah, exactly. So I was the first journalist to profile OpenAI. I embedded within the company for three days in 2019, and then my profile published in 2020 for MIT Technology Review.