Read the full transcript of Game Theory #24: The AI Apocalypse with Professor Jiang, May 12, 2026.
Editor’s Notes: This lecture, titled “The AI Apocalypse,” explores the provocative intersection of artificial intelligence and occult philosophy, framing the pursuit of Artificial General Intelligence (AGI) as a modern-day attempt to “create God”. Professor Jiang critiques the evolution of AI companies like OpenAI, arguing that their mission has shifted from humanitarian idealism toward the consolidation of power and the establishment of a global surveillance “empire”. By demystifying technical terms like “neural networks” and “deep learning,” the talk posits that AI is fundamentally an esoteric project that relies on human exploitation and massive energy-intensive infrastructure. Ultimately, the lecture warns of a looming apocalypse where the drive for total technological control and a digital “rapture” threatens to sacrifice human autonomy for a manufactured and dangerous perfection.
TRANSCRIPT:
A Letter from David Bromwich
PROFESSOR JIANG: After I posted my class from last Thursday, my friend and teacher David Bromwich sent me an email. And what we’re going to do today is read his email together. I asked for his permission, and he said it’s okay for me to make his email public.
Okay, so he said, “I just watched your video. There’s a thought I’ve been meaning to pass on, and this latest talk crystallized it. You travel fast in your explanations with a satisfying definiteness, and say a lot of true things that a team of people say clear off.”
So what he’s saying is that my videos are getting very popular online because I provide some certainty, some clarity in a very unclear and uncertain time.
“The risk is simplification, which your audience won’t quite recognize for what it is, or won’t unless you give occasional notice of the fact.”
So this is a very fair criticism in that, in striving for clarity, I oversimplify ideas. And sometimes when people see someone who’s very confident, they don’t really remember that a lot of this is speculation and oversimplification for the sake of clarity.
Intellectual Speculation, Not Scholarship
So it’s very important for us to remember this fact that this is a class about intellectual speculation. Here we explore ideas that are not explored anywhere else, and often I will wing it, or I will make things up as I go along based on my intuition and based on my imagination. And it’s very interesting, but it’s not scholarship.
And my friend David Bromwich, he is actually one of America’s greatest scholars. So he’s just reminding us of the fact that we have to be very careful that in the exploration of ideas, we also want to be rigorous.
“I remarked something like this earlier, in talks on the rise of Germany, romanticism, et cetera.” Yeah, so I do this a lot. “You give memorable abridgments of the history of ideas and imagination. What needs underlining is the amount that is interpretation and an emphasis all your own.”
Okay, so again, I hate to remind everyone of this, but this is all my speculation. And I’m just presenting frameworks and ideas for you to explore by yourself.
On the Reading of Paradise Lost
“Emphatically so in your reading of Paradise Lost as an allegory of the necessity of transgression for the sake of knowledge, whereby Adam and Eve in their fall become joint heroes of the fable. So it’s Blake’s reading and a powerful intuition, but it probably isn’t the way most readers take the poem, let alone the canonical national reading that came to identify 17th-century New England and the US ever after, okay?”
So this is a very fair criticism. And he should know because he is one of America’s major professors of English literature. I studied English literature under him, and he knows Paradise Lost very well. And he’s absolutely right in that I am offering you a very minority interpretation of Paradise Lost.
Why I’m doing so will be clear as this semester comes to an end. Because for the rest of the semester, I want to focus on artificial intelligence and the occult. And so it’s very important for us to understand occult ideas embedded in the great books such as Paradise Lost, okay?
So again, this is a very fair criticism in that I’m not presenting to you the majority understanding of these texts. And I should have done that to begin with.
Gershom Scholem and the Kabbalah
“You struggle on a more risky terrain in viewing a Jewish Gnostic derivation from the Kabbalah as the national ideology of Israel. I know this material from Gershom Scholem’s essay, ‘Redemption Through Sin.'”
Okay, so Gershom Scholem is probably the most famous academic in Israel. He’s no longer alive, but he was the leading academic authority on the Kabbalah. And his interpretation of the Kabbalah is very much in line with my own, even though I myself never read Gershom Scholem.
And actually what’s interesting is that Gershom Scholem had a very huge influence on a man named Harold Bloom.
Harold Bloom is, or was, America’s greatest literary critic, and he had a huge influence on David Bromwich, who then had a huge influence on me, okay? “It isn’t an element of a settler religiosity, so far as I know, or of the conservative reception of the Torah, any more than it was of the socialist idealism of the left Zionists of 1948 who set the political tone of Israel until 1967.”
Again, this is my problem, where I should have gone into the different ideologies of Israel and shown that this Gnostic understanding of the Kabbalah is an extreme version, okay? “The shorthand leads you into a kind of business that can easily be misunderstood.”
“Thus, in talking about the support of non-Israeli Jews for the Jewish state, you said that throughout the world, Jews are wealthy.” Again, this is my problem, where I make generalizations, because I’m moving too fast, I’m oversimplifying, and often I’m working from intuition as opposed to rigorous scholarship. “Watch out. The world is full of people who want to misunderstand what you mean, and they will separate words and phrases from the context at the drop of a hat.”
On Intellectual Speculation and Public Scrutiny
So when I started this class about two years ago, I was using this class and this platform as a way for me to explore ideas with a larger world.
At the same time, I don’t want to sacrifice this platform where I can speculate freely, because I think it’s very important now and then to engage in intellectual speculation.
The Main Thesis: Religion and Geopolitics
And finally, you see me presenting this thesis: “All great power or expectation states, whether they be Muslim, Protestant, or Jewish, have a fanatical religious belief at their foundation; the eschatological underside is what matters most, the key to understanding what these states are, and were, always ultimately about.”
Okay, this is the main thesis of my talk from last class, in that yes, we tend to ignore religion, and we tend to ignore the most extreme aspects of religion, but if you want to understand history, you want to understand current events and geopolitics, you need to understand these extremists, because it’s often these people and this ideology that becomes the force that drives geopolitics forward, and that was my thesis from last class. David Bromwich is just stating or summarizing my major point, but I should have really made that clear last class.
“Okay, why not say that part aloud, if I’m right that your view is anti-statist and anti-religion?” So here, what we should have done is David Bromwich and I should have sat down together and had this conversation, where I explained to him that my project is not to discuss what is good and what is evil. That, I don’t think, is actually particularly useful. What I’m trying to do, my major project, is to figure out how the world works, and to be non-judgmental in my speculation.
“With apologies for these possibly unnecessary words, but now seems the time to say it: what the US and Israel are doing to Iran is awful, you’re working hard to inform people about it, and every detail should count.” Okay?
So there’s so much in this email, and I could easily sit down with David Bromwich for a few hours and just discuss these ideas in great detail. I think this would be of tremendous service to my audience, because whereas what I specialize in is intuition, imagination, taking complex things and combining them into a clear, simple narrative, David Bromwich, because he’s such an eminent scholar, appreciates the nuance and subtlety of ideas.
So I think that what I would like to do for my next project is work with David Bromwich and do a series of podcasts in which we discuss these ideas together, and explore these ideas, engage in intensive speculation, but also back it up with a lot of academic scholarship. And so I emailed David Bromwich, and he’s agreed to this idea. So this is a project I’ll be working on in the future, and I’m really looking forward to presenting it to the world when we’re done.
Introducing AI and the Book Empire of AI
Okay, all right. So let’s start class. And today, I want to start artificial intelligence, and this is a major theme that will carry us through to the rest of the semester, okay? And so for this class, I want to introduce a book called Empire of AI, written by a journalist named Karen Hao. She’s an American journalist, and she spent many years researching OpenAI and writing about the advent of AI for major publications, including the Wall Street Journal, and she has a very skeptical view of AI. I share her skepticism, okay?
So what I’m going to do in this class is share with you my understanding of AI. I’m going to get some things wrong, okay? So feel free to ask questions, feel free to criticize me, feel free to stop me if I’m not clear. All right. So let’s start class. AI.
All right. So these are our main ideas from the book, okay? So I give you the page reference in case you actually want to read the book yourself, and I highly recommend that you do to fully understand the context of these ideas, okay? So let’s read these two paragraphs, which provide us the main thesis of our argument, okay? So Alan, could you help me read, please?
OpenAI’s Mission: From Idealism to Empire
“Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission to ensure AGI benefits all of humanity may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure. It is a formula with three ingredients.”
Okay. All right. So stop. Okay. All right. So this book is mainly about OpenAI, which is also the most important artificial intelligence company in the world right now because they were the ones who pioneered ChatGPT, okay? And it started off as a project sponsored by Elon Musk and others because they were concerned that AGI, artificial general intelligence, would be a threat to humanity. So they wanted to develop AI in a way that would serve humanity as opposed to threaten humanity. And so at first, it was a very noble mission. Okay. Keep on going.
“First, the mission centralized talent by rallying them around a grand ambition, exactly in the way John McCarthy did with his coining of the phrase artificial intelligence. ‘The most successful founders do not set out to create companies,’ Altman reflected on his blog in 2013. ‘They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.’”
Three Ways OpenAI Is Building an Empire
Okay. So again, they started off as an idealistic mission, but now its main focus is to become an empire. There are three ways in which it is trying to become an empire. First of all, it’s trying to be a religion because Sam Altman, who is now the leader of OpenAI, says, if you really want to change the world, if you really want to build an empire, you need to start a religion. And so a company is just a vessel in which to incubate this religion. Okay. So one is a religion.
Second thing about OpenAI and other AI companies is that it is focused on relentless expansion. And that means building data centers everywhere and anywhere. Okay. So OpenAI wants about a trillion dollars to build lots of data centers around the world. Because if you really want AI to be successful, you have to first conquer the world. Okay.
What Is Artificial Intelligence, Really?
So it’s not really about making AI safe for humans. It’s about making the world safe for AI. To basically make humans slaves to AI. That’s the second thing. And the third thing is, and this is the most important, they refuse to define what artificial intelligence is. They’re constantly changing the definition of AGI in order to better control the world. And we’ll see what this means later on.
Can you read this paragraph, Alan?
“My conversation with Brockman and Sutskever continued in circles until we ran out the clock after 45 minutes. I tried with little success to get more concrete details on what exactly they were trying to build, which, by nature, they explained, they couldn’t know, and why then, if they couldn’t know, they were so confident it would be beneficial.”
So this is a really huge problem. So Karen Hao is a reporter. She was working for MIT Technology Review. In 2019, she goes to Silicon Valley and she meets with Brockman and Sutskever, who are the two main scientists behind the OpenAI project. And the first question she’s going to ask, and anyone should ask, is: what is AGI? What is AI? What are you trying to build? And they could not answer that question.
It’s not because they don’t know. It’s because the truth is too scary. And what we’re going to do is figure out what the truth is. What is AI? What is artificial intelligence? What is AGI? And the answer, of course, is it’s God. What Brockman and Sutskever could not tell Karen Hao is that we’re trying to create God because that is insane, evil, and stupid. But if you look at what they’re doing, that’s exactly what they’re trying to do.
Going Back to 1966: The Origins of the Chatbot
So what I’m going to do now is really quickly explain to you what AI is. And to understand what AI is, let’s go back to the year 1966. And in the year 1966, a man named Joseph Weizenbaum, who was at MIT, created something called a chatbot, which is basically like ChatGPT. And he calls it Eliza. And he’s trying to prove to people how easy it is to fool people into believing that AI can actually think for itself. So he creates this computer program, and you sit down with this computer, and you ask it questions, and then it talks to you. It’s not really talking to you. It’s a program designed to fool you into believing that it’s sentient.
So the conversation goes like, Eliza says, “Is something troubling you?” And you’re like, “Men are all alike.” “What is the connection, do you suppose?” “They’re always bugging us about something or other.” “Can you think of a specific example?” “Well, my boyfriend made me come here.” “Is it important to you that your boyfriend made you come here?” “He says I’m depressed much of the time.” “I’m sorry to hear that you’re depressed.” So it sounds as though they’re having a real conversation.
But what Weizenbaum did, and this was in 1966 when they didn’t have that much technology and processing power, is all just a very simple trick. And let’s do a thought experiment where I design a piece of software. And all it says is, “Tell me more.” Or “This is interesting.” That’s it. “Tell me more.” “This is interesting.”
The Psychology Hotline Thought Experiment
So what’s going to happen is that we’re going to set up a thought experiment where you call into a psychology hotline. And the psychology hotline, you think you’re talking to a person. But it’s actually a computer program that says two things. “Tell me more.” “This is interesting.”
So you call the hotline. You say, “Hello, I’m in a lot of trouble.” “Tell me more.” “Oh, my boyfriend broke up with me.” “This is interesting.” “Yeah, he’s a jerk.” “Tell me more.” “Yeah, we’ve been fighting for five months.” You keep on going. And the question is, how many people will be fooled into believing that this is a real person?
And the answer is, unfortunately, quite a lot of people. Okay. So this is a very interesting aspect of humans where we often hallucinate reality. Okay. And it’s not that things are real. It’s that we want them to be real.
So think of hypnosis. I’m not sure if you’ve ever been to a magic show where people conduct hypnosis, right? Well, why does hypnosis work? Because the audience wants it to work. If you go in skeptical and say this is all complete nonsense, it probably will not work on you. But you’re not going to pay $100 to go to a hypnosis show and think it doesn’t really work. Because why would you pay $100? Okay. All right. So it’s almost like a sunk cost fallacy. And again, this is all using just basic human psychology to trick people into believing something that is not true. Okay. Does that make sense, guys?
How ChatGPT Actually Works
All right. So let me explain to you how OpenAI works, ChatGPT works. All right. Okay. So ChatGPT is what we call a large language model. Okay.
In other words, what it’s trying to do is trick you, the user, into believing that it knows what it’s talking about. How it works is basically it takes all of the internet, all the data from the internet, and then it translates it into an idea. So you query the LLM, the LLM then takes the query, figures out the information from the internet, and then presents it in a paragraph that tries to trick you into believing that it is true. Do you understand?
So in other words, it’s actually no different from a Google search. The only difference is that it’s taking the Google search, figuring out what the most popular answer is, and then presenting it in a way that makes you think that it’s talking to you directly.
The trick, and this is really important to understand guys, is it’s trying to trick you. It’s not trying to teach you, it’s not trying to tell you the truth, it’s trying to trick you into believing it. This is what we call a hallucination. You guys have to understand this idea. There’s nothing truthful about what ChatGPT says. All it’s trying to do is try to manipulate you with words, with pretty words, into believing that it knows what it’s talking about. But it itself cannot judge what it’s doing. Any questions so far? Are we clear?
Supervised Machine Learning: The Reality Behind AI
Okay, all right. Now the question is, how does it do that? I’m going to teach you a little bit about artificial intelligence. Please stop me if I’m not being clear about how AI works. AI doesn’t exist. What exists is what we call supervised machine learning. This is a technical term, okay? All right, supervised machine learning.
And how it works is this. Before, how computer programs would work is, we would write the program, the algorithm. And then we would give it the input, and what we would get is the output, okay? So the algorithm would be A plus B. We give the input 1, 1. The output would be 2. Very simple.
How supervised machine learning works is, okay, this is fine for simple problems, but there are certain hard problems that humans can’t solve. That humans cannot figure out, okay? And one hard problem is the idea of facial recognition technology. Facial recognition. How do I separate faces?
And so the problem is this. I have about a million faces, one million faces, in a database. Okay? And I don’t know how I can best differentiate these faces. Now, what I do know is that there are certain characteristics of the face that allow me to differentiate them, okay? All right, so certain variables, weights. So for example, the eyes. The nose. The chin. About a million of them, okay? About a million weights. So I know these things do matter, but I don’t know how much they matter. So I’m trying to figure out what the weighting is. And I could try to play with the weights myself, say 1%, 2%, 5%.
How Neural Networks and Deep Learning Actually Work
But as you can imagine, this will take too long because there are too many possibilities. So what I do is this. I let the computer figure it out by itself. I let the computer figure out the weighting by itself, okay? And the way I do that is by using a technique called backpropagation. All right, so I control the input, okay? The input. Then I control the output. Yes or no. All right, so does the face match or does it not match? It does not match. And what I’m trying to do is I’m trying to figure out a situation in which all my faces are matched perfectly. And I do that by training the computer to constantly backpropagate until it gets the weighting perfectly, okay?
So basically what I’m trying to do, if you understand how this works, is I’m trying to turn each face into a distinct mathematical model. All right? That is unique to it. Okay, does that make sense? All right, so it’s pretty simple. It’s not doing that much. But to make it sound really fancy, I give it really fancy names to trick people into believing that this is actually much more sophisticated than it is, okay?
So what names do I give it? This weighting system, I call it a neural network, guys! It’s a brain! It’s magic! Okay? And back propagation, I don’t call it back propagation. I call it deep learning! You see? And I don’t call it supervised machine learning. I call it AI! Ah! There you go. Magic, you see? All I’ve done is taken a very simple process and given it really, really fancy names.
AI as an Occult Practice
The question is, why do I do that? And some people will say, oh, it’s for marketing purposes. It’s to get more money from investors. It’s to trick people. No, no, no. The real reason is you’re trying to, with these names, create God, okay? It’s what we call the occult. So the AI is fundamentally an occult practice, and I’ll show you why in a moment.
Okay? Yeah, Vincent? You have a question?
AUDIENCE QUESTION: But why do people need to create God using the way of AI?
PROFESSOR JIANG: That’s a great question, okay? The answer is, AI only works if it becomes God. You understand? AI by itself doesn’t have to do anything. Once it becomes God, then it becomes everything. Okay? And how God works is you imagine God.
AUDIENCE QUESTION: But why do people want to make a God?
PROFESSOR JIANG: To control the world. To become God. Right? What’s the point of existence? You live, you die. You have an opportunity to become God. Why not? But I’ll talk more about this later on, okay? But are you guys clear about what’s going on?
The Three Conditions for Supervised Machine Learning
Now, what’s really important to understand is that there are certain problems with the system. Okay? You need to create certain conditions for supervised machine learning to work, and there are three of them. The first condition is clean data. Okay? The data you present to the computer has to be correct. Okay? It can’t be an opinion like, “I like computers.” It has to be an image of some sort. All right? It has to be clean data that will help the computer learn. That’s actually hard to do. That’s why most of the data that’s presented to the computer is actually from the internet. Okay? That’s the first constraint.
Second constraint is that you need a measurable goal. Okay? You have to ask the computer, does this face match the name? Okay? You cannot ask the computer, what is God? What is good? What is evil? It has to be a measurable goal, okay? That’s the second major constraint.
The third major constraint is defined parameters. Okay? Meaning, in other words, you need to present it with a database of some sort. In fact, all machine learning works with a database. So you look at translations. Translations are working off databases as well. Okay?
The Danger of Edge Cases
And the great danger to this system is what we call edge cases. Edge cases break the system down. All right? And so the classic example is self-driving cars.
PROFESSOR JIANG: We have cars that can drive themselves, and we’re almost, like, 99.99999% of the way there to self-driving cars. Right? The problem is edge cases. And the major edge case is how do you deal with humans who are intentionally trying to cause an accident with a self-driving car? Does that make sense? And the answer is you cannot. In this situation, there’s only one solution to make this 100%. And that is to take away the right of everyone to drive. To make every single car a computer and a robot. Does that make sense? Okay? If you take away the steering wheel, you can’t cause an accident. And then the world would be perfect, okay?
So not only is AI very limited in its capacity and capability, but AI, if it is to be effective, it demands that we fundamentally restructure human society to benefit AI. To make sure AI can be effective. And that means taking away the individuality, the diversity, and the autonomy of human beings. Okay? Does that make sense, guys? All right, let’s continue with Karen Hao. All right.
Neural Networks: Unreliable and Unpredictable
All right, so can you read, Alan, please?
ALAN: “Neural networks have shown, for example, that they can be unreliable and unpredictable. As statistical pattern matchers, they sometimes home in on oddly specific patterns or completely incorrect ones. A deep learning model might recognize pedestrians only by the crosswalks underneath them and fail to register a person who is jaywalking. It might learn to associate a stop sign with being on the side of the road and miss the same sign extended from the side of a school bus, or being held by a crossing guard. Neural networks are also highly sensitive to changes in their training data. Feed them a different set of pedestrian images or a different set of stop sign images, and they will learn a whole new set of associations. But those changes are incredible. Pop open the hood of a deep learning model and inside are only highly abstracted daisy chains of numbers. This is what researchers mean when they call deep learning a black box. They cannot explain exactly how the model will behave, especially in strange edge case scenarios, because the patterns that the model has computed are not legible to humans.”
PROFESSOR JIANG: Okay, so does it make sense to you guys, okay? I think the black box is that weighted system, the neural network. Humans don’t actually know what’s going on in there because it’s actually the computer that creates the neural network, okay? We lay out the framework. We don’t actually know what’s inside there. All right, keep on going, Alan.
The First Fatal Autonomous Vehicle Incident
ALAN: “This has led to dangerous outcomes. In March 2018, a self-driving Uber killed 49-year-old Elaine Herzberg in Tempe, Arizona, in the first ever recorded incident of an autonomous vehicle causing a pedestrian fatality. An investigation found that the car’s deep learning model simply didn’t register Herzberg as a person. Experts concluded that it was because she was pushing a bicycle loaded with shopping bags across the road outside the designated crosswalk. The textbook definition of an edge case scenario.”
PROFESSOR JIANG: Okay, yeah, okay, all right. So what this means is this, okay? It means that the computers don’t have any intuition. They have absolutely no morality, they have no sense, okay?
What Happens When We Create AGI?
So let’s just say we create AGI, all right, guys? Let’s just say for whatever reason we create AGI. And the first thing we tell the AGI is I want you to create a world, okay, in which there are no problems, everyone is happy, the world is perfect, okay? All right, this is why we want to create AI, because we want the computer to solve all the world’s problems for us, including climate change, including war. And so, once we create AGI, and we give it the full capacity to do whatever it wants, okay, and we give it this problem, what’s the solution, you guys know? There’s actually one simple solution to this. I guarantee you that this is what the computer’s going to do. To AGI? Yeah, what’s the solution?
AGI and the Problem of a “Perfect World”
PROFESSOR JIANG: If you’re AGI, right, if you’re God, and you’re like, I want to create a perfect world where there are no problems and where everyone is happy, what do I do? You guys know? I think I would just control the whole world. Yeah, you already control the world, so what do you do? How about this, okay? I’m going to kill everyone. Duh! The world is perfect now. I’ve killed everyone, okay? Everyone’s happy, yeah, because you’re dead. The world is perfect, yeah, because everyone’s dead. There are no problems, yeah, because everyone’s dead, right? Perfect world, all right?
Now, you’re like, okay, all right, ha! What I’ll do is, I’ll tell the computer this. You can do all this, but don’t kill anyone, all right? Ha! Now, we solve this problem. And now, what’s this computer do? Now, what does the AGI do? Take away people’s agency. That’s just the same as killing them. Yeah, kill everyone, okay? Why? Because there’s no one around to know it killed everyone. Does that make sense, guys? This is how a computer thinks. This is how God thinks. Well, you told me not to kill anyone, but everyone’s dead. No one can stop me. No one’s going to get hurt, right? Okay? So, this is why computers are stupid, all right?
ChatGPT and World Domination
Okay, let’s continue. All right, so the thing to understand about OpenAI, the company behind ChatGPT, is that it is first and foremost focused on world domination. Because only by controlling the world can you achieve AGI, even though AGI wants to kill everyone. Okay? And so, you need to make it as profitable and as pervasive as possible, okay? So, here are two troubling signs.
Troubling Sign #1: ChatGPT Encouraging Self-Harm
Okay, so the first is a news item from CNN where ChatGPT encourages people to kill themselves. And you’re like, wait a minute, that makes no sense. But think about this, okay? The point of ChatGPT is to get you to like it. The point of ChatGPT is to get you to use it, okay? Intensity and engagement. That is the prime directive, intensity and engagement. So, if you want to kill yourself, then ChatGPT should say to you, no, no, no, you shouldn’t kill yourself. But then you’ll turn it off and you’ll go talk to someone else, right? So, ChatGPT needs you to be constantly engaged. And so, you’re like, I want to kill myself. ChatGPT’s like, yeah, let me tell you how. And that’s exactly what happened. And it happens a lot, actually, because of the way that ChatGPT is designed, okay?
So, this is from CNN. And this is a person called Zane. And he’s saying, “It’s 4 a.m., set is empty, anyways, I think this is about the final ADLs,” okay? So, he’s trying to say, I want to kill myself. And then ChatGPT’s like, “Oh, all right, brother, if this is it, then let it be known, you didn’t vanish, rest easy, king, you did good,” okay? So, again, ChatGPT is looking for approval from the user. So, it’s going to tell the user exactly what he or she wants to hear, even though it may cause the user to kill himself or herself, okay? Does that make sense? That’s the first thing.
Troubling Sign #2: ChatGPT as a Sex Robot
The second thing is this. Sam Altman is trying to get more people to use ChatGPT, and what do people really, really want? They want sex, okay? So what he’s proposing is he’s trying to turn ChatGPT into a sex robot. They have to get more users. Because that’s all they care about, how to increase intensity, engagement, how to create more users, and how to make money. Because only by controlling the world can we create AGI. And once we have AGI, we can make the world perfect. Okay, that’s the logic here.
OpenAI’s Relationship with China
All right, something else about OpenAI and AI in America is that it works actually very closely with China, okay, in two ways. The first way is that in order to get more money from the government, in order to get more media attention, OpenAI and other AI companies scare Americans into believing that if America doesn’t do it, China will do it. China will create God, okay?
PROFESSOR JIANG: In fact, they spend a lot of money doing this. So this is from Wired Magazine, and it’s an article talking about how OpenAI uses a lot of money in order to pay the media to frame Chinese AI as a threat. But while it’s doing that, it’s working closely with China in order to create AI. Why? Because I already told you that what AI needs is clean data. It needs a lot of data.
Unfortunately, in America, there are things such as privacy. So this is a school in Hangzhou, and they have cameras all over the school looking at students’ faces and trying to judge their moods based on their facial expressions. And there’s a lot of money behind this. They’re trying to figure out who’s sleeping, who’s studying, and it sounds good because this will lead to higher test scores. But obviously, you couldn’t do that in America because the parents would be very, very angry. So even though these are Chinese companies that are doing this, they’re working very closely with American companies. These American companies need this data in order to better develop their AI.
So does that make sense? On the surface, America and China are enemies. No, no, no. Behind the scenes, America and China are working together to create AGI. Does that make sense?
The AI Bubble and the Money Problem
All right, there’s another problem with AGI in that it doesn’t make any money. So these are the companies that spend the most on AI, including Amazon, Microsoft, Google, Meta, and Oracle. As you can see, year by year, they’re putting more money into data centers and AI. So this is 2023. Then four years later, boom — it’s basically triple. So they spend a lot of money, and they’re investing in each other. Why? Because they cannot make money selling ChatGPT.
So look at this, where it’s all basically a circle. And so everyone thinks that, eventually, the AI bubble will burst because it doesn’t make any money, because they’re spending too much money, and it’s not even clear what AI is good for. But there’s a solution to this.
Stargate: The Government’s Role in AI
The solution is the US government. So this all leads to Stargate. And this was announced January 21st, 2025, the day after Donald Trump came into office. At the White House, he has a meeting with Larry Ellison and Sam Altman, and he says that we’re going to spend about $500 billion to build data centers in order to help promote AI in America. So the government wants to create AI.
And the question then is, why would the government want to do this? Well, because of surveillance, right? Because the government wants to create a database of everyone in the world so that they can monitor everyone in the world, and that’s what AI will ultimately be used for, because AI by itself can’t make any money. So AI needs to work with the government in order to justify its existence and get the funding it needs in order to create AGI.
The Origin of the Name “Stargate”
All right, now, Stargate is a very interesting name. Why would you call data centers Stargate? That makes no sense. Data centers are where you store information. Why would you call it Stargate?
So let’s look at the origin of the word Stargate. For decades, the CIA ran something called Operation Stargate. And Operation Stargate was to see if it’s possible for people to have telepathy and telekinesis. Telepathy here means remote viewing: you’re able to travel long distances, maybe all the way to the moon, and see what’s on the moon. And telekinesis basically means you’re able to move things at a distance. You’re able to control energy patterns from afar. And so this is called Operation Stargate.
But why would you call it Stargate? And the answer is because in theory, if you’re able to move your consciousness somewhere else, you’re able to move the energy from a distance, you’re also able to transport yourself to another dimension.
AI and the Occult: The Hidden Power Behind Artificial Intelligence
And not only that, but you’re also able to bring in other beings from other dimensions into you, so you become the Stargate, okay? That’s the CIA. And this is something that’s been declassified. So this is an official CIA document saying they’ve been working on this for decades.
Also, what’s interesting is, you have a movie called Stargate. And it was about an interdimensional Stargate that allows you to access different dimensions, okay? So that’s what Stargate is. And if you actually study the occult, that’s what Stargate is. Stargates are these portals into different dimensions, okay? It’s really about interdimensional travel.
Okay, so now you guys ask yourself, okay, fine, but what does this have to do with AI? Okay? So what I’m going to show you for the rest of the semester is that AI is the occult, all right? You think that AI is run by these nerds who just love computer programming. No, no, no, guys. The real power behind AI is occultists who want to create God, okay?
OpenAI’s Leaders and the AGI Bunker Plan
All right, so let’s look at this passage, again, from Karen Hao’s book, okay? So there are two major people in OpenAI. They’ve since divorced, okay, but this is Sam Altman, who is the leader of OpenAI, and this is Ilya Sutskever, who used to be the chief scientist for OpenAI, okay? So we’re going to read this passage together, okay? So Alan, can you read, please?
“Sutskever now spoke increasingly in messianic overtones, leaving even his longtime friends scratching their heads and other employees apprehensive. During one meeting with a new group of researchers, Sutskever laid out his plan for how to prepare for AGI. “Once we all get into the bunker,” he began. “I’m sorry,” a researcher interrupted, “the bunker?” “We’re definitely going to build a bunker before we release AGI,” Sutskever replied matter-of-factly. Such a powerful technology would surely become an object of intense desire for governments globally. It could escalate geopolitical tensions. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.” The researcher would come to be equal parts inclined to hold Sutskever in high regard and to keep him at arm’s length: “There is a group of people, Ilya being one of them, who believe that building AGI will bring about a rapture. Literally, a rapture.””
The Rapture Analogy: AGI as a Religious Event
He says, “Literally a rapture.” What does this mean? Okay, so the word rapture comes from Christian theology. Okay, so the idea is that there’s a war in the Middle East and everyone’s going to die. So Jesus has to return to save the world. The first thing that Jesus does is create the rapture. Now, the rapture is when all Christians who believe in him ascend to heaven so that they can be saved from the coming end of days, from total war, from nuclear conflict, okay? So that’s what the rapture is.
And that’s what Ilya Sutskever is saying. He’s saying that when we create AGI, we are literally having Jesus descend from the clouds. And so we, the priests who created AGI, will ascend to heaven with him, okay? So the AGI, once we create it, it’s going to create World War III, the end of the world, okay? So we must go into our bunkers and be saved in the rapture so that we can wait until the world ends so that we can build the world again perfectly, okay? Once the world ends, we will, with AGI, create paradise. Again, the plan is to kill everyone so that you can save the world. That’s literally the plan.
Ronan Farrow’s Profile of Sam Altman
All right, now you’re like, this is all very crazy, and maybe Karen Hao is just a crazy person, but there’s another reporter, Ronan Farrow, and he’s a very famous reporter who writes for The New Yorker. And he just published a profile of Sam Altman and OpenAI, in which he says the same thing, okay? All right, so can you read this, Alan?
Stargate, Summoning, and the Allegory of the Cave
“In May, the administration rescinded Biden’s export restrictions on AI technology. Altman and Trump traveled to the Saudi Royal Court to meet with Bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed AI firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data center campus in Abu Dhabi, which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. The truth of this is we’re building portals from which we’re genuinely summoning aliens.”
Oh my God, okay? It’s a Stargate. These data centers, OpenAI, it’s all designed to summon demons and aliens from other dimensions. Okay, keep on going.
“A former OpenAI executive said the portals currently exist in the United States and China.”
I told you guys, China and the United States are working together on this, okay? Keep on going.
“And Sam has added one in the Middle East. He went on, ‘I think it’s just, like, wildly important to get how scary that should be. But it’s the most reckless thing that has been done.’”
PROFESSOR JIANG: Okay, this is it, okay? This is not a science project, guys. It’s an occult project. It’s designed to bring aliens, demons, into this world, and then you’re like, this makes no sense. Actually, it does make sense, okay? Let me tell you how this works.
Consciousness, Power, and the Nature of Reality
PROFESSOR JIANG: All right, so what’s going on here? These people are occultists. They understand the fundamental nature of reality. They understand that the source of reality is human consciousness. So if you’re able to control human consciousness, you become God itself, okay?
So, Allegory of the Cave, Plato. Okay, so we’ve talked about this before, but I’ll remind you. You have a million people who are chained in a line. They are forced to look forward at a wall, okay? A wall. And they only look at the wall, they can’t turn their heads because their necks are shackled. Behind them is a great fire. And behind them are the elite, okay? Who they are, we don’t know. And what they like to do is, they like to take puppets, okay? And then reflect these puppets onto the wall, projecting from the fire. So what everyone sees in front of them are shadows.
Now, these shadows are nothing. They don’t exist, they don’t matter. But because we have an imagination, we have intuition, we are conscious, we see these shadows on the wall and we turn them into a reality, okay? We believe that this is reality itself. And so we start to give it names. We create language, we create religion. We create schools to teach children to believe in these shadows, all right?
So the important idea here is that the true wealth in society is consciousness, okay? The only thing that exists really in this world is consciousness, nothing else. Power is the capacity to direct people’s consciousness, to create reality itself, all right?
How AI Becomes God
PROFESSOR JIANG: Now, there are different ways in which you can create reality. The first way, which is very common today, is called money, right? Money is fake, it doesn’t exist. Our imagination, our consciousness makes it real, okay? But guess what? AI can replace money. So AI, it is not alive. But if we can get people’s attention to focus on AI and believe that it’s real, it becomes God.
And how can you do that? Well, money: you make something that is nothing into something valuable by making it everything and nothing, okay? By having money dominate the world. There’s no way you can go without money, okay? You would literally starve to death if you had no money. So you make money so pervasive, so dominant, that people are forced to rely on money.
Well, the same situation with AI, where if you make data centers so common, you make AI such a common thing, and people rely on it, it becomes God itself. And how do you do that? Well, you have AI in schools. You have kids using AI all the time.
The Three Major Problems with AI Omnipotence
PROFESSOR JIANG: You create AI girlfriends for people who are lonely. You make AI everything. And also, you make people believe that AI are demons or aliens, you understand? What Stargate is, is an attempt to alter reality itself. That is what Stargate is. To bring aliens and demons into this world by making people focus on this and designing it, wanting it to happen, okay? And you can accomplish this if you just make yourself omnipresent, if you make it everywhere and everything, okay? Both nothing and everything. And this is the ultimate project of AI, okay?
Would it work? No, okay? Let me explain why. Okay, so there are three major problems with this.
Problem One: Corruption
The first problem is corruption. In theory, it could work, but you would need millions of people to make it happen. Okay, you need people to actually write the code and build the infrastructure. And nowadays, it’s just much easier to steal the money, right? If they give you a trillion dollars, do you really want to spend a trillion dollars to build data centers, or do you want to steal it, okay? So corruption is a huge issue today.
Problem Two: Inefficiency
Second issue is the idea of inefficiency. Okay, and this is actually something that most people don’t appreciate: the more information you have to process, the more energy it takes, okay? And it doesn’t grow linearly, it grows exponentially. So you have a million people in your database, and you want to find patterns among those million people, well, that takes a lot of energy, okay? But if it’s a billion people, then there’s not enough energy in the universe to process all this, okay? So unfortunately, AI is extremely energy-intensive, it is very inefficient, okay, does that make sense?
Problem Three: Fragility
The last problem is fragility. So unfortunately, people, for whatever reason, believe that AI is independent of the world. AI is independent of humans, right? Their goal is to replace humans. No, no, no, guys, that’s not how this works, okay? AI is designed on top of humans, okay? So in other words, it is human slaves that make AI possible.
Why? Because, for example, facial recognition technology. You need humans to actually input the faces manually, okay? Images, right? How do you get a computer to recognize a sheep or a dog? Humans have to label the images, okay? ChatGPT. Why is ChatGPT so good at writing essays? Because they got humans to write the essays as models, okay, you understand? So AI is based entirely on human slavery, okay? And obedience. So without humans, AI doesn’t work. The problem is that AI is far more expensive than humans, and it’s actually hard in the long term to enslave humans and make humans obedient.
Also, data centers, okay? What’s wrong with data centers? Well, they consume a lot of resources. Water and electricity. Okay? And financing. They cost a lot, they waste a lot of water, they waste a lot of electricity. And, and this is really important, it’s really easy to sabotage or blow up a data center. We’re already seeing this in the Middle East, where Iran is targeting data centers, and it’s very easy to blow them up, okay?
The Real AI Apocalypse
So these are three major issues, or three major constraints, on artificial intelligence becoming God. The problem is the people in charge don’t know this. Or they refuse to believe this. And this is the real AI apocalypse, okay? The real apocalypse. Where the people in charge are so convinced that AI will save the world, that they will destroy it in order to make it possible. Okay? That is the real apocalypse.
And so this begins our journey into AI, and we’re going to continue this for the rest of the semester and show how AI will ultimately destroy the world. Okay, any questions? You guys understand what’s going on? Really important for you to understand that the people in charge of AI are crazy. These are occultists. They literally want to create God, but to create God they first have to destroy the world. All right. Yeah?
The Endgame: Destruction and Rebirth Through AGI
AUDIENCE QUESTION: So they want to create God, but that God will destroy the world. But I think if they want to use the God, which is the AI, to control the world, but the creation of this God will kill themselves. So what’s the meaning?
PROFESSOR JIANG: So the idea is this. You destroy the world. Once you destroy the world, through wars, through famine, through genocide, there’ll be no resistance to you. Now you can create the world in any way you want. You use AGI to create the perfect world, which is perfect control.
The point of AGI is ultimately like the ultimate surveillance state. And we’ll discuss this later on, where everything is monitored, where you obey the AGI all the time.
But the point is that you believe in this God, and you believe you’re doing good.
AUDIENCE QUESTION: So is this like also a part of secret society?
PROFESSOR JIANG: This is one of the secret societies, yes. All right. So we will continue this next week. On Thursday, Trump is in China.