Here is the full transcript of entrepreneur Palmer Luckey’s talk titled “The AI Arsenal That Could Stop World War III”, at TED2025 on April 8, 2025. The talk is followed by Q&A with technologist Bilawal Sidhu.
PALMER LUCKEY: I want you to imagine something. In the early hours of a massive surprise invasion of Taiwan, China unleashes its full arsenal. Ballistic missiles rain down on key military installations, neutralizing air bases and command centers before Taiwan can fire a single shot. The People’s Liberation Army Navy moves in with overwhelming force, deploying amphibious assault ships and aircraft carriers, while cyberattacks cripple Taiwan’s infrastructure and prevent emergency response.
The Chinese rocket forces’ long-range missiles shred through our defenses. Ships, command and control nodes, and critical assets are destroyed before they can even engage. The United States attempts to respond, but it quickly becomes clear. We don’t have enough. Not enough weapons. Not enough platforms to carry those weapons.
American warships, too slow and too few, sink to the bottom of the Pacific under anti-ship missile swarms. Our fighter jets, piloted by brave but outnumbered human pilots, are shot down one by one. The United States exhausts its shallow arsenal of precision munitions in a mere eight days. Taiwan falls within weeks, and the world wakes up to a new reality, one where the world’s dominant power is no longer a democracy.
This is the war U.S. military analysts fear most, not just because of outdated technology or slow decision making, but because our lack of capacity, our sheer shortage of tools and platforms, means we can’t even get into the fight.
The Global Stakes of Taiwan
When China invades Taiwan, the consequences will be global. Taiwan is the undisputed epicenter of the world’s chip supply, producing over 90 percent of the world’s most advanced semiconductors.
If those factories are seized or destroyed, the global economy will crash overnight. Tens of trillions of dollars in losses, supply chains in chaos, the worst economic depression in a century.
And the danger is more than economic. It’s ideological. China is an autocracy, and a world where China dictates the terms of international order is a world where individual freedoms erode, authoritarianism spreads, and smaller nations are forced into submission. And before anyone shrugs this off as a plot of Michael Bay’s latest movie, we’ve seen this film before. Just ask Ukraine.
From VR to Defense Innovation
At this point, you might be wondering why a guy in a Hawaiian shirt and flip-flops is up here talking about potential World War III.
My name is Palmer Luckey. I’m an inventor and an entrepreneur. When I was 19 years old, I founded Oculus VR while I was living in a camper trailer, and then brought virtual reality to the masses. Years later, I was fired from Facebook after donating $9,000 to the wrong political candidate, and that left me with a choice: either fade into relative irrelevance on an island somewhere, or build something that actually mattered.
I wanted to solve a problem that was being ignored, one that would shape the future of this country and the world. Despite the incredible technological progress happening all around us, our defense sector was stuck in the past. The biggest defense contractors had stopped innovating as fast as they had before, prioritizing shareholder dividends over advanced capability, prioritizing bureaucracy over breakthroughs.
Silicon Valley, which was home to many of our top engineers and scientists, had turned its back on defense and the military writ large, betting on China as the only economy or government worth pandering to. Tech companies that once partnered with the military had decided that national security was someone else’s problem.
The result? Your Tesla has better AI than any U.S. aircraft. Your Roomba has better autonomy than most of the Pentagon’s weapons systems. And your Snapchat filters? They rely on better computer vision than our most advanced military sensors.
Now, I knew that if both the smartest minds in technology and the biggest players in defense deprioritized innovation, the United States would forever lose its ability to protect our way of life. With so few willing to solve that problem, I decided that I would try my best.
Building Anduril: A Different Kind of Defense Company
So I founded a company called Anduril, not a defense contractor, but a defense product company. We spend our own money building defense products that work rather than asking taxpayers to foot the bill. The result is that we move much faster and at lower cost than most traditional primes.
Our first pitch deck to our investors, who are very aligned with us, said it plainly. We will save taxpayers hundreds of billions of dollars a year by making tens of billions of dollars a year.
Now, while we make dozens of different hardware products, our core system is a piece of software, an AI platform called Lattice, that lets us deploy millions of weapons without risking millions of lives. It also allows us to make updates to those weapons at the speed of code, ensuring we always stay one step ahead of emerging and reactive threats.
Another big difference is that we design hardware for mass production using existing infrastructure and industrial base. Unlike traditional contractors, we build, test, and deploy our products in months, not years.
That approach has allowed us, in less than eight years, to build autonomous fighter jets for the United States Air Force, school bus-sized autonomous submarines for the Australian Navy, and augmented reality headsets that give every one of our warfighters superpowers, to name just a few.
We also build counter-drone technology, like Roadrunner here, which is a twin turbo-jet-powered reusable counter-drone interceptor that we took from napkin sketch to real-world combat-validated capability in less than 24 months. And we did it using our own money.
Deterrence Through Innovation
Now, coming from a guy who builds weapons for a living, what I’m about to say next might sound counter-intuitive to you. At our core, we’re about fostering peace. We deter conflict by making sure our adversaries know they can’t compete.
Putin invaded Ukraine because he believed that he could win. Countries only go to war when they disagree as to who the victor will be. That’s what deterrence is all about: not saber-rattling, but making aggression so costly that adversaries don’t try in the first place.
So how do we do that? For centuries, military power was derived from size. More troops, more tanks, more firepower. But over the last few decades, the defense industry has spent far too long handcrafting exquisite, almost impossible-to-build weapons. Meanwhile, China has studied how we fight, and they’ve invested in the technologies and the mass that counter our specific strategies.
Today, China has the world’s largest navy, with 232 times the shipbuilding capacity of the United States, the world’s largest coast guard, the world’s largest standing ground force, and the world’s largest missile arsenal, with production capacity growing every single day.
We’ll never meet China’s numerical advantage through traditional means, nor should we try. What we need isn’t more of these same systems. We need fundamentally different capabilities. We need autonomous systems that can augment our existing manned fleets. We need intelligent platforms that can operate in contested environments where human-piloted systems simply cannot. We need weapons that can be produced at scale, deployed rapidly, and updated continuously.
The Power of Mass Production
Mass production matters. In a conflict where our capacity is our greatest vulnerability, what we really need is a production model that mirrors the best of our commercial sector—fast, scalable, and resilient.
We know how to win like this. We rallied our industrial base during World War II to mass-produce weapons at an unprecedented scale. It’s how we won. The Ford Motor Company, for example, produced one B-24 bomber every 63 minutes.
But to actually achieve the benefits of these mass-produced systems, we need them to be smarter. This is where AI comes in. AI is the only possible way we can keep up with China’s numerical advantage. We don’t want to throw millions of people into the fight like they do. We can’t do it, and we shouldn’t do it.
AI software allows us to build a different kind of force—one that isn’t limited by cost or complexity or population or manpower, but instead by adaptability, scale, and speed of manufacturing.
Now, the ethical implications of AI in warfare are serious, but here’s the truth. If the United States doesn’t lead in this space, authoritarian regimes will. And they won’t be concerned with our ethical norms. AI enhances decision-making. It increases precision. It reduces collateral damage. Hopefully, it can eliminate some conflicts altogether.
The good news is that the U.S. and our allies have the technology, human capital, and expertise to mass-produce these new kinds of autonomous systems and launch a new golden age of defense production.
A Different Future for Taiwan
With all that information in mind, let’s go back to Taiwan, but imagine a different scenario. The attack might begin the same way. Chinese missiles streak towards Taiwan, but this time, the response is instant. A fleet of AI-driven autonomous drones, already stationed in the region by allies, launches within seconds. Swarming together in coordinated attacks, they intercept incoming Chinese bombers and cruise missiles before they ever reach Taiwan.
In the Pacific, a distributed force of unmanned submarines, stealthy drone warships, and autonomous aircraft that work alongside manned systems strike from unpredictable locations. Our AI-piloted fighter swarms engage Chinese aircraft in dogfights, responding faster than any human possibly could. On the ground, robotic sentries and AI-assisted long-range fires halt China’s amphibious assault before a single Chinese boot reaches Taiwan’s shores.
By deploying this type of autonomous system at scale, we prove to our adversaries that we have the capacity to win. That is how we reclaim our deterrence. To do so, we just have to stand with our allies across the world, united by the shared values and common resolve that have bound us together for the better part of a century.
Our defenders, the men and the women who volunteer to risk their lives, deserve technology that makes them stronger, faster, and safer. Anything less is a betrayal, because that technology is available today. This is how we prevent a repeat of Pearl Harbor. We could be the second greatest generation by rethinking warfare altogether. Thank you.
Q&A with Bilawal Sidhu
BILAWAL SIDHU: Thank you, Palmer. You painted a very vivid picture of the future of warfare and deterrence. I want to ask you a couple questions. I think one that’s on a lot of people’s minds is autonomy in the military kill chain. With the rise of AI, are we contending with fundamentally a new set of questions here? Because some advocate that we shouldn’t build autonomous systems or killer robots at all. What’s your take on that?
PALMER LUCKEY: I love killer robots. The thing that people have to remember is that this idea of humans building tools that divorce the design of the tool from when the decision is made to enact violence, it’s not something new. We’ve been doing it for thousands of years. Pit traps, spike traps, a huge variety of weapons, even into the modern era. Think about anti-ship mines. Even purely defensive tools that are fundamentally autonomous.
Whether or not you use AI is a very modern question, and people who haven’t really examined the problem tend to fall into a trap. There are people who say things that sound pretty good: you should never allow a robot to pull the trigger; you should never allow AI to decide who lives and who dies.
I look at it in a different way. I think that the ethics of warfare are so fraught and the decisions so difficult that to artificially box yourself in and refuse to use sets of technology that could lead to better results is an abdication of responsibility. There’s no moral high ground in saying, I refuse to use AI because I don’t want mines to be able to tell the difference between a school bus full of children and Russian armor.
There’s a thousand problems like this. The right way to look at this is, problem by problem, is this ethical? Are people taking responsibility for this use of force? It’s not to write off an entire category of technology and in doing so tie our hands behind our backs and hope we can still win. I can’t abide by that.
BILAWAL SIDHU: You’re right. If the information is available to you, why not create systems that actually take advantage of it? If you blind yourself to it, the result could be far more catastrophic.
PALMER LUCKEY: Precisely. People say things, usually non-technical people, like, why not just make it all remote control? They don’t recognize that the scale of the conflicts we’re talking about doesn’t lend itself to a one-to-one ratio of people to systems. To say nothing of the fact that if you rely on a remotely piloted system, all an adversary has to do is break the remote part and everything falls apart. There’s no moral high ground either in saying all you have to do is figure out how to jam us and you win.
BILAWAL SIDHU: It sounds like a lot of defense systems that exist today kind of have this type of autonomous mode.
PALMER LUCKEY: This is another point. It’s usually not one that I make on a stage, but I’ll get confronted by journalists who say, oh, well, you know, we shouldn’t open Pandora’s box. My point to them is that Pandora’s box was opened a long time ago, with anti-radiation missiles that seek out surface-to-air missile launchers. We’ve been using them since the pre-Vietnam era.
Our destroyers’ Aegis systems are capable of locking on and firing on targets totally autonomously. Almost all of our ships are protected by close-in weapon systems that shoot down incoming mortars, incoming missiles, incoming drones. We’ve been in this world of systems that act out our will autonomously for decades.
The point I would make to people is you’re not asking to not open Pandora’s box. You’re asking to shove it back in and close it again. The whole point of the allegory is that such cannot be done. That’s the way that I look at it.
AR/VR in Military Applications
BILAWAL SIDHU: I’ve got to ask you one more question going back to your roots. Many folks were obviously introduced to VR because of Oculus. In a twist of fate, Anduril recently took over the IVAS program, essentially building AR-VR headsets for the U.S. Army. What’s your vision for the program and what does that feel like?
PALMER LUCKEY: We need all of our robots and all of our people to be getting the right information at the right time. That means they need a common view of the battlefield. The way that you can present that view to a human is very different from the way that you present it to a robot.
Robots are great. They have very, very high I/O. They have very low error rates in connectivity. For people, we have to figure out how to strap stuff onto our appendages, like our hands and our eyes and our ears, and present information in a way that allows us to work collaboratively with these types of tools.
So superhuman vision augmentation systems like better night vision, thermal vision, ultraviolet vision, and hyperspectral vision, those are the things that people focus on when they look at IVAS. But there’s a whole other layer, which is that we need to be able to see the world the same way that robots do if we’re going to work closely alongside them on such high-stakes problems.
BILAWAL SIDHU: I love it. Human plus machine intelligence. Super Luckey, everyone.