Mr. Elon Musk visited West Point in August 2024, where he conducted a fireside chat with BG Shane Reeves as part of the Academy’s launch of its newest annual theme, “The Human and The Machine: Leadership on The Emerging Battlefield.”
TRANSCRIPT:
BG SHANE REEVES: Welcome to Inside West Point, ideas that impact. I’m Brigadier General Shane Reeves, the dean of the United States Military Academy at West Point. Through a series of discussions, we will show you a different side of West Point where we will make even our most complex initiatives accessible to broad audiences and give you an inside view to our cross-disciplinary work, which is being applied throughout the world. Mr. Musk, thank you very much for being here, sir. We appreciate it.
ELON MUSK: It’s an honor to be here.
BG SHANE REEVES: We’re really excited for you to help us kick off our intellectual theme, “Human and Machine,” which is leadership on the emerging battlefield. We want to make sure that the academy and the cadets are focused on not tomorrow, but the next twenty, thirty, forty years.
Underlying the entire theme is an emphasis on the importance of preparing cadets for future warfare and really where humans and machines intersect. It’s as if your background was made for this theme. However, I actually talked to quite a few people, and they really don’t know who you are. So neither do I.
ELON MUSK: You’ll have to make a company that figures that out, sir.
Elon Musk’s Background
BG SHANE REEVES: Alright. So in case you don’t know who he is, and this is ridiculous, of course you know, but I’m going to do it anyway. Elon Musk, sitting next to me, cofounded and leads X, Tesla, SpaceX, Neuralink. And here’s my favorite as an academic: The Boring Company.
ELON MUSK: That was good. I started it as a joke. And that’s real.
BG SHANE REEVES: Well, it’s awesome. By the way, you have a new company also, xAI.
ELON MUSK: Yeah.
BG SHANE REEVES: And that’s all of them unless you started one this morning I don’t know about. But, basically, what this means is everybody in this room has the opportunity based on these companies to drive a futuristic electric truck through a gigantic underground tunnel while using a digital connection in their brain to start a rocket while simultaneously getting updates on army football.
Your innovations have revolutionized electric vehicles, batteries, space exploration, advanced human-machine interactions, made information instantaneously accessible and have started to help integrate AI throughout our daily lives. After the convocation, I’m hoping to have a little bit of time so you can give me some personal tips because somehow you have founded and lead multiple companies. You’re a father to multiple children, and I’m exhausted after two hours of coaching one kid’s sport. So whatever you can do to help me out, I’d appreciate it.
West Point’s Expertise
BG SHANE REEVES: We have cadets and staff and faculty who can speak on multiple disciplinary perspectives that would interest you: drone swarms, electric batteries, molecular brain science, engineering psychology, philosophy, law, Chinese language. But that’s just the start. I think this one’s particularly relevant, because we can even cover the intense science of boxing with our world-class Department of Physical Education, just in case you never know if some random head of state challenges you to a fight.
ELON MUSK: You know, I did challenge Putin to one-on-one combat.
BG SHANE REEVES: Did he take it?
ELON MUSK: No. And then I actually, on X, formerly known as Twitter, I said, “I hereby challenge Vladimir Putin to one-on-one combat.” And I made sure to use his name in Russian Cyrillic, and then as the stakes are Ukraine, I used Ukrainian Cyrillic. And then people thought I wasn’t serious. No, I’m absolutely serious. I mean, he does have, you know, he’s good at judo, I hear. And, I think it would be… I mean, the pay-per-view alone on that would be incredible.
BG SHANE REEVES: I could get everybody in here to start chanting “Two men enter, one man leaves.”
ELON MUSK: I can’t watch that, and I’m in it.
BG SHANE REEVES: I will tell you that our 31st superintendent, Douglas MacArthur, once said, “There’s no substitute for victory.” And when it comes to fighting, it’s not just our military, but it’s also the whole country and the whole industrial base.
ELON MUSK: That’s so important.
The Future of Warfare
BG SHANE REEVES: You’ve innovated across so many areas, whether it’s beneath our surface, outer space, and everything in between. And we’re, again, truly grateful for you to be here as we start to talk about some of these things that you’ve been working on. As you look into the audience, I just want to give you a bit of context.
A lot of these are the leaders who will face our nation’s most complex and difficult challenges going forward. We have our cadets who will serve as army officers leading hundreds and eventually thousands of soldiers through this complexity that we talk about. And we also have our faculty, who are preparing them to do just what I just said, lead through these complex situations. Many of our faculty will also reenter the army and be required to lead. Thank you for taking the time to help us think deeper and inform us as we start to inform our cadets on how we can be successful, not just fighting, but winning in the contemporary and future battlespace.
So let me start with this broad question. How do you see warfare transforming in the future?
ELON MUSK: I mean, the biggest effect, I think, by far is AI and drones. So the next… well, in fact, the current war in Ukraine is very much a drone war already. It’s sort of a contest between Russia and Ukraine to see who can deploy the most drones.
Now if there’s a major power war, it’s very much going to be a drone war.
It’s going to be drones and AI. And, you know, it’s just sort of… I mean, I do worry about the existential risk of AI, which is that if you employ AI and drones, do you go down this path where eventually you get to Terminator? You know? Try to avoid that.
BG SHANE REEVES: That would be good.
ELON MUSK: Yeah. Minimize the Terminator risk. But I mean, essentially, when you’re making military drones, you are making Terminators. And I think you’re somewhat forced into giving the drone localized AI, because if AI is far away, it can’t control as well as localized AI.
BG SHANE REEVES: What do you mean by localized AI?
ELON MUSK: Meaning the AI runs on the drone itself. Completely autonomous. Well, you give it the okay in a particular arena, and it goes. With certain parameters, hopefully.
BG SHANE REEVES: Do you think our adversaries will have the same type of concerns or limitations?
ELON MUSK: Well, yeah. I mean, it depends on how much existential risk there is in these wars. So if it’s a regional war, I think it’ll be more tempered. If it goes beyond the regional war, then it’s all bets are off. And, you know, then you start playing things that you really wouldn’t want to deploy. So hopefully, that doesn’t happen.
BG SHANE REEVES: But you said it, and I would agree that if you just look at the contemporary conflicts that are taking place, you would agree that machines aren’t just disrupting warfare. They’re now commonplace.
ELON MUSK: Drones are going to be overwhelmingly what matters in any conflict between powers that have significant technology. It’s so… my personal belief is that it’ll actually be, I think, probably too dangerous to have humans at the front. It’s drones at the front. It’s too dangerous to have humans.
BG SHANE REEVES: Because of the lethality, though. It’s too dangerous to have humans at the front.
ELON MUSK: Yes. I mean, if you’ve seen some of the computer-controlled sniper rifles, I mean, they just don’t miss. So you’re fighting a machine that’s going to, you know, aim with micro-level accuracy, and never gets tired.
Leveraging Technology for National Defense
BG SHANE REEVES: How do you think the United States should be leveraging technology to further our national defense?
ELON MUSK: Well, I think we probably need to invest in drones. The United States is strong in terms of the technology of the individual items, but the production rate is low. So it’s a small number of units, relatively speaking. Basically, I think there’s a production rate issue: how fast can you make drones? Let’s just say if there’s a drone conflict, the outcome of that drone conflict will be how many drones each side has in that particular skirmish, times the kill ratio. So you might have a set of drones with a high kill ratio, but the other side has far more drones. If you’ve got a two-to-one kill ratio, but the other side has four times as many drones, you lose.
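The attrition arithmetic Musk is gesturing at can be sketched in a few lines. This is a deliberately simplistic model with illustrative numbers, not anything taken from the transcript:

```python
def surviving_drones(a_drones: int, b_drones: int, a_kill_ratio: int) -> int:
    """B-drones left once side A's smaller force is fully expended.

    Toy model: each A-drone destroys `a_kill_ratio` B-drones before it
    is lost. Real engagements are far messier; this only illustrates
    why production numbers can beat a superior kill ratio.
    """
    return max(b_drones - a_drones * a_kill_ratio, 0)

# A 2:1 kill ratio against a 4x numbers deficit: side A is wiped out
# and side B still has half its force left.
print(surviving_drones(a_drones=1_000, b_drones=4_000, a_kill_ratio=2))  # 2000
```

Under these assumptions, side A trades its entire force for only half of B’s, which is the core of the production-rate argument.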
BG SHANE REEVES: Do you think our industrial base can scale to make the volume of drones that you’re talking about?
ELON MUSK: I think that’s going to be the biggest challenge. It can scale, but it is not currently scaling.
BG SHANE REEVES: Why would that be?
ELON MUSK: I think the procurement is… I mean, I read a lot of military history and actually, I go to sleep listening to an audiobook on military history of one kind or another. So I find the subject very interesting. And one of the things that tends to happen is that countries pretty much are geared up to fight the last war, not the next war. And it’s hard to change.
I mean, you look at the uniforms at the start of World War I and the tactics and strategies they used at the start of World War I, they were not significantly different from the Napoleonic era. You know, the French were marching into war with brightly colored uniforms… look great. That’s not what you want to be. You know, when somebody’s trying to point a gun at you, you don’t want a great-looking uniform. You want a uniform that blends in.
So there’s a tendency to be gearing up to fight the last war. The last war the US fought was kind of the Cold War, I guess. So it usually takes some kind of shock factor to adjust. I would recommend adjusting now. And you are seeing some startups like Anduril and a few others that have a different mindset. But it’s really going to come down to: can you make a lot of drones? That’s the core issue. That’s what it comes down to.
BG SHANE REEVES: There was recently a report that President Zelensky said that by February 2025, the Ukrainians will have produced a million drones. So it seems like it’s doable, and this might be a process question. And we’ll talk about process in a second.
The Future of Human Enhancement
BG SHANE REEVES: As you were just talking, I was thinking about what you said – that you can’t have humans at the front. And so you haven’t created a company that’s solved aging yet, have you?
ELON MUSK: No.
BG SHANE REEVES: Okay. So in a hundred years, I wonder whether we should solve aging.
ELON MUSK: That’s a great point. Yeah. I’d like to wrap it up at some point.
BG SHANE REEVES: Well, it’s like… how long do you want Putin and Kim Jong-un to live? That’s a great point.
ELON MUSK: Yeah.
BG SHANE REEVES: But let’s say you go forward a hundred, hundred fifty years. How do you envision this evolution? And I think this might get to Neuralink. How do you see this evolution between the human who maybe can’t be at the front any longer, the technology’s at the front, yet keeping them centered, integrated and synchronized? How is that going to work in your mind?
ELON MUSK: I mean, communications is essential. It is actually very important to have space-based communications that cannot be intercepted, which is what Starlink offers. Starlink is the backbone of the Ukrainian military communication system, because it can’t be blocked by the Russians, essentially. So on the front lines, all the fiber connections are cut. The cell towers are blown up, and the geostationary satellite links are jammed. The only thing that isn’t jammed is Starlink.
And then GPS is also jammed. GPS signal is very faint, but Starlink can offer location capability as well. So it is a strategic advantage that’s very significant. And when you’re trying to communicate with the drones, the drones need to know where they are, and they need to receive instructions. So if you don’t have communications and positioning, then the drones don’t work.
BG SHANE REEVES: But do you find it important that there’s still that communication between the human and the machine or the drone?
ELON MUSK: Yeah. Yes. And this is like a different question of, where are things right now versus where will things be in ten years. I have to say I do look at the future with some trepidation. I have to have some deliberate suspension of disbelief to sleep sometimes, because I think we’re headed into a pretty wild future.
And I’m actually an optimistic person. But AI is going to be so good, including localized AI. I mean, at the current rates, you’ll probably have something like Grok-level AI that can be run on a drone. And so you could literally say, you know, this is the equipment that the drone needs to destroy. Go into that area. It’ll recognize what equipment needs to be destroyed and take it out.
BG SHANE REEVES: Because a lot of your work with Neuralink, though, is that… I’m… what you’re saying is that AI is going to quickly surpass, at least in your estimation, the human’s ability to control it. Yeah?
ELON MUSK: I mean, I’d like to say no, but the answer is yes.
The Evolution of AI and Human Control
BG SHANE REEVES: So how long until you think that happens… before the AI has evolved to the point where the AIs can start working together, even across computers in a decentralized way, and therefore surpass the human’s ability to influence how they’re working?
ELON MUSK: Well, I think humans will be able to influence how it’s working for a long time. This is an esoteric subject that, you know, goes into pretty wild speculation. I think to some degree that the AIs, I think, will want humans as a source of will.
So if you think of how the human mind works, there’s the limbic system and the cortex: your sort of base instincts, and the thinking and planning part of your brain. But you also have a tertiary layer already, which is all of the electronics that you use, your phones, computers, applications. So you already sort of have three layers of intelligence. But all of those, the cortex and the machine intelligence, your sort of cybernetic third layer, are trying to make the limbic system happy, because the limbic system is the source of will.
So there’s some… you know, it might be that the AIs just want to make the humans happy. And part of what Neuralink is trying to do is improve the communication bandwidth between the cortex and the digital tertiary layer, because the output bandwidth of a human, averaged over a day, is less than one bit per second. There are 86,400 seconds in a day, and you don’t output 86,400 tokens.
So, you know, take the number of words that I can say at this forum. I’ve looked at it from an information theory standpoint: how much information am I able to convey? Not that much, because I can only say a small number of words. And in order to convey an idea, I have to take a concept in my head, compress it down to a small number of words, and try to model how you would decompress those words into concepts in your own mind. That’s communication.
So your brain is doing a lot of compression, decompression, and then has a very small output bandwidth. Neuralink can increase that bandwidth by several orders of magnitude. And, also, you don’t have to spend as much time compressing thoughts into a small number of words. You can do conceptual telepathy. That is the idea behind Neuralink.
It is intended to be a mitigation against AI existential risk.
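The bandwidth claim above is simple division. As a quick check (the 10,000-tokens-per-day figure below is a hypothetical chosen for illustration, not a number from the transcript):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day

def avg_tokens_per_second(tokens_per_day: float) -> float:
    """Average output rate implied by a daily token count."""
    return tokens_per_day / SECONDS_PER_DAY

# Outputting 86,400 tokens a day would average exactly 1 token per second.
# A hypothetical 10,000 tokens a day averages roughly 0.12 tokens per second,
# well under one per second, which is the point about human output bandwidth.
print(avg_tokens_per_second(86_400))
print(avg_tokens_per_second(10_000))
```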
AI Alignment and Safety
BG SHANE REEVES: You talk about alignment. Can you explain what you mean by alignment to help everyone understand?
ELON MUSK: Yeah. Just is AI going to do things that make civilization better, make people happy, or will it be contrary to humanity? Will it foster humanity or not? Will it be against humanity? So, obviously, we want an AI that will foster humanity.
I think in developing an AI to foster humanity… because I’ve thought about AI safety for a long time. Like, I’ve had probably a thousand hours of discussion about this. And my other conclusion is that the best course for AI safety is to have an AI that is maximally truth-seeking, and also curious. And if you have both of those things, I think it will foster… it will naturally foster humanity because if you want to see how humanity develops, humanity is more interesting than not humanity.
You know, I like Mars. I’m a big fan of Mars, obviously. And I think we should become a multi-planet civilization. I guess that’s very important. But the purpose of SpaceX is to make life multi-planetary. That’s the reason I created the company, and that’s the reason that we have the Starship development in South Texas. That rocket is far, far too big for just satellites. It’s an attempt to establish life on Mars, not just to send astronauts there briefly, but to build a city on Mars that’s ultimately self-sustaining.
But getting back to AI, if you’ve got a truth-seeking AI that is naturally curious, my biological neural net says that that’s going to be the safest outcome. Because Mars is not as interesting as Earth; there’s no human civilization there. Or, thought of another way, if you wanted to render Mars, rendering Mars is pretty easy. It’s basically red rocks that look kind of like some parts of Arizona. You know? There are not a lot of people. It’s easy to render.
But rendering human civilization is much harder, much more complex, much more interesting. And so I think a curious truth-seeking AI would foster humanity and want to see where it goes.
Building Trust Between Humans and Machines
BG SHANE REEVES: That relies on trust between the human and the machine. And that’s where I want to ask you a question. So leaders in the army are no strangers to implementing new technologies. Think about how GPS, for example, transformed navigation. It’d be unheard of not to use GPS today. But when I was a lieutenant, no one used GPS.
So recently, I was watching this incredibly important and realistic documentary, called “Top Gun: Maverick.”
ELON MUSK: Yeah. It’s really good. It’s, like, it’s… I mean, if you don’t want to think about the plot too closely, but it’s a great movie.
BG SHANE REEVES: It’s a fantastic movie. I learned that Tom Cruise is actually not an actor. He’s, like, a pilot apparently. But, he taught me something really important in it. He says it’s about the pilot, not the plane. And that’s right before he defeats a fifth-generation fighter with a 1972 F-14. Right?
ELON MUSK: Yeah. Like… yeah. Just go with it. Yeah. Tom Cruise could do it.
BG SHANE REEVES: Well, in it, you know, there’s a bit of a cynical view of the need for technology. It’s like, hey, technology is superfluous. Humans can do it. But we know that’s… I mean, no, I don’t question Tom Cruise. I don’t ever question Tom Cruise. I’m just kidding.
I guess the question is how do we get humans to be able to trust the machine? Because there are a lot of stories. For example, we just recently had a conversation where Apache pilots were given new technology, and they’re like, we’re not going to use it because we don’t really trust it. Okay. And so how do you get… how do we… when new technology is implemented, we have to be able to trust, especially if it’s going to be the difference maker to win. So how do we do that? How do we build the trust between the human and the machine?
ELON MUSK: Well, I don’t think we should just automatically trust these things. I think you want to test it out, maybe a lot of testing, and see how it actually works in conflict at small scale, and then scale it up if it’s effective. But, yeah, I have to say… unfortunately, this is not an air force gathering, but I’m not sure there’s a lot of opportunity left for fighter pilots. Because I think if you’ve got a drone swarm coming at you, the pilot’s a liability in the fighter plane, to be honest.
So, you know, if you compare a drone versus a fighter plane, how easy is it to make a drone? It’s at least ten, maybe a hundred times easier to make the drone, and you can afford to sacrifice the drones. Whereas pilots, you don’t want to sacrifice the pilots. So my guess is that the age of human-piloted fighter aircraft is coming to an end.
Ethical Considerations of Autonomous Weapons
BG SHANE REEVES: If that’s the case, then there’s a question that is oftentimes debated in law and ethics debates about killer robots. And, really, are these things that… should we be willing to lean so forward with the technology that we start to supplant the human pilot with the technology and where does that go? And so what are your thoughts as we talk about technology replacing humans on the battlefield?
ELON MUSK: Well, I guess what I’m saying is that at the front of the battle line, it’s just going to be just drones, and any humans caught in the crossfire are going to get killed. So it’s… it’s irrelevant. It’s just going to be the way military operations take place. There isn’t going to be… if you make the choice to be there, then you’re at a significant disadvantage.
The Future of Warfare: Drones and AI
ELON MUSK: Yeah. I mean, just think about it. You’ve got drones that are constantly scanning, scanning infrared, scanning visible. If there are thousands of them, or tens of thousands… you mentioned the million that Ukraine’s going to make. You’ve got a million drones coming at you. Do you want to be there trying to take out drones with an assault rifle? It’s not going to be a good situation. I mean, I think there is something where, if you go fully analog… if you can do an EMP, an electromagnetic pulse explosion of some kind, that could take out all the electronics. But then your electronics are going to go too. So you’re going to go either fully analog or fully digital.
So I think that actually would be a role for a fighter plane if it was fully analog and had mechanical controls, because then you could do an EMP sort of blast, take out the drones, and the analog… I mean, that could be another Tom Cruise movie maybe. You know? He just goes with a fully analog aircraft, and all the drones fall out of the sky because of an EMP bomb.
Building Trust with Industry and Society
BG SHANE REEVES: How do you reply to those in, say, industry that would say, “We don’t want to contribute to the development of technology that could be used by the Department of Defense”? Like, basically, we need to build trust with the industrial base and with society. Maybe it’s something we’re doing. How do we do that?
ELON MUSK: Well, I’m very pro-military, just to be clear.
BG SHANE REEVES: It’s good. Your audience will like that.
ELON MUSK: Yes. But I think if there’s a significant conflict, the US industrial base will switch quickly over to military production, just as it did in World War II. Is it quick enough? I don’t know. But that’s what will probably happen. But, yeah, AI and drones. That’s the future of warfare.
The Role of Space in Warfare
BG SHANE REEVES: And, I mean, tell me if I’m missing something here. Where do you see the domain of space?
ELON MUSK: Space is… I mean, space is ultimate high ground. So it really goes… space is big. Real big. It’s like, woah. If you ever see Earth just to scale with the sun and the… you know, it’s like, wow. We’re just like a tiny little dust mote floating around space. That’s Earth. But space is becoming increasingly militarized.
BG SHANE REEVES: And so how do we see that, especially as it relates to land warfare? Like, what’s your thoughts on the space domain as it relates to land warfare and what are things that we should be doing to start to gain those advantages that are necessary?
ELON MUSK: Well, I mentioned that space-based communications are critical. If you can’t communicate, you don’t know what’s going on, you can’t receive orders, you can’t report information. And whether it’s a human or a drone, they need communication. Any ground-based communications, like fiber optic cables and cell phone towers, will be destroyed. So all you’ve got are basically analog radios, and any kind of data communication has to be space-based.
And then while GPS has been effective for a long time, GPS jamming at this point is pretty easy, because the GPS signal is a weak signal. So it’s easy to do GPS jamming. So having sort of a next-generation system that can provide positioning is going to be very important. Space can also probably offer, you know, the ultimate weapons where you just have tungsten cannonballs from orbit.
BG SHANE REEVES: How about offensive weapons in space? Do you see those?
ELON MUSK: That’s what I mean by rods from god. So if you have, like, you know… they talked about this in the Star Wars program in the ’80s. This is certainly something that can be done, which is you have just kinetic weapons from space or space-based lasers. Starlink system technically does have lasers, but they’re low power lasers, for now.
Leadership and Innovation
BG SHANE REEVES: So let me ask you about back to this question about process. So I like military history also. So in 149 BC, there was the Third Punic War ongoing, and the Roman legions are outside Carthage. And they lay siege to Carthage. And it’s not going very well. The proconsuls that are in charge are passive, risk-averse, and they’re losing. And there’s a young guy who’s from the famous Scipio line of proconsuls, and it is Scipio Aemilianus, who is the adopted grandson of Scipio Africanus. And so Scipio is the only one who’s doing something.
And so Cato the Elder is sitting in the senate, and he says this. He says, “He alone still thinks. The others wander about in the shadows.” And his basic argument was, I want Scipio in charge. And the problem with Scipio is he’s too young. You had to be 42 to be a proconsul. And so Cato’s like, I don’t care, he’s the right guy. And then what does Scipio do? He goes in and injects energy and innovation into the siege. And obviously, we know how the Third Punic War ends, because we know about Rome and not Carthage.
So, what Cato was getting at is this need for innovative and creative and entrepreneurial leaders. That’s what is necessary. And so processes are only as good as those who lead it. And so what are the traits you look for in those who lead your various businesses and enterprises?
ELON MUSK: Well, I am very much in technology. So for me, if somebody is going to lead something in technology, they must themselves be good at technology. Meaning that if they’re going to lead something that involves complex engineering, they must themselves be good at engineering. They don’t necessarily need to be the best engineer on the team, but they need to be very competent in their field. This is incredibly important.
To me, if somebody’s leading a given engineering field or engineering department and they are not good at that, then that would be like a cavalry captain who can’t ride a horse. Problem. Great leader in every way, except can’t ride a horse. And then you’ve got to charge into battle and the cavalry captain falls off the horse. You know? It’s not inspiring. So the cavalry captain must be able to ride a horse. They don’t need to be the best horse rider, but they must be competent in this regard. Otherwise, they cannot evaluate the talent of the team, and they don’t understand the technology that’s being developed.
This may seem like a simple thing, but it is often overlooked. You know, I don’t want to pick on the CEO of Boeing, but he’s got a degree in accounting or something, which I think… you know, you want to have someone who knows how airplanes work run an airplane company.
BG SHANE REEVES: I guess I can cross Boeing CEO off my list.
ELON MUSK: Yeah. Can’t do that. What I mean is… it’s like you want to… if you’re running an airplane company, you should know how airplanes work and how they fly and how to design airplanes. I think that’s pretty important.
BG SHANE REEVES: But how do you create innovative intuition in those that work for you? I mean, you’re famous for trying to gain efficiencies, create better processes, pushing to try to gain those not just efficiencies, but effectiveness. So how do you… is it possible? Can you build this innovative intuition in a person?
ELON MUSK: Well, it’s… I think it is possible to learn to be innovative. You know, a lot of times for any given thing, you have to say, did you try? This may sound obvious, but actually try. Like, somebody might wonder, well, can I be innovative? Well, have you tried? Just try thinking of interesting ideas.
I do find a good source of innovation is if you read about a whole bunch of fields, you can cross-fertilize ideas from one field into another. Take SpaceX and Tesla. The automotive industry is very good at manufacturing: building complex machines at volume. The automotive industry is the best. Now the rocket industry, the space industry, is very good at advanced materials and making things very light. So taking advanced materials and mass-optimization concepts from the space industry and applying them to automotive, and taking automotive mass-manufacturing techniques and applying them to space, was kind of like a superpower.
BG SHANE REEVES: But when you think about it, when you’re talking about innovating, though, and you said people can try, that means you have to be willing to let them fail.
ELON MUSK: Yes.
BG SHANE REEVES: And so where do you draw the line between recklessness and being overly cautious?
ELON MUSK: If you’re not failing at least some of the time, you’re not trying hard enough. You have to fail some of the time. So, you know, I think it’s more like a batting average. Somebody should have a good batting average, but nobody bats a thousand. But if somebody bats zero all the time, I mean, okay, you’ve got to take them off the team.
So, I do have this sort of simple first principles algorithm that I think could be quite helpful. And I sort of say it as a mantra to myself because I’ve made this mistake so many times.
So the first element is for any given thing, make the requirements less dumb. So whatever problem you’re solving, make the requirements less dumb. And whoever gave you those requirements, even if they are the smartest person in the world, they’re still dumb. So this is where, say, military procurement, it goes wrong right at the outset with excess requirements. So you’ll get sort of this giant document of requirements, that actually should be, like, one page.
So step one: simplify, just make the requirements less dumb. Because if you don’t do that as a first step, then you can get the right answer to the wrong question. If the question’s wrong, the answer doesn’t matter.
Then step two is delete the part or the process step. Delete. And if you’re not adding back at least 10% of what you deleted, you haven’t deleted enough. This stuff sounds maybe very obvious, but it’s very effective, because the idea is: if you never have to add anything back, you’re not deleting aggressively enough.
And then only the third step is to optimize the thing. One of the mistakes that I see smart people make all the time, especially good engineers, is optimizing a thing that should not exist. Sounds obvious. You know? Like, you could try to make the world’s best cloth biplane. I’m like, well, actually, no, we should have jet airplanes instead. You know? So you should not optimize things that should not exist.
And then step four is go faster. Again, this sounds really obvious, but people just don’t try going faster. And the final step is to automate. But only automate once you’ve done those other four things.
Now the reason I have this mantra is because I personally, many times, automated something, sped it up, optimized it, and then deleted it. And I’m like, wait. I’m tired of going backwards here. So, if you run that simple algorithm, in many arenas of life, you will be shocked at how effective it is.
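The five steps read naturally as an ordered checklist. A minimal sketch, with the step names paraphrased from the transcript and the ordering Musk insists on enforced in code:

```python
# Musk's five-step algorithm, paraphrased, in the required order.
STEPS = [
    "1. Make the requirements less dumb",
    "2. Delete the part or process step",
    "3. Simplify or optimize",
    "4. Accelerate cycle time (go faster)",
    "5. Automate",
]

def next_step(steps_completed: int):
    """Return the next step to apply, or None once all five are done."""
    if steps_completed < len(STEPS):
        return STEPS[steps_completed]
    return None

print(next_step(0))  # "1. Make the requirements less dumb"
print(next_step(5))  # None
```

The point of encoding the order is exactly Musk’s warning: automating (step 5) something you should have deleted (step 2) means doing the steps in reverse.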
Conclusion
BG SHANE REEVES: So, shockingly, we are already running out of time. So let me ask you this. If you could choose one attribute, just one attribute that’d be critical for our future officers to have to be successful, what would it be?
ELON MUSK: Curiosity. As long as you’re not a cat. But curiosity, try to read as much as possible, learn as much as possible, and in many different fields, and apply critical thinking to anything that you’re told.
BG SHANE REEVES: Thank you. So, on behalf of Lieutenant General Gilland and the entire academy, we’re really thankful that you’re here. We’re thankful you took the time to help us celebrate the excellence of the faculty and the cadets, and to share some wisdom with us as we think about what we need to do to be successful. We have a very important, no-fail mission: we have to fight and win, and we’re laser-focused on that.
ELON MUSK: Well, I mean, in my view, or I think probably a lot of people’s views, you know, America is like Atlas holding up the free world, and you are the arms of Atlas. So thank you.