Read the full transcript of author Chris Miller’s lecture titled “Chip War 2.0: The Global Battle for Semiconductor Supremacy” at World Knowledge Forum 2024.
TRANSCRIPT:
CHRIS MILLER: Well, welcome everyone. What I’d like to do over the next 40 minutes is explain to you why it is that semiconductors today are at the core, not just of advances in technology, which we’re used to, but also at the center of global debate in economics and in international politics. And to argue that you can’t understand the world around you without putting analysis of semiconductors at the center.
Behind all artificial intelligence systems today is computing power, more and more compute used to train the most successful AI systems. And this is a trend that has lasted not just for years, but for decades. Most of the advances in artificial intelligence have stemmed from using more data to train artificial intelligence systems, which in turn requires more computing power to use in the training of that data.
This chart shows you the most advanced AI systems of their day over the last 70 years. And what you find is that there’s been a consistent, extraordinarily rapid rate of increase in the amount of data used for training AI systems.
The amount of data used for AI training doubles on a regular basis. And what’s more, over the past 15 years, the rate of increase has only accelerated. This is what AI researchers, people at AI labs like OpenAI or Anthropic, refer to as the scaling laws. The intuition is that if you want a better AI system, you need a bigger AI system, a system trained on larger and larger quantities of data, which is why the AI systems that today undergird chatbots like ChatGPT are trained on almost all of the text that exists on the internet.
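To put that doubling in arithmetic terms: if the compute (or data) used to train frontier systems doubles every τ years, it grows as written below. The six-month doubling period in the example is an illustrative assumption, not a figure from the lecture.

```latex
D(t) = D_0 \cdot 2^{\,t/\tau}
\qquad\text{e.g., } \tau = 0.5\ \text{years (assumed)} \;\Rightarrow\; D(10) = D_0 \cdot 2^{20} \approx 10^{6}\, D_0
```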
And it turns out we’re just getting started.
Because it’s in vast data centers full of the most advanced servers with the most advanced chips inside of them that all AI systems are trained. This is why the world’s biggest tech companies are more focused than ever before on getting access to the cutting edge AI chips that are used in training these extraordinarily capable systems. And it isn’t only tech companies that are focused on computing power as a core metric of success in the world of artificial intelligence. Governments are too.
China, for example, has articulated a goal of increasing the computing power in China by 50% over the coming years. Countries from India to the United Kingdom have set out similar goals. Because they believe, just like all the world’s large tech companies believe, that better AI systems require more computing power. And therefore, access to computing power, access to the cutting edge chips that enable the training of big AI systems will be critical not just for technological advances, but for their economic and their political future.
Distribution of Computing Power
If you look today at the distribution of computing power around the world, what you find is that there are two countries far ahead of the rest. Here’s a chart put together by the Blair Institute of the server installation base by country: how many servers each country has installed. And what you find is that there are two leaders by far.
First, the United States, far out in front of the pack. Second, China, clearly in second place, but also clearly ahead of the rest of the competitors. And it’s no surprise, looking at charts like this, that there are also two countries that are setting the terms of the race to build AI systems. These are the countries that have access to the extraordinary volumes of computing power that are needed to train AI systems.
And these are the countries that are competing right now to gain access to the cutting edge chips that advanced AI training requires. And it’s data centers like this that they’re focused above all on building. Data centers, vast warehouses full of thousands and thousands of chips and servers into which data is fed as AI systems are trained. Data centers like this are rewriting the wiring of the global economy.
They’re reshaping investment in many different countries. They’re attracting billions and billions of dollars from some of the world’s largest investors who are betting that access to cutting edge data centers will be economically critical in the future because data centers like this are not only used for training AI systems, they’re also used for deploying them. That’s why AI data centers are being built not only in the United States and in China, but in many other regions of the world too. In Malaysia, the center of the data center boom in Southeast Asia, and in the Middle East, where Saudi Arabia and the United Arab Emirates are trying to become AI hubs in the developing world.
Many governments are focused on data centers like this as a tool of power in the modern world. And it’s forcing changes in how big tech companies operate. Because if you want a cutting edge data center like this, you need extraordinary volumes of energy to power it. More energy than data centers have ever consumed before.
Tech Companies as Energy Investors
And this is why some of the world’s biggest technology companies are now also becoming some of the world’s biggest energy investors. Whether it’s Microsoft or Meta, Alphabet or Amazon, they’re spending billions, and in some cases, tens of billions of dollars, building and buying power directly. Because in certain regions, like in the United States, we’re seeing the first increase in demand for electricity that we’ve seen in decades. And it’s being driven, above all, by data centers.
It’s artificial intelligence that’s demanding more power than ever before, which is forcing tech companies to look harder than they’ve ever had to look before to guarantee they can get access to the power that they need. That’s why, for example, many of the leading AI entrepreneurs have themselves personally invested in projects to build fusion reactors. It’s why Amazon recently announced that it was buying a data center located right next to a nuclear power plant, giving you a sense of just how critical power supply is for these cutting-edge data centers. But of course, the biggest challenge in building an AI data center is not getting the land or constructing the building or even accessing the power.
Challenging though that can be, the biggest challenge is getting the chips that go inside. Because at the core of artificial intelligence systems are many different types of ultra-complex, cutting-edge semiconductors, which make it possible both to train and deploy the types of AI systems that are now being rolled out. And today, it’s hard to get access to all of the chips that you need.
Not just hard for people like you or me, hard for people like Sam Altman, founder of OpenAI, who just a couple months ago was tweeting his apologies that he had to slow down access to certain ChatGPT services because even OpenAI couldn’t get access to all of the advanced GPU chips that it needed. It’s these chips, GPUs, graphics processing units, and the high-bandwidth memory that they’re coupled with, which are the currency of the AI era, the most important commodity, and also a commodity that, until very recently, has been in extraordinary shortage.
And if you want to understand the economics of artificial intelligence, or the geopolitics of AI, you’ve gotta trace AI all the way back to the chips that AI systems are trained on. And you’ll find, as I’ll discuss in the remainder of my remarks, that the supply chain of companies that produces these critical chips is extraordinarily specific and concentrated, which creates great monopolies and extraordinary profits for the companies that play a role in producing the computing power that AI requires, but also creates extraordinary vulnerability, given that many of these critical chips are produced by a single company, something that,
in an era of intensified geopolitical competition, is, in a lot of ways, a very dangerous risk.
NVIDIA’s Rise
For now, the boom in investment in data centers has been an extraordinary benefit to the number one producer of GPUs, NVIDIA, a company that, until recently, was best known for producing chips used in video games and computer graphics. But over 15 years ago, NVIDIA realized that the exact same chips that can render images beautifully on your computer or in video games can also be used to run the math that undergirds artificial intelligence.
And so, for a decade and a half, NVIDIA and its extraordinary CEO, Jensen Huang, have been betting on artificial intelligence being the real use case of the GPU chips that his company specializes in. They spent this time building out a software ecosystem to enable programmers and developers to build on top of GPUs in an efficient way.
They benefited from academic research, which showed that GPUs were far more capable of training AI systems than prior types of chips had been. And it’s turned NVIDIA into a company valued at trillions of dollars, one of the most valuable companies in the world, and also a company whose products are something that the AI world can’t live without. It’s estimated that today, 90% of advanced AI systems, basically every AI system that’s not trained by Google, are trained on NVIDIA’s chips. It’s an extraordinary market position at the center of the world’s biggest technological transition.
And it’s unsurprisingly created concern among other technology companies that their AI ambitions are dependent on a single company and a single type of semiconductor. And that’s why, as AI has become more important, as billions of dollars have been poured into developing AI applications, many other tech companies have started designing their own GPUs, too. It’s not just other chip firms like AMD or Intel that are building AI processors. It’s tech companies like Amazon, Meta, Alphabet, and Microsoft that are also trying to design their own AI accelerators to compete with NVIDIA, both because they don’t wanna have to pay the prices that NVIDIA charges, and because they’re worried that their most important technology might rely on a single provider, a single point of failure, which could threaten their ability to deploy the AI capabilities that they believe their future depends on.
Elon Musk has weighed in on this issue, too. He said that it’s harder to get access to GPUs than it is to get access to drugs. Now, those of you who follow Elon closely might wonder how meaningful that statement actually is. But in fact, Elon has spent the last year trying to acquire over 100,000 GPUs to build a massive new data center in Memphis, Tennessee, where he plans to train a new AI system that’s supposed to compete with OpenAI and Anthropic.
And if the richest person in the world is struggling to get access to GPUs, you know that supply is indeed extraordinarily tight. So why? Why is it that one company has an almost monopolistic position in the production of these ultra-critical semiconductors? Why can’t anyone else design comparable chips that can fit into the exact same supply chains?
The Complexity of Cutting-Edge Semiconductors
Well, the reason is that when you start digging into semiconductors like these, you find that they’re among the most complex manufactured goods that humanity has ever made. In fact, nothing really comes close to the complexity of cutting-edge semiconductors, whether the chips in your phone or the chips in your PC or vast data center chips like the one pictured here. The manufacturing at nanometer scale, that’s billionths of a meter, is basically unparalleled in any other segment of manufacturing. And the fact that chips like these can have billions or tens of billions or hundreds of billions of tiny transistors, little electrical switches that turn circuits on and off, means that there’s nothing else that comes close in the number of components involved.
And this supply chain of companies that’s necessary to produce chips like this is also the most complex of any that exists in any segment of the economy in all of human history. Nothing else really comes close. Because NVIDIA, of course, doesn’t actually manufacture any semiconductors. It never has.
It’s made no effort to manufacture. It only designs and relies on a series of other companies to do the manufacturing for it. And that’s inevitable, I think. It’s inevitable that you need a complex supply chain to manufacture the world’s most complex products.
No company can do it on its own. But it means that supply chain management is essential for a company like this and anyone who uses their products. And it means that understanding the supply chain, understanding the choke points in the supply chain, is critical for making sense of how AI systems are actually produced. And indeed, the supply chain of semiconductors like those that power AI stretches across the world.
A typical semiconductor might be designed using software produced in the United States, use intellectual property from companies based in the US and Europe, rely on chemicals and materials from Japan, be produced with manufacturing equipment from the Netherlands, United States, and Japan, be actually manufactured in Taiwan before being sent to China or Malaysia or somewhere else for assembly into a final product. There’s scarcely a single chip in the world that’s produced entirely in one country.
Almost every semiconductor that exists requires international trade and exchange, the use of intellectual property, of tools, of components, of materials from lots of different countries. And it’s understandable why we rely on an international supply chain like this because the complexity is just too large, too extraordinary for a single company or a single country to do on its own.
Just think about some of the materials that are involved in chip making, whether it’s the silicon or many of the chemicals that are used in the manufacturing process. They often have to be 99.99999999% pure because misplacing just a handful of atoms in advanced chip making can cause defects in how circuits work. Well, there aren’t that many companies that know how to produce a material with that many nines of purity. And you don’t just need one material in chip making, you need dozens.
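To put a number on what that many nines means, here is a quick back-of-the-envelope check. The per-mole framing is mine, not the speaker’s, and it only underlines how unforgiving the tolerance is: even at ten nines of purity, a single mole of silicon still contains tens of trillions of stray atoms.

```python
# Back-of-the-envelope check (framing assumed, not from the lecture):
# how many foreign atoms does 99.99999999% purity still leave in one mole of silicon?
AVOGADRO = 6.022e23               # atoms per mole
purity = 0.9999999999             # ten nines
impurity_fraction = 1 - purity    # ~1e-10

impurity_atoms_per_mole = AVOGADRO * impurity_fraction
print(f"impurity fraction:            {impurity_fraction:.1e}")        # ~1.0e-10
print(f"foreign atoms per mole of Si: {impurity_atoms_per_mole:.1e}")  # ~6.0e13
```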
You don’t just need one machine in chip making, you need many different types of machines. You don’t just need one type of software to produce an advanced chip, you often rely on multiple types of software. And the same is true for the intellectual property libraries that you draw on. This is why no one today is self-sufficient in semiconductors.
Not a single company, not a single country. And it means that the semiconductor supply chain is, more than any other segment of the economy, reliant on the ability to transfer products and technology across borders. Which means that there’s also no other segment of the economy that’s more exposed to the intensification of geopolitical conflict between China on the one hand and the United States on the other. So let’s pause for a minute and look at the shape of the semiconductor supply chain.
The Semiconductor Supply Chain by Country
Who does what in the process of making advanced semiconductors? Here’s a table that shows you the revenue share of different segments of the semiconductor supply chain by country. United States, South Korea, Japan, Taiwan, Europe, China, and a handful of other countries on the far left-hand side. And the rows tell you the supply chain steps that are necessary in producing a chip.
First, the EDA, the Electronic Design Automation Software, the ultra-specialized tools that are used to design semiconductors. Second, the core intellectual property, the libraries of IP that are used in the design process as well. Third, the actual manufacture of silicon wafers, which just a handful of companies can do with the requisite level of purity. Fourth, the production of the fabrication tools, the extraordinarily complex tools, more on them in a moment, that are necessary to produce advanced semiconductors.
Then the design of semiconductors, the fabrication, and the process of assembly, test, and packaging. These are, in broad strokes, the key segments of the semiconductor supply chain. And you’ll find, looking at the chart, that no one does it all. The US doesn’t do it all. Taiwan doesn’t do it all. Korea can’t do it all. Japan can’t do it all. You’ll find that there’s specialization in different countries for every part of the supply chain.
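Listed as a simple ordered structure, those stages look like this. This is my restatement of the steps just described, not the chart itself, and it deliberately omits which countries and firms lead each stage.

```python
# The supply-chain stages named above, in order. Countries and firms are
# deliberately omitted; the point of the chart is that no single country
# covers every stage.
SUPPLY_CHAIN_STAGES = [
    "EDA software",                  # electronic design automation tools
    "core IP",                       # licensed intellectual-property libraries
    "silicon wafer manufacturing",   # ultra-pure wafers
    "fabrication equipment",         # lithography, deposition, etch, inspection
    "chip design",                   # fabless designers such as NVIDIA
    "fabrication",                   # the fabs that manufacture the chips
    "assembly, test, and packaging",
]
```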
And you’ll also find, somewhat surprisingly perhaps, that China, the world’s manufacturing superpower, plays a small role. In fact, China has spent at least as much money each year over the past decade importing semiconductors as it has spent buying oil. Think about that. China spends more money buying chips than it spends buying oil.
In fact, in all of world trade, there’s no larger flow of goods than the flow of chips into China. Chips from Korea, chips from Japan, chips from Southeast Asia, chips from the US, chips, perhaps most importantly, from Taiwan. And these chips are used both to assemble devices in China that are then sold to the rest of the world, like phones or PCs, or used for China’s internal consumption. And if you put yourselves in the shoes of China’s leadership, this is an extraordinary, a staggering vulnerability.
The world’s largest manufacturer can’t produce all the chips that it needs. And yet, manufactured products every year rely on more and more semiconductors. That’s why the trend in China has been to import more and more semiconductors from Korea and from Taiwan every year over the past decade. Without these chips, China’s manufacturing base today would freeze.
It’s impossible to produce a car, for example, without semiconductors, and most of the chips in the cars made in China are imported. The same is true for computers. The same is true for most of the phones in China. The same is true for medical devices and construction equipment and agricultural equipment.
China imports most of the chips that it relies on. And it doesn’t like this status quo. It doesn’t like the status quo for economic reasons. Chip making is a very good business, as Chinese leaders have noticed as they’ve looked around Asia, where countries like Japan and Korea and Taiwan have gotten wealthy in no small part because of their mastery of the chip business.
But China also doesn’t like it for political reasons, because China’s leaders realize that they import critical semiconductors largely from countries that are, at best, competitors, and at worst, geopolitical adversaries. If you were sitting in Beijing thinking through worst-case scenarios, one of your worst-case scenarios would look like this. What happens to your manufacturing base if the supply of chips flowing into China gets cut off? Your manufacturing base would freeze up.
China’s Self-Sufficiency Drive
And so you can understand why, over the last 10 years, China has tried to become more self-sufficient in semiconductors. In 2014, President Xi Jinping identified chips as what he calls a core technology. And since then, almost all of China’s industrial policy programs, like Made in China 2025, have identified semiconductors as a priority, a sector in which China wants to buy less from abroad and produce more at home. Partly because this is an industry with advanced technology and high margins.
There are simple economic reasons. But also because China thinks that self-sufficiency would give it political benefits. It’d be less reliant on its neighbors, less reliant on Taiwan, less reliant on Korea, less reliant on the United States. And so one of the key questions overhanging not just the tech industry, but the entire global economy, is to what extent China’s self-sufficiency drive will succeed.
If it does succeed, the consequences for international trade are profound. The biggest flow in international trade could well be reduced if China’s able to produce more of the chips that it needs domestically. And so I think this bears very careful watching for anyone interested in the shape of the global economy. There’s actually nothing more important, I think, in all of international trade than this question, because there’s no flow of goods that’s larger.
The challenge that China faces is that it’s hard enough to become self-sufficient in the manufacturing of chips, but there’s the rest of the supply chain that matters too. If you can manufacture all the chips you need, but you can’t manufacture all the components that go into all of the chips that you need, are you really self-sufficient? And as my table previously showed, today no one is self-sufficient. Not the US, not Taiwan, not Japan, not Korea.
And so self-sufficiency is an extraordinarily ambitious goal. If you wanna produce all the materials, all the software, all the designs, and all the machine tools yourself, that’s a whole lot of R&D you need to master, a whole lot of capital expenditure you need to undertake, and a whole lot of extraordinarily complex technologies that you must learn how to produce. And it’s machines like these that are the biggest challenge that China faces. Pictured here is an extreme ultraviolet lithography machine produced by just one company, ASML of the Netherlands, which has 100% of world production of these advanced lithography systems.
The Complexity of Manufacturing Tools
It took ASML around three decades to learn how to produce tools like this, which today are critical to the production of advanced chips at any sort of scale in an economically efficient way. And these tools are, without any question, the most complex machine tools that have ever existed. The most cutting-edge tools that ASML produces cost over $300 million apiece. They involve components like the flattest mirrors humans have ever made, the most powerful laser deployed in a commercial device, and an explosion happening constantly inside the machine as a tiny ball of tin is pulverized by the laser and explodes into a plasma at a temperature 40 times hotter than the surface of the sun.
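For scale: the Sun’s surface is roughly 5,800 K, so forty times hotter is on the order of a couple hundred thousand kelvin. The line below is just that multiplication, not a figure quoted in the talk.

```latex
40 \times 5{,}800\ \mathrm{K} \approx 2.3 \times 10^{5}\ \mathrm{K}
```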
If you were looking for a tool to try to replicate, this is not the one you’d choose. There’s nothing more complex than this. And it’s not as though a single company produces this tool on its own. ASML assembles it, but it in turn relies on its own supply chain of mind-boggling complexity.
There are so many components in these tools that ASML itself doesn’t know exactly how many are inside. But the number’s at least in the hundreds of thousands, relying on precision-manufactured parts from Europe, from the United States, and from Japan, which would be extraordinarily difficult to replicate. And indeed, China’s trying. Since 2018, it’s been illegal to transfer these tools to China.
And yet six years later, ASML is still the monopoly player, with no one in Japan, in the United States, or in China having succeeded in replicating what ASML can produce. And this is not the only machine you need to make advanced chips. This is one of several. There are other machines that can lay down thin films of material just a couple of atoms thick.
Other machines that can etch tiny canyons into silicon just a couple of atoms wide. Yet more machines that can inspect the semiconductor once it’s been manufactured, and identify nanometer-scale errors in the manufacturing process. And you need all of these to make chips at the cutting edge, at scale. When you start to dig into equipment like this, you can understand why no one is self-sufficient in semiconductors.
Why not a single country in the world has learned to produce all of the equipment, all of the materials, all of the software that’s necessary to make advanced chips. It would be extraordinarily expensive, complex, mind-bogglingly demanding to undertake a self-sufficiency drive. And it illustrates the challenges that China’s own self-sufficiency drive involves, challenges that U.S. policy is deliberately compounding. Because when U.S. government officials look at the future of artificial intelligence, and look at the chips that have made advances in artificial intelligence possible, they want those technologies to go to friendly countries before they go to adversaries. Which is why, since 2018, it’s been illegal to transfer these tools to China. And why, since 2022, the U.S. has restricted the ability of NVIDIA, a U.S. firm, to sell its most advanced chips to China, which used to be its second-largest market.
The bet the U.S. government is making is simple. It’s the exact same bet that Google’s making, the same bet that Sam Altman of OpenAI is making, the same bet that Anthropic is making, and Mark Zuckerberg of Meta is making.
The bet that if you want better AI systems, you need bigger AI systems. If you want bigger AI systems, you need better GPUs. And if you want better GPUs, you need tools like this to produce them. And that’s why the U.S. has injected a new level of geopolitics into the semiconductor supply chain. If this bet is correct, that better chips are critical for better AI, then tools like this are choke points. Choke points in the ability to produce the chips that advanced AI requires.
And it’s this word, choke point, that has also spurred China into action. Spurred China into trying to develop its own domestic capabilities. Spurred China into trying to further substitute out imported products and replace them with Chinese components. In other words, China’s strategy and America’s strategy are built on the same basic assumptions.
China thinks that AI will be very important, that tools like these will be critical, that the shape of international trade will be determined by whether or not China can be self-sufficient in semiconductors. And so China’s poured tens of billions of dollars a year into trying to build self-sufficiency. The U.S. has the exact same assumptions, and for the exact same reasons is trying to cut China off from these tools and the advanced chips that they enable. And the reason that both Beijing and Washington are fixated on AI as critical to the future is not primarily about ChatGPT. It’s not about who will create the next trillion dollar tech company. It’s not about whether AI will transform the way we write emails or post on social media.
It’s because they’re both betting that AI will be critical to some of the core elements of national power. And indeed, in the Indo-Pacific region, both governments perceive an arms race underway. An arms race that will be measured in some of the traditional metrics of military power, like the number of ships that are built, the number of missiles that are deployed, but also an arms race that will be shaped by artificial intelligence. And this isn’t a projection about the future.
This is a statement of reality. It’s already the case that next generation military systems that are being fielded right now are built on the assumption that AI and autonomous action will be at the center of how militaries fight and how spy agencies operate. We see it, for example, in the Russia-Ukraine war, where both the Russians and the Ukrainians have poured resources into building out their drone programs, drones which, in some cases, are quite simple, but in other cases are increasingly autonomous in their action. And nearly every military that’s looked at the Russia-Ukraine war has concluded that future conflicts involving more technologically advanced countries will rely even more heavily on autonomous systems.
Well, an autonomous system is an AI-enabled system. And just as you need a whole lot of computing power if you want your car to drive autonomously, so too, if you want your drone to fly autonomously, it requires the exact same computational inputs. In other words, computing and drones are two sides of the same coin. And it’s not just drones.
AI and Intelligence
The same is true for intelligence. It’s now been nearly a decade, at least as far as we know from public sources, that the U.S. military has been using AI to make sense of all the intelligence that its spy satellites gather and its intelligence agencies pick up.
There’s too much data being collected by satellites right now for humans to sift through. So you need computer vision algorithms, for example, to identify what’s a truck and what’s a tank. Everyone is doing it. And as the cost of launching satellites declines and the amount of data being collected increases, it’s not only the world’s biggest militaries that will rely on AI; it’s already the case that medium-sized ones do too.
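To make that concrete, here is a minimal sketch of the kind of sifting described above, assuming PyTorch and torchvision and an off-the-shelf ImageNet classifier. The class names, threshold, and tile-scanning loop are illustrative, not any military’s or agency’s actual pipeline.

```python
# A minimal sketch of image-tile triage with a pretrained classifier.
# Assumptions (mine, not the lecture's): PyTorch + torchvision installed,
# image tiles on disk, and ImageNet-1k classes as a stand-in for a real
# vehicle-detection model trained on overhead imagery.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()           # resize / crop / normalize for this model
categories = weights.meta["categories"]     # ImageNet-1k class names

# Illustrative classes of interest.
VEHICLE_CLASSES = {"tank", "trailer truck", "tow truck", "garbage truck"}

def flag_tile(path: str) -> tuple[str, float]:
    """Return the top predicted class and its probability for one image tile."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    conf, idx = probs.max(dim=0)
    return categories[idx.item()], conf.item()

# Usage sketch: sift a folder of tiles so analysts only review likely hits.
# import pathlib
# for tile in pathlib.Path("tiles").glob("*.png"):
#     label, p = flag_tile(str(tile))
#     if label in VEHICLE_CLASSES and p > 0.5:
#         print(tile, label, round(p, 2))
```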
In other words, AI is about the future of national power. It’s about the future of how militaries fight. It’s about the future of how intelligence agencies operate. And therefore, when both Beijing and Washington think about AI, they’re thinking not about ChatGPT, they’re thinking about systems like this.
And they believe that whichever country has access to the best AI ecosystem, the most trained experts, the most advanced data centers, the most sophisticated semiconductors, is likely to have a head start in developing AI-relevant military technologies, AI-enabled intelligence systems. And I think that’s not a crazy bet to make, which is why it’s not just one country pursuing that strategy, it’s every country pursuing that strategy. And that’s precisely why the US has turned to GPUs, the critical chip in building AI systems, as the centerpiece of its strategy to maintain the technological lead of the West against China. It’s as simple as that.
And you can track who’s buying GPUs, who’s getting access to the advanced chips that NVIDIA produces. And you’ll find that most of them are going to US firms, with Chinese firms a distant second. Well, this is exactly what US policymakers want. They want Chinese firms to have trouble getting access to the computing power they need.
They want the Chinese ecosystem to feel like it can’t get all of the access to training and deployment infrastructure that it would like. They want a world in which the West’s most advanced technologies are primarily benefiting Western firms. And China naturally wants the opposite, two conflicting strategies, but both focused on the exact same core technology. And for now, I think it’s fairly clear that Western firms, the Western AI supply chain that I’ve sketched out, a supply chain that runs through Silicon Valley, but also through Korea and Taiwan and the Netherlands and Japan, is out in front.
Whether you look at AI companies by valuation or the size of systems that AI companies are training, what you’ll find is that it’s the West in the lead. US firms are training AI systems that are an order of magnitude bigger than many of their closest Chinese competitors. Now, this might not be the only metric that matters, but I think it’s clearly a metric that will be quite important. It’s the metric that OpenAI and Anthropic and Facebook and many others are betting on to justify their investments in next generation systems.
And I think it provides a glimpse into where the competition over AI is headed. In other words, you can’t understand the AI landscape today without putting this geopolitical competition at the center of your analysis. You can choose to ignore it if you like, but the supply chain that produces the components that artificial intelligence relies on is being reshaped by it. And what’s more, the two most important governments in this process are both looking at the same information, drawing the same conclusions, and pursuing essentially similar strategies and trying to build up their own AI supply chain and gain an advantage in the process.
AI is gonna be fundamentally politicized by this dynamic. There is no neutrality because the chips that make AI systems possible are not neutral. They’re manufactured somewhere, using somebody’s software and someone’s machine tools. And as much as companies themselves would like to be neutral, like to be able to sell to all geographies, like to not have to be concerned about where they’re buying components from, the reality is that governments are gonna make them choose sides.
China’s gonna make companies choose sides. The United States is gonna make companies choose sides. And indeed, this is already happening. And it’s not gonna stop with chips.
It’s already extending to cloud computing, the offering of the computing services that enable both training and deployment of AI systems. Now the US is considering, according to media reports like this, imposing rules on the provision of cloud computing services by US firms like Google, Microsoft, and Amazon, through services like Azure and AWS, to customers from China and other countries. The US wants to use its advantage in this sector to make sure that its friends get access to the computing they need, and its adversaries do not. And so we should expect the politicization of semiconductor supply chains, as both China and the US try to build out their own ecosystems, to extend far beyond semiconductors into cloud computing and also AI model provision.
The West, the existing semiconductor supply chain, is in a fairly strong position. China’s a large producer of semiconductors, but most of the chips that China produces are relatively low-end. Chinese firms like SMIC, the most advanced chip maker in China, as well as Huawei are trying to move up the value chain, but they face real challenges, which is why today China’s still importing low-end NVIDIA chips, even though NVIDIA has specifically designed these chips to be less capable than the chips that are sold to the rest of the world. You’d only buy low-end NVIDIA chips if you couldn’t produce enough high-end semiconductors at home, yet that’s the position that China finds itself in.
And I think this basic dynamic will persist as long as the scaling laws persist, as long as it’s still the case that the most advanced AI systems are trained on more and more data every single year, and so long as we continue to have the same rate of advance and improvement in semiconductors. Moore’s Law, the promise that chips will double in processing power every two years, is of course not a law of physics. It’s just been an empirical reality.
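Written out, that empirical regularity is simple compounding; the ten-year horizon below is just an illustrative calculation, not a projection from the talk.

```latex
P(t) = P_0 \cdot 2^{\,t/2}\quad (t\ \text{in years})
\qquad\Rightarrow\qquad P(10) = P_0 \cdot 2^{5} = 32\,P_0
```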
But so long as it’s the case that chips get vastly better every year, and that vastly better chips are critical to training more capable AI systems, I think we should expect that understanding semiconductors is going to be critical to AI, critical to understanding where profits will be made, critical to understanding how technology will be developed, but also critical to understanding how AI will be politicized, something that is of intense focus both to Washington and to Beijing.
Thank you very much for having me. Thank you.