Here is the full transcript of NVIDIA GTC 2025 Fireside Chat titled “Quantum Computing: Where We Are and Where We’re Headed”, where NVIDIA founder and CEO Jensen Huang hosted industry leaders, (April 9, 2025).
JENSEN HUANG: Welcome to Quantum Day at GTC, the first of its kind. Good morning. Welcome. This is going to be a very special event.
As you know—well, you might not know—I’m a public company CEO, and every so often, someone asks me a question. Most of the time—well, some of the time—I’m going to try to lower the bar here—some of the time, I say something right. And sometimes it comes out wrong.
What happened was somebody asked me how long before a quantum computer will be useful. Remember, this is from somebody who's built a computing platform. To me, building NVIDIA and building CUDA and turning it into the computing platform that it is today has taken us over twenty years. So the idea of time horizons of five, ten, fifteen, twenty years is really nothing to me.
Quantum computing has the potential and the hope—all of our hopes—that it will deliver extraordinary impact, but the technology is insanely complicated. The idea that it would take years to achieve was something that I would expect because of the complexity of it and the grand impact it would have.
When I gave the answer, the next day I discovered that several companies' stock—apparently the whole industry's stock—went down 60%. Then I started to learn about this, and my first reaction was, I didn't know they were public. How could a quantum computing company be public? Anyhow, I discovered they were public companies. I'm very happy for them.
Bringing the Quantum Industry Together
So I said, listen, the world’s got this wrong. Let’s invite all of those companies and more, the quantum computing industry. To the extent that they don’t bring cabbages and apples and stuff that they can throw at me, this would be an extraordinary moment where we can learn about the state of the art of quantum computing.
There are so many different approaches: trapped ion, neutral atoms, superconducting qubits, topological qubits, quantum annealing, photonics. There’s so many different ways of addressing this that I thought, wouldn’t it be amazing if the CEOs, the technology leaders, the companies that are leading this pioneering technology were coming together for the very first time to talk about it? And of course, in the process, could explain why I was wrong. This is the first event in history where a company CEO invites all of the guests to explain why he was wrong.
NVIDIA’s Role in Quantum Computing
NVIDIA doesn’t make quantum computers. But we dedicate ourselves to creating accelerated computing stacks to enable quantum computers. We do the same with self-driving cars. NVIDIA is probably more integrated into the world of automobiles and autonomous vehicles, and we work with just about everybody in some way to advance autonomous vehicles, and yet we don’t build cars.
NVIDIA has a broad range of technologies and product offerings and libraries and computers—we call it the three-computer solution—to help advance robotics in all forms: facility robotics, factory robotics, factories that are going to be robotic, orchestrating robots that are going to build products that are robotic. It's an incredibly complicated set of computing and libraries and algorithms and models, and we approach it as if we are deeply integrated into the ecosystem and industry, and we care deeply about them, and yet we don't build robots.
We don't build quantum computers, but we are deeply integrated into the quantum computing industry, and we create libraries. CUDA-Q is a programming model for hybrid classical and accelerated quantum computing. We have the cuQuantum libraries that help you simulate quantum circuits, and DGX Quantum to do error correction of quantum computers. We partner with them. We support them. We help them in any possible way.
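For context on what "simulating quantum circuits" involves, here is a toy statevector sketch in plain NumPy. This is an illustration of the underlying math only, not NVIDIA's actual cuQuantum or CUDA-Q API: it builds a two-qubit Bell state, the kind of computation that GPU circuit-simulation libraries accelerate at much larger qubit counts.

```python
import numpy as np

# Toy statevector simulation of a 2-qubit Bell circuit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0 (MSB)
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                      # start in |00>
state = np.kron(H, I2) @ state      # Hadamard on qubit 0
state = CNOT @ state                # entangle: (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2
print(probs)  # ~[0.5, 0, 0, 0.5]
```

The statevector doubles in size with every added qubit, which is exactly why this brute-force approach is a natural fit for massively parallel GPUs, and why it eventually runs out of memory, motivating real quantum hardware.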
We care deeply about this ecosystem, and I’m really happy to bring our partners, many of our friends. There are many that are not here, and the reason for that is because we had to do this in three panels. There were so many people.
For the quantum computing industry, as you know, a new computing platform is not easy to create. We didn't create CUDA computing ourselves. We created the architecture. Of course, we created computers. We dedicated ourselves to a very long roadmap of compatibility, to helping developers, to creating libraries and tools, and to evangelizing. But in the final analysis, the CUDA accelerated computing ecosystem was built by all of us. That's what GTC is about.
In a lot of ways, this is just the beginning of the quantum computing ecosystem, and it’s really terrific to be able to celebrate it with all of our friends and partners.
NVIDIA’s Quantum Research Lab Announcement
I'm making an announcement today that NVIDIA is starting... now, I'm saying this as if this announcement is coming as a result of... I just want to let you know that we were going to make this announcement before all this. Okay? So don't make it a cause and effect; physics matters here. The cause and effect is, in this case, completely unrelated.
We're announcing that NVIDIA is going to open a quantum research lab in Boston. It will likely be the most advanced accelerated computing hybrid quantum computing research lab in the world. And it's going to be located in Boston so that we can partner with Harvard and MIT. Some of our partners are going to be in there initially, but many others will be working in this quantum research lab in the long term. Quantinuum, Quantum Machines and QuEra are going to be the inaugural partners with us to build this quantum research lab. I'm very happy about that, and we're going to get that going as soon as possible.
Introducing the Quantum Computing Leaders
And so now what I’m going to do is I’m going to introduce some of my CEO colleagues.
Different Approaches to Quantum Computing
JENSEN HUANG: So listen, each one of you has chosen a different approach to quantum computing. Maybe what we could do is start by having each one of you take a moment and talk about your approach and why you chose it. And Mikhail, since I almost left you behind, why don't you start?
MIKHAIL LUKIN: Thank you. First off, thank you for hosting us. It’s amazing to be here. And also, thank you for your contribution to this emerging quantum ecosystem.
We are building quantum computers literally from single atoms, and we assemble them and control them using arrays of laser beams, using basically techniques like holography, techniques similar to those used to project computer images to the big screens.
The key advantage is that the atoms are basically God-given qubits. They are all identical. They're extremely well isolated. We can preserve quantum states for a very long time, but we can also use light, use lasers, to control these atoms, position them at will and move them around, including during the computation process itself.

In particular, it allows us to build a processor where the connectivity is basically a living organism: it evolves during the computation itself, and this is very special. This allows us to build systems now which have thousands of qubits. It allows us to deploy for the first time these techniques of error correction, which you mentioned, and execute algorithms with so-called protected logical qubits.
JENSEN HUANG: That’s great.
SUBODH KULKARNI: Thank you for the opportunity. At Rigetti Computing, we develop superconducting gate-based quantum computers. We are based in Berkeley here, and we also have a fab in Fremont.
Why superconducting gate-based quantum computing? Gate-based because that's how we know how to do the broad world of computing; that's how classical computers run. Why superconducting? That's an area where, along with us, many other companies, including some big companies like IBM, Google and Amazon, as well as the government of China, are investing heavily in superconducting gate-based computing.
The reason we like superconducting is primarily its advantages in scalability and gate speeds. We are fundamentally using a silicon chip, so we know how to scale up because of the leverage of the semiconductor industry and five decades of experience there. And because we are dealing with electrons, gate speeds are in the nanoseconds, and that makes it very easily compatible with the CPU-GPU ecosystem, which is the way we think quantum computing is going to come along.
We feel very good about scalability and gate speed. The challenge, and the Achilles' heel, if you will, of superconducting quantum computing has always been fidelity. We get noise because these are intrinsically engineered devices in the chips, just like CMOS technology. Historically, the two-qubit gate fidelity, as we call it, the fidelity when qubits entangle with each other, was in the low to mid nineties.

What's been exciting is that in the last few months, we, along with some other companies in superconducting like IBM and Google, have made some very important strides, and now we are in the 99 to 99.5 percent two-qubit gate fidelity range, which is comparable with the best over here in neutral atoms and other areas. So we maintain the advantages of scalability and gate speed, but now we feel very good about where we are with fidelity, and that makes it a lot more attractive.
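Those fractions of a percent matter more than they may sound. As a back-of-the-envelope illustration (not a statement about any vendor's hardware): if each two-qubit gate succeeds with probability f, a circuit with n such gates succeeds with probability of roughly f raised to the n, ignoring error correction and all other error sources.

```python
# Back-of-the-envelope estimate: probability that a circuit of
# n two-qubit gates runs error-free, given per-gate fidelity f.
# Ignores error correction, single-qubit errors, readout, etc.
def circuit_success(f: float, n_gates: int) -> float:
    return f ** n_gates

for f in (0.95, 0.99, 0.995):
    print(f"fidelity {f}: 100-gate success ~ {circuit_success(f, 100):.3f}")
```

Moving from 95% to 99.5% gate fidelity raises the chance of a 100-gate circuit completing without error from under 1% to roughly 60%, which is why the recent fidelity gains, and error correction on top of them, are the story here.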
Within this space, we differentiate ourselves with an open, modular approach. We have designed our stack so that if we find a more creative solution out there, whether it's error correction from a company like Riverlane, or CUDA-Q from NVIDIA, or control systems from Quantum Machines, we can easily integrate it into our stack, as opposed to some other companies like IBM or Google, who have designed theirs in a mainframe approach.
Our flagship system is an 84-qubit system with 70 nanosecond gate speed. It’s available to anyone on AWS and Azure right now. We believe it’s one of the best. But frankly, it’s not good enough yet for any practical use as we talked about earlier.
JENSEN HUANG: Don’t give up yet. Rajeeb, why don’t you start?
RAJEEB HAZRA: Thanks, Jensen, for hosting us. At Quantinuum, we build quantum computers using the trapped ion modality as it’s called and a particular architectural approach called QCCD or Quantum Charge Coupled Device.
The beauty of this approach is it produces industry’s highest fidelities, triple 9s and beyond. There are a lot of challenges in quantum computing, as you know. However, coherence time, scalability, fidelity, these are some of the most challenging properties. And as you’re thinking about and listening to these different quantum computing approaches, just listen for those words. I think they’re consistently being used, and it helps understand the pros and cons or the challenges and the opportunities associated with each one of the technologies.
Basically, we have the industry's highest fidelities. Our approach is to extend the QCCD architecture to higher levels of scale. So we will have 50 logical qubits, that is, highly reliable qubits, this year; 100 logical qubits in about eighteen months; and we see a clear path to millions of qubits early in the 2030 to 2032 timeframe. Another part of our approach is not just building the science of it.
Industry Leaders Share Their Approaches
RAJEEB HAZRA: Our approach is that we work on really, really challenging problems. So it's like living on the edge. We work on them with hardware and software we build, and with customers. So we aren't trying to create a solution looking for a problem; we start with the notion of what big hairy problem we are going to lunatically go attack, and then build our capability across hardware and software to be able to do that. And I hope we get a chance to talk a little bit about what else is out in the industry today that is a beautiful company.

JENSEN HUANG: Welcome to GTC. We have a lot of hairy problems here. All right. So thank you, Rajeeb. Yeah.
LOÏC HENRIET: Yeah. Thanks a lot for hosting us. At Pasqal, we build quantum processors that leverage neutral atom technology. So it's quite close to what Mikhail Lukin and QuEra are doing.
This technology has several key advantages, like scalability. We can trap and control many of those qubits right now, thousands of them. And it's easy to control them with laser light. It's also a relatively recent technology for developing quantum computers, compared to trapped ions and superconducting qubits, but there is a lot of progress and a lot of momentum in the gate fidelities and scaling of this approach.
At Pasqal, what we are also committed to doing is working very hard on the engineering of those devices, to turn them from lab experiments into real industrial products. I think we all agree here that usage and adoption are very key for the entire quantum community right now, and at Pasqal we really want to deliver on that promise. So over the past eighteen months, we've delivered, or will have delivered, four machines worldwide, including one in France at the CEA and another one in Germany at the Jülich Supercomputing Centre. So that's about it for Pasqal.
We work with neutral atoms and focus very strongly on engineering.
JENSEN HUANG: That’s terrific. Thank you. Go ahead, Peter. Please tell us about the company.
PETER CHAPMAN: So I'm the Chairman of IonQ. We're a trapped ion company like Quantinuum. Trapped ions were actually used back in 1995, when people were looking at atomic clocks. Atomic clocks and our technology have a huge overlap. And back in 1995, a team at NIST that included one of the co-founders of IonQ did the first-ever quantum logic gates.
And all of this craziness that you see coming all started in 1995 from that experiment. So we’re now thirty years into this investment in trapped ions. So very similar to other modalities, we use individual atoms and we use lasers to do computation. We’re down at 0.02 nanometers. So when you look at the silicon industry, they’re way, way up there compared to where we play.
We play with individual atoms. So the advantages of our technology: one is that you can have a room-temperature quantum computer, so it can fit in a rack. And, to be honest, it looks kind of boring compared to what you probably imagine, because it's rack-based and room temperature. The other advantage, because we use optics and lasers, is that you can network them together to do distributed quantum computing, to get to larger numbers of qubits, and you can reuse the existing infrastructure of the internet using fiber optics. And then, as has been mentioned, trapped ions have the best average two-qubit gate fidelities.

And so, in that sense, they lead, and that's a fairly large advantage, because it means the amount of error correction that you need to do will be less than in maybe some other modalities. And I'll just say, since 1995, it's amazing how many different modalities have shown up for qubits and how much progress has been made. So it's really quite exciting from that point of view, and it's great to see the leading companies on stage here today with us.
JENSEN HUANG: Thank you, Peter. Alan, go ahead.
Quantum Computing Applications and Achievements
ALAN BARATZ: Thanks, Jensen. So we are a superconducting company, similar to Rigetti. We believe that superconducting provides the best balance between qubit fidelity, the quality of the qubits, and gate speed, the time to compute. But we're actually quite different from everybody else on this panel, and pretty much everybody else in the industry, because we use annealing technology as opposed to gate-model technology. Without going into the details, annealing is a much easier technology to work with.

It's easier to scale. It's much less sensitive to noise and errors. And probably the best proof point for that was in a paper that we published in Science last week, where we performed a useful computation, computing properties of magnetic materials, that would take nearly a million years to compute classically. And then this week we actually put a paper on the arXiv where we showed how to use that computation to create quantum proof of work in a blockchain. So the idea is that you would use the quantum computer to create the hashing function, and you would use the quantum computer to validate the hashing function.

And we now have this running on four of our quantum computers as the first distributed quantum application, where in fact we're generating hashes and validating hashes, and we think this could be a very interesting, much lower energy consumption approach to blockchain.
JENSEN HUANG: Every time somebody in the quantum computing industry achieves a milestone, it stirs up a fair amount of controversy among the others. Did you stir up any controversy with your achievement? Because at the moment, I think the achievement-to-controversy ratio is literally one to one.
ALAN BARATZ: So the answer to the question is that I've received a lot of positive feedback from my colleagues in the industry. It was exactly like me when I did it. That having been said, and only since you asked, and I don't like to name names, but I will: there was a paper that came out of some researchers at the Flatiron Institute in New York. This is a really solid research team, and what they were able to do was advance the state of the art of classical computation with tensor networks, and show that for the smallest instances that we computed, which we also computed classically on GPU clusters, they could do it a little bit faster. Now, they made some claims about how that undermines the results, but not at all. I mean, we computed multiple lattices, multiple sizes of lattices, multiple evolution timeframes, multiple properties on the lattice. So this is a very strong result. It's actually been in the public domain for over a year now.
JENSEN HUANG: Yeah, that's terrific. So I guess the question that stirs up a fair amount of excitement is really about what the definition of usefulness of quantum computing is, and when we expect it. Before we answer when we expect it, maybe we'll build up to it. We'll start back with Mikhail again. What are some of the early applications you think would be worthy of the endeavor of a quantum computer, number one? And number two, how do you define usefulness?
Defining Usefulness in Quantum Computing
MIKHAIL LUKIN: So maybe I will start at a high level. A quantum computer is really a fundamentally new scientific and engineering tool. And if you look at the history of science and technology, whenever you come up with a new tool, the first thing you use it for is to advance the science and actually make scientific discoveries.

And in fact, quantum computers literally allow us to go into corners of the universe where we have never been. And if you go to these corners responsibly, you always find something interesting. What I mean by that is that I believe there is huge potential to use the machines which either exist already or are being developed in the near term to advance this scientific frontier and actually make new discoveries. There have already been discoveries made using quantum computers, but, to be maximally honest, there were very few. The way many of us are thinking about it is that we are now in the era of quantum discovery, where we can use these machines mostly to explore the physics of complex systems, maybe related to things like chemistry and materials science, and the field is now really ripe for pushing into these science directions and really starting to make discoveries.

Often these are things which are maybe not directly commercially relevant. For example, understanding properties of systems away from equilibrium. I mean, a lot of the world around us is not in equilibrium, and these are the kinds of things where I expect a lot of progress will be made within the next few years. And often these are the things which then translate into applications, and often they start new industries in a way which is not possible to predict. That's why I think this field is at a special point right now.
JENSEN HUANG: Yeah. And then so that’s a trapped ion perspective. What about a superconducting—It’s a neutral atom process.
RAJEEB HAZRA: Neutral atom process. So, following on what you were saying, I agreed with everything, you know, the fundamental premise that scientific discovery is going to be taken to a new frontier.
But we're seeing applications today. As I said, we focus on what the big problem is for a customer or a partner that we want to solve. We are seeing applications in the area of chemistry: how do you get to new refrigerants that have certain sustainable properties? How do you generate hydrogen from water more efficiently without needing platinum as a catalyst for the reaction? In biology, we are looking at how peptides bind, right?

So these are very specific instances, and doing that gives us a good way to understand two things: what algorithms do you need to attack it with, and then what capability do you need in the machines at a certain point in time? And you asked the question of what the performance standard is. I come from a classical background: you had performance per watt, then you had performance per watt per dollar.

We're getting to a point where, if you look at it through the lens of the big problems you want to fundamentally solve, with the figure of merit of either solving them more accurately, or solving them more accurately and with less energy and cost, then the questions become: what is your scale of computation, that's usually the number of qubits, but also what fidelity and error rates can you sustain to make those qubits useful? I'm not saying there's a perfect ratio of those things, but they generally lead us to ask how useful your powerful quantum computer is. And that can only be done through the lens of looking at big problems and asking how you solve them with the help of a quantum computer, not necessarily by replacing a classical computer with a quantum computer.
Reframing Quantum Computing
JENSEN HUANG: Yes. Rajeeb, one of the areas I do wonder about is whether quantum computing is just simply poorly positioned. Let me take a swing at it. There are so many industries which are built on fundamental sciences. And a quantum computer, in the broadest sense, can be the ultimate instrument for understanding the basic sciences that affect such an industry. However, because it was described as a quantum computer instead of a quantum instrument, people have a notion about what a computer is. You know, it should be able to run Excel super fast.

And you know that every respectable computer should be able to run Crysis, the game. And so there's a common sense about what a computer is, and that's attached to memory. It's attached to networking. It's got storage. It should be able to read and write. And there's a programming model associated with a computer. I wonder if that's just the wrong mental model. As a scientific instrument, it's extraordinary, Mikhail, as you say, and the opportunity to understand science more deeply along the way is extraordinary. But to position it as a computer per se, and to hold it to the standards of the computer we all understand, you know, I wonder if reframing it as a scientific instrument for very important industries could allow this entire industry to be much further along, frankly. Go ahead.
LOÏC HENRIET: Yeah. I totally agree with what you said. In some sense, the term quantum computer is misleading, because people expect that you can replace a classical computer with a quantum one. It's not like that. It's much more complementary.

We like to call our machines quantum processors: very specialized machines that you can use in a complex workflow alongside CPUs and GPUs, but really for specialized tasks. And once everyone agrees on that particular way of using quantum computers, quantum processors, it will be easier to work alongside classical computing, and not work toward replacing all the compute capacity that is in place right now.
ALAN BARATZ: So I'm actually struggling with the concept. I don't know how to think of a quantum computer as an instrument when it's being used for materials discovery, when it's being used for blockchain, when it's being used by NTT DOCOMO to improve cell tower resource utilization. I mean, it's true that there are many applications I would never try to run on a quantum computer. But for applications that require extensive processing power, these machines are very powerful, and I think they go well beyond just instrumentation or measurement.
JENSEN HUANG: Sorry. I’ll jump in on that. It’s okay.
ALAN BARATZ: Yeah. I’ll jump in on that.
JENSEN HUANG: I was actually just trying to help. We saw your help.
ALAN BARATZ: You know, let me tell you the most.
JENSEN HUANG: This whole session is going to be like a therapy session for me. A long time ago, somebody asked me, so what's accelerated computing good for? And I said, a long time ago, because I was wrong, that this is going to replace computers. This is going to be the way computing is done. And everything.
Everything is going to be better. And it turned out I was, number one, wrong, and unnecessarily wrong. It's better to be narrowly focused on something and be extraordinarily good at it. But the moment you cross that line and start talking about the traveling salesperson problem, it becomes unnecessary, because that problem is obviously being solved as we know it today. And Uber cars, you know, taxis are showing up.
Maybe they're three seconds later or thirty seconds later, whatever it is, but they show up. And I do wonder if we hold ourselves to a bar of solving a problem that is unnecessary for quantum computers to solve, quite frankly, to change the world. It takes focus away from something that you uniquely do, and, quite frankly, sooner rather than later. Anyway, that was just my swing at it. Go ahead, Peter.
PETER CHAPMAN: Well, here at the show, we actually have several applications which are showing that quantum is now. One of them is with Ansys, and you might know one of their products, LS-DYNA, which obviously runs with GPUs today. We announced that we've integrated our quantum computers with LS-DYNA, with a 12% increase in performance in modeling a blood pump. And so this is, I think, actually the first time quantum has run with production software. We also announced, with NVIDIA, AWS and AstraZeneca, a 20X improvement in a chemistry application.
What's amazing about it is that we did that on 36 qubits, our existing system today. By the end of the year, we will have 64 qubits. Every time you add a qubit, you double the computational power. So that's a two-to-the-28 increase in a single generation of chip, which is roughly 270 million times more powerful. So by the end of the year, one would expect things like LS-DYNA and chemistry applications to suddenly have huge performance increases.
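The scaling claim above can be checked with one line of arithmetic. This only restates the speaker's own assumption that each added qubit doubles the computational state space; it says nothing about realized application speedups:

```python
# Assumption from the talk: each added qubit doubles the state space.
# Going from a 36-qubit system to a 64-qubit system:
gain = 2 ** (64 - 36)
print(gain)  # 268435456, i.e. about 270 million, matching "two to the 28"
```

Whether that exponential growth in state space translates into a proportional application speedup depends entirely on the algorithm, which is why the hedge "one would expect" matters.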
And so we're working right now on these applications, which, to be honest, probably all of you use, to now be able to have significant impact using a quantum computer. I do think, regarding your statement about ten years: we think of ourselves as being where you were ten years ago. We hope that ten years from now, we'll obviously be up there with NVIDIA as well, in a rarefied club. But it does take a long time to go from being kind of a startup to where you are today. And it's completely fine to sit down and say, for the quantum industry, it's going to be another ten to fifteen years to get to where NVIDIA and all the other giants are.
It’s just not going to start then. It’s starting today.
JENSEN HUANG: You’re going to be much, much larger than NVIDIA. There’s no way we’re going to be a relic of the past.
The Diversity of Quantum Computing Approaches
JENSEN HUANG: So one of the things that's really interesting is that there are so many different approaches to quantum computing; it is so diverse in its approaches. Why is it that this industry doesn't quickly converge on a more promising approach as you see each other's work? Naturally, through evolution, people select the best approach, and then everybody starts to advance the whole industry in a unified way much more quickly. As I observe this industry, it's surprisingly diverse, and there are thousands of flowers blooming. When does it become a garden?
PETER CHAPMAN: I'll just say, actually, if you look across us today, there are a number of people, as you've heard, using individual atoms, lasers and all the rest. And so we actually have more commonality than most people often expect. And so I would hope that in the future there is more sharing, and maybe even the ability to work together, because the promise of quantum computing and what it can do for mankind is so significant. It's actually larger than any one of the companies sitting on stage here today. And so I think that mankind has a whole range of significant problems to be worked out, and we need quantum computing to be able to solve them.
So, you know, new ways to build qubits are obviously still being found every day. But I think that over the next couple of years, we will start to coalesce to probably two, maybe three, different approaches. And some of us will probably come together, because we do share basically the same underlying technology. So it makes sense.

JENSEN HUANG: And for many of the problems that you've described, the precise answer is not exactly known, because, as you know, fluids are quite chaotic and it's hard to know exactly what the right answer is. And in those kinds of examples, using AI for emulation can give you, you know, tens of thousands of X speedup, orders of magnitude speedup from where we are today with first-principles solvers. How do you think through that? What is the point of solving that problem when classical still has orders of magnitude of progress ahead of it in the next couple of years?
ALAN BARATZ: Well, there are orders of magnitude of progress, but at the same time, there are problems that are just impossible to solve classically today. There are problems in the area of drug discovery. There are problems in the area of global weather modeling. But even just the application that I shared, which is the basis of our paper from a week ago, computing properties of materials: we used Frontier, the massively parallel supercomputer at Oak Ridge National Lab, and it would take millions of years to perform the computation. So the point is, there are still hard computational problems that are out of the reach of classical computing, and AI isn't going to address those problems either. They're just out of the reach of classical.
JENSEN HUANG: Right, exactly. Go ahead, Rajeeb.
Quantum Computing and AI
RAJEEB HAZRA: So I'd like to add some... yeah, go ahead. It's interesting that it took us thirty minutes to get to AI. From our point of view, where we see it, quite interestingly, and this is an extension of your question on whether we should call it a computer or not, is that these quantum devices, or tools, or instruments, as you call them, are expanding our ability to access data to train these AI engines in ways that previously were not possible.
So if you’re going to solve a chemistry problem today, humankind’s maximum ability is defined by things like density functional theory or other approximations to the quantum space, the world of information that we haven’t had access to. To me, that’s like trying to train autonomous vehicles by giving them city grids 500 feet on a side and not having the detail of lanes or other things.
So the concept of a computer brings in a comparison: I have a computer A versus a computer B, so A must run the thing faster than B for it to be better, at least technically, until you tell them what the price of A is. We don’t see it that way. We see it as: what can A and B do together, if A is established, well-honed classical computing running frontier models? How are we training those models, and are we enabling those models with the data so they can now continue to be agentic, continue to reason, and continue to do things that otherwise we’d be pulled back in to do, right?
And we call that not Gen AI but GenQ AI, and that breaks the paradigm of a computer competing with another computer. It’s two computers now working together in two completely different ways, but they are input and output for each other. The output of the quantum computer is the input into these LLMs and the training methodologies, so you can have LLMs that actually understand things like ground-state energy and ground-state configurations of molecules. You can then use them to start doing perturbation theory on questions like, is this molecule going to last inside my body and deliver the drug with the right kinetics or not. So that is how we see quantum: as the addition of a tool or an instrument to what is already a developing, rapidly maturing, and improving compute paradigm.
JENSEN HUANG: In the few minutes we have here, what are the things that we could do in the world of accelerated classical to be helpful to all of you so that we could advance your work much more rapidly?
Collaboration Between Classical and Quantum Computing
THÉAU PERONNIN: Thank you. I guess it’s very important, as was said earlier, to couple the various compute modalities, classical and quantum, as well as possible. For the moment, it’s not really a matter of bandwidth, or of being able to co-locate to do things fast, because we’re not there yet. It’s not the pressing problem.
At some point it will be a problem, but not now. Now it’s really about identifying the key problems, the key domains, on which we can collaborate and leverage the best of both worlds. And I fully agree with what you said about using a quantum computer, a quantum processor, to process and create data on a problem that is itself quantum, which classical naturally struggles with, and then using that natural advantage in a larger workflow with the CPUs and GPUs, coupled well at the software level.
PETER CHAPMAN: I was just going to say, we use your GPUs to design our chips, and often to do co-simulation, to make sure that the quantum computers are working. When we look to the future of quantum computing, it’s going to be a set of classical systems sitting right next to a quantum computer, and the two of them going back and forth. So it isn’t something where one is replacing the other; they’re working together.
And if you look at what we’re doing today, we’re applying machine learning to figure out how to optimize not only the quantum computers themselves but also how they run. So it is already a synergistic relationship with classical computing. And the strange thing is, our quantum computers are almost entirely classical, right? The only quantum part happens to be a little chip and a couple of atoms at the center. The rest of it is entirely classical.
So it isn’t going to be a replacement. I wouldn’t short any NVIDIA stock at the end of this; I think you have a strong position going forward. But I would expect that in the future, it will be a QPU, a GPU, and a CPU all working together.
JENSEN HUANG: Yes. In fact, if I could just add a little something there. First of all, you probably observed that NVIDIA accelerated computing is the largest-volume parallel computer the world’s ever seen, and yet we don’t call it a parallel computer, for that very reason. A long time ago, there was an industry called parallel computing, and it positioned itself as opposed to sequential computing. The mistake of that approach, the mistake of that positioning, is that Amdahl’s Law doesn’t work that way, and there’s no reason to replace something that does an incredibly good job. You should add to it and ride the wave of the momentum that’s been created for it.
And so that’s why we decided to call it accelerated computing, but it’s still a computer. And that really revolutionized how people thought about us, how we thought about ourselves, and how we thought about our work. And I think the idea that this is a quantum computer industry is less good than the idea of a quantum processor that’s going to make every computer better. So, anyhow, go ahead.
SUBODH KULKARNI: I was going to say that some of the paradigms have to change in the way we are thinking about quantum processing or quantum computing. Some people have observed that if you think of a human brain and how it works, that’s closer to a quantum computer than our conventional thinking of HPC and how HPC should be integrating with quantum computers. We are dealing with analog inputs, we are dealing with analog outputs, we are dealing with multiple variables simultaneously, exactly the way the human brain and our neurons work.
So fundamentally, we may be limiting ourselves by thinking of quantum computing in the context of classical computing, and we may have to start thinking broader and ask what kinds of things we could envision when a quantum computer is brought together with HPC. And to your question, how can we accelerate quantum computing development with HPC? At the same time, how can quantum computing help Gen AI get to AGI? Those are the trickier things that we could use a quantum computer for.
JENSEN HUANG: Well, this is going to be the beginning of a great conversation for the industry, and it’s a great pleasure for me to host all of you. And this is just the first of many in our series, and I’m looking forward to it. Mikhail, you wanted to finish.
MIKHAIL LUKIN: Yeah, I want to, yes, yes. So I think these are all great examples. And I want to go back to the point I made before. Basically, a quantum computer, in the sense of computation, is not a hammer; it’s a scalpel. It’s a precision instrument, and here is what you basically want to do with it; this is kind of our vision.
If you really want to realize large-scale useful applications, you have to think about the entire problem as a kind of co-design: if you have a problem you want to solve, you want to solve as much of it as possible with classical computers, identify the hard quantum part, and then find an algorithm, find a good error-correcting code, find the right compiler, find the right decoder. And it all has to be optimized with the specific quantum architecture in mind. And throughout this process, what you want to do, at least at this time, is outsource as much as possible to the classical part.
LOÏC HENRIET: And this could be CPUs or GPUs, depending on what you want to do. And of course, at the end, you want to use the output of the quantum computer, the quantum data, to train your models and improve them. That’s how we see the real value of quantum computers emerging in the next couple of years. Surely that’s one very productive use case.
JENSEN HUANG: Thank you, guys. Okay, we’ll have to be very quick. Next year at this time, what are we going to be talking about? Let’s just quickly go through.
Go ahead, Alan.
ALAN BARATZ: Next year at this time, what do I hope we’re talking about? Let me say it so I remember it: how quantum is helping you do better model training and inference with lower power consumption.
JENSEN HUANG: Okay. Go ahead, Peter.
PETER CHAPMAN: The first quantum applications in production, helping customers take on real workloads. And I hope that we’ll see, along the same lines, the first prototypes of a new kind of AGI based on quantum.
LOÏC HENRIET: Talking about the learnings we’ve gotten from the usage of all the computers, all the processors, being deployed in the field today.
JENSEN HUANG: Yeah, Rajeeb.
RAJEEB HAZRA: I agree with the previous speakers’ theme: in the next year, we will see the first real, tangible use cases of an AI agent working in conjunction with a quantum computer, doing things it couldn’t otherwise have done before, or could only have done with a tremendous amount of trial and error.
JENSEN HUANG: Okay. Subodh?
SUBODH KULKARNI: I hope a year from now, we are at a point where there’s a little less skepticism about quantum computing and we start talking about how exactly will it be valuable in a data center and we can show some real life cases.
MIKHAIL LUKIN: So what I want to see is ten new scientific discoveries in physics, chemistry, and biology, and maybe other areas, delivered by quantum processors.
JENSEN HUANG: Well, guys, let’s go make it happen. All right, thank you.
Second Panel Discussion
JENSEN HUANG: Our second panel. Thank you. Thank you, guys.
Our second group, Ben Bloom from Atom Computing, Neutral Atom qubits. Go ahead, Ben, come on in. Hey, Ben, I’ll shake all your hands in a moment. Matthew Kinsella, CEO of Infleqtion, Neutral Atom qubits. Hey, Ben, thank you. Thanks for coming.
John Levy from SEEQC, superconducting qubits. Hey, man. Nice to see you. Théau Peronnin, CEO of Alice and Bob. Alice and Bob, you have the good pleasure of having to explain the nature of your company name to all of the non-quantum people for the rest of your life.
QCI’s Rob Schoelkopf, superconducting qubits. Nice to see you. And then PsiQuantum’s Pete Shadbolt, single-photon qubits. Hey, how’s it going, Pete? Yeah, sit down, sit down.
And so very quickly, how about let’s go through again. Let’s start from this side. And what is your approach and why did you choose it?
BEN BLOOM: Yeah. So my name is Ben Bloom. I’m one of the founders of Atom Computing. We build quantum computers with neutral atoms. You heard a little bit about neutral atoms earlier, so I’ll reiterate some of the good points and hide some of the bad ones.
But generally, we can make very, very large numbers of qubits. We’re one of the first companies to breach 1,000 qubits, and we can do this with very, very high fidelity. We can also do operations with these qubits that are very, very coherent. And it also allows us to have all-to-all connectivity, which enables a variety of quantum error-correcting codes and applications to be run on the system.
MATTHEW KINSELLA: First of all, thanks for having us, Jensen. Yeah, it’s great to see you guys. It’s great to be up here with you. This is going to be fun.
At Infleqtion, we are actually also building our quantum computers using neutral atoms. And I think Ben did a great job explaining that, as did folks on the last panel. So I’ll just say that neutral atoms are a highly flexible technology, because everything takes place at room temperature. And because we don’t need a freezer, you can actually shrink the system, bring the cost down, and field-deploy this technology.
And so what we do, and I actually brought a prop here, is trap our qubits, which are atoms, in these ultra-high-vacuum cells, and then we can arrange them and do interesting things with them using lasers.
And we think, as I think Misha said in the first panel, atoms are nature’s perfect qubits, but they’re also nature’s perfect clocks and nature’s perfect sensors. And so, we actually point this core atom technology at those three areas, clocks, sensors and computers, and you can think of them as sort of a continuum of complexity on what you can do with neutral atoms, with computing being the most complex and clocks being the least complex.
And we’re following a tried-and-true monetization and market development strategy: monetize those areas where we actually have true quantum advantage today, like clocks and sensors, and use those learnings (there’s a lot of leverage, since all the underlying components are the same) and those gross profit dollars to help us push the limits and ultimately get to quantum advantage in the computing world. So that’s what we do, and we are doing interesting things in the computer alongside your fantastic quantum team, Jensen, Sam, Elisa, and JinSong and others.
JENSEN HUANG: Yeah, appreciate that. Thank you very much. Thanks, Matt.
JOHN LEVY: So I’m John Levy, the CEO of SEEQC. SEEQC stands for Scalable Energy Efficient Quantum Computing.
And what we’ve heard today is that there are multiple ways of building quantum computers different kinds of qubits. But what we also know is qubits alone don’t build a full stack computer. You need to be able to do readout, control, multiplexing, reset, error correction, GPU integration, the full stack. And so at SEEQC, that’s what we do. We have actually built digitally controlled computers.
This is an example. This is SEEQC Orange, and it’s the world’s first digitally controlled and digitally multiplexed quantum computer. And we’re putting all the core functionality of a quantum computer on a chip. Now the only way that we can do that is if we’re incredibly energy efficient. And you talked in your keynote about the importance of energy efficient systems.
And so if you think about building a regular quantum computer, say with superconducting qubits, you might use two to five watts of power just to control a single qubit. We’ve gotten that down to three nanowatts. So we’re energy efficient. And the last part is that we’re all digital, which enables us to avoid one of the major sources of noise in quantum computers, crosstalk, but also enables us to connect to other digital chips like GPUs and CPUs.
So our notion is to create a platform for heterogeneous compute, where we basically take the idea you were discussing in the previous panel about computing and accelerated computing. And we think the way to accelerate computing is to seamlessly integrate, the way you’ve done it with NVLink between a GPU and a CPU, a QPU as well. And that’s the infrastructure we’re building.
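To put those power figures side by side, here is a back-of-the-envelope sketch using only the numbers quoted in the conversation; the linear scaling to a million qubits is an illustrative assumption, not a vendor roadmap.

```python
# Rough comparison of control power per qubit, using the figures cited:
# ~2-5 W per qubit for conventional control electronics versus ~3 nW per
# qubit for on-chip digital control. Linear scaling to a million qubits
# is an illustrative assumption for the sake of the comparison.

CONVENTIONAL_W_PER_QUBIT = 2.0   # low end of the 2-5 W figure
ONCHIP_W_PER_QUBIT = 3e-9        # 3 nanowatts

qubits = 1_000_000               # the fault-tolerant scale discussed later

conventional_total_w = CONVENTIONAL_W_PER_QUBIT * qubits
onchip_total_w = ONCHIP_W_PER_QUBIT * qubits

print(f"conventional control: {conventional_total_w / 1e6:.1f} MW")
print(f"on-chip digital control: {onchip_total_w * 1e3:.1f} mW")
print(f"ratio: ~{conventional_total_w / onchip_total_w:.0e}x")
```

Even at the low end of the quoted range, the gap is roughly nine orders of magnitude, which is the point being made about why energy efficiency matters for scaling.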
JENSEN HUANG: Yes. That’s terrific. Thank you.
THÉAU PERONNIN: Yes, thank you. So at Alice and Bob, we’re superconducting chip designers. We design superior superconducting qubits for error correction. And in quantum, error correction is all you need.
So our technology, the cat qubit, has a first layer of error correction built directly into the qubit. And it’s so powerful, so hardware-efficient, that it slashes the number of qubits required for impact by up to 200-fold. Think about it for a minute. That’s not only reducing the cost and complexity of the system, it’s shortening the timeline significantly. If you think of it in terms of Moore’s Law, it’s nearly a decade of head start we’re getting. And so, at Alice and Bob, this is how we’re turning decades into years.
JENSEN HUANG: Yes, terrific.
ROB SCHOELKOPF: Hi, yeah, thanks for having us, Jensen. This is really a fun event. So I’m Rob Schoelkopf. I’m one of the founders and chief scientists of Quantum Circuits which is in New Haven, Connecticut. We’re a spin out from Yale University where a lot of the superconducting folks were trained and some of the main ideas came from.
And I’m glad, Théau, you brought up the issue of error correction. I think that’s a good thing to talk about. At Quantum Circuits, we believe error correction is the key to obtaining useful quantum computations. And we actually have a bit of a different take. It’s somewhat similar to what Alice and Bob is pursuing, but our mantra is correct first then scale.
So we don’t want to make machines with millions of very noisy qubits and then try and figure out how to program those or how to build the error correction as a software layer on top. What we’re doing is we have a new paradigm within superconducting circuits. It’s a thing called the dual rail qubit and that’s got essentially error detection built in at the hardware level.
And so that’s got a couple of advantages. We get all the speed and scalability of superconducting devices, but now we’re starting to see performance that rivals the ions and the atoms and that’s trying to square the circle and have the advantages of both of these different types of technologies.
And so we think that that enhanced fidelity we can get by detecting the errors is going to get us to use cases in the near term that are interesting, especially for this kind of scientific discovery that was mentioned in the previous panel. And it’s also a way for us to scale more efficiently to fault tolerant machines.
So scaling is going to be the key, but we don’t want to do it in a profligate way; we want to scale in a way that’s really giving value and suppressing the errors dramatically. And I think the main challenge for the field, whatever the technology, is to show that error correction really works and that we can suppress errors down to levels that have never been seen before with physical qubits.
JENSEN HUANG: That’s terrific. Thanks.
PETE SHADBOLT: Yeah, thanks a lot for having us Jensen. I appreciate it. I really hope they’re paying you well to make sense of all of this complexity.
So I think it’s fair to describe PsiQuantum as sitting at the extreme end of the spectrum of quantum computing companies, in that from the very beginning, we’ve been singularly, pig-headedly interested in building the very large-scale, universal, fault-tolerant, million-qubit-scale machines that, honestly, the whole industry has always known would be required for genuinely, commercially useful applications.
And the approach we use to build that relies on single photons, particles of light. Our CEO, Jeremy, made the first demonstration of two-qubit gates using photons twenty-plus years ago in Brisbane. And now we put those photons on a chip, repurposing the silicon photonics technology that was originally developed for data center applications, and which I was really excited to hear you speaking about in the keynote.
We think that gives us profound advantages in overcoming the scaling challenges that face our field: manufacturability, cooling power, connectivity and control electronics. And that leverage has put us in this position where we’re now breaking ground in the next few months on very large scale data center like quantum computers in Australia and in Chicago.
JENSEN HUANG: Yes, that’s terrific. Thanks, Pete. One of the things that’s really quite challenging for people who are working around the industry and certainly observers of the industry is, of course, the science is very different in many of the different approaches, and there are quite a few different approaches.
If there were just two approaches, you could wrap your heads around them, but there are quite a few. The science is new. Every aspect of the engineering, the manufacturing, all of it is new. Even the programming model, how you think about programming these things, is new. Comparing them is difficult.
For example, on the one hand, the last panel was already talking about the usefulness of their computers in running industrial software. On the other hand, there’s some common sense about the number of qubits necessary to have a productive and functional system. And we’re at 36 or X number of qubits at the moment, and now you’re talking about a million qubits for a fault-tolerant and productive machine.
And so how do you bridge that gap from where we currently are? What’s the current state of the art, and where do you think we reach, not a plateau, but a phase shift? It’s likely not going to be a very specific point; these quantum computers will become more and more useful over time. But when do you see that transition happening? Where are we today, and when do we get to the point where we’d say, yeah, that’s a really good quantum computer, and we’re going to have quantum computers all over the place? When is that phase shift happening? How do you guys see it?
BEN BLOOM: Well, I think that it’s really important to just keep scaling. I think that Pete’s probably right that some of the biggest problems you have to work on that are actually going to change the world are going to require millions of qubits. And so you have to make sure that you are scaling your quantum computer really, really fast.
We don’t want Moore’s Law scaling of factors of two or root two; we want factors of ten, and we want it every few years, and that’s what we do at Atom Computing. I think that in the end there are people using quantum computers now who are making progress, who are finding useful problems. But when you talk about utility scale, these things that are going to change the world, you have to get to a million.
Quantum Computing: Scaling and Challenges
JENSEN HUANG: I think it’s important also to define terms, because there are physical qubits and then there are error-corrected logical qubits. Error-corrected logical qubits are really the key to the kingdom here. The ratio of physical qubits to error-corrected logical qubits was once thought to be 10,000 to one; it’s probably closer to 100 to one or so. So you’re going to need multiples of the number of physical qubits, and then run error-correction software on them, to get the logical qubits you need.
But I think to answer your question, the consensus is around a hundred logical qubits. You can start to do interesting things with quantum computers that classical computers can’t yet do.
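A quick sanity check of the arithmetic above; the 100-logical-qubit threshold and the overhead ratios are the figures cited on stage, not hard constants.

```python
# Back-of-the-envelope: physical qubits needed for a target number of
# error-corrected logical qubits, at various physical-to-logical ratios.
# The ratios and the 100-logical-qubit target are the panel's figures.

def physical_qubits_needed(logical_qubits: int, ratio: int) -> int:
    """Physical qubits required at a given physical:logical overhead ratio."""
    return logical_qubits * ratio

target_logical = 100  # the "interesting things" threshold cited on the panel

for ratio in (10_000, 1_000, 100):
    total = physical_qubits_needed(target_logical, ratio)
    print(f"{ratio:>6}:1 overhead -> {total:>9,} physical qubits")
```

At the historical 10,000:1 estimate, 100 logical qubits would mean a million physical qubits; at the 100:1 figure mentioned here, it is ten thousand, which is why the overhead ratio matters so much for timelines.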
SPEAKER: You know, it’s interesting to think about how to scale these quantum computers because, for example, there was a really wonderful paper that Google did in late November around error correction. If you looked at the Willow chip and the setup, it’s really a great demonstration of doing error correction. It also required five separate cables for each qubit, if I’m not mistaken. And if you think about trying to scale a system with a million qubits, you’re going to really have 5 million cables.
I’m using this as an example because there are so many things like that in quantum computing that we need to solve. We’re solving that by doing multiplexing and by doing chip-to-chip integration so that we can solve that problem.
But it’s one of a thousand engineering problems. And so it’s really a pick-and-shovel approach: taking each one of those and trying to solve it. And we can’t just solve one of those problems; we have to take a comprehensive view and solve them all. I think there’s probably broad agreement that unless we can figure out how to build quantum computers on a chip, we’re never going to get there. So the major goal is to scale on a chip.
THÉAU PERONNIN: I think the academic community has figured this out, and it’s pretty straightforward: it’s 100 logical qubits, as you mentioned, with error rates remaining at one per million at most. Because logical qubits are not completely error-free; even your classical transistors still happen to make an error from time to time.
Now the thing is not every physical qubit is born equal. And so the size, the complexity of your system to get to 100 logical qubits might vary a lot from one platform to the other, from hundreds of thousands of physical qubits on some modalities to just a couple of thousand on others.
To answer your timeline question, when does this shift happen? I think it’s by the end of this decade for sure, so 2029 to 2030. And that’s where you see the inflection of the exponential. Because with an exponential curve, when you zoom out, it’s dead flat and then it’s a hard wall. So when this inflection begins, by 2029 or 2030, you’ll feel that wall arrive.
The Evolution of Quantum Computing
SPEAKER: It’s an interesting question. I think it’s a bit of a fallacy that quantum computing is going to be like in development and then there will be a flip of a switch and it’s everywhere and solving all the world’s problems. It’s really going to be more like we’re going to be turning up the volume steadily and we can start to hear the music now and eventually everyone will be able to hear the music.
I also liked what you said in the earlier panel about how these are different from regular computers. It’s a completely different paradigm. So a thing we’re doing now is really learning how to program these machines, and we also have to learn how to deal with the errors that are always going to be present in quantum computers. We’re never going to have perfectly fault-tolerant machines that work as reliably as a GPU or a CPU does, because they’re going to be used for special-purpose things: you get the answer right one time, and then you know the answer to a question you could never answer before.
I think the right analogy is the early days of electronic computing with vacuum tubes. It was imagined that we’d only use them for cryptography or maybe modeling bombs. And it also wasn’t understood how to organize them; von Neumann had to come up with the von Neumann architecture, right? And the idea of a compiler was new.
So I think we’re in the era now where the machines are powerful enough that we can do that kind of discovery of what it is to program things. And we’re going to see applications of these things that are not what we anticipate today is my guess.
JENSEN HUANG: Well, it’s really great that you said that. I think one of the things that we hope for and we’d like to be able to contribute is to help discover those programming models and help invent that programming paradigm.
I think there is an unnecessary expectation, and it actually sets the industry back, frankly: the expectation that somehow the quantum computer is going to be better at spreadsheets. That’s an unfortunate and unnecessary expectation.
The reason why we’re involved in all this is that we have such incredibly great, grand hopes for you: that we’re going to discover new ways to solve very challenging problems. But not so that we can solve food delivery, okay? That’s really fine as it is. Even if my burger would show up three seconds earlier, I could live without it. But there are some things that simply won’t get solved without quantum computing. And I do think that our collective framing of quantum computing is going to be really helpful for the industry.
SPEAKER: I just want to follow up on Rob’s comment. Go back, just as a thought experiment, to 1946: somebody drags you to the basement of the University of Pennsylvania and you want to see the ENIAC. Do you think anybody at that point said, “Oh, I’m going to use this to remotely call a car that’s going to optimize its route, let me pay for it remotely, and let me communicate that to my friends in more or less real time”? That wasn’t what people were thinking about. They were tracking missile trajectories on application-specific devices.
And I think the idea of creating an open space to explore it and discover it is exactly where we need to be. And that’s what we need everybody to be working on.
JENSEN HUANG: Yes, that’s really terrific.
SPEAKER: I do want to say actually, I think one of the big changes between a classical computer and a quantum computer, at least how we understand it now, really is this idea that a quantum computer is a big compute resource and a lot of the classical computers and a lot of the ways we use classical computers are kind of big data resources.
And I think it’s going to take the combination of classical computers and GPUs and everything to actually understand how do you even use a big compute resource. Like if we succeed, we’re going to build a bunch of supercomputers that are really, really good at understanding the physical world. And we have to figure out how to use those efficiently and actually how to bring them into normal everyday processes.
Scaling Beyond Moore’s Law
JENSEN HUANG: When you spoke earlier about scaling, I’m excited by the fact that you’re not limited by Moore’s Law. Moore’s Law, of course, as you know, is not based on a fundamental law; there were some principles involved in it.
But one of the things that is really great to see from the industry here is the rate of the scaling is not a factor of two every couple of years because at that rate, it will take thirty years. And we do know we need to scale. And if you look at the past ten years of scaling, it’s not an indicator of how fast you guys are actually scaling now. Because of the new science and the new methods that you guys are using for quantum computing and these quantum processors, you’re scaling a lot faster, frankly.
So can you guys talk about scaling? And where do you guys see what’s enabled you to scale faster? Of course, Neutral Atoms is an architecture that allows you to do that better. But what’s the technology that allows you to scale better today? And where do you see scaling in the next five, ten years?
SPEAKER: I mean, I think the answer is that we can use classical computers. We’re learning how to build control devices. We’re learning how to use light efficiently, and we can just trap more atoms, control more atoms. We start off with clouds of millions of cold atoms; we have the ability to load 10^7 to 10^8 atoms per second that are just ready to be qubits, and it’s up to us to figure out how to build the control infrastructure around that.
And that’s classical computing, it’s photonics, it’s RF SoCs, it’s all these pieces of equipment that are now just able to be bought by us. And is the interface between your processor and our processor sufficiently well designed at this point? No.
I mean, I think that every step of the way we’re trying to make our systems faster and GPUs and CPUs will have to get closer and closer to those actuators because at the end of the day, everything is just governed by the speed of light and you want your computation to go really, really fast. So any physical distance between GPUs or CPUs that are understanding the errors that are occurring in the system is just going to slow down the quantum computation.
SPEAKER: And just rounding out the neutral atoms, and then we’ll let the other guys talk about scaling their modalities. But like Ben said, the number of physical qubits for neutral atoms isn’t really the bottleneck, because we can put millions of qubits into this little device here. It’s really just our ability to control those qubits precisely with lasers and then error-correct them.
And so, it’s interesting in that when you think of scaling, it’s not putting more physical hardware in there. It’s really just more precisely controlling these God-given nature’s qubits with lasers.
JENSEN HUANG: And what’s the latency that we have to achieve? It’s probably tied to the coherence time of these qubits and the time it takes for us to do error correction, or whatever the control algorithm is, and send a signal back to you. How much time do we have in that round-trip loop?
SPEAKER: Well, the good news is that what really matters is the ratio of how much you can get done to the coherence time of the underlying modality. Neutral atoms have quite a long coherence time relative to other modalities, but we’re talking microseconds.
JOHN LEVY: Our technology is applicable, in whole or in part, across all quantum computing modalities, but when we focus on superconducting, we think you need less than one microsecond of latency in order to do error correction before the next error can show up.
And so our goal, in our ability to connect to GPUs—to your GPUs—is to get to the sort of 500 to 800 nanosecond range so that we can actually take advantage of using a GPU for global error correction. So we think we can partition it: do some decoding on chip, some with a predecoder operating at maybe 100 millikelvin or one kelvin, and then do the rest with a GPU doing global error correction.
But again, that latency question is really, really critical to being able to do that on a timely basis before the next error shows up. Milliseconds is easy. Microseconds is challenging.
JENSEN HUANG: Microseconds is hard.
JOHN LEVY: So we’re already in the hard space.
JENSEN HUANG: That’s okay. We like hard.
JOHN LEVY: So again, five hundred to eight hundred nanoseconds is really our goal.
PETE SHADBOLT: It’s not just about latency by the way, it’s also about throughput because if you have a large error-corrected computer, you have a pretty big stream of results coming back telling you where the qubits went wrong, how you have to adapt your algorithm to steer it back to the correct answer and that’s a multi-scale problem.
So you probably have to do some amount of classical compute with very tight latency, and then there’ll be more complicated computations that you have a little bit more time for, because you’re doing multiple rounds of error correction. So it’s a very interesting space to explore, and error correction at scale is a very hard problem.
JENSEN HUANG: Throughput doesn’t scare me. Latency and throughput requirements together scare me.
SPEAKER: An essential thing for any of these modalities to really work and to scale is going to be to build the special purpose classical computer that is the control system and drives it and does all the magic of error correction.
JENSEN HUANG: This right here, this error-correction control loop, is a really exciting computer science problem. And this is an area where I hope all of us together can make some real breakthroughs in the next several years.
As everybody knows, throughput independent of latency is not an extraordinarily hard problem. Latency independent of throughput is not an extraordinarily hard problem. The two of them combined is an extremely hard problem. In fact, I was talking about this in my keynote: large-scale AI inference is a problem kind of like that, especially when you want to interact with the AI.
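The latency budget the panel is describing can be made concrete with a toy calculation. This is a hedged sketch: the stage names and nanosecond values below are illustrative assumptions mirroring the conversation (a sub-microsecond error budget, an on-chip predecoder at cryogenic temperatures, then global decoding on a GPU), not any vendor’s real figures.

```python
# Toy latency budget for the QEC control loop discussed above.
# All stage timings are hypothetical, chosen only to illustrate how
# the "less than one microsecond" budget gets divided up.

ERROR_BUDGET_NS = 1_000  # "less than one microsecond" before the next error

def round_trip_ns(stages: dict[str, float]) -> float:
    """Total control-loop latency: the sum of every stage in the loop."""
    return sum(stages.values())

stages = {
    "readout": 100.0,            # measure the syndrome qubits
    "on_chip_predecode": 150.0,  # predecoder at ~100 mK to 1 K
    "link_to_gpu": 100.0,        # classical link out of the cryostat
    "gpu_global_decode": 400.0,  # global error correction on the GPU
    "link_back": 100.0,          # corrections back to the control system
}

total = round_trip_ns(stages)
print(f"round trip: {total:.0f} ns "
      f"({'within' if total < ERROR_BUDGET_NS else 'over'} the 1 µs budget)")
```

The point of the exercise is the one the speakers make: every stage, including the physical link to the GPU, eats into a fixed budget, which is why the 500 to 800 nanosecond target for the GPU leg matters.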
PETE SHADBOLT: Jensen, I think more than one of us on this stage has gotten into trouble by making comments on timelines, of course, and found ourselves in difficult positions and so on.
Scaling Quantum Computing: Challenges and Opportunities
PETE SHADBOLT: But like I really appreciated the other piece of your quote, which went missing, which was that we got to scale up by 100,000x, by five orders of magnitude. And I think in our field, of course, that is really hard. And in our field, unfortunately, I think there is a lot of wishful thinking from people desperately—and I understand, I’m sympathetic—desperately hoping that we can make the machine useful before it’s useful. But you have to scale up by about 100,000x. And it’s natural to say that that’s a multi-decade exercise for human beings.
It feels that way in the dead of night. But you, of course, have extraordinary demonstrations that it doesn’t have to be a multi-decade exercise. xAI’s Colossus: they built a 100,000-GPU cluster, famously, in one hundred and twelve days. How did they do that? What miracle did they deploy to get that done?
The answer is the trillion dollars over fifty years that has gone into the semiconductor industry. And I came back a couple of days ago from GlobalFoundries. I’ve been raving about the insane leverage that we can access there for eight years without ever having actually stepped inside the fab. I was extremely lucky to get in there a couple of days ago, and it’s a religious experience, because you see just how insane the capability of the fabs, the contract manufacturers, the OSATs and so on is.
And that has always been our thesis. And to your question on when, like there are basic questions that you should ask. And this is a very complex field. Everyone is arguing for their particular technology. I’m no different.
But there’s a whole separate set of questions in addition to fidelity and so on, which is: Can you tell me who your fab is? Can you tell me who your OSAT is? Can you tell me who’s doing contract manufacturing? And where is the site where you’re building the machine? These are necessary but not sufficient conditions to actually be ready to build a genuinely useful machine.
And it’s very hard, but that is what we’ve really sort of spent our resources towards. And I think it’s very exciting that you now see other players in the field taking steps into that same kind of regime.
JENSEN HUANG: Yes. In fact, Pete, as you were talking, all of the things you’re mentioning resonate with us, because recently, as you know, we introduced this idea called silicon photonics co-packaged optics. The semiconductor physics had to be invented, the packaging technology, how you stack it, how you manufacture it, the entire supply chain.
I introduced a whole bunch of new supply chain manufacturers. We were able to, of course, leverage many existing industries. If not for that—if we had had to invent everything from the ground up—it would have been impossible for us to do. And so in a lot of ways, you guys are doing what we did with silicon photonics CPO, at a multi-scale challenge. In your case, the science is still being invented.
And so I think this is really a hard problem. However, exactly as you guys are saying, we have to find a way to carve a road for you to be successful as soon as possible, almost every day, as you scale up to this future of quantum computing. And if you look at the reason why we’re here: we found a way to, one, explain our story in such a way that we didn’t set a bar so high that we couldn’t reach it, and two, select the problems that we could solve in quite a unique way early on.
And in a lot of ways, NVIDIA democratized parallel computing, if you will, but I did that on the backs of computer games. It was a great decision to find a problem that had a very low bar, if you will. The fact that the 3D graphics we rendered in the beginning were not exactly right—there were a few missing pixels and some gaps in the Z-buffer—people kind of accepted, because it was a game, for crying out loud. And so it gave us the opportunity to scale our economics, scale our technology, scale our footprint. And then we selected the right science, the right industries, field after field after field.
And I’m hoping that for all of you, that we don’t have—we shouldn’t be held to a standard of computing, which, as you know, is quite high. The robustness of computing, the repeatability of it, the whole industry, everything is very, very high bar. We had to go find an industry where the bar is quite low. And I don’t mean it’s easy. It’s extremely hard.
But because there’s no alternative, the bar is very low: a classical computer simply can’t solve it. For example, you couldn’t reasonably buy a personal computer in 1995 and play Quake without NVIDIA in it. And so that bar was in fact incredibly low, if you will.
Now, of course, you guys are doing something much harder than that today, but I do hope that together we find a path for you guys to be successful sooner than later.
Finding Commercial Applications
MATTHEW KINSELLA: You were saying earlier about your approach, and I love it—we honestly took a lot of inspiration from you. Find the markets where you can provide true commercial advantage and address those, get feedback from the market, learn how to fulfill customer expectations, and then find the next one. And with neutral atoms, we’re very lucky in that they are flexible, and we do have three orders of magnitude of improvement over current standards in timekeeping and sensing, which are very large markets in and of themselves.
But using that—if you say, you know, timekeeping was our gaming market, maybe—all of that lets us get real feedback from the market and ultimately develop the commercial muscle that we need as an industry to actually sell these things for real use cases, where people are going to earn a return on their investment.
JENSEN HUANG: And I also heard something really, really clever. And of course, I knew about this in the work that you guys are doing, that you would stand on the shoulders of classical computers, extend it, and make it do something extraordinary. You know, we didn’t replace the computer. We added to it.
And the thing that I would explain to people in the beginning is: nothing was ever worse. You can just turn me off, but I don’t make things worse. Parallel computing, as you know, ran up against Amdahl’s law and in many applications made things worse. But accelerated computing did no harm. And one of the things that I really like about adding these two: GPUs were added to CPUs, and QPUs could be added to CPUs and GPUs.
You keep adding something and you made that computer more and more special, more and more capable.
RAJEEB HAZRA: Yeah, I mean, it’s interesting that there are people who are now talking about, and even holding whole conferences around, quantum AI. And the idea of tightly integrating a QPU with a GPU and a CPU creates that opportunity. So let’s explore the space. Right?
Integration Challenges
PETE SHADBOLT: Yeah. So I very much agree with that sentiment. There is a caveat that I hope you won’t mind me adding, which is: we love to talk about this idea that conventional computers are integrated with quantum computers.
And we absolutely believe in that. There will need to be a large conventional supercomputer, GPU cluster, whatever, preparing Hamiltonians, preparing input to the machine. But when we talk about this, there is a very seductive idea, which is that the whole will be greater than the sum of the parts. And that idea leads to an interpretation that we don’t need a good quantum computer—that we can take a not-very-good quantum computer, plug it into a big, as you say, insanely high-performance conventional computer, and that the whole will somehow be greater than the sum of the parts. And I think you have to be quite careful about that.
I think what I would ask for is a rationale that the whole will be greater than the sum of the parts. And in some cases, I can see a rationale for that. But in some cases, there is no reason to believe that taking a small, low-performance quantum computer and plugging it into an incredibly high-performance conventional computer is going to make things any better.
JENSEN HUANG: Yes, that’s exactly right. And that was one of the challenges, frankly, of accelerated computing, because we sat next to a processor that was getting better by a factor of two every single year. And the R&D budget of the CPU industry at the time completely dwarfed the GPU’s—a million times larger R&D budget per year.
And so what are the odds that adding a GPU that’s built out of such low R&D budget could add value to a system that sustains such enormous R&D budget? And so the answer to that, Pete, for me anyhow, was I kept narrowing and narrowing and narrowing the application space.
PETE SHADBOLT: Makes sense.
JENSEN HUANG: And I kept lowering the bar, if you will, for myself, so that the problems we solved were so specific, quite frankly. Now the challenge, of course—and this is going to be a challenge for your industry as well.
The challenge is, if you end up finding an application space and it’s very specific, then the size of the market is not so large that it could sustain your growth. And you ultimately have to find that flywheel. In anything in computing, which is what we do for a living—anything that has a very, very high computation requirement—you need a flywheel to get you there. And that flywheel starts with solving a problem better than anybody, eventually getting to high volume, which generates more R&D budget, which allows you to build something better, which allows you to get to even higher volume. And that flywheel is insanely hard to go get. You guys understand.
But I have every confidence in this way of solving problems that you guys are trying to solve—there are problems that are simply impossible for classical computing to scale to—to the extent that we can find a way to lower people’s expectations of us and narrow our own aperture of problems we want to ambitiously solve in the beginning, so that we can catch that flywheel ourselves. I am absolutely certain this is going to happen. And I have every hope and expectation that this team is going to do it. And I really love hanging out with you guys.
Looking Ahead to Next Year
JENSEN HUANG: If we very quickly, what do you think we’ll end up talking about next year? Because I want everybody to come back. This is such a great show. And remember, this is our first time. So we’re a little clumsy, okay? Lower your expectations. But next year, it’s going to be incredible. Yes, let’s go around quickly. What do you guys want to talk about next year?
KRYSTA SVORE: I think there will be some amazing demonstrations with Quantum Error Correction in the next year. And I think that we’re just seeing the industry expand so quickly that we’ll be trying to tamp down expectations like you said. I think there’s going to be just such amazing velocity occurring with quantum error correction.
MATTHEW KINSELLA: I think we’ll hear a lot of great progress on continued increases in logical qubits. And then from our perspective, with our commercialization strategy, we did about $30 million in revenue last year. So we really hope to be telling you about a heck of a lot more, selling some of these early use cases.
JENSEN HUANG: Yeah, man, that’s real money.
SUBODH KULKARNI: Yeah, thank you. First off, I like the premise of your question, that there is going to be a quantum day next year.
JENSEN HUANG: Yeah, absolutely.
SUBODH KULKARNI: So like, yeah. And I would say from our perspective, it’s the integration of all core functionality of a quantum computer on a chip, full stack.
THÉAU PERONNIN: Yes—progress on error correction, better architecture, but most importantly for the audience, novel algorithms. There is so much room for innovation in quantum algorithms. I mean, we’re seeing completely new subroutines emerging every couple of years. Think about it as someone inventing the fast Fourier transform all over again; it’s paradigm-shifting for whole industries each time. And so when we’re looking for those niches you’ve been mentioning, those core algorithms, those kinds of routines can completely change the reach and the impact of quantum computers and get this flywheel going.
JENSEN HUANG: That’s really terrific. In fact, there’s a misunderstanding that quantum computers are going to take classical algorithms and just make them go faster. The whole point is to invent new algorithms that are ideal for this new form of computing.
ROB SCHOELKOPF: I’m going to talk about error correction as well, because again, that’s the key thing. We’ve really entered an exciting phase where you can build machines on which you can run error correction routines. For a couple of decades—Shor discovered error correction essentially the same year he discovered his factoring algorithm—we’ve known that it’s possible in principle; we know what the math is. But now this is becoming a practical discipline, right? So we can build and test things, and now you can say, oh wait, this is the flaw my hardware has, so here’s a code that’s much more optimal. Or, if that’s a thing we can do and it makes for efficiencies in scaling, we can try to adapt our hardware to go in that direction. I think it’s a super exciting area.
JENSEN HUANG: Yeah. I’m super excited about the whole area. I can just tell from the downloads—the accelerating downloads of cuQuantum—that the number of people discovering new quantum circuits to simulate, and discovering new algorithms, is growing. And so I’m super excited about that. And so, Pete, what are you going to talk about next year?
PETE SHADBOLT: Yes. So I mean, now we’re making thousands of wafers of quantum chips at GlobalFoundries—a pretty high level of maturity. We’re building large cryostats with no chandelier.
PETE SHADBOLT: We’re stringing together heaps of optical fiber. And I really hope that you’ll have us back next year. Next year, I think I’ll have to wipe the mud off of my boots before I come up on stage here because, as I said, we’re breaking ground on these two very large sites, so like 5 million square foot kind of sites in Australia and Chicago.
JENSEN HUANG: Pete, you look like a builder from Australia.
PETE SHADBOLT: Well, I’m trying my best. But yes, thanks very much for having us.
JENSEN HUANG: All right, guys. Thank you. Thank you. Next, I’ll introduce the next crew. Thank you, guys. Really enjoyed it. Thank you very much. Really appreciate it, guys. And so this is our last panel.
So it sounds like next year, we’re going to have demos. What do you guys think? Next year, we’re going to do demos. Okay.
Cloud Providers and Quantum Computing
JENSEN HUANG: So the next panel is going to be Simone Severini from AWS and Krysta Svore from Microsoft. Okay. All right. Hey, nice to see you. Nice to see you. Hi, Krysta. Nice to see you. Welcome.
And so we had all these scientists here. The thing about quantum computing is, most of the CEOs I meet and talk to, I can understand. And the reason for that is because we’re in the computer industry, and of course there’s always computer science, but not basic science. And basic science is hard, as you know, and quantum basic science is quite hard.
And so most of the time, you’re talking to CEOs who are maybe refactoring the way that a computer is going to be architected or designed, but the basic technology is understandable. And maybe it’s applied in a different way.
But in quantum computing, in this area, the science is new. The science and engineering is new. The manufacturing is new. The software programming model is new. The way you think about algorithms is new.
And of course—you might have noticed—I want this industry to succeed, and so I can’t help but try to advise it. Not that I’m giving good advice necessarily, but to narrow its focus on applications so that it’s not held accountable to the expectations of other forms of computing.
And now here, the three of us, we work at large companies, and yet you have quantum computing initiatives in your companies. How do you think about quantum computing in the context of your overall computing, your industrial computing business? What do you hope to achieve? What are some of the challenges that you see in ultimately making quantum computing successful?
KRYSTA SVORE: Yeah. So I’ll start. Thank you, Jensen. Great to see you. It’s wonderful to be here and to be with all of you as well.
When we think about quantum computing at Microsoft, obviously we have a large cloud, right? We have many customers and it’s really about that, right? We want to ensure that we are empowering our customers, whether they’re enterprises or scientists, right, practitioners with the most powerful quantum computing at every moment in time. And so that means today and tomorrow, right, we are a platform company looking to bring forward a quantum computing platform to enable new scenarios, new applications, right? Emerging capabilities and disruptive—
JENSEN HUANG: What technology did you choose, did Microsoft choose and why?
KRYSTA SVORE: Yeah. So we have a couple of approaches, right? We partner with other quantum processing unit providers—quantum hardware providers. And we also have a long-term investment in an approach called topological qubits, topological quantum computing, where we just had a breakthrough announcement in the last month, and then also this week at the APS March Meeting, where we shared more data on our approach to the topological qubit.
And the idea here is that you encode the information non-locally, meaning our qubit isn’t just a single point. The information of the qubit is spread across the device design, which promises to protect the qubit more, but also gives a very nice control profile. So we can use digital control instead of analog control, and it can simplify the control requirements in the quantum computer itself. This is the Majorana 1 chip that we shared last month.
JENSEN HUANG: Okay. We’re going to come back to you in just a second about applications and how you think about computing platforms and that kind of stuff. So, Simone, if you could help us understand: what approach did AWS select? Why did you choose it? And how do you see quantum computing in the overall context of your computing strategy?
SIMONE SEVERINI: Sure. Thanks for having me. It’s a great opportunity, and this is my first GTC. It has been a wonderful week.
So I’m going to tell you something very boring; you’ve heard it already a few times today. We build quantum computers based on superconducting technology, and we give strong emphasis to error correction, because we believe error correction is really going to be important for quantum computers to deliver their long-term promise.
So I’m not saying that quantum computers built so far are useless. Actually, they’re extremely useful for learning how to build the quantum computers of the future. We recently announced a superconducting chip called Ocelot. An ocelot is a kind of wildcat; the name is a blend of Schrödinger’s cat and “oscillator”—the scientists came up with it.
JENSEN HUANG: That’s clever. That’s almost as clever as NVIDIA.
SIMONE SEVERINI: So Ocelot demonstrates error correction in a scalable architecture. And you have heard this term—scalable, scalability—it’s an important term.
So now, why superconducting devices? My mental model hinges on three terms: knowledge, speed, experience. Knowledge: there are a lot of experiments out there done with superconducting devices, in industry and in academia—proofs of concept, including proofs of concept for error correction. So good basic knowledge, good background.
Secondly, speed. They’re fast. They use microwaves, so it’s easier to implement error correction.
And finally, experience. So at AWS, we have a good amount of experience with custom silicon, with semiconductors. Some of this experience can be translated to superconducting technology, at least from the operational point of view. Of course, we are open minded about all other ways of building quantum computers and we’re very excited about progress that is happening across the board.
Applications and Future Directions
JENSEN HUANG: Yeah. And Krysta, back to you. The technology is still developing; it’s making great strides. Frankly, if you just time the industry milestone to milestone, the rate of milestone achievements is accelerating, and that’s really terrific to see. But you still would like to find early applications, or ways to think about the quantum computer as you’re developing it, to make it useful—to find use for it as you go. How are you doing that?
KRYSTA SVORE: Yeah. So I mentioned our work on Majorana, our topological qubits, where we’re really focused on utility scale, right? It has the right size, speed, reliability, and controllability to reach millions of qubits in the palm of your hand—that’s what we’re focused on with that technology.
We’re also working with types of qubits today where we’re aiming to have 50 logical qubits this calendar year. For example, with Atom Computing—Ben Bloom was on the stage earlier in this session—we are working together to co-design the architecture so that we can enable the most, and best, logical qubits.
And so when you look at upwards of 1,000 physical qubits, 1,000 neutral atoms in this platform, we can work on how we arrange them, how we move them to best enable, say, 50 logical qubits that have better performance characteristics than the underlying physical qubits themselves.
Now, with those 50 logical qubits, we can look toward showing the early applications, right? Around 50 logical qubits is where you can start to outperform classical computing. And then at around 100 logical qubits, you can start to outperform on applications in the space of science, right, where you might be looking at different materials models, quantum magnets.
And so really, our intention over the next few years is to work with 50 and then 100 logical qubits, then build several hundred, where we’re really pushing the limits on the applications.
JENSEN HUANG: At the 50-logical-qubit era, what application space are you mostly focused on? How are you selecting your applications?
KRYSTA SVORE: Yes. So definitely, we believe the most promising set of applications is in chemistry and materials science—biochemistry as well. And in that space, as we look at the rise of AI, the AI capabilities that have emerged are just immense and tremendous.
We don’t want to replace AI with a quantum computer, right? And I really view a quantum computer as an accelerator. It is something to accelerate the other compute we already have. We need to integrate it with AI and high performance compute in the cloud, not replace it. And so it’s really about bringing those together.
In the next few years, I think it’s all about using the quantum computer to produce highly accurate data. I used to think of a quantum computer as a standalone solution provider, right? I’m going to run a problem instance. I get a solution out. But that’s not the right way of thinking about it, right?
The other way to think about it is: I’m getting classical bits out. What are classical bits? Data. What do I do with classical bits that are highly accurate for what they represent? I use them as training data.
And we know that we can use lots of training data, right? We can use lots of data to train an AI model, and we can augment that model with small amounts of high quality data, and we’ve seen this in different spaces, machine translation, right, other tasks, where small amounts of high quality data can make a huge difference in the task you want to use that model for.
And so in the space of chemistry and material science, I think this is an incredibly promising direction for quantum computers with 50, 100, 150 logical qubits, where those again, those logical qubits have to be better than the physical qubits they’re built from.
JENSEN HUANG: Yes. Krysta, I’m sure you all got it, but one of the areas that’s super exciting is materials and biology, where we would like to train a model—train a model to learn a representation of biology. But where does that training data come from? It’s not like we have sensors and instruments collecting data about biology and cells and proteins.
Now we could simulate that using a quantum computer and use that as ground-truth data to then go train an AI model. And once we have an AI model, it’s a lot more malleable, a lot easier to apply, and we could use it to do all kinds of experiments.
KRYSTA SVORE: Yeah, that’s exactly right, right? The idea is to get a faster, more predictive, more accurate AI model. This is a classical AI model. It deploys in your current infrastructure, right? And it’s fast. And so that’s really the promise, right? You’re using the quantum computer to produce data that you cannot otherwise efficiently get on this planet. That’s powerful.
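The workflow being described—using a small amount of high-accuracy, quantum-computed data to improve a cheap classical model—can be illustrated with a toy sketch. Everything here is an assumption for illustration: the "energies" are synthetic numbers, the least-squares fit stands in for training an AI model, and the exact points stand in for quantum-simulated ground truth.

```python
# Hedged sketch of the quantum-data-as-training-data idea: a large,
# cheap, noisy "classical" dataset is augmented with a few exact
# "quantum ground truth" points, and a simple surrogate model is fit
# to the combined data. All data below is synthetic.

import random

random.seed(0)  # deterministic noise for reproducibility

def fit_line(points):
    """Ordinary least-squares fit y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def true_energy(x):
    """The unknown physical law the model is trying to learn."""
    return 2.0 * x + 1.0

# Many cheap, noisy classical estimates; a few exact "quantum" results.
cheap = [(x, true_energy(x) + random.gauss(0, 0.5)) for x in range(20)]
accurate = [(x, true_energy(x)) for x in (2, 9, 17)]

# Crude augmentation: weight the accurate points by repeating them.
a, b = fit_line(cheap + accurate * 10)
print(f"fitted slope {a:.2f}, intercept {b:.2f} (true: 2.00, 1.00)")
```

The design choice mirrors Krysta’s point: the quantum computer is not the deliverable; the fast, deployable classical model trained on its output is.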
SIMONE SEVERINI: So I actually like this perspective. I attended your keynote. You showed us a slide with four phases: perception, generation, agency, and physical AI. Of course, physical AI is about robotics, about autonomous vehicles, but it’s really about our physical world, right? It’s just a larger container for all of that.
And as Krysta said, what is the role of quantum computers in that phase, physical AI? Quantum computers are the only instrument we have today that we know so far for accessing that layer of physical reality, which is quantum physics. That layer of physical reality that is governed by certain laws that do not apply to the physics around us that we experience with our senses.
So quantum computers are going to be catalysts for scientific innovation that will allow us to discover certain things that are very hard to predict, of course. And in a way, we must build quantum computers because otherwise that layer of reality will not be accessible to us. And quantum computers will work, of course, in partnership with machine learning and AI.
In my opinion, in the fullness of time, we will do science together with machines. We will ask questions of machines; machines will ask questions of us. There will be formal verification, formal reasoning, different types of compute, and quantum computers fit very well into this picture.
KRYSTA SVORE: Yeah. I think, you know, we have not been able to compute like nature computes in many cases, right? Nature is incredibly efficient. And I think of quantum computers as enabling us to take a step closer to computing like nature does—being able to see and understand electrons in a new way, right? We can’t do that efficiently in all cases today. So it really takes us a step closer. And combining that with AI, I think, will be revolutionary.
JENSEN HUANG: Yeah. And this paradigm, if you will, of using the quantum computer to get the ground truth to train a classical AI model—which is much easier to use than the entire software stack and all of the applications, and quite frankly very cost-effective. The training data that’s going to come from that quantum computer will not be easy to get; obviously, it already required the endeavor of humanity to get there. But now that you have that simulator, within 50 logical qubits you could solve what is otherwise Schrödinger’s equation. Now you can train a model that we can easily apply. We’ve taken this incredibly valuable asset, extracted it, and made it simple to use for all of the computing world.
SIMONE SEVERINI: Indeed. You told us about AI factories. At some point, maybe we’ll have quantum AI factories.
JENSEN HUANG: That’s right. Yeah, that’s really exciting. And so we’re currently at how many logical qubits?
KRYSTA SVORE: Yeah. So interestingly, had I been on this stage a year ago, I would have said we had zero logical qubits that were better than the physical qubits they were built from.
Quantum Computing Progress and Scaling
KRYSTA SVORE: And within the last year: in April, we showed four logical qubits with Quantinuum. Five months later, we tripled that number to 12 logical qubits. And two months after that, we more than doubled it again and showed 28 logical qubits with Atom Computing. So that shows the progression. It goes back to the acceleration you mentioned, Jensen, that we’re seeing in the field.
And now, this calendar year, we’re working on the 50 logical qubits with Atom Computing. And the next generation of that system will be a hundred logical qubits on an around-10,000-physical-qubit machine.
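The numbers just quoted imply both a growth curve and a physical-to-logical overhead, which a quick back-of-the-envelope makes explicit. The milestone figures come straight from the conversation; the growth-factor arithmetic is illustrative.

```python
# Back-of-the-envelope on the logical-qubit milestones quoted above.

milestones = [4, 12, 28, 50, 100]  # logical qubits, as stated in the talk

# Implied physical-per-logical overhead for the next-generation system
# mentioned: ~100 logical qubits on an ~10,000 physical-qubit machine.
physical, logical = 10_000, 100
overhead = physical // logical
print(f"~{overhead} physical qubits per logical qubit")

# Growth factor between consecutive announced milestones.
for prev, nxt in zip(milestones, milestones[1:]):
    print(f"{prev} -> {nxt}: x{nxt / prev:.1f}")
```

The roughly 100:1 overhead is the error-correction cost the earlier panel was describing; the milestone-to-milestone growth factors are what Jensen calls the accelerating rate of achievement.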
JENSEN HUANG: Yeah, that’s really incredible. That’s really incredible. And so, Simone, one of the great capabilities of neutral atoms and approaches is the scalability capability of it. And all the things that you said about superconducting approaches, absolutely true. How do you see these approaches at some point merging? Or do you see a grand solution emerging from the industry where everybody said, “Yeah, that’s it.”
Back in the old days, in my generation, we had these technologies called TTL and ECL, and eventually we all settled on CMOS, okay? And during my generation, there were a lot of arguments and debates about ECL versus CMOS. The reason is that ECL draws essentially static current: it's always high, but it's constant. In the case of CMOS, the power is dynamic; it grows with switching activity instead of being a constant current. And so there was a lot of debate about which one would be better long term. But ultimately, CMOS won because it was scalable, and you could solve all of the other annoying issues with more transistors.
And is your industry also discovering something similar, that maybe all of these challenging, annoying issues ultimately go away because of scalability, because you just have a lot of logical qubits?
SIMONE SEVERINI: Yeah. You see, the short answer to your question is that I don't know. The longer answer is that there is a transition happening in the industry. Historically, people asked, how many qubits? Everybody: how many qubits do they have? How many qubits do you have? Right? It's an obsession. You're at the supermarket in the morning in front of the tomatoes, and a random guy comes up and asks you, how many qubits do you have?
So people ask this question less now, which is great because people start—
JENSEN HUANG: I'm laughing because, listen, people ask me how many FLOPS I have. And it's driving me crazy too. It doesn't matter. Everybody has this obsession, right?
SIMONE SEVERINI: Yeah, that's right. So there is an interesting signal emerging at the moment, which is error correction. In the last twelve months, a number of different error-correction experiments have been done with different modalities: ions, neutral atoms, superconducting devices, more exotic devices. And this is probably a theme that is going to become stronger and stronger. And probably this will be the theme that determines which of these modalities emerges in the fullness of time.
Quality vs. Quantity in Quantum Computing
KRYSTA SVORE: Maybe I'll just say, you mentioned FLOPS. Here, it's not just that more is better. We need better also. So as we talk about logical qubits, right, not all qubits are created equal. We need qubits that are able to extend how much computation we can do, right?
And while we’ve had a discussion today about whether it’s a computer or an accelerator, I mean, in the end, we need to do computation. We are not building a storage device, right? That’s not the intent. Our intent is to do computation we cannot do with all the other compute we have on the planet.
And so that means we need to improve the qubits, right? Physical qubits fail roughly once every 1,000 operations. 1,000 operations in your computation is not enough, right? We need to do computations that ultimately have more like a quadrillion operations. And so there's a large gap to close there. We use error correction to close that gap. But when you use error correction to close that gap, you need more physical qubits as well, right? You're using hardware and software to close the gap.
And so the goal, as we look towards, say, 50, 100, one thousand logical qubits, is that it's not just the number of qubits that changes. When we say 50, 100, one thousand, it's also the error rate, how good those logical qubits are. That also changes, and needs to change, by orders of magnitude at each increase.
So when we talk about 1,000 logical qubits, we want those to be as good as one in a billion, you know, only one fault in a billion operations, or much better, right? At a hundred, you really want only one fault in a million, right? 10 to the minus six is what we say. So we really have to improve that, and that makes it a little more challenging.
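Those error-rate targets can be made concrete with a small sketch. The per-operation error rates below are the ones quoted above; the "useful depth falls to 1/e" criterion and the independence assumption are my own simplifications, not the speaker's:

```python
import math

# Success probability of an n-operation computation with independent
# per-operation error rate p:  P(success) = (1 - p) ** n ≈ exp(-n * p).
# So the "useful depth", where success probability has fallen to about
# 1/e, is n ≈ 1 / p: error rate directly caps how long a computation
# can run before failure becomes likely.

rates = [
    ("physical qubit, 1e-3", 1e-3),
    ("logical qubit, 1e-6 target", 1e-6),
    ("logical qubit, 1e-9 target", 1e-9),
]

for label, p in rates:
    depth = round(1 / p)
    p_success = math.exp(-depth * p)  # ≈ (1 - p) ** depth
    print(f"{label}: ~{depth:.0e} useful operations "
          f"(P(success) ≈ {p_success:.2f})")
```

This is why a 10^-3 physical qubit tops out around a thousand useful operations, while reaching the quadrillion-operation computations mentioned above requires pushing logical error rates many orders of magnitude lower.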
Connecting Quantum Computing to Developers
JENSEN HUANG: Well, I was quite excited to do this event today, but I was also quite concerned about how we would take a conversation that is deeply scientific and very technical and ultimately connect it to developers, which is what GTC is all about: people who are trying to do hard science, people who are trying to create impossible applications. We wanted to start the journey of connecting the dots between the science and its usefulness to them. And I thought the panel did a fantastic job today.
Several things, I think, were properly and successfully conveyed: the idea of a quantum computer is not to build a computer that replaces classical computing, but a QPU that is added to a GPU and a CPU to extend classical computing to do things that otherwise cannot be done.
There are some useful domains we can imagine, that you can reason your way through, for example in biology, chemistry, and materials science, where we could use a quantum computer to make a classical computer far better at solving problems it otherwise cannot. For example, to create the ground truth for biology, to create the ground truth for atomic physics. That lets us take AI as we understand it quite well today, improving itself a million times every two or three years, and amplify its capability to solve the drug discovery, materials science, and biology applications. If that were all quantum computers did, it would be extraordinary.
SIMONE SEVERINI: And that's the reason why we need to build these machines, right? So, thinking about timelines, the sort of question you are familiar with: how long is it going to take? Well, first of all, this is like the space program. The goal is going to the moon, right? But we're going to discover a lot of things on the way. We discovered fireproof suits trying to go to the moon, right? This is going to happen with quantum computers as well.
So it's a grand adventure, but we need to get there. It's not zero or one, right? It's not, "Oh, we have a quantum computer. We don't have a quantum computer." It's a journey. And lots of things are going to be discovered in science and technology as we get there.
And in terms of timeline, I myself grew up in Tuscany, in a village in Italy. There is a city called Pisa, and there is a tower which is not straight; it's leaning. That, by the way, is a result of quantum computing, because a classical computer would make it perfectly straight. It took them two hundred years to build this tower. Now, whether they did a good job or not is debatable. But two hundred years, right? Only a quantum computer can make a structure like that stay up for so long. The point I want to make is that it takes time.
And quantum computers are going to be so impactful that it's going to be a great party in the end.
JENSEN HUANG: I think it's fantastic that Microsoft Azure and AWS are integrating quantum computing into their platforms so that developers around the world can gain access to it sooner rather than later. If everybody had to go build their own quantum computer to be able to enjoy this and the instruments you build, it would take so much longer. And now you've put it in the hands of anybody with a browser, and anyone can use it.
And I’m looking forward to 50 logical qubits very, very soon and 100 shortly after that. I think the progress of the industry is incredible. We had a great panel today. I know they’re all friends of ours and friends of the industry. And in a lot of ways, although there are many competitive approaches, the industry is working so closely together to want everybody to succeed together. And so I think it’s really great to see.
I want to thank all of you for coming today. If I had to be wrong in order to show everybody in the world that quantum computing is worthwhile, that the industry is built of amazing people, and that the work the industry does is going to make a great impact, if I had to do a mea culpa in order for us to demonstrate that to the world, then mission accomplished. Thank you all for coming.
