
Nvidia founder and CEO Jen-Hsun Huang gave his opening keynote address at CES 2017 on Wednesday, January 4, 2017, in Las Vegas. Following is the full transcript of the CES keynote event.
Speakers:
Gary Shapiro – President and CEO, CTA
Jen-Hsun Huang – Founder and CEO, Nvidia
Aaron Flynn – General Manager, BioWare
Scott Keogh – President of Audi America
Listen to the MP3 Audio here: Nvidia CEO Jen-Hsun Huang Keynote at CES 2017
TRANSCRIPT:
Gary Shapiro – President and CEO, CTA
[Audio starts abruptly]…. more than tripling. He visited the CTA headquarters in 2008 and spoke to our staff at our staff meeting. He was also profiled that year in the cover story of CTA’s i3 magazine. He is focused, engaging and certainly visionary. Ladies and gentlemen, please welcome Nvidia’s founder and CEO Jen-Hsun Huang to share his vision of the exciting future ahead.
Jen-Hsun Huang – Founder and CEO, Nvidia
Thank you for that great introduction, Gary. It’s great to be here. We have a special treat for you, so we’re going to just walk offstage real quick, play this video for you and we’ll be right back.
[Video Presentation: (Voice-over) It all starts with our power to imagine, bringing the future we dream to life in amazing new ways: stories that take us to far-off worlds, brilliant ideas that leap off the screen and incredible new adventures around every corner. Imagination has unlocked the promise of AI, where robots help us build a better future, give us a helping hand and even entertain us. Imagination fuels new breakthroughs in autonomous vehicles, letting us take to the road without taking the wheel, delivering the world to our doorstep and driving our competitive spirit. Imagination powers exciting new discoveries that accelerate the cure for cancer, give the deaf a new voice and help the injured become whole again.]
We are going through unquestionably the most exciting times in the computer industry that all of us have ever witnessed. What we thought was going to be science fiction for years to come is becoming reality as we speak. Our work at Nvidia is dedicated to a computing model focused on visual and AI computing. It’s built on top of the GPU that we pioneered. This computing model is able to solve problems that normal computing is unable to solve, and we dedicate ourselves to tackling the most challenging computing problems in the world.
There are four areas that we focus on. Surprisingly to many, over the years we have dedicated ourselves to video games, not to mention the fact that they’re incredibly fun and incredibly beautiful, and we love them. Video games are also the highest-volume, most computationally intensive application the world has ever known. It is about achieving virtual reality. And now, all of a sudden, all the technologies are coming together for us to finally achieve virtual reality, augmented reality and mixed reality, and to bring the experience of the holodeck to life for the first time. Computer graphics technology, computer vision technology and artificial intelligence will come together to realize this exciting new computing platform that we call VR, AR or mixed reality.
Our technology is also used in the cloud. AI supercomputers are being built all over the world today so that when you’re talking to the internet, when you’re making queries, those searches are passed through artificial intelligence, and the assistance that you seek is much more helpful to you.
And then lastly, some of the most exciting things we’re working on today, the most impactful work that we’re doing for society and for the industry: self-driving cars and autonomous vehicles. We’ve been endeavoring in these four areas for some time. And then, all of a sudden, in the last several years an enormous breakthrough happened.
Researchers all around the world working on a new field, a new technique of machine learning called deep learning, met the GPU, and the Big Bang of artificial intelligence happened. This technique allows software to write software, allows computers to learn from experience and data, and allows the computer to recognize complex patterns that are easy for you and me but incredibly hard for computers. And it does so by hierarchically building up feature representations: it represents very complex information, complex patterns, by building them up from hierarchies of simpler patterns. The ability to recognize a face, for example, with its infinite variability, is built on layers and layers of artificial networks. The lowest layer could just be edges, edges made up of pixels; the layer above that could be contours, shapes, textures, motifs; the layer above that could be parts of a human face; and eventually it is able to understand the representation of a face, and understand it in its incredible variability. You could be wearing your hair a little bit differently, you could be wearing a hat, you could be looking partly away, and somehow we humans can recognize that person.
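To make the layered picture above concrete, here is a minimal sketch in PyTorch of a hierarchical network of the kind described. The layer sizes, names and the mapping of stages to edges, contours and parts are illustrative assumptions, not any specific Nvidia model.

```python
# A minimal sketch (illustrative, not an actual Nvidia model) of a layered
# network of the kind described above: each convolutional stage learns
# progressively more abstract features, roughly edges, then contours and
# textures, then face parts, then a whole-face representation.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, num_identities: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges from pixels
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # contours, textures, motifs
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # parts of a face
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_identities)           # whole-face representation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                  # 64x64 RGB input -> 64 maps of 8x8
        return self.classifier(h.flatten(1))  # scores over known identities

model = TinyFaceNet()
scores = model(torch.randn(1, 3, 64, 64))     # one 64x64 face crop
```

Trained on labeled examples rather than hand-written rules, a network like this is what “software writing software” refers to: the feature hierarchy is learned from data.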
Finally, for the very first time, using this technique called deep learning, we’re able to do the same with computers. This ability to perceive the world is just an enormous breakthrough, and I’m going to show you why this foundational technology, this foundational capability, was so important. It had just one incredible challenge, one incredible handicap: the amount of computation necessary, the amount of data processing necessary, is absolutely enormous. And then one day, the AI researchers met the GPU that we invented, and the Big Bang of modern AI happened.
The achievements have been fast and furious. Some of the things that we’ve been able to accomplish in just the last several years are absolutely mind-blowing. You have all heard about the AlphaGo achievement. Demis Hassabis and his DeepMind team were able to teach a computer how to play Go, the most complex game we know of, with more possible moves than there are atoms in the universe. And yet this computer was able to learn Go from the world’s masters and then play the master of our modern era and beat him.
A network has been able to play Doom, which is a game of maze navigation and resource management while staying away from monsters. A network has been able to learn how to play Go. A network has been able to learn the styles of artists like van Gogh, Monet and Picasso and apply those styles to photographs. A network has been able to synthesize our voice: instead of our voices being stitched together from a whole bunch of little tiny chunks, this network is able to learn the intonation of our voice and, from the words that we feed it, synthesize how we would speak. A network was able to learn to recognize an image, understand its context and caption that image. A network was able to take the images it sees through cameras and, through repeated trial and error called reinforcement learning, adjust its motors and learn motor skills. A network was able to learn how to walk by itself: taught just the kinematics of a robot, a robot that started out sitting on the ground was able, after repeated trying, to stand up and walk. And in fact, we’ve been able to teach a car how to drive.
Driving is a skill, not mathematics. Kids can learn how to drive, adults drive, and yet we do no computation whatsoever, no Newtonian physics whatsoever; in our heads, we just drive. We’ve been able to teach a car how to drive. The achievements that you see in front of you were impossible until just recently. And all of a sudden, because we’re now able to understand the complex nature of the world, to perceive the world through vision, through audio technology, through natural language, we’re now able to apply artificial intelligence to solve problems that we had never conceived of in the past.
The enabling technology behind all of these great achievements is GPU computing, and that GPU has had the benefit of 23 years of being fueled by the single largest simulation industry in the world, the single largest entertainment industry in the world: video games. To many it’s just for fun; to us it’s incredibly fun, not to mention it propels the science of our company.
PC gaming is thriving and vibrant. It has doubled in the last five years to $31 billion. It is the single largest game platform today. And GeForce is PC gaming. GeForce is also thriving and vibrant; in the last five years it has doubled in revenue. There are 200 million GeForce gamers around the world. The dynamics driving this business are multi-dimensional. Of course, it is a global industry. And before anyone has a game console, or ever gets a game console, everyone has a PC, and almost every single human today is a gamer. There are several hundred million core gamers in the world today. I expect there to be several billion gamers some day.
This market is also fueled by the amazing production value of the video games that continue to come out. In the last five years gaming technology has increased in performance by a factor of 10. And now 4K, HDR and virtual reality are coming.
Gaming is no longer just about games. Gaming is now the world’s largest sporting event. In fact, it is very, very likely that eSports will someday be larger than all of the other sports events combined. eSports has a hundred million players, and when you watch them play, it’s a game of intelligence, a game of tactics and strategy, a game of teamwork, and also a game of incredible hand-eye coordination. The training is intense. And the number of people who enjoy it: 325 million eSports spectators are now part of this very young sport.
Not only is gaming a game and a sport, it is also now social. More people than ever watch other people play video games: 600 million video game viewers, 2 trillion minutes watched in just the last several years. This is now a $5 billion advertising industry, one of the fastest-growing areas of internet video that we know.
PC gaming is thriving, GeForce is thriving, and all of that is propelling the incredible R&D budget that we’re able to support. Our first announcement today is to connect GeForce, the number one gaming platform in the world, to the number one social platform in the world. And I’m going to show you what that means in just a second. But basically, GeForce has a software platform above it that’s connected to tens of millions of gamers. This software platform today allows you to capture pictures, capture video, capture a VR picture so your video game can be shared with other people who want to spectate and watch it in VR. And you can also live-broadcast.
Our platform, the GeForce platform with GeForce Experience, is connected to all of these social networks. And today we’re going to connect it directly, with just two clicks, to Facebook. There are already millions of people watching us stream this on Facebook, and they’ve been told that something really, really amazing is going to be revealed. And to help us celebrate this incredible connection, we’re going to share with the world never-before-seen footage of one of the most anticipated video games of 2017: Mass Effect: Andromeda. Let’s welcome Aaron Flynn to the stage. Aaron, it’s great to have you here.
Aaron Flynn – General Manager, BioWare
Thank you so much, Jen. Thank you.
Jen-Hsun Huang – Founder and CEO, Nvidia
Well, you know, you’re going to tell us a little something about Mass Effect. But before you tell everybody, you guys have been working on this for quite a few years. I mean this is the fourth chapter of this incredible franchise and it is just going to be a doozy of a new chapter. Tell us about it?
Aaron Flynn – General Manager, BioWare
Yeah, absolutely. We’ve been on this game for almost five years now; it started as an idea. We wrapped up the Mass Effect trilogy and we wanted to tell a new story, with new characters and new places to go and explore. And so we decided to reach for new technology to do that. That’s how we turned to Frostbite and started working with the Frostbite team, really investing in it and building on it. And that’s allowed us to do all sorts of really cool things: larger environments than before; more dynamic, more detailed characters than ever before to tell stories with; and even better gameplay, now with verticality and destruction.
Jen-Hsun Huang – Founder and CEO, Nvidia
Now the thing that’s really amazing about Mass Effect is really it’s kind of — it’s an RPG but the story is really really deep. I mean these are explorers that are traveling to the Andromeda Galaxy. And they’re able to do so of course at faster than light, speed of light.
Aaron Flynn – General Manager, BioWare
Absolutely.
Jen-Hsun Huang – Founder and CEO, Nvidia
They are able to use this incredible element that they found on Mars, called Element Zero, and as a result reduce their mass to zero.
Aaron Flynn – General Manager, BioWare
That’s excellent chance, that’s amazing. Wow! That’s great.
Jen-Hsun Huang – Founder and CEO, Nvidia
And you and I both know that unless your mass is zero, you have no shot of traveling at the speed of light.
Aaron Flynn – General Manager, BioWare
I’ve heard that, yes.
Jen-Hsun Huang – Founder and CEO, Nvidia
And therefore: Mass Effect. Ladies and gentlemen, the mass effect. The thing that’s really amazing, as we’re talking about this, is that I want to take just a couple more seconds to build the suspense for all of the gamers on the web. There are millions of people watching, and they’re going to tear down everything that you show them in a moment. I mean, all the skill trees that are going to be revealed for the very first time, the weapons revealed for the first time, the aliens you might show for the very first time, the battle tech you might show for the very first time: every one of those frames is going to be stop-framed and dissected endlessly until this game launches. What do you say?
Aaron Flynn – General Manager, BioWare
Oh absolutely, yes and I expect no less of our fans, to be honest. It’d be great to have them do it.
Jen-Hsun Huang – Founder and CEO, Nvidia
OK, so what we’re going to show you here is this. These are real game footage running on a GeForce GTX 1080. And it’s captured and uses some pretty amazing new technology that you guys are going to show. And it’s never-before-seen. Ladies and gentlemen, Mass Effect: Andromeda, Aaron Flynn, Studio BioWare.
[Video clip: Mass Effect: Andromeda]
Jen-Hsun Huang – Founder and CEO, Nvidia
Is that right? March 21st?
Aaron Flynn – General Manager, BioWare
Yeah, March 21. Somebody pointed out to me that that’s 3/21, so I guess blast-off then.
Jen-Hsun Huang – Founder and CEO, Nvidia
March 21, that’s amazing. Now, of course, I could sit here and look at that forever. I just love explosions and fire and smoke and all the particle effects and of course everybody sees beautiful things and always see its computer graphics and the mathematics behind it. And so the things that — how about highlighting a couple of things about the Mass Effect game, the game mechanics or things that you might want to highlight.
Aaron Flynn – General Manager, BioWare
Yeah, sure. So in that mission we were helping one of our squadmates, named Peebee, to go and earn some loyalty from her. So that was part of her loyalty mission. That’s a whole new planet there that you get to go and explore; you’re learning about her background, learning about the background of the Remnant, who are the robots there; you’re fighting, and you’re seeing things like our dynamic profile system, where you can change your character class dynamically in real time, something new to Mass Effect. You’re seeing all sorts of verticality in there, you’re seeing the destruction, you’re seeing all that good stuff now available in Andromeda.
Jen-Hsun Huang – Founder and CEO, Nvidia
That’s unbelievable. Everybody, Aaron Flynn. Thank you.
Aaron Flynn – General Manager, BioWare
Thanks much.
Jen-Hsun Huang – Founder and CEO, Nvidia
Mass Effect: Andromeda, launched here directly to gamers all over the world on Facebook Live, streamed straight from CES.
Well, you can see that that video game is just unbelievable. And yet think about the number of PC users in the world who would like to get into PC gaming, or who have a thin-and-light notebook and would like to enjoy some PC gaming every now and then but just haven’t dedicated themselves to building a new GeForce PC. The number of those PC users is shockingly high. There are 2 billion PC users in the world. We estimate some billion users with integrated graphics, on Macs or older PCs or thin-and-light notebooks, who would love to be able to enjoy games but simply don’t have the capability to do so.
And in fact, there’s really no real way of installing a new GeForce graphics card into those computers. And maybe they’re rather intimidated by opening up their computer in the first place and building something new. And we thought that wouldn’t it be amazing — wouldn’t it be amazing if we were to put a GeForce gaming PC, a state-of-the-art GeForce gaming PC powered by Pascal in the cloud like AWS, build a supercomputer like AWS just for enterprises, we would do this for consumers. And as a result, these supercomputers in the cloud could be shared by millions of gamers around the world. They could try video games for the very first time. If they don’t play frequently they could launch a game whenever they want, wherever they want. And if they could just enjoy state-of-the-art video games like this with just the click of a button, wouldn’t that be utterly amazing?
This is just one of those incredibly hard problems. It’s incredibly hard because the computational capability necessary for video games is so high, and the interactivity requirement is so high, that any little bit of latency would ruin the experience. And so, after many years of endeavor, after so much refinement, so much re-architecting, so much engineering, the team has finally done it.
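To see why latency is the binding constraint, here is a back-of-the-envelope sketch; the stage timings below are illustrative assumptions, not Nvidia’s measured figures.

```python
# A rough latency budget for cloud gaming (all numbers are illustrative
# assumptions): every stage between the player's input and the displayed
# frame has to fit inside a delay small enough to feel local.
FRAME_MS = 1000 / 60  # one rendered frame at 60 fps is about 16.7 ms

stage_ms = {
    "render on cloud GPU": 10,
    "encode video frame": 5,
    "network round trip": 30,
    "decode on client": 5,
    "display": 8,
}

total = sum(stage_ms.values())
print(f"end-to-end: {total} ms, about {total / FRAME_MS:.1f} frame times of lag")
```

Since the network round trip alone can exceed a frame time, shaving milliseconds out of rendering, encoding and decoding is where the re-architecting he describes has to go.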
Ladies and gentlemen, we’re announcing today GeForce NOW for PCs. GeForce NOW, it turns any of your PCs with the download of a little tiny client into with a click of a button, into essentially your most powerful gaming PC. And it’s all in the cloud, just one click away. Why don’t we take a look at it?
OK, so what you’re looking at here — two computers in the back, one is based on PC, one is Mac. Your usual desktop that you recognize. The thing that you instantly recognize on the PC side is all of the major stores and hubs are now available and they work perfectly steamed with 150 million users, works perfectly. Origin from Electronic Arts works perfectly. Uplay from Ubisoft works perfectly. Multiplayer works perfectly. All of your storage states, your checkpoints, all of your friends all work perfectly. Every single game works exactly as it should. All of the software has been updated. And Dave, are you in the back?
Dave: Yeah, hi Jen-Hsun.
Jen-Hsun Huang – Founder and CEO, Nvidia
Hey Dave, happy new year first of all.
Dave: Happy new year to you.
Jen-Hsun Huang – Founder and CEO, Nvidia
So why don’t we do this? Why don’t we launch Steam? Steam is the single most popular video, PC gaming store in the world. Here Dave is launching Steam on GeForce NOW. It takes about 15 seconds or so. OK, there you are. And there you are, just a few seconds, you’re into Steam and all of the games that you’ve purchased or all of the games you brought into Steam are all there. It also works on a Mac. So why don’t we go over to the Mac? It’s just one of the apps on Mac, double-click on that, there it is. Steam on Mac. And you could buy a game right there. You can buy a game right there. We won’t do so right now. But we could buy a game. And usually when these video games are so large, they are so large because they have so much content and takes hours and hours to download it. But in the case of Steam, in the case of GeForce NOW, it only takes about a minute.
And so why don’t we go ahead and launch a game. Now that it’s installed, let’s launch it. And this game doesn’t run on a Mac and this game runs very poorly on integrated graphics. But yet we’re going to see it working here in its full fidelity. And one of the things that’s really cool is that it just works exactly the way you would expect an app to work on a computer. It’s completely seamless to you, you pretty much forget that is even in the cloud. And video games are complicated in the sense that the game is always being patched, there’s always digital download content, the drivers need to be enhanced all the time and we can do that all behind your back updated all the time, keep your computer always fresh. And there it is: Tomb Raider, running on GeForce NOW, over to cloud, a click away. Video games for the other billion PC users. Okay, thank you.
So GeForce NOW will be available in March; for early users it’s coming very, very soon. We’re putting the final touches on it, and it will be available at $25 for 20 hours of play. It’s basically a GeForce gaming PC on demand. There will be several grades of performance available to you; with the higher performance grades you’ll have fewer hours for every $25 of credit. So that’s GeForce NOW: incredible value for somebody who doesn’t have the ability to access a gaming PC, somebody who hasn’t taken the effort to build a gaming PC, or somebody who just plays infrequently but would love to be able to enjoy video games from time to time. GeForce NOW. Thank you.
Why don’t we take a look at television now? A year-and-a-half ago we announced our partnership with Google to build the world’s first Android TV. Our vision was that just as smartphones has revolutionized the way we do mobile computing and communicate, that smart TVs would also revolutionize the way we enjoy entertainment at home. That with a powerful computer connected to a cloud store, able to run applications that we would be able to realize this new revolution in how we enjoy content at home; we call that SHIELD. And it was the world’s first Android TV console and was built by NVIDIA. Of course, you would expect the performance to be incredibly good. Users all over the world have been absolutely delighted by it. We’ve continued to refine it and support it and enhance it. The number of applications is growing, the performance keeps getting richer. Today we’re announcing a major upgrade to SHIELD. Ladies and gentlemen, the new SHIELD.
The new SHIELD is amazing on so many different fronts. But the most important thing, the first thing that hits you: it is the world’s first entertainment platform that is full 4K and HDR. Netflix and Amazon worked closely with our engineers, and we’re announcing that SHIELD will be the first entertainment platform able to enjoy Netflix’s and Amazon’s libraries of content in 4K HDR. We have 4K services from YouTube, we have 4K services from Google Play. We know that there are tens of millions of PC gamers around the world who have GeForce gaming PCs and would like to connect their Steam service to their television. We worked with Valve, and we’re making available for the very first time a Steam app on SHIELD that connects to your PC, so you can enjoy 4K HDR gaming on your TV while the game is running on your PC.
And if you would like access to even more games, we now have a thousand games in the Nvidia SHIELD game store. This is just an incredible amount of content: 4K HDR, the ability to stream content from your PC through Valve’s Steam, and of course great games on the Nvidia store, in 4K HDR.
But we didn’t stop there. The two most popular consumer electronics platforms today. One is the smart television, products like Apple TV and the SHIELD. But the other is the Amazon Echo. It brings AI into your home, allows you to communicate naturally with an AI assistant. We thought why have two devices when you can have one? And so we decided to work with Google to create the world’s first Android TV with the Google Assistant.
Now your television, which is the largest screen in your house, can be controlled through natural language. You can control your content, you can access your content, see content, find content, play it, stop it, fast-forward it, look for photographs, ask it questions, you can even control your home. But we didn’t stop there.
We felt that if you had the Google Assistant, an AI agent in your home, you would want to have it all over your home; you shouldn’t have to lean over to the coffee table and yell commands at it all the time. Wouldn’t it be nice if your AI were completely ambient, so you could just talk naturally, and there were many spots around the house where you could ask your AI agent to help you do things: maybe call an Uber, make coffee, turn on some music, ask about the weather?
So we decided to create a peripheral for SHIELD: an AI microphone, and this AI mic can be anywhere in the house. And this little tiny thing, which I’d love to show you now: announcing the Nvidia Spot.
[Hi, how can I help?]
This little tiny device plugs directly into the wall, and because the computing is done on SHIELD, we can have a whole bunch of these all over the house. This little tiny microphone has far-field processing and echo cancellation, so it picks up your speech quite naturally from 20 feet away. And if you have multiple of these devices in a large room, it also triangulates where you are, using beamforming. Based on the subtly different arrival times of your voice at each microphone, it is able to work out where in the room the talker is and, with beamforming, focus the energy of the microphone array on the person who’s actually talking. You can have these devices all over the house, and they all connect to one SHIELD over Wi-Fi. Incredible, incredible capability: the brand-new Nvidia Spot.
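The arrival-time idea he describes can be sketched in a few lines. This is a minimal far-field model with assumed numbers, not the Spot’s actual signal processing.

```python
# A minimal sketch of direction-finding from arrival-time differences
# (illustrative, not the Spot's actual algorithm): two microphones hear
# the same voice a fraction of a millisecond apart, and that delay gives
# the angle a beamformer can then steer toward.
import math

SPEED_OF_SOUND = 343.0  # meters per second at room temperature

def angle_from_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Far-field angle of arrival, in degrees off the array's broadside."""
    # The extra path length to the farther mic is spacing * sin(angle).
    sin_theta = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(sin_theta))

# Example: mics 10 cm apart, the voice arrives 150 microseconds later at one mic.
print(f"talker is about {angle_from_delay(150e-6, 0.10):.0f} degrees off center")
```

With more than two microphones, intersecting these angle estimates is what lets the array localize the talker in the room.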
We made a short little video to help you understand how we imagine people using the Google Assistant and the Nvidia Spot in your home. Let’s play it.
[Video clip]
What do you guys think? The combination of a smart TV and an AI assistant in your home that is completely ambient completely changes how we interact with our house. I think in the future our house is going to become an AI, and increasingly the vision of Jarvis is going to be realized. Whereas Mark Zuckerberg at Facebook has incredible programming skills and could spend his year building Jarvis for his home, I’ve decided that we should build it for all of you. And so today, with the new SHIELD, with the Nvidia Spot, the Google Assistant and the integration with the SmartThings Hub, which connects to hundreds of consumer electronics devices (smart plugs, coffeemakers, garage doors, locks, thermostats, smart cameras), we can now integrate all of those devices, and all of the exciting ones that will be announced this year, into the SHIELD experience, all controlled by the Google Assistant.
So ladies and gentlemen, the brand new SHIELD for $199. The Nvidia Spot will be available as a separate peripheral and we’ll announce it in the coming months. The Nvidia SHIELD.
Now let’s talk about AI for transportation. As we all know, transportation is one of the largest industries in the world. There’s a billion cars on the road. Just on a single day 20 million ride-shares are held on just Didi and Uber alone. There are 300 million trucks on the road carrying things to us so that we could live our lives. The infrastructure of society is made possible by all of these trucks and they’re carrying things over a trillion miles a year. There’s half a million buses in just large cities alone helping us with public transportation. This transportation industry is one of the largest industries in the world. It is also one of the most vital, without it society doesn’t move forward, without it we don’t have the fundamental infrastructure to live our lives. Everything we enjoy, everything we own, everything we eat and nourish our families with, all as a result of transportation.
And yet this is also one of the industries with the largest waste, and the largest waste comes from human error. The fact of the matter is that these massive machines shouldn’t be operated by humans, or they should be operated by humans with quite a substantial amount of assistance. The amount of waste that comes from accidents, whether it’s damage or loss of life, emergency room visits or insurance, measures in the hundreds of billions of dollars a year. Over the course of a decade or two, the waste, and the human suffering that comes with it, is enormous.
There’s other forms of waste as well. Most of the cars that we enjoy are mostly parked. By and large everywhere we look there are parked cars. There are parked cars in beaches and parks, there are parked cars in cities, there are parked cars on campuses. By the way this is the Nvidia campus. There are parked cars all over the streets littered everywhere. Wouldn’t it be amazing if we also reduced that waste and how can we change the face of our community, how can we reinvent our lives?
The amount of waste we could help reduce if we could somehow bring autonomous vehicles, otherwise known as self-driving cars, to bear would be absolutely amazing. This is the reason why we decided almost a decade ago to start working on autonomous vehicles: to start developing the technology necessary for your car to someday become essentially your most personal robot, one so intelligent that it is able to perform its function, enhance your mobility and keep people safe, while of course doing it intelligently and keeping people out of harm’s way.
The technology necessary to do so was incredibly hard until just very recently. GPU deep learning has made it possible for us to finally realize this vision; we can realize it right now. Now, when you think about self-driving cars, at some level it’s relatively easy. It’s easy because we all do it. It’s so easy that we can’t even explain it. How do you explain, on a sheet of paper, to somebody who has never driven a car how to drive one? The reason is that the intelligence that comes very naturally to us is incredibly hard for computers. And as I mentioned earlier, deep learning has made it possible for us to finally crack that nut. With deep learning we can now perceive the world, not just sense the world. Sensation is seeing, hearing, touching; those are senses. Perception is accumulating all of those senses and building a mental model of what it is that you’re perceiving.
We can now finally, with deep learning, perceive the environment surrounding the car. We can also reason about where the car is, where everything else is around the car, and where everything will be in the near future. We can predict, using artificial intelligence, where everything around us will be and where we will be, so that we can decide whether the path we’re on, or the new path we’re going to take, is going to be safe.
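As a toy illustration of that prediction step, here is a constant-velocity sketch; real self-driving stacks use far richer models, and every number below is an assumption.

```python
# A toy sketch of predicting where everything will be (a constant-velocity
# model; illustrative only, far simpler than a production planner).
import numpy as np

def predict(pos, vel, horizon_s=2.0, dt=0.1):
    """Project an object's (x, y) position forward at constant velocity."""
    times = np.arange(0.0, horizon_s, dt)
    return np.asarray(pos) + np.outer(times, vel)  # one (x, y) row per step

def path_is_safe(our_path, obj_pos, obj_vel, clearance_m=2.0):
    """True if the object's predicted path never nears our planned path."""
    obj_path = predict(obj_pos, obj_vel)
    dists = np.linalg.norm(our_path[:, None, :] - obj_path[None, :, :], axis=-1)
    return dists.min() > clearance_m

# Example: we drive straight ahead at 10 m/s; a cyclist converges from the right.
ours = predict([0.0, 0.0], [0.0, 10.0])
print(path_is_safe(ours, [5.0, 10.0], [-3.0, 0.0]))  # False: the paths cross
```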
We also have the ability now, as I showed you earlier with robotics, motor skills and walking skills, to use the same technology to teach a car how to drive just by watching us. By watching us, observing us, learning from us, a car can literally learn how to drive. That is supported by HD maps, which are maps in the cloud; another way of thinking about them, in the context of intelligence, is as knowledge, a priori knowledge. We can now compare what we perceive with what we know to be true in the cloud and determine what to do. And meanwhile, you can never stop learning. The world is changing all the time; it seems every single week a road is being torn up or fixed, lanes are being added, roads are being shifted. We have to continuously map, continuously relearn the environment. All of that requires AI computing. It is one of the reasons why we dedicated ourselves to building a fundamentally new computer that would go into a car, one that has the ability to do all of this deep learning processing at very high rates. We call it an AI car supercomputer.
Our AI car supercomputer used to be multiple chips, and we’re building a new one, called Xavier, that fits into a little tiny computer like this. This is what an AI supercomputer looks like for your future self-driving car: a little tiny computer that takes sensor information in and sends CAN information out to control the accelerator, the brakes, the steering and all of the other things that you want to control inside the car. It runs a new operating system we call DriveWorks, which takes multiple sensors in, fuses them, recognizes and perceives, localizes, reasons and drives, and it does so while connecting to the HD map and comparing where we are relative to the information that we get from the HD map. It is incredibly powerful: eight high-end CPU cores inside this chip and 512 of our next-generation GPU cores. The computer is ASIL D and the chip is ASIL C; ASIL is the automotive safety integrity level, and the quality and reliability level of this computer is bar none. And lastly, we do all of that, the performance of a high-end gaming PC shrunk into a little tiny chip, at 30 tera-ops, 30 trillion operations per second, in just 30 watts. We can chain a whole bunch of these together depending on the application. This will be the foundation of our self-driving car strategy.
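Schematically, the sense-perceive-localize-reason-drive loop he describes looks something like the sketch below; every function and name here is an illustrative placeholder, not the actual DriveWorks API.

```python
# A schematic sketch of the DriveWorks-style loop described above.
# All names are illustrative placeholders, not Nvidia's real API:
# sensors in, perception, localization against the HD map, planning,
# and control commands out over the CAN bus.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    ego_pose: tuple = (0.0, 0.0)                  # where we are on the HD map
    obstacles: list = field(default_factory=list)

def perceive(frames):
    """Stand-in for the deep-learning perception networks."""
    return [f["object"] for f in frames if "object" in f]

def localize(hd_map):
    """Stand-in for comparing what we perceive with a priori map knowledge."""
    return hd_map.get("ego_start", (0.0, 0.0))

def plan(world):
    """Stand-in for reasoning about where everyone is and will be."""
    return "slow_down" if world.obstacles else "cruise"

def drive_step(frames, hd_map):
    world = WorldModel(localize(hd_map), perceive(frames))
    return plan(world)  # would become accelerator/brake/steering on the CAN bus

print(drive_step([{"object": "cyclist"}], {"ego_start": (10.0, 5.0)}))  # slow_down
```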
Let me show you now what our AI car computer can do. We created a car called BB-8; it’s an AI car and it runs everything that I just described: this computer, this brand-new operating system and a whole bunch of AI networks. Let’s roll it, please.
[Video clip]
BB-8 is running on the East Coast, it’s running on the West Coast, and it’s just incredibly fun watching BB-8 zipping around. Incredible. So that’s the Nvidia BB-8 AI car. Our vision is that these cars, running all of the AI networks that I just described and connected to HD maps in the cloud, should be able to drive from address to address in a very large part of the world. However, there are always places where the confidence of the AI, the confidence of the car, is not high enough to drive by itself. But because it knows where it’s going to go, it knows the path, and it can determine its confidence level in different parts of the path. It can tell you which parts you should drive yourself, where it has low confidence. We believe the car is going to be an AI for driving, but we also believe the car itself should be an AI co-pilot.
So, ladies and gentlemen, today we’re announcing a new capability of the Nvidia AI car computer: it’s called the AI co-pilot. And it basically works like this. Remember, the car has sensors all around: it’s got cameras, it’s got radar, it’s got lidar. So it has surround perception. The car also has cameras and microphones inside, so it has in-car perception. It is aware of its surroundings, it’s aware of the state of the driver, it’s aware of the state of the passengers. And just as the Nvidia Spot gives you an array of speakers and microphones, there’s an array of speakers and microphones inside the car. And so this car has incredible perception capability, if only the AI were running all the time. We believe that the AI is either driving you or looking out for you. When it’s not driving you, it is still completely engaged. Even though it doesn’t have the confidence to drive, maybe because the mapping has changed, or the road is too tricky, or there’s just too much traffic, too many pedestrians, it should still be completely aware and it should continue to look out for you. We call that the AI co-pilot: surround and environmental awareness, as well as in-cabin passenger and driver awareness. Let me now show it to you.
And so what you’re looking at here is the four cameras — so notice there’s four cameras: front, rear, left and right and the front camera, notice there’s a biker in front of you about 45 feet ahead. And maybe your eyes aren’t looking in that direction. Maybe your head is not looking in that direction. And the car realizes that you might be more cautious and in this case it detects that there’s a biker and tells you through AI natural language processing. And so by talking to you very naturally and alerting you of the condition in front of you.
Here’s another example.
[Careful! There is a motorcycle approaching the center lane]
In this case the idea is relatively simple but incredibly complex to execute. You have to understand what you’re seeing, and you have to convert what you’re seeing into natural language, otherwise known as captioning, and then say it in a natural way in the car, so that we can understand what the car would like us to be concerned about. So that’s environmental awareness, surround awareness.
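The captioning pipeline he sketches is, at its core, an image encoder feeding a language decoder. Below is a minimal toy version in PyTorch; the sizes and structure are assumptions for illustration, not the co-pilot’s real network.

```python
# A toy encoder-decoder captioner (illustrative only): a small vision
# backbone summarizes the camera frame into a vector, and a recurrent
# decoder emits one word at a time, e.g. "motorcycle approaching ...".
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size: int = 1000, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(              # stand-in for a CNN backbone
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, hidden),
        )
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_word = nn.Linear(hidden, vocab_size)

    def forward(self, image, prev_words):
        state = self.encoder(image).unsqueeze(0)            # image summary seeds the decoder
        out, _ = self.decoder(self.embed(prev_words), state)
        return self.to_word(out)                            # scores for each next word

model = TinyCaptioner()
scores = model(torch.randn(1, 3, 64, 64), torch.zeros(1, 5, dtype=torch.long))
```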
We also would like the car to have in-car AI capability. The AI should be paying attention to you too. Maybe you’re not looking where you’re driving. Maybe you’re dozing off, or maybe you had a little bit too good of a time. The AI should pay attention; maybe the AI notices that you’re a little bit aggravated and you probably should pull over: too much road rage. Those are capabilities that modern AI networks can absolutely deliver. So let’s take a look at that.
[Video clip]
OK, so this is Ginny, one of our employees. Hey, let’s put our hands together for Ginny. So the first network does facial recognition, and it does facial recognition incredibly well. This is deep learning; facial recognition networks are among the best in the world, and they’re reaching human-level capabilities.
This next one is head tracking. By looking at Ginny with a camera, the artificial intelligence network is able to determine which way her head is turned. OK, this one’s really cool: the deep learning network, just by studying her eyes, is able to figure out what direction she’s gazing. So that’s called gaze tracking.
And this next one is really cool. This is lip reading: “Take me to Starbucks.” Your car is noisy and there are too many people talking, and yet you said something rather important; wouldn’t it be nice if your AI car were able to read your lips and determine what it is that you said? This particular capability was inspired by the researchers at Oxford working on LipNet, who have been able to achieve lip-reading accuracy of 95%; the best human lip-readers are about 53% accurate. So this gives you a sense of the state of the art of artificial intelligence networks. And when you combine what I just described, the external surround perception capability and the in-car passenger and driver perception, that combination allows the car to always be aware, to keep the driver as alert as possible and to always be on the lookout for us.
Not to mention all of the things I described to you earlier about SHIELD and the Google Assistant: we will surely have that assistant capability inside the car too, so that you can talk to your car and get access to whatever information, content and media you would like. This is the Nvidia AI car platform.
Starting from the bottom: this is the Drive PX computer, and on top of it is the DriveWorks operating system. These two things are probably the most complex computer that we have ever built, and we have built some of the most complex computers the world has ever known. The amount of data coming into this computer, the richness of the artificial intelligence networks and algorithms that we have to run, and the performance at which we have to run them are all extraordinary, because time, in the case of a fast-moving car, is reaction time, and reaction time is safety. So it is vital that we do all of this very, very quickly.
On top of that are all of the artificial intelligence networks that I talked about. There are the autopilot AIs: perceiving the world, reasoning about where you are and where everybody else is, driving the car, continuously mapping and exchanging that information with the HD maps in the cloud to make sure that all the data is coherent and things are changed if necessary. And there’s the co-pilot deep neural net, so that there’s an AI watching out for you all the time. It might even, in the near future, say something like, “Jen-Hsun, you’re driving up the hill. You’ll be home in five seconds. Would you like me to go ahead and open up the gate?” So by the time I actually get there, just a second prior, the gate opens up and I pull right into the garage. The AI co-pilot.
We also need the ability to converse and interact with our computer in a very natural way. So natural language processing has to be done inside the car, so that if the connection is not very good you can still communicate with the car, and so that the latency between your spoken words and its recognition of your speech is as low as possible. And yet it’s connected to the cloud, to an AI assistant. On top of this architecture is an API called MapWorks. This is one of the most important things we do. MapWorks interacts with all of the mapping companies in the world, and it does basically four things.
We basically do four things with mapping companies. The first, of course, is the survey car. Some survey cars record all the data and do the processing on GPU supercomputers in the cloud to extract three-dimensional data from the video. Other survey cars do the processing in the car, using a computer like Drive PX or something with a discrete Nvidia GPU, so that they can essentially map in real time. So that’s the survey car.
Number two is the GPU supercomputing that’s done in the cloud for map building. Number three is the interface, the exchange of data, so that our car can always see the HD map of its nearby surroundings. And number four: as we continuously map the world and notice changes, we update the live map in the cloud. These four functionalities are vital to realizing self-driving cars with very high confidence. And so we’ve been working with the world’s leading mapping companies.
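As a sketch of how those four functions could hang together as an interface, consider the following; the names here are hypothetical, since the talk does not spell out MapWorks’ actual API.

```python
# A hypothetical interface for the four mapping functions described above.
# The method names are illustrative assumptions, not the real MapWorks API.
from abc import ABC, abstractmethod

class HDMapService(ABC):
    @abstractmethod
    def ingest_survey(self, drive_log) -> None:
        """1. Take in survey-car recordings (video, lidar, GPS)."""

    @abstractmethod
    def build_map(self, region) -> None:
        """2. Process recordings into a 3D HD map on cloud GPUs."""

    @abstractmethod
    def tiles_near(self, position, radius_m: float) -> list:
        """3. Serve the HD-map tiles around the car's current position."""

    @abstractmethod
    def report_change(self, position, observation) -> None:
        """4. Push changes the car notices back into the live map."""
```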
We announced just a few months ago that the leading mapping company in China, Baidu, an incredible partner of ours, is working with us across all four functionalities that I described, from surveying all the way to map processing, all the way to synchronization with the computer inside the car. The reason Baidu is so important is that every car company in the world, and any car you build, should be able to drive anywhere in the world. China is now the world’s largest car market; it is too large to ignore, and yet only a Chinese company can map China, and Baidu has mapped more of China than any other company. So partnering with Baidu was a very logical first. We then partnered with TomTom, the leading mapping company in Europe. And today we’re really super excited to announce that we’re partnering with ZENRIN, the leading mapping company in Japan. Japan’s roads are incredibly complicated and its population is so dense that mapping Japan is quite an extraordinary task. ZENRIN is an amazing company and we’re working with them to map Japan.
And then also today we’re super excited to announce that we’re working with HERE to integrate Nvidia technology into their data centers for mapping, working with them on map algorithms as well as connecting to all of Nvidia AI car computers inside cars as we synchronize the live maps. Let’s welcome — let’s thank all of our partners to make this — create this incredible dream.
Well, building this ecosystem, building this computer, is a gigantic effort. As I mentioned, this is the most complex computing problem we’ve ever tackled: high-performance computing done in real time, with AI algorithms that are all first of their kind. And yet the endeavor is of such great importance, and there are so many companies who can really help us realize this dream. I’m super excited to announce today that ZF is now a partner in helping us turn this computer into a production computer for the automotive industry. ZF is the leading truck and commercial vehicle supplier in Europe, and one of the world’s top five suppliers to the automotive industry. This is an extraordinary company, and they are the first to announce a production Drive AI computer for the market. It’s available commercially for sampling and will ship into production this year. Ladies and gentlemen, ZF.
I have another partner I’d like to announce. There are a lot of cars, as we mentioned: a billion on the road, a hundred million sold each year, several hundred million trucks. It’s going to take an enormous, enormous amount of engineering to transform the entire automotive industry into an autonomous industry. And so today we’re announcing that the number-one technology supplier to the automotive industry, Bosch, is going to adopt the Nvidia Drive computer. The largest and the fifth-largest automotive suppliers in the world have now adopted the Nvidia computing platform, so that we can bring AI computers to the autonomous industry. Bosch is a 130-year-old company with 375,000 employees and over €70 billion in revenue, an enormous company; they serve every single car company in the world, with unbelievable reach, and it’s just such a great pleasure to partner with them. I will be at Bosch ConnectedWorld in March, and, with a little bit of luck, we’ll give you a major update on the work that we’re doing together. So: Bosch.
Well, I have just one more announcement. The momentum behind the work that we’re doing after all these years is clearly accelerating, and I think people are realizing that creating an AI car computer is a really enormous undertaking. We’ve worked at it for quite a few years, and as you know, Nvidia is one of the companies in the world that specializes in building the most advanced computers, from the largest supercomputers in the world to the most advanced gaming PCs to AI car computers. These computers are an enormous undertaking, and without a great car company to partner with, it is hard to realize this vision. So today we’re announcing that Audi and Nvidia will partner to build the next generation of AI cars.
Audi and Nvidia are building an AI car, the world’s most advanced autonomous vehicle, powered by Nvidia’s AI car computer; we will have cars on the road by 2020. Let’s welcome Scott Keogh, the President of Audi America, to celebrate this moment with us.
Scott, it’s great to have you. Happy New Year!
Scott Keogh – President of Audi America
Thank you. I am glad you saved the best for last, so I appreciate that.
Jen-Hsun Huang – Founder and CEO, Nvidia
You should always go last because you’re always the best. Gosh, we’ve been working together for 10 years, and we’ve been building these advanced cars together for quite a long time. You know, frankly, when we first started working together the car had no internet. When we started working together there were no internet-connected maps, and there were none of the rich graphics inside cars that we enjoy today. Because of the efforts we worked on together, we now have millions of cars, millions of Audis, driving all over the world, and we’ve really led in information technology. Now the next phase of information technology is artificial intelligence. So when you think about artificial intelligence, from the perspective of someone who’s been in the automotive industry so long, what is the implication of AI for the automotive world? And what is the implication of AI for Audi?
Scott Keogh – President of Audi America
I think it’s massive. You know, the first thing I just want to say, our partnership goes back 10 years, and if you think about where Audi was in America we were selling 60,000 cars a year, didn’t have much of an impact. If you look at this year we sold a record 210,000 cars. And the reason we did it honestly is the crazy technology that Audi engineers and your engineers put together — the virtual cockpit, Google Earth, Google Maps, point of interest, incredible stuff. So that’s the secret sauce. And I think with you we want to keep the secret sauce moving. Look, if I think of your entire presentation it’s quite simple to me: We want to get to this Nirvana state safer and we want to get there sooner. And really the only way to go about doing that there’s no amount of programming in the world that’s going to manage the complexity of what happens in the street environments daily. The only way to get there is with artificial intelligence. So clearly what we’re announcing is the first thing is a really cool demonstration in the Gold Lot you can drive a Q7 and I think what’s phenomenal when you look at the course this vehicle if you want to use the term has been trained for only four days. So in four days it’s dealing with obstacles, it’s dealing with different roads, it’s dealing with complex environments. Four days!
Jen-Hsun Huang – Founder and CEO, Nvidia
Did you guys just pick that up? There’s an Audi with an Nvidia AI car computer in the Gold Lot driving by itself today, right now.
Scott Keogh – President of Audi America
Exactly.
Jen-Hsun Huang – Founder and CEO, Nvidia
And I think — not end of next year, not end of this year, right now.
Scott Keogh – President of Audi America
Jen, I think that’s classic Audi, and then we want to put things into the real world, put things into action. And I think it’s cool. Now if you want to go program this it would probably take months upon months. So this gives you a sense of the power. And then of course the announcement here we’re excited, we’re talking highly automated vehicle in numerous situations by 2020. This will be in production level four autonomy automation, so this is huge, really huge.
Jen-Hsun Huang – Founder and CEO, Nvidia
We’ve been working together for 10 years and we’ve been in the automotive industry a long time. And obviously we’re both car buffs and we love car technology. But in no time in the history that I remember has the technological change in automotive industry has been so dramatic, so imaginative, so exciting and yet so fast. And yet at Audi you guys have managed somehow to always keep up with the pace. How would you describe the culture and the reason why it is that you guys have been able to stay up and always been in the forefront of it all?
Scott Keogh – President of Audi America
I think the thing we love at Audi is this: most car companies like to look to protect themselves. They protect themselves with their scale, they protect themselves with regulations. At Audi we don’t look to protect ourselves; we like to compete, we like to innovate. And obviously the automobile business had sort of been stuck in the automobile section for probably 50 years. Now it is on the front page of everything, whether it’s mobility, car-sharing, piloted driving or connectivity, all these powerful things that are happening. It’s so cool to be integrated into society.
And I think what we also look at, which is important, is of course the real meaning. And I think now we have a real societal meaning. You mentioned great stuff: we move goods, we move people, we share freedom, we have all kinds of things. But the fact that we can bring this kind of safety onto the road is sort of breathtaking. You can drive a car perfectly for years, look away for one-hundredth of a second, and then look at the disaster that can come; we can prevent that. My mother, 80 years old, drives a B9 A4 and loves the car. Her driving career is going to be coming to an end; this can extend it. My kids are going to be teenagers soon enough, and the most dangerous thing you can do as a teenager is drive a car. So these are big societal changes that we can make together. This gets you out of bed in the morning in a very big way.
Jen-Hsun Huang – Founder and CEO, Nvidia
This is cool, this is cool. Well, let’s make sure none of our kids ever have to drive.
Scott Keogh, President of Audi America, an incredible partner. I can’t wait to come back and share with you the progress we’re making in the coming years.
That’s it. OK, that’s it. No more announcements, no more announcements.
Well, you know, hey guys, this is the largest electronics show in the world: 200,000 attendees. It is so fun to be in the middle of the technology industry today. It is so fun to be in the middle of the computer industry, and it’s just incredibly thrilling to be in the middle of the automotive industry. Because of artificial intelligence, the technology that is going to reshape how we all enjoy technology in the coming years, we’re now able to realize the dreams that we’ve been dreaming about for so many years. What used to be science fiction is going to be reality in the coming years.
Today I had the privilege of sharing with you a few announcements about the things we’ve been working on. We want to bring video games to a billion people who currently simply don’t have the computers necessary to enjoy the kind of video games you saw from Aaron Flynn and BioWare. We want to turn your home into an AI: we believe that your home will engage you, and you will engage your home, in natural, simple ways, and it will arrange your life, help you find content, connect you with people and just make your life better. And of course, we would like to turn your car into an AI, so that by applying this technology we can revolutionize the automobile and bring joy, delight and safety to millions and millions of people in the future. Thank you very much. Have a great CES!