Nvidia CEO Jen-Hsun Huang Keynote at CES 2017 (Full Transcript)


Our AI car supercomputer used to be multiple chips, and we're building a new one called Xavier that fits into a little tiny computer like this. This is what an AI supercomputer looks like for your future self-driving car. It's a little tiny computer like this: sensor information comes in, and CAN information controls the accelerator, the brakes, the steering and all of the other things that you want to control inside the car. It runs a new operating system we call DriveWorks that takes multiple sensors in, fuses them, recognizes and perceives, localizes, reasons and drives, and it does so while connecting to the HD map and comparing where we are relative to the information that we get from the HD maps. Incredibly powerful: eight high-end CPU cores inside this chip, 512 cores of our next-generation GPU. It is ASIL rated; the computer is ASIL D, the chip is ASIL C. ASIL: automotive safety integrity level. The quality level, the reliability level of this computer is bar none. And then lastly we do all of that, the performance of a high-end gaming PC shrunk into a little tiny chip, 30 tera-ops, 30 trillion operations per second, in just 30 watts. A little tiny computer, and we can chain a whole bunch of these together depending on the application. And this will be the future of our self-driving car strategy.
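As a rough illustration of the pipeline described above (sense, fuse, perceive, localize against the HD map, reason, drive), here is a minimal sketch; every function and type name in it is a hypothetical stand-in, not the actual DriveWorks API.

```python
# Minimal sketch of the sense -> fuse -> perceive -> localize -> reason -> drive
# loop described in the keynote. All names are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class VehicleCommand:
    accelerator: float   # 0.0 .. 1.0
    brake: float         # 0.0 .. 1.0
    steering: float      # steering angle in radians

def fuse(sensor_frames):
    """Merge camera, radar, lidar and CAN data into one world model (placeholder)."""
    return {"frames": sensor_frames}

def perceive(world):
    """Detect and classify cars, pedestrians, lanes (placeholder)."""
    return []

def localize(world, hd_map):
    """Compare what the sensors see against the HD map to find the car's pose (placeholder)."""
    return (0.0, 0.0, 0.0)   # x, y, heading

def reason(objects, pose, hd_map):
    """Pick a safe trajectory and turn it into pedal and steering targets (placeholder)."""
    return VehicleCommand(accelerator=0.1, brake=0.0, steering=0.0)

def drive_step(sensor_frames, hd_map):
    world = fuse(sensor_frames)
    objects = perceive(world)
    pose = localize(world, hd_map)
    return reason(objects, pose, hd_map)

if __name__ == "__main__":
    print(drive_step(sensor_frames=[], hd_map=None))
```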

Let me show you now what our AI car computer can do. We created a car called BB-8 and it’s an AI car and it runs everything that I just described: this computer, this brand new operating system and a whole bunch of AI networks. Let’s roll it please.

[Video clip]

BB-8 is running on the East Coast, it's running on the West Coast, and it's just incredibly fun watching BB-8 zipping around. Incredible. So that's the Nvidia BB-8 AI car. Our vision is that these cars, running all of the AI networks that I just described and connected to HD maps in the cloud, should be able to drive from address to address in a very large part of the world. However, there are always places where the confidence of the AI, the confidence of the car, is not high enough to drive by itself. But because it knows where it's going to go, it knows the path, it could determine its confidence level in different parts of the path. And it could tell you which parts you should drive yourself, the parts where it has low confidence. We believe the car is going to be an AI for driving. But we believe the AI, the car itself, should also be an AI co-pilot.
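To make the idea of per-segment confidence concrete, here is a small hedged sketch; the data structures and the 0.9 threshold are illustrative assumptions, not anything NVIDIA has published.

```python
# Sketch: because the car knows its route in advance, it can score its own
# confidence per segment and tell the driver ahead of time which stretches
# it will drive and which it will hand back. Threshold and names are invented.

from dataclasses import dataclass
from typing import List

@dataclass
class RouteSegment:
    name: str
    ai_confidence: float   # 0.0 (no confidence) .. 1.0 (fully confident)

def plan_handovers(route: List[RouteSegment], threshold: float = 0.9):
    """Return (segment, who_drives) pairs for the whole trip."""
    return [(seg.name, "AI drives" if seg.ai_confidence >= threshold else "you drive")
            for seg in route]

route = [
    RouteSegment("highway on-ramp", 0.97),
    RouteSegment("downtown, heavy pedestrian traffic", 0.62),   # low confidence -> human drives
    RouteSegment("residential streets", 0.93),
]
for name, who in plan_handovers(route):
    print(f"{name}: {who}")
```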


So ladies and gentlemen, today we're announcing a new capability on the Nvidia AI car computer. It's called the AI co-pilot. And it basically works like this. Remember the car has sensors all around: it's got cameras, it's got radar, it's got lidar. So it has surround perception. The car also has cameras and speakers inside the car, and so it has internal car perception. It is aware of its surroundings, it's aware of the state of the driver, it's aware of the state of the passenger. And just as the Nvidia Spot gives you an array of speakers and microphones, there's an array of speakers and microphones inside the car. And so this car has incredible perception capability, so long as the AI is running all the time. We believe that the AI is either driving you or looking out for you; it's either driving you or looking out for you. When it's not driving you, it is still completely engaged. Even though it doesn't have the confidence to drive, because maybe the mapping has changed or maybe the road is too tricky or maybe there's just too much traffic, too many pedestrians, and so it shouldn't drive, it should still be completely aware and it should continue to look out for you. We call that the AI co-pilot. Surround and environmental awareness, as well as in-cabin passenger and driver awareness. Let me now show it to you.

And so what you're looking at here is the four cameras. Notice there are four cameras: front, rear, left and right. And in the front camera, notice there's a biker in front of you, about 45 feet ahead. And maybe your eyes aren't looking in that direction. Maybe your head is not looking in that direction. And the car realizes that you should be more cautious; in this case it detects that there's a biker and tells you through AI natural language processing, talking to you very naturally and alerting you to the condition in front of you.


Here’s another example.

[Careful! There is a motorcycle approaching the center lane]

And so in this case the idea is relatively simple but incredibly complex to execute. You have to understand what you're seeing and you have to convert what you're seeing into natural language, otherwise known as captioning. And then say it in a natural way in the car so that we can understand what the car would like us to be concerned about. And so that's environmental awareness, surround awareness.
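A minimal sketch of that see-it, caption-it, say-it flow might look like this; the detection, captioning and speech functions are stubbed placeholders, and none of the names are NVIDIA's.

```python
# Sketch of the co-pilot alert flow: detect something outside, turn it into a
# natural-language caption, and speak it only if the driver hasn't noticed it.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "motorcycle"
    camera: str       # "front", "rear", "left", "right"
    distance_ft: float

def caption(det: Detection) -> str:
    """Turn a detection into a natural-language warning (captioning)."""
    return (f"Careful! There is a {det.label} about {det.distance_ft:.0f} feet "
            f"ahead on the {det.camera} camera.")

def speak(text: str) -> None:
    """Placeholder for in-cabin text-to-speech."""
    print(text)

def copilot_alert(det: Detection, driver_is_looking: bool) -> None:
    # Only interrupt the driver when head pose / gaze says they haven't seen it.
    if not driver_is_looking:
        speak(caption(det))

copilot_alert(Detection("biker", "front", 45.0), driver_is_looking=False)
```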

We would also like to have the capability for AI inside the car. The AI should be paying attention to you too. Maybe you're not looking where you should be, where you're driving. Maybe you're dozing off, maybe you had a little bit too good of a time. The AI should pay attention; maybe the AI notices that you're a little bit aggravated and probably should pull over, too much road rage. And those are capabilities modern AI networks can absolutely do. So let's take a look at that.

[Video clip]

OK, so this is Ginny. This is one of our employees. Hey, let's put our hands together for Ginny. So the first network does facial recognition, OK, and it does facial recognition incredibly well. This is deep learning; facial recognition networks are among the best in the world, and they're reaching human-level capabilities.

This next one is head tracking. By looking at Ginny with a camera, the artificial intelligence network is able to determine where her head is pointing. OK, this one's really cool. The artificial intelligence network, the deep learning network, just by studying her eyes, is able to figure out what direction she's gazing. Maybe she's looking over here, maybe over there. OK, so that's called gaze tracking.
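Putting the three in-cabin networks together (face recognition, head tracking, gaze tracking), a simple attentiveness check could be sketched like this; the thresholds and field names are assumptions for illustration only.

```python
# Sketch of a driver-state check built from the in-cabin networks demonstrated
# above. The models are assumed to have already produced these measurements.

from dataclasses import dataclass

@dataclass
class DriverState:
    identity: str        # who is in the driver's seat (face recognition)
    head_yaw_deg: float  # where the head is pointing (head tracking)
    gaze_yaw_deg: float  # where the eyes are pointing (gaze tracking)
    eyes_closed: bool    # simple drowsiness cue

def is_attentive(state: DriverState, max_off_road_deg: float = 25.0) -> bool:
    """Attentive when the head and gaze are roughly on the road and the eyes are open."""
    return (not state.eyes_closed
            and abs(state.head_yaw_deg) <= max_off_road_deg
            and abs(state.gaze_yaw_deg) <= max_off_road_deg)

ginny = DriverState(identity="Ginny", head_yaw_deg=5.0, gaze_yaw_deg=40.0, eyes_closed=False)
print(is_attentive(ginny))   # False: her gaze is well off the road, so the co-pilot should speak up
```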

And this next one is really cool. This is lip reading: "Take me to Starbucks." So your car is too noisy and there are too many people talking, and yet you said something rather important. Wouldn't it be nice if your AI car was able to recognize and read your lips and determine what it is that you said? This particular capability was inspired by the researchers at Oxford working on LipNet, and they've been able to achieve a lip-reading capability that's 95% accurate; the best human is about 53% accurate. And so this gives you a sense of the state of the art of artificial intelligence networks. You combine what I just described, the external surround perception capability and the in-car passenger and driver perception, and the combination allows us to always be aware, to keep the driver as alert as possible, and to always be on the lookout for us.
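As a rough sketch of lip reading used as a command channel, the snippet below shows the shape of the idea; the model here is a placeholder, not LipNet itself, and the command table is invented for illustration.

```python
# Sketch: a sequence of mouth-region frames goes into a video-to-text sequence
# model, and the resulting text is matched against a small set of in-car commands.
# Useful when the cabin is too noisy for ordinary speech recognition.

from typing import List, Sequence

def lipread(mouth_frames: Sequence) -> str:
    """Placeholder for a LipNet-style video-to-text sequence model."""
    return "take me to starbucks"

KNOWN_COMMANDS = {
    "take me to starbucks": "set destination: Starbucks",
    "turn up the music": "increase volume",
}

def handle_silent_command(mouth_frames: List) -> str:
    text = lipread(mouth_frames)
    return KNOWN_COMMANDS.get(text, "unrecognized")

print(handle_silent_command(mouth_frames=[]))
```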


Not to mention all of the things that I described to you earlier about SHIELD and Google Assistant: we will surely have that assistant capability inside the car, so that you can talk to your car, get access to whatever information you want, and enjoy whatever content and media you would like. This is the Nvidia AI car platform.

From the bottom up, starting from the bottom, this is the Drive PX computer, and on top of it is the DriveWorks operating system. These two things are probably the most complex computer that we have ever built, and we have built some of the most complex computers the world has ever known. Consider the amount of data that's coming into this computer, the richness of the artificial intelligence networks and algorithms that we have to run, and the performance with which we have to run them, because time, in the case of a fast-moving car, is reaction time, and reaction time is safety. And so it is vital that we do this very, very quickly.