Nvidia CEO Jen-Hsun Huang Keynote at CES 2017 (Full Transcript)

And if you would like to have access to even more games, we now have a thousand games in the NVIDIA SHIELD game store. This is just an incredible amount of content. In 4K HDR, you can stream games from your PC through Valve's Steam, and you can of course enjoy great games from the NVIDIA store, in 4K HDR.

But we didn’t stop there. The two most popular consumer electronics platforms today. One is the smart television, products like Apple TV and the SHIELD. But the other is the Amazon Echo. It brings AI into your home, allows you to communicate naturally with an AI assistant. We thought why have two devices when you can have one? And so we decided to work with Google to create the world’s first Android TV with the Google Assistant.

Now your television, which is the largest screen in your house, can be controlled through natural language. You can control your content, you can access your content, see content, find content, play it, stop it, fast-forward it, look for photographs, ask it questions, you can even control your home. But we didn’t stop there.

We felt that if you had a Google Assistant, and you had an AI agent in your home, it seems to me that you would want to have it all over your home, that you shouldn't have to lean over to the coffee table and yell commands at it all the time. Wouldn't it be nice if your AI was completely ambient, and you're just talking naturally, and there are many spots around the house where you could ask your AI agent to help you do things: maybe call an Uber, maybe make coffee, maybe turn on some music, ask it about the weather?

So we decided to create a peripheral for SHIELD, and this is an AI microphone. This AI mic could be anywhere in the house. It's this little tiny thing, and we'd love to show it to you now. Announcing the Nvidia Spot.

[Hi, how can I help?]

This little tiny device plugs directly into the wall, and because the computing is done on SHIELD, we could have a whole bunch of these all over the house. This little tiny microphone has far-field processing and echo cancellation, so it picks up your speech relatively naturally from 20 feet away. And if you have multiple of these devices in a large room, it also triangulates where you are using beamforming. Based on the subtle differences in the arrival times of your voice, it figures out where you are in the room and who's talking, and with beamforming it focuses on the person who's actually talking. You can have all of these devices all over the house and they all go through one SHIELD over Wi-Fi. Incredible, incredible capability, the brand-new Nvidia Spot.
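To see how arrival-time differences can localize a talker, here is a toy sketch of the idea for a two-microphone array. Everything here (the sample rate, the 10 cm mic spacing, the function names) is illustrative only; it is not Nvidia's implementation, just the textbook cross-correlation approach the description alludes to.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, at room temperature


def tdoa(sig_a, sig_b, sample_rate):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals via cross-correlation. Positive means the sound
    reached mic A later than mic B."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # lag in samples
    return lag / sample_rate


def bearing(delay, mic_spacing):
    """Convert an arrival-time delay into a bearing angle (degrees)
    for a two-mic array with the given spacing in meters."""
    # For a far-field source, delay = spacing * sin(angle) / c.
    x = np.clip(delay * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(x)))


# Synthetic demo: the same impulse reaches mic B two samples before mic A.
fs = 16000
sig_a = np.zeros(200)
sig_b = np.zeros(200)
sig_a[102] = 1.0
sig_b[100] = 1.0

delay = tdoa(sig_a, sig_b, fs)          # 2 samples -> 0.125 ms
angle = bearing(delay, mic_spacing=0.1)  # roughly 25 degrees off-axis
```

With more than two mics, the same delays feed both triangulation (intersecting bearings from mic pairs) and beamforming (summing the signals with compensating delays so the talker's direction adds constructively).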

We made a short little video to help you understand how we imagine people using the Google Assistant and the Nvidia Spot in your home. Let’s play it.

[Video clip]

What do you guys think? The combination of a smart TV and an AI assistant in your home that is completely ambient completely changes how we interact with our house. I think in the future our house is going to become an AI, and increasingly the vision of Jarvis is going to be realized. And whereas Mark Zuckerberg at Facebook has incredible programming skills and could take a year to build Jarvis for his home, I've decided that we should build it for all of you. And so today, with the new SHIELD, with the Nvidia Spot, the Google Assistant, and the integration with the SmartThings Hub that connects to hundreds of consumer electronics devices (smart plugs, coffeemakers, garage doors, locks, thermostats, smart cameras), we can now integrate all of those devices, and all of the exciting ones that will be announced this year, into the SHIELD experience, all controlled by the Google Assistant.


So ladies and gentlemen, the brand new SHIELD for $199. The Nvidia Spot will be available as a separate peripheral and we’ll announce it in the coming months. The Nvidia SHIELD.

Now let's talk about AI for transportation. As we all know, transportation is one of the largest industries in the world. There are a billion cars on the road. On a single day, 20 million ride-shares are taken on Didi and Uber alone. There are 300 million trucks on the road carrying things to us so that we can live our lives. The infrastructure of society is made possible by all of these trucks, and they're carrying things over a trillion miles a year. There are half a million buses in large cities alone helping us with public transportation. This transportation industry is one of the largest industries in the world. It is also one of the most vital; without it society doesn't move forward, without it we don't have the fundamental infrastructure to live our lives. Everything we enjoy, everything we own, everything we eat and nourish our families with, all of it is a result of transportation.

And yet this is also one of the industries with the largest waste. And the largest waste comes from human error. The fact of the matter is that these massive machines shouldn't be operated by humans, or they should be operated by humans with quite a substantial amount of assistance. The amount of waste that comes from accidents, whether it's damages, loss of lives, emergency room visits, or insurance, measures in the hundreds of billions of dollars a year. Over the course of a decade or two, the waste and the human suffering that comes from it is enormous.

There are other forms of waste as well. Most of the cars that we enjoy are mostly parked. By and large, everywhere we look there are parked cars. There are parked cars at beaches and parks, there are parked cars in cities, there are parked cars on campuses. By the way, this is the Nvidia campus. There are parked cars littering streets everywhere. Wouldn't it be amazing if we also reduced that waste? How could we change the face of our community, how could we reinvent our lives?

The amount of waste we could help reduce, if we could somehow bring to bear autonomous vehicles, otherwise known as self-driving cars, would be absolutely amazing. This is the reason why we decided almost a decade ago to start working on autonomous vehicles, and to start developing the technology necessary for your car to someday become essentially your most personal robot: one so intelligent that it is able to perform its function, enhance your mobility, and keep people safe, while of course doing it intelligently and keeping people out of harm's way.


The technology necessary to do so was incredibly hard until just very recently. GPU deep learning has made it possible for us to finally imagine realizing this vision in the next year. We can realize this vision right now. Now, when you think about self-driving cars, at some level it's relatively easy. It's easy because we all do it. It's so easy that we can't even explain it. How do you explain, on a sheet of paper, to somebody who has never driven a car, how to drive a car? Well, the reason is that the intelligence that comes to us very naturally is incredibly hard for computers. And as I mentioned earlier, deep learning has made it possible for us to finally crack that nut. With deep learning we can now perceive the world, not just sense the world. Sensation is seeing, hearing, touching; those are senses. Perception is accumulating all of those senses and building a mental model of what it is that you're perceiving.

We can now finally, with deep learning, perceive the environment surrounding the car. We can also reason about where the car is, where everything else is around the car, and where everything will be in the near future. Using artificial intelligence, we can predict where everything around us will be, and where we will be, so that we can decide whether the path we're on, or the new path we're going to take, is going to be safe.
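The prediction-and-safety-check idea can be sketched in miniature. This is a deliberately simplified stand-in, using constant-velocity extrapolation in place of the learned prediction the talk describes, and the function names, time horizon, and safety margin are all illustrative assumptions.

```python
import numpy as np


def predict(pos, vel, horizon, dt):
    """Constant-velocity extrapolation: an object's predicted 2D
    positions over the next `horizon` steps of `dt` seconds each.
    (A toy stand-in for a learned motion-prediction model.)"""
    t = np.arange(1, horizon + 1)[:, None] * dt  # column of future times
    return np.asarray(pos, float) + t * np.asarray(vel, float)


def path_is_safe(ego_pos, ego_vel, other_pos, other_vel,
                 horizon=10, dt=0.1, margin=2.0):
    """True if the ego car and another object stay at least `margin`
    meters apart along their predicted paths."""
    ego = predict(ego_pos, ego_vel, horizon, dt)
    other = predict(other_pos, other_vel, horizon, dt)
    closest = np.min(np.linalg.norm(ego - other, axis=1))
    return bool(closest >= margin)


# A car traveling alongside us stays safely separated...
parallel_ok = path_is_safe([0, 0], [10, 0], [5, 3], [10, 0])
# ...while an oncoming car on our line is flagged as unsafe.
oncoming_ok = path_is_safe([0, 0], [10, 0], [5, 0], [-10, 0])
```

A real planner replaces the constant-velocity model with learned predictions and evaluates many candidate paths, but the structure (predict everyone forward, then check clearances) is the same.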

We also have the ability now, as I've shown you earlier with robotics, with motor skills and walking skills, to use the same technology to teach a car how to drive just by watching us. By watching us, observing us, learning from us, a car could literally learn how to drive. That is supported by HD maps, which are maps in the cloud; another way of thinking about them, in the context of intelligence, is as knowledge, a priori knowledge. We can now compare what we perceive with what we know to be true in the cloud and determine what to do. And meanwhile, you can never stop learning. The world is changing all the time. It seems every single week a road is being torn up or fixed, something is being added, lanes are being added, roads are being shifted. We have to continuously map, continuously relearn the environment. All of that requires AI computing. It is one of the reasons why we dedicated ourselves to building a fundamentally new computer that would go into a car, one with the ability to do all of this deep learning processing at very high rates. We call it an AI car supercomputer.
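Learning to drive "by watching us" is, in machine-learning terms, behavior cloning: supervised learning on recorded human demonstrations. Here is a minimal sketch of that idea. The real systems use deep networks on camera images; this toy version uses a linear model on made-up three-number "observations" purely to show the structure, and every name and number in it is a hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration log: each row is a simplified observation
# of the road (here just random features); the label is the steering
# angle the human driver chose at that moment. We synthesize the
# "human" with a hidden linear rule so the demo is self-contained.
true_rule = np.array([0.5, -1.2, 0.3])
observations = rng.normal(size=(500, 3))
human_steering = observations @ true_rule

# Behavior cloning reduces to supervised regression: fit a policy
# that reproduces the demonstrated steering commands.
weights, *_ = np.linalg.lstsq(observations, human_steering, rcond=None)


def policy(obs):
    """Steering command the cloned policy predicts for an observation."""
    return obs @ weights
```

On this noiseless synthetic data the least-squares fit recovers the demonstrator's rule exactly; with real driving data, the same recipe swaps in a convolutional network and minimizes the same imitation loss.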