Google I/O 2016 Keynote Full Transcript

Translation. 10 years ago, we could machine-translate between just two languages. Today we do that for over a hundred languages, and every single day we translate over 140 billion words for our users. We even do real-time visual translation. If you’re a Chinese user and you run into a menu in English, all you need to do is hold up your phone and we can translate it into Chinese for you.

Progress in all of these areas is accelerating, thanks to profound advances in machine learning and AI, and I believe we are at a seminal moment. We as Google have evolved significantly over the past 10 years and we believe we are poised to take a big leap forward in the next 10 years.

So leveraging our state-of-the-art capabilities in machine learning and AI, we truly want to take the next step in being more assistive for our users. So today, we are announcing the Google Assistant.

So what do we mean when we say the Google Assistant? We want to be there for our users, asking them, “Hi, how can I help?” We think of the assistant in a very specific way. We think of it as a conversational assistant. We want users to have an ongoing, two-way dialogue with Google. We want to help you get things done in your real world, and we want to do it for you, understanding your context and giving you control of it. We think of this as building each user their own individual Google.

We already have elements of the assistant working hard for our users. I mentioned earlier that 20% of queries on our mobile app on Android in the US are voice queries. Every single day, people say, “Okay, Google…” and ask us questions that we help them with, and we have started becoming truly conversational because of our strengths in natural language processing. For example, you can be in front of this structure in Chicago and ask Google, “Who designed this?” You don’t need to say “the Bean” or “Cloud Gate.” We understand your context and we answer that the designer is Anish Kapoor.

Here’s another example. You can ask Google, “Who directed The Revenant?”

[Google: The Revenant was directed by Alejandro Iñárritu]

And you can follow that up with a question: “Show me his awards.” Notice that I didn’t say the name, which I am glad about because I find that name very, very hard to pronounce. And Google could pick that conversation up and return the right answer. This has historically been really hard for computers to do. The reason we are able to do it is because we have invested the last decade in building the world’s best natural language processing technology, and our ability to do conversational understanding is far ahead of what other assistants can do. Especially if you look at follow-on queries, our studies show that we are an order of magnitude ahead of everyone else.
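To make the follow-on query idea concrete, here is a toy Python sketch, purely illustrative and not Google’s actual system: a dialogue state remembers the last entity that was answered about, so a pronoun like “his” can be resolved without repeating the name. The DialogueState class and the FACTS table are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DialogueState:
    last_entity: Optional[str] = None  # entity the previous answer was about

    def resolve(self, query: str) -> str:
        # Naive pronoun substitution: replace "his"/"her"/"their" with the
        # possessive form of the entity remembered from the prior turn.
        if self.last_entity:
            for pronoun in ("his", "her", "their"):
                query = query.replace(f" {pronoun} ", f" {self.last_entity}'s ")
        return query

# Hypothetical lookup standing in for a real knowledge graph.
FACTS = {"who directed the revenant": "Alejandro Iñárritu"}

state = DialogueState()
state.last_entity = FACTS["who directed the revenant"]  # remember the answer
print(state.resolve("show me his awards"))
# -> show me Alejandro Iñárritu's awards

Real systems resolve references with far richer models, but the core idea is the same: the second query is interpreted against state carried over from the first.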

So today people are using Google and asking us questions in many, many different ways. So we’ve put together a short video so that you can take a look.

[Video Presentation]

As you can see, users are already looking to Google to help them get things done, but we believe we are just getting started. We believe this is a long journey, and given it’s a journey, we want to talk to you a little bit about the future. We want to show you the kind of things we aspire to be able to do. Let me do that with an example.

Here’s a common situation. It’s a Friday night. I’m sure many of you can relate to it. I’m back home, and I want to take my family to a movie. You know, you normally pull out your phone, research movies, look at the reviews, find shows nearby, and try to book a ticket. We want to be there in these moments, helping you.

So you should be able to ask Google, “What’s playing tonight?” And by the way, today, if you ask that question, we do return movie results, but we want to go a step further. We want to understand your context and maybe suggest three relevant movies nearby which you would like. I should be able to look at it and maybe tell Google, “We want to bring the kids this time,” and then, if that’s the case, Google should refine the answer and suggest family-friendly options, and maybe even ask me, “Would you like four tickets to any of these?” And if I say, “Sure, let’s do Jungle Book,” it should go ahead and get the tickets and have them ready and waiting for me when I need them.
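A second toy sketch of the multi-turn refinement this movie example describes; the movie list, the context dictionary, and handle_turn are all invented for illustration. Each user turn updates a shared context that narrows the results instead of starting the query over.

# Hypothetical listings; in reality these would come from a showtimes service.
movies = [
    {"title": "The Jungle Book", "family_friendly": True},
    {"title": "The Revenant", "family_friendly": False},
    {"title": "Zootopia", "family_friendly": True},
]

context = {"family_friendly": False, "tickets": None}

def handle_turn(utterance: str) -> list:
    # Each turn refines the shared context rather than resetting it.
    if "kids" in utterance.lower():
        context["family_friendly"] = True
        context["tickets"] = 4  # hypothetical: party size inferred, then confirmed
    return [m["title"] for m in movies
            if not context["family_friendly"] or m["family_friendly"]]

print(handle_turn("What's playing tonight?"))
# -> ['The Jungle Book', 'The Revenant', 'Zootopia']
print(handle_turn("We want to bring the kids this time."))
# -> ['The Jungle Book', 'Zootopia']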

As you can see, I engaged in a conversation with Google and it helped me get things done in my context. And by the way, this is just one version of the conversation. It could have gone many, many different ways. For example, when Google returned the results, I could have asked, “Is Jungle Book any good?” and Google could have given me the reviews and maybe even shown me a trailer. By the way, I saw the movie; it’s terrific, and I hope you get to see it as well.

Every single conversation is different. Every single context is different. And we are working hard to do this for billions of conversations, for billions of users around the world, for everyone. We think of the assistant as an ambient experience that extends across devices. I think computing is poised to evolve beyond just phones. It will be in the context of a user’s daily life: on their phones, on devices they wear, in their cars, and even in their living rooms. For example, suppose you’re in one of the hundred different Android Auto models, you’re driving, and you say, “Let’s have curry tonight.” We know the Warriors are on tonight and Steph Curry is playing, but all you’re looking for is food, so we should be smart, order that food, let you know when it is ready, and maybe even have it waiting for you at your home.
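One more toy sketch, this time of the “curry” disambiguation; the scoring weights and context signals below are invented for illustration, not Google’s ranking. The same word maps to two interpretations, and situational context tips the balance.

def disambiguate(query: str, context: dict) -> str:
    # "curry" is ambiguous between the food and the basketball player.
    text = query.lower()
    scores = {"food": 0.0, "basketball": 0.0}
    if "curry" in text:
        scores["food"] += 1.0
        scores["basketball"] += 1.0
    # Invented context signals that shift the balance.
    if "have" in text and "tonight" in text:
        scores["food"] += 1.0      # "let's have X tonight" reads as a meal request
    if context.get("driving_home"):
        scores["food"] += 0.5      # commute home around dinner time
    if context.get("warriors_game_on"):
        scores["basketball"] += 0.5
    return max(scores, key=scores.get)

print(disambiguate("Let's have curry tonight",
                   {"driving_home": True, "warriors_game_on": True}))
# -> food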
