Google I/O 2016 Keynote Full Transcript

May 20, 2016

Here is the full transcript of the Google I/O 2016 conference keynote – the company’s annual developer conference held at Shoreline Amphitheater in Mountain View on May 18, 2016.



Sundar Pichai – CEO, Google

Mario Queiroz – Vice President of Product Management, Google

Erik Kay – Engineering Director at Google

Rebecca Michael – Head of Marketing, Communication Products at Google

Dave Burke – VP of Engineering, Android

Clay Bavor – VP, Virtual Reality at Google

David Singleton – Director, Android Wear

Jason Titus – VP, Developer Products Group at Google

Stephanie Cuthbertson – Group Product Manager, Android Studio

Ellie Powers – Product Manager of Google Play



Sundar Pichai – CEO, Google

Welcome! Welcome to Google I/O and welcome to Shoreline. It feels really nice and different up here. We’ve been doing it for many, many years in Moscone, and in fact, we’ve been doing I/O for 10 years, but I feel we are at a pivotal moment in terms of where we are going as a company and felt it appropriate to change the venue.

Doing it here also allows us to include a lot more of you. There are over 7,000 of you joining in person today. And later today, after the keynote, you’ll be joined by several Googlers, product managers, engineers, and designers, so hopefully you’ll engage in many, many conversations over the three days.

As always, I/O is being live-streamed around the world. This year we have the largest-ever audience. We are live-streaming this to 530 external events in over a hundred countries around the world, including Dublin, which is a major tech hub in Europe; Istanbul, which has our oldest Google developer group; and even Colombo, Sri Lanka, which has the largest attendance outside of the US with 2,000 people.

Our largest developer audience on the live stream today is from China, with over 1 million people tuning in live, so welcome to those users as well.

We live in very, very exciting times. Computing has had an amazing evolution. Stepping back, Larry and Sergey founded Google 17 years ago with the goal of helping users find the information they need. At the time, there were only 300 million people online. Most of them were on big physical computers, on slow internet connections.

Fast-forward to today: thanks to the rate at which processors and sensors have evolved, it is truly the moment of mobile. There are over 3 billion people connected, and they are using the internet in ways we have never seen before. They live on their phones. They use them to communicate, learn new things, gain knowledge, and entertain themselves. They tap an icon and expect a car to show up. They talk to their phones and even expect music to play in the living room, or sometimes groceries to show up at the front door.

So we are pushing ourselves really hard so that Google is evolving and staying a step ahead of our users. All the queries you see behind me are live queries coming in from mobile. In fact, today, over 50% of our queries come from mobile phones. And the queries in color you see behind me are voice queries. In the US, on our mobile app in Android, one in five queries — 20% of our queries — are voice queries and that share is growing.

Given how differently users are engaging with us, we want to push ourselves and deliver them rich information in the context of mobile. This is why, if you come to Google today and search for Beyoncé, you don’t just get ten blue links. You get a rich information card with music. You can listen to her songs, find information about upcoming shows, and book tickets right there.

You can come and ask us different queries, like presidential elections or the Champions League, and we again give you rich, in-depth information. And we do this across thousands and thousands of categories, globally, at scale. You can come to Google looking for news as well. For example, if you’re interested in Hyperloop, an exciting technology, we give you information with AMP right there in the search results: the pages load instantly and you can scroll through them.

Amazing to see how people engage differently with Google. It’s not enough just to give them links; we really need to help them get things done in the real world. This is why we are evolving search to be much more assistive. We’ve been laying the foundation for this for many, many years through investments in deep areas of computer science. We built the Knowledge Graph. Today we have an understanding of 1 billion entities (people, places, and things) and the relationships between them in the real world.

We have dramatically improved the quality of our voice recognition. We recently started deliberately training on data sets with noisy backgrounds, so that we can hear people more accurately. The quality has recently improved by 25%.

Image recognition and computer vision. We can do things we never thought we could do before. If you’re in Google Photos today and you search for “hugs,” we actually pull up all the pictures of people hugging in your personal collection. We have recently extended this to videos, so you can say “show me my dog videos” and we actually go through your videos and pull out the ones with your dog.

Translation. 10 years ago, we could machine-translate between two languages. Today we do that for over a hundred languages, and every single day we translate over 140 billion words for our users. We even do real-time visual translation. If you’re a Chinese user and you run into a menu in English, all you need to do is hold up your phone and we can translate it into Chinese for you.

Progress in all of these areas is accelerating, thanks to profound advances in machine learning and AI, and I believe we are at a seminal moment. We as Google have evolved significantly over the past 10 years and we believe we are poised to take a big leap forward in the next 10 years.

So leveraging our state-of-the-art capabilities in machine learning and AI, we truly want to take the next step in being more assistive for our users. So today, we are announcing the Google Assistant.

So what do we mean when we say the Google Assistant? We want to be there for our users, asking them, “Hi, how can I help?” We think of the Assistant in a very specific way. We think of it as a conversational assistant. We want users to have an ongoing, two-way dialogue with Google. We want to help you get things done in your real world, and we want to do it for you, understanding your context and giving you control of it. We think of this as building each user their own individual Google.

We already have elements of the Assistant working hard for our users. I mentioned earlier that 20% of queries on our mobile app on Android in the US are voice queries. Every single day, people say, “Okay, Google…” and ask us questions that we help them with, and we have started becoming truly conversational because of our strengths in natural language processing. For example, you can be in front of this structure in Chicago and ask Google, “Who designed this?” You don’t need to say “the Bean” or “Cloud Gate.” We understand your context and we answer that the designer is Anish Kapoor.

Here’s another example. You can ask Google, “Who directed The Revenant?”

[Google: The Revenant was directed by Alejandro Iñárritu]

And you can follow that up with a question: “Show me his awards.” Notice that I didn’t say the name, which I am glad about because I find that name very, very hard to pronounce. And Google could pick that conversation up and return the right answer. This has historically been really hard for computers to do. The reason we are able to do it is because we have invested the last decade in building the world’s best natural language processing technology. Our ability to do conversational understanding is far ahead of what other assistants can do. Especially if you look at follow-on queries, our studies show that we are an order of magnitude ahead of everyone else.

So today people are using Google and asking us questions in many, many different ways. So we’ve put together a short video so that you can take a look.

[Video Presentation]

As you can see, users are already looking to Google to help them get things done, but we believe we are just getting started. We believe this is a long journey, and given it’s a journey, we want to talk to you a little bit about the future. We want to show you the kind of things we aspire to be able to do. Let me do that with an example.

Here’s a common situation, and I’m sure many of you can relate to it. It’s a Friday night. I’m back home, and I want to take my family to a movie. You know, you normally pull out your phone, research movies, look at the reviews, find shows nearby, and try to book a ticket. We want to be there in these moments, helping you.

So you should be able to ask Google, “What’s playing tonight?” And by the way, today, if you ask that question, we do return movie results, but we want to go a step further. We want to understand your context and maybe suggest three relevant movies nearby which you would like. I should be able to look at it and maybe tell Google, “We want to bring the kids this time.” If that’s the case, Google should refine the answer and suggest family-friendly options, and maybe even ask me, “Would you like four tickets to any of these?” And if I say, “Sure, let’s do Jungle Book,” it should go ahead and get the tickets and have them ready and waiting for me when I need them.

As you can see, I engaged in a conversation with Google and it helped me get things done in my context. And by the way, this is just one version of the conversation. It could have gone many, many different ways. For example, when Google returned the results, I could have asked, “Is Jungle Book any good?” and Google could have given me the reviews and maybe even shown me a trailer. And by the way, I saw the movie; it’s terrific, and I hope you get to see it as well.
