Following is the full transcript of the Google I/O 2018 developer keynote. Google CEO Sundar Pichai and his team announced the latest products and services from the company. The event took place on Tuesday, May 8, 2018 at Shoreline Amphitheatre in Mountain View, California, United States.
Speakers at the event:
Sundar Pichai – CEO, Google
Scott Huffman – Vice President, Google Assistant
Lilian Rincon – Director of Google Assistant
Trystan Upstill – Google News Engineer
Dave Burke – VP of Engineering, Android
Sameer Samat – VP of Android and Play
Jen Fitzpatrick – VP, Google Maps & Local
Aparna Chennapragada – VP of Product for AR and VR, Google
John Krafcik – CEO of Waymo
Dmitri Dolgov – VP of Engineering, Waymo
Sundar Pichai – CEO, Google
Good morning. Welcome to Google I/O. It's a beautiful day, I think warmer than last year. Hope you are all enjoying it. Thank you for joining us. I think we have over 7,000 people here today, and we are livestreaming this to many locations around the world as well. So thank you all for joining us today. We have a lot to cover.
But before we get started, I had one important piece of business which I wanted to get out of the way. Towards the end of last year it came to my attention that we had a major bug in one of our core products. It turns out we got the cheese wrong in our burger emoji. Anyway, we got hard to work; I never knew so many people cared about where the cheese is. We fixed it. You know, the irony of the whole thing is I'm a vegetarian in the first place. So we fixed it.
Hopefully we got the cheese right, but as we were working on this, this came to my attention. I don't even want to tell you the explanation the team gave me as to why the foam was floating above the beer. But we restored the natural laws of physics. So all is well. We can get back to business. We can talk about all the progress since last year's I/O.
I'm sure all of you would agree it's been an extraordinary year on many fronts. I'm sure you've all felt it. We're at an important inflection point in computing, and it's exciting to be driving technology forward. And it's made us even more reflective about our responsibilities. Expectations for technology vary greatly depending on where you are in the world or what opportunities are available to you. For someone like me, who grew up without a phone, I can distinctly remember how gaining access to technology can make a difference in your life. And we see this in the work we do around the world. You see it when someone gets access to a smartphone for the first time. And you can feel it in the huge demand for digital skills we see. That's why we've been so focused on bringing digital skills to communities around the world.
So far we have trained over 25 million people, and we expect that number to rise to over 60 million in the next five years. It's clear technology can be a positive force. But it's equally clear that we can't just be wide-eyed about the innovations technology creates. There are very real and important questions being raised about the impact of these advances and the role they will play in our lives. So we know the path ahead needs to be navigated carefully and deliberately, and we feel a deep sense of responsibility to get this right.
That's the spirit with which we are approaching our core mission: to make information more useful, accessible, and beneficial to society. I've always felt that we were fortunate as a company to have a timeless mission that feels as relevant today as when we started. And we're excited about how we can approach our mission with renewed vigor, thanks to the progress we see in AI. AI is enabling us to do this in new ways: solving problems for our users around the world.
Last year at Google I/O, we announced Google AI. It’s a collection of our teams and efforts to bring the benefits of AI to everyone. And we want this to work globally, so we’re opening AI centers around the world. AI is going to impact many many fields, and I want to give you a couple of examples today.
Health care is one of the most important fields AI is going to transform. Last year we announced our work on diabetic retinopathy. It's a leading cause of blindness, and we use deep learning to help doctors diagnose it earlier. We've been running field trials since then at Aravind and Sankara hospitals in India, and the field trials are going really well. We are bringing expert diagnosis to places where trained doctors are scarce.
It turned out that, using the same retinal scans, there were things which humans didn't quite know to look for, but our AI systems offered more insights. The same eye scan, it turns out, holds information with which we can predict the five-year risk of you having an adverse cardiovascular event: heart attack or stroke. So to me the interesting thing is that, beyond what doctors could find in these eye scans, the machine learning systems offered newer insights. This could be the basis for a new non-invasive way to detect cardiovascular risk. We just published the research, and we are going to be working to bring this to field trials with our partners.
Another area where AI can help is to actually help doctors predict medical events. Turns out doctors have a lot of difficult decisions to make, and for them, getting advance notice, say 24 to 48 hours before a patient is likely to get very sick, makes a tremendous difference in the outcome. And so we have put our machine learning systems to work. We've been working with our partners using de-identified medical records, and it turns out if you go and analyze over 100,000 data points per patient, more than any single doctor could analyze, we can actually quantitatively predict the chance of readmission 24 to 48 hours earlier than traditional methods. It gives doctors more time to act. We are publishing our paper on this later today, and we're looking forward to partnering with hospitals and medical institutions.
Another area where AI can help is accessibility. You know, we can make day-to-day use cases much easier for people. Let's take a common use case. You come back home at night and you turn your TV on. It's not that uncommon to see two or more people passionately talking over each other. Imagine if you're hearing impaired and you're relying on closed captioning to understand what's going on. This is how it looks to you.