Full transcript of the opening-day keynote of the Google I/O 2015 developer conference, held May 28-29, 2015, in San Francisco.
Speakers:
Sundar Pichai – SVP, Android, Chrome and Apps, Google
Dave Burke – VP of Engineering, Android
David Singleton – Director, Android Wear
Aparna Chennapragada – Director, Google Now
Anil Sabharwal – Director, Google Photos
Jen Fitzpatrick – VP of Engineering
Jason Titus – Senior Director of Engineering
Ellie Powers – Product Manager of Google Play
Clay Bavor – VP, Product Management, Google
Operator: Welcome to the stage Senior Vice President of Products, Sundar Pichai.
Sundar Pichai – SVP, Android, Chrome and Apps, Google
Good morning. Welcome to Google I/O 2015. Thank you for joining us today. I know there were long lines. Thank you for making it in. We are being joined by over 2 million people on the live stream. So welcome to all of them as well.
As always, we live-stream I/O to many, many locations around the world, in fact, to over 460 locations in 90 countries across six continents. Let’s say hello to a couple of them. First is to Mexico City. Bienvenidos.
Let’s move on. We are moving to Munich in Germany. Guten Tag.
And finally to a small town, Juja, outside of Nairobi in Kenya. Habari. There’s a college there, and so we have many students joining us. It’s really exciting to be here today. This is, of course, the moment of mobile and the smartphone. We’ve been talking about the mobile revolution for a while. But just since last year’s I/O, over 600 million people have adopted a smartphone for the first time, and they’re beginning the journey of computing. So the moment we live in is incredibly special.
So at Google, we have always been working hard to build products for everyone in the world.
We went on to solve many more problems. We asked, why does email have to be so slow? Why couldn’t you search through all your email? That led us to Gmail. We noticed people were really interested in the real world around them. That led us to Google Maps and YouTube. Over time, we built two computing platforms: Chrome, because we noticed browsers were very slow and not safe to use; and Android, because the team noticed the fragmentation, how difficult it was to build mobile phones, how confusing the user experience was, and how difficult the developer experience was. We brought that together in the form of Android.
Each of these products today works at scale for everyone in the world. And we are privileged to serve over a billion users with each of these products. So today, in this moment of mobile, at I/O, we’re going to talk about two things. The first is how we are evolving our computing platforms, not just for mobile, but beyond mobile for a multi-screen world. The second is how Google, core to our mission of organizing the world’s information, is really evolving the mobile experience for users.
So let’s get started with our computing platforms and Android. Android is working at scale. Last year, eight out of 10 phones that were shipped were based on Android. The breadth and depth of what we see in Android is just stunning. Let’s visualize it and internalize it for a minute. So behind me, you’re going to see a dot, a single dot, for every active Android phone out there. And we are representing the range with colors. Blue stands for high-end phones with high pixel density and lots of RAM, phones like the Samsung Galaxy S6, the LG G4, et cetera. On the other end of the spectrum is what you see in emerging markets: small, affordable, entry-level smartphones. There are over 4,000 distinct devices you see in Android. The range of what we see is what we really embrace.
In fact, when we say be together, not the same, that is precisely what we mean. We want to make sure we leave no one behind. We want to provide Android for users the way they like it, so that it works for them.
So today, we are also going to talk about how Android is not only evolving for mobile, but how we are taking computing beyond mobile as well. Last year at Google I/O, we talked about Android evolving to many, many form factors. So let’s see how we are doing. We have to remember, with the phone, we started with one phone, and today we serve 400 OEMs, 500 carriers, and over 4,000 devices. The same journey is underway in each of these areas.
For Android Wear, we started with two models. Today, we are up to over seven models, and there are many more to come. The team has been evolving the software continuously, and you’ll get a full update on that today.
Android Auto. We announced this last year, along with the Open Automotive Alliance. Just last week, Hyundai announced that its Sonata models with Android Auto are available in U.S. dealerships right now. GM announced that 13 of its Chevrolet models for 2016 will be based on Android Auto. Volkswagen announced just this week that its entire 2016 lineup, including the Passat and its most common models in Europe and North America, will be based on Android Auto. Ford, GM, Mitsubishi: we have over 35 car manufacturers beginning to ship. And so we will have a whole range of vehicles coming to market soon.
Android TV, we announced last year with a reference device, the ADT-1. Today, we have Sony and Sharp televisions shipping in the U.S., Philips, which is very popular in Europe, and many more models coming. There are many, many streaming consoles: the NVIDIA SHIELD is a great console, Razer has one, and the range of content we are seeing on Android TV is pretty breathtaking. We have grown our user base there; we’ve doubled in the last three months alone. Of course, for televisions, we also have a simple and elegant solution in the form of Chromecast. People have bought over 17 million devices. They have pressed the Cast button 1.5 billion times. And today you have over 20,000 applications you can cast from. All of this is powered by an incredible content ecosystem in Google Play. And today I’m very excited to announce that HBO Now is coming to Google Play for the first time, and it’s available now across Android and iOS using Cast. You can watch your favorite episodes, be it Game of Thrones, the upcoming True Detective season, or maybe even your favorite episode of Silicon Valley. I hope this moment doesn’t make it in there.
So we always start I/O by giving an update on our upcoming platform release. That’s the foundation of everything we do. Last year, “L” was a major release for us in which we tackled many, many form factors. For “M,” we have gone back to the basics. We have really focused on polish and quality. We have fixed literally thousands of bugs. More importantly, we thought through every detail to make it better. To give you a preview of the upcoming “M” release, let me invite Dave Burke onto the stage.
Dave Burke – VP of Engineering, Android
Thanks, Sundar. So this year, we’ve made a conscious decision to focus on quality end-to-end. Today, I’m excited to share a preview of the new “M” release of Android. The central theme of M is improving the core user experience of Android. Our focus is on product excellence, everything from squashing thousands of bugs to rethinking fundamental aspects of how the platform has worked for years.
Now, one of the unique and amazing things about Android is that it’s an open platform. We make the source code available to everyone, from hobbyists to the world’s largest device manufacturers, and this has enabled device makers both small and large to innovate and iterate on Android, often many years ahead of the competition.
With “M,” we’re excited to be able to fold in some of these improvements that we’ve seen in the ecosystem into the official Android platform, so we can make them more widely available to users and app developers alike. So let me take a few minutes and walk you through six key areas where we really improved the core user experience in “M.”
Now, one of the things Android users tell us they love is their ability to customize and really control the behavior of their phones. So in “M,” we’re bringing this approach to App Permissions. So with App Permissions, we’re giving users meaningful choice and control over the data they care about. You don’t have to agree to permissions that don’t make sense to you. And we’re accomplishing this through a few big changes. First, we’re greatly simplifying App Permissions to a smaller set of easily understood things like location, camera, microphone. Second, apps will now ask you for permission the first time you try to use a feature instead of asking during app installation time.
So let’s take a look at how this could work with WhatsApp. Now, keep in mind when I install the app, I wasn’t asked to grant any permissions upfront. Okay. So let’s say I’m in the app and I want to send a voice message. When I press the mic button, the app makes a request to the system to access the microphone which then brings up this permission prompt. The permission directly reflects the use case. And this is a one-time request and of course, I as a user can allow or deny the request on a per-permission basis.
Now that I’ve granted the permission, I can hold the mic button and record a message, like so. One of the things we’ve heard from our users is the desire to change or revoke an already-granted permission. So with “M,” I can now go into settings, choose the app, see what permissions it has, and even modify them. I can also go the other way: I can choose a permission, say microphone, and see which apps have access to it. Now, for developers, the new App Permissions apply to apps compiled against the M SDK; legacy apps targeting SDK versions before M will behave as before. One of the really nice side effects of the new permission model for app developers is that it’s faster to get users up and running in your app. We also know that with the old permission model, adding a new permission to your app can affect your update adoption. With the new permission model, updates are seamless because user involvement is deferred until right when it’s needed.
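For developers following along, here is a minimal sketch, in Java, of how an app compiled against the M SDK might request the microphone permission at the moment the user first taps the record button; the activity and method names are illustrative rather than taken from the demo.

public class VoiceMessageActivity extends android.app.Activity {

    private static final int REQUEST_RECORD_AUDIO = 1;

    // Called when the user taps the mic button for the first time.
    void onMicButtonPressed() {
        if (checkSelfPermission(android.Manifest.permission.RECORD_AUDIO)
                != android.content.pm.PackageManager.PERMISSION_GRANTED) {
            // Shows the system permission prompt; the user can allow or deny.
            requestPermissions(
                    new String[]{android.Manifest.permission.RECORD_AUDIO},
                    REQUEST_RECORD_AUDIO);
        } else {
            startRecording();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions,
            int[] grantResults) {
        if (requestCode == REQUEST_RECORD_AUDIO && grantResults.length > 0
                && grantResults[0] == android.content.pm.PackageManager.PERMISSION_GRANTED) {
            startRecording();
        }
        // If denied, degrade gracefully; the user can change this later in Settings.
    }

    private void startRecording() { /* begin capturing audio */ }
}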
Okay. So that’s App Permissions. This is a pretty big departure from how things have worked since Android 1.0, but it’s a more intuitive model for users and a much more seamless app install process for app developers.
So next up, let me highlight one of the ways that we’re improving the web experience on mobile. One of the interesting trends that we’re observing on mobile today is around how web content is being consumed. App developers increasingly care about the experience that users get when they tap on a web link from within their app. And today you’ve got two choices, right? You either make a big context switch out to the browser, or you build your own experience by embedding a webview within your app. And webviews have this nice property that they let you make the transition to web content really seamless, so you can make it feel like one app. But while webviews are powerful, they have some downsides. It means developers have to get into the business of building a browser, which is a complex and time-consuming thing to do well. And for the user, browsing content in a webview means you lose some of the things that make users’ lives really easy on the web, like, say, saved passwords or logged-in sessions.
So Chrome custom tabs is a feature that gives developers a way to harness all of Chrome’s capabilities while still keeping control of the look and feel of the experience. And we’ve been working with our friends over at Pinterest on this and I’d like to show you a sample of what’s possible.
So here I am in the Pinterest app. Let’s tap on something interesting. Now, when I tap on a web link at the bottom, you’ll notice that there is a custom transition animation into Chrome custom tabs. Remember, this is actually the Chrome browser now running on top of your app. And the custom tab is branded the same color as Pinterest, so it feels like one experience, and you can even see that Pinterest has added a custom button to the toolbar to pin pages. They can also add additional items to Chrome’s overflow menu up at the top right. Finally, the back button gives an easy way to go seamlessly back into the app.
So the custom tab was super fast to load because Pinterest was able to ask Chrome ahead of time to pre-fetch the content. And of course, the real benefit is for users. With Chrome custom tabs, you’re signed into your favorite sites, since it uses Chrome’s state, and you get all of Chrome’s features, such as saved passwords, form autofill, Google Translate and more. And of course you also get the benefits of Chrome’s security model. So Chrome custom tabs is available today on the Chrome dev channel, and we’re excited about rolling it out to users in Q3 this year. So that’s an example of how we’re improving the web experience when you want to link from an app to the web.
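As a rough sketch of what this looks like for developers, the snippet below uses the Custom Tabs support library that accompanied this feature; the class name, color, and URL are illustrative, and the exact API surface was still settling during the dev-channel period.

import android.net.Uri;
import android.support.customtabs.CustomTabsIntent;

public class LinkOpener {
    // Opens a URL in a Chrome custom tab themed with the app's brand color.
    public static void open(android.app.Activity activity, String url, int brandColor) {
        CustomTabsIntent customTab = new CustomTabsIntent.Builder()
                .setToolbarColor(brandColor)   // match the host app's branding
                .setShowTitle(true)            // show the page title in the toolbar
                .build();
        customTab.launchUrl(activity, Uri.parse(url));
    }
}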
Now I’d like to talk to you about some improvements we’re doing when you want to link from an app to another app. And linking is, of course, one of those fundamental principles of the web. It brings different websites together as part of a natural user experience flow. And as more and more web destinations get corresponding rich app experiences, for example, the YouTube app or the Twitter app, we see different attempts to enable linking between apps so as to apply those same fundamental principles of the web.
Now, Android’s intent system already provides a powerful way for an app to advertise that it can handle a particular link pattern, but one of the limitations of the current system is that when a user selects a web link from somewhere, Android doesn’t know whether to show it in a web browser or in some other app that claims support for the link. As a result, Android will show the infamous disambig dialog to ask the user to choose. So in the “M” release, we’re enhancing Android’s intent system to provide a more powerful app linking capability. Apps can now add an auto-verify attribute to their application manifest to indicate that they want the links they claim to support to be verified by the platform. The Android platform will then make a request to the web server pointed to by the links at app installation time and look for a file containing the name and signature of the application. This enables Android to verify that the app owns the links it claims it does.
So now, when I, as a user, tap on a verified link, let’s say a Twitter link I received in an email, the Android platform will seamlessly route me to the Twitter app, with no more disambig dialog. So by putting app linking directly in the platform, we greatly improve the core user experience.
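For reference, the manifest declaration behind this behavior might look roughly like the sketch below; the host and activity name are illustrative, and the format and location of the server-side verification file were still being finalized at the time of this preview.

<!-- Sketch of an App Links intent filter. autoVerify="true" asks the platform,
     at install time, to confirm with the linked web server that this app
     really owns the links it claims. Host and activity name are illustrative. -->
<activity android:name=".TweetActivity">
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="https" android:host="twitter.com" />
    </intent-filter>
</activity>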
Okay. So next up, we want to give you a preview of an important initiative we’re working on, which we call Android Pay. It builds on work we did in previous Android releases, such as near-field communication, or NFC, in Gingerbread and Host Card Emulation, which we introduced in KitKat. With Android Pay, users can simply and safely use their Android phone to pay in stores wherever they see the Android Pay logo or the NFC logo, or, indeed, in thousands of Android Pay partner apps.
Android Pay is focused on simplicity, security and choice. It’s simple because all you have to do is unlock your phone like normal, place it in front of the NFC terminal to pay, and there’s no need to open any app. It’s that easy. It’s secure because when you add a card for use with Android Pay, a virtual account number is created, which is then used to process your payment. Your actual card number is not shared with the store during the transaction. And you have a choice. We built Android Pay as an open platform, so people will be able to choose the most convenient way to activate Android Pay, either through our app or through any supported banking app. And we’re working with leading financial institutions so you can securely use your existing debit and credit cards with Android Pay. And, of course, we’re also working with major U.S. mobile carriers, including AT&T, Verizon and T-Mobile to ensure that whenever you buy a new phone you can walk out the door ready to go. And, of course, Android Pay works with any Android device with NFC.
Android Pay will work in over 700,000 stores in the U.S. which accept contactless payments, including retailers such as Macy’s, Bloomingdale’s, McDonald’s and many more. Android Pay will also be available in apps from developers selling physical goods and services to help you speed through the checkout process. So leading developers like Lyft, Grubhub, Groupon and more will be offering Android Pay in their apps soon.
So we are at the start of an exciting journey. We believe the same partnership model that fueled Android’s growth from a single device seven years ago to now more than a billion users will enable Android Pay to be successful too, and we’re working closely with payment networks, banks and developers to bring mobile payments to Android users around the world, with rollouts starting in parallel with the launch of “M” this year.
Now, Android Pay works on phones running KitKat (4.4) and above, but with the “M” release Android Pay gets even better, because it turns out that Android device makers have been including fingerprint sensors on devices since 2011, for example with the Motorola Atrix and, most recently, with devices like the Samsung Galaxy S5 and S6. In M, we are taking the opportunity to standardize support for fingerprints in Android, so it works across a breadth of sensors and devices and exposes a standard API to developers.
So what does this mean? Well, for one, in M you can use your fingerprint to authorize Android Pay transactions. So let’s take a look at this in action.
The user simply touches the fingerprint sensor which unlocks the phone. The phone will then make a secure NFC exchange with the payment terminal and then the payment goes through and you get the Android Pay notification of the transaction at the top. Fast and simple.
With the “M” release, you can also use your fingerprint to simply unlock your device or make Play Store purchases. Most importantly, any app developer can make use of the new APIs to add fingerprint support to their own apps, and we’ve been working with lots of our app partners on the new fingerprint APIs; the feedback has been overwhelmingly positive. So, for example, let’s take a look at the new Target app, which is aiming to release later this year. Now, the user previously associated their username and password credentials with their fingerprint, and that’s a simple and popular design pattern we’re seeing with app developers. Now, when the user wants to purchase something, they just present their fingerprint, like so, and it will process the payment. Super easy.
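For developers, the standardized fingerprint support in the M preview is exposed through the platform’s FingerprintManager. The sketch below shows the design pattern just described, confirming a checkout with a fingerprint; the class and helper names are illustrative, and a production app would tie the authentication to a key in the Android Keystore via a CryptoObject, which is omitted here for brevity.

import android.hardware.fingerprint.FingerprintManager;
import android.os.CancellationSignal;

public class FingerprintCheckout {
    private final FingerprintManager fingerprintManager;
    private final CancellationSignal cancellationSignal = new CancellationSignal();

    public FingerprintCheckout(android.content.Context context) {
        fingerprintManager = context.getSystemService(FingerprintManager.class);
    }

    // Requires the USE_FINGERPRINT permission and at least one enrolled fingerprint.
    public void confirmPurchase() {
        if (fingerprintManager == null
                || !fingerprintManager.isHardwareDetected()
                || !fingerprintManager.hasEnrolledFingerprints()) {
            return; // fall back to asking for the password
        }
        fingerprintManager.authenticate(
                null /* CryptoObject, omitted in this sketch */,
                cancellationSignal, 0 /* flags */,
                new FingerprintManager.AuthenticationCallback() {
                    @Override
                    public void onAuthenticationSucceeded(
                            FingerprintManager.AuthenticationResult result) {
                        processPayment(); // e.g., submit the stored credentials
                    }

                    @Override
                    public void onAuthenticationFailed() {
                        // Finger not recognized; the user can simply try again.
                    }
                },
                null /* handler */);
    }

    private void processPayment() { /* app-specific checkout logic */ }
}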
So let’s talk about another big area where we’ve improved things in “M,” and that’s power. Now, Android has always enabled true multitasking as an open platform for developers. And people love that about Android. But in making the platform exceptionally flexible, there’s a trade-off in data freshness and battery, especially if the user installs hundreds of applications. So in the spirit of improving the core experience, we’re changing Android in M to be smarter about managing power through a new feature we call Doze. Now, many of us have usage patterns with our devices, especially with tablets. For example, you might leave your tablet on your coffee table or your nightstand all day, only to pick it up and use it to read a book or watch a movie in the evening. With “M,” Android uses significant motion detection to learn if a device has been left unattended for an extended period of time. In that case, it will exponentially back off background activity and go into a deeper sleep state. So what we’re doing is trading off a little bit of app freshness for longer battery life. And we call it Dozing because while the device is asleep, it’s still possible for the device to trigger real-time alarms or to respond to incoming chat requests through high-priority messages.
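On the developer side, Doze is paired in the M preview with AlarmManager variants that are still allowed to fire while the device is idle. The sketch below shows the general idea; the broadcast action and delay are illustrative, and the system rate-limits these alarms to protect standby battery.

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

public class ReminderScheduler {
    // Schedules a reminder that can still fire while the device is dozing.
    public static void scheduleIn(Context context, long delayMillis) {
        AlarmManager alarmManager =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent reminder = PendingIntent.getBroadcast(
                context, 0, new Intent("com.example.ACTION_REMINDER"), 0);
        // A plain set() call is deferred during Doze; this variant is exempted,
        // though the system throttles how often it may fire.
        alarmManager.setExactAndAllowWhileIdle(
                AlarmManager.RTC_WAKEUP,
                System.currentTimeMillis() + delayMillis,
                reminder);
    }
}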
So how well does this work? Well, we took two Nexus 9s, put Lollipop on one and “M” on the other, and loaded both up with the same account and lots and lots of applications. We put the two devices side by side and measured power. And I’m happy to say that we’re seeing devices with “M” lasting up to two times longer in standby. Of course, no matter how much better we make power management in Android, sooner or later you’ve got to recharge that device. So we wanted to improve that too. We have been heavily involved in creating the new USB Type-C standard, and Type-C ushers in a new way of charging that works across hardware, from cell phones to tablets to laptops and everything in between. And that means we’re going to start seeing really fast charging of devices as standard, anything from three to five times faster. And, of course, USB Type-C connectors are flippable and hence much more mobile-friendly and durable. No more fumbling to find the right orientation for the charging plug.
So we’ve been working with device manufacturers to bring Type C devices to the market with the “M” release. And because Type C is bidirectional, the “M” release adds the ability to select whether you want your device to be charged by the cable or, instead, for your device to act as the charger to whatever it’s plugged into. So that’s Type C, coming to a phone near you soon.
So there are hundreds of new features in “M,” and you’ll find improvements to the core user experience peppered throughout the release. And it’s really the little things that matter and add up. So, for example, we’ve improved word selection in “M.” We now auto-select in chunks on each word boundary, and you can still drag backwards to go character by character. And we’ve also added an awesome floating toolbar for quick access to things like copy and paste. As another example, we’ve made sharing easier. The system can now automatically learn which people and which apps you share with most frequently and make those available with just a single click.
And if you weren’t a big fan of the volume control changes in Lollipop (hands up if you weren’t), the good news is we’ve simplified those. Much easier. And as an example of the little details we’ve polished and obsessed over, we’ve added a dropdown to control the volume of individual audio streams, such as alarms and music.
Okay. So the “M” release is still in development, but we’re excited to be able to share an initial developer preview today for the Nexus 5, 6, 9, and Player. And we’d love your feedback on the new features and APIs to help us get the platform ready for official release in Q3. So that’s “M.” We’re working incredibly hard to produce our most polished Android release to date. And the improvements don’t just stop at the platform. We also have some really exciting new app developments enabled by “M,” for example with Google Now, which you’ll hear about a little bit later.
And so with that, let me hand over to David to give you an update on Android Wear. Thank you.
David Singleton – Director, Android Wear
Thanks, Dave. You know, we love watches. They’ve always been this incredible mix of beauty and technology. And from the very beginning, they’ve inspired artists and engineers to come together and create. And with Android Wear, we’re taking a similar path. We’re partnering with companies all over the world to create beautiful, useful devices. And the result is choice.
Earlier this morning, you heard about our growing set of watches, straps, and watch faces. And in fact, there are now over 1500 watch faces available for Android Wear. And we’re thrilled by all the ways that you can express your style. In parallel, and for the past year, we’ve been making key platform investments to enable truly useful wearable apps. And that’s what I want to talk to you about today.
In the last 12 months, we’ve launched four major OS releases and seven different watches, bringing dozens of important improvements to users and developers alike. With features like GPS, offline music, and Wi-Fi, we’re making your watch more useful for you even when you don’t have your phone. Third-party watch faces not only enable you to express your style; they provide all sorts of information at a glance. And features like brightness boost, theater mode, and screen lock provide the flexibility you need for technology that you actually wear. And perhaps the best part: every time we make these improvements, we’re able to push them to existing watches. So when you buy an Android Wear watch, you know it will keep getting better over time.
Today, we’re evolving Android Wear even further, and we’re taking our inspiration from something we already do on our watches: checking the time. Checking the time is pretty cool. You can just look at your wrist, get the information you need, and make a decision in fractions of a second. It’s glanceable, actionable, and effortless. And that’s the cornerstone of how we think about all interactions on Android Wear.
Our latest release of Android Wear, which is rolling out over the next few weeks, brings this approach to even more parts of the experience, from the display, to device inputs, to the launcher. Let us show you a few examples. For starters, Android Wear watches include always-on screens, meaning you can see the time all the time. There’s no need to tap, twist, or shake your wrist to wake up the display. In our latest release, we’ve brought this ability to apps as well. So you can always see useful information at a glance while also saving on battery. We call this always-on apps. So if you’re grocery shopping, you can wear your shopping list on your wrist. Once Jeff has found the tomatoes, he can walk all the way to the dairy section, and his shopping list will stay on the screen the whole time in this low-power, black and white mode. Or, if you’re walking around in your neighborhood, directions with maps will stay on the screen. There’s no need to shake your arm and tap at the display just to get back to the map. It’s always just a glance away.
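For app developers, always-on behavior like this is exposed through ambient-mode hooks in the wearable support library. Below is a minimal sketch of an always-on shopping list activity; the layout and view identifiers are illustrative.

import android.os.Bundle;
import android.support.wearable.activity.WearableActivity;
import android.widget.TextView;

public class ShoppingListActivity extends WearableActivity {
    private TextView listView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_shopping_list);
        listView = (TextView) findViewById(R.id.shopping_list);
        setAmbientEnabled(); // opt in to staying visible when the watch dims
    }

    @Override
    public void onEnterAmbient(Bundle ambientDetails) {
        super.onEnterAmbient(ambientDetails);
        // Switch to the low-power look: black background, white text.
        listView.setTextColor(android.graphics.Color.WHITE);
        listView.setBackgroundColor(android.graphics.Color.BLACK);
    }

    @Override
    public void onExitAmbient() {
        super.onExitAmbient();
        // Restore the full-color interactive UI.
        listView.setTextColor(android.graphics.Color.BLACK);
        listView.setBackgroundColor(android.graphics.Color.WHITE);
    }
}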
The latest release of Android Wear also lets you interact with your watch in new ways that feel natural on your wrist. For example, Jeff can scroll up and down through his notifications using wrist gestures like this. It’s pretty cool. And it’s really helpful when you want to check the details of a message or the exact location of an appointment you’re going to but your hands are full. And if you’re chatting with a friend, like Jeff is here, using Facebook Messenger, the latest version of Android Wear now lets you draw and send emoji. So when he sketches this cocktail glass, the watch recognizes it automatically. And his friend will see that message on their phone, computer, or watch. It’s really great.
Android Wear now puts all your apps and contacts at your fingertips in the new launcher. All you need to do is touch the watch face, and you can start an app, start communicating with a friend, or ask a question right there. Ultimately, wearable apps should be glanceable, actionable, and effortless, just like checking the time. And Android Wear’s new platform capabilities enable experiences that really fit your wrist. We’re seeing lots of great examples of this from developers. So let’s take a look at some of the apps launching in the near future.
Foursquare, for instance, uses geofencing and in-context notifications to suggest the ideal entree to you when you walk into a new restaurant. With the new always-on feature, apps like Citymapper can show you information about your train arrival for as long as you’re going to need it. And with voice input, you can now request a ride from Uber by saying, “Okay, Google, call a car.” And you can track its arrival right on your watch. Of course, we know that wearable technology is evolving fast and that today’s devices include everything from accelerometers and heart rate monitors to haptic motors and microphones to GPS and Wi-Fi. And we have enabled apps to make full use of these capabilities, too. So with Android Wear, developers are free to build on top of all of them with code that runs directly on the watch itself, giving you deep, interactive experiences on your wrist.
Google Fit, for instance, interprets that sensor data to recognize walking, running, and cycling, as well as squats, situps, and pushups, automatically. If golf is more your thing, golf swing analyzer uses the watch’s accelerometer and gyroscope to measure the tempo, speed, and angle of your swing. And if you hear a song you like, Shazam can listen and recognize it using the watch’s built-in microphone. Android Wear apps can also take advantage of our increasingly connected world, giving you control directly from your wrist.
With Spotify, for instance, you can browse and control the music that you want to hear. With Ford’s app, you can check your car’s battery range right on your watch, and even find your car if you forgot where you parked it. And on your way home, you’ll be able to adjust your living room temperature with Nest. Honestly, if you can dream it, you can build it. And our developer community is really delivering. Today we’re happy to announce that there are already more than 4,000 apps built specifically for Android Wear. That’s thousands of apps that do way more than just tell the time. And each one is a powerful example of what’s possible on a wearable device. Ultimately, Android Wear is about choice. Your choice of powerful developer APIs, your choice of watches, your choice of straps and apps. Together with manufacturers, chip makers, developers, and fashion brands, we’re growing this family of devices and experiences even further. By the end of the year, there will be many more Android Wear watches to choose from, because you should be able to wear what you want and build what you want. And with Android Wear, you can. Thank you.
Sundar Pichai – SVP, Android, Chrome and Apps, Google
It’s really exciting to see the momentum behind Android Wear. But for me, what’s fascinating is that when you take a traditional device like a watch and you add computing and connectivity and a layer of intelligence to it, you end up transforming that experience. Wouldn’t it be great if we could do that for more devices, if we could connect more devices, the devices that you run into in your day-to-day life? Things like parking meters, washing machines, airport kiosks? If we could get physical devices and connect them in a smart way to the internet, we think we can transform the experience for users. We call this the Internet of Things. And there are a whole range of possibilities you can imagine. The most common thing that gets talked about is the smarter home. Imagine you’re driving in, your garage door, your lighting, your blinds, your music all working together to create a better home. Maybe save energy in the process. But the possibilities go well beyond that.
You can imagine a farmer managing the entire farm from her smartphone, the security cameras, the sensors, the irrigation equipment, all of them can be connected so that it works better together. A city’s public transportation system, buses, bus schedules, parking spots, you could manage traffic, maintenance, and create a much better experience for people living in the city. So we see a range of possibilities, and we think it’s endless. But there are a whole lot of challenges.
Today people are making connected devices, like smart lightbulbs. But it’s really hard for device manufacturers, just like in the early days of smartphones: they don’t know exactly how to build their software stack. Developers don’t know how to target these experiences. And, finally, for users, it is really confusing to make all of this work together.
We are fortunate to have Nest. Nest has been working hard at taking traditional devices in the home and reimagining them for users. They’ve already done that with the thermostat and the smoke detector, and they’ve been very, very successful. So we have collaborated closely; we’ve pulled in people from the Nest, Android, and Chrome OS teams to take a fundamentally new approach to the Internet of Things. And we want to provide an end-to-end, complete solution for our ecosystem. And to do that, we needed to think through all the building blocks. You need the underlying operating system. You need a communications layer so that the devices can talk to each other seamlessly. And finally, for users, it has to be a simple and elegant experience.
So today, I’m very excited to announce Project Brillo, which is the underlying operating system for the Internet of Things. Brillo is derived from Android, but we have taken Android and polished it down, hence the name Brillo. This is basically the lower layers of Android, the kernel, the hardware abstraction layer, the real core essentials, so that it can run on devices with a minimal footprint, things like door locks. Because it’s derived from Android, you get full operating system support, things like connectivity: Wi-Fi and Bluetooth Low Energy are built in. And working with Nest, we are adding support for alternative connectivity, like Thread, so that there are low-power wireless options as well. We have thought about security from the ground up. And given it’s based on Android, you get immediate scale; many, many device manufacturers can use it. In addition, device manufacturers can manage it from a centralized console. They can provision these devices, they can update them, and so on. So it’s an end-to-end, functioning operating system.
The next step is what we call Weave. Weave is the communications layer by which the Internet of Things can actually talk. You need a common language, a shared understanding, so that devices can talk not only to each other but also to the cloud and to your phone. So what we are doing is standardizing schemas. Schemas are nothing but a semantic blueprint that gives all these devices a common language. For example, a camera can define what it means to say “take a picture,” and all devices around it can understand that. A door lock can define lock and unlock as two actions which all other devices in that ecosystem can understand and work with. So we will have standardized schemas. Developers can submit custom schemas, and we will have a Weave certification program to make sure anything that is Weave-certified can work together.
Weave is available cross-platform, and it’s a modular approach. You can have Brillo and Weave together, or you can run Weave on top of your existing stack. And the powerful thing is, Weave exposes developer APIs in a cross-platform way. So if you’re writing a recipe application on your smartphone, the application can now turn on your smart oven and set it to the right temperature. And any connected device, like your oven, can be voice-enabled easily, because we provide voice APIs as part of this.
The final thing we are doing is getting the user interface right. Because this is built into Android, any Android device will recognize another device based on Brillo or Weave. And as a user, you get the same standardized setup for any connected device. You open up your phone, we detect it, you choose the device and set up the right owners, and you’re good to go. This is the beginning of a journey; just like we have done with Android for smartphones, we are doing this for the entire ecosystem together. Brillo goes into developer preview in Q3 of this year; for Weave, we will be releasing documentation throughout the year, and we are working with developers. And the full stack will be ready to go by Q4 of 2015. So we are very excited that, for the first time, we are bringing a comprehensive, end-to-end solution, and we hope we can connect devices in a seamless and intuitive way and make them work better for users.
You’ve heard us talk about how the phone is enabling a multiscreen world. You’ve heard about other form factors like Android Wear and connected devices, and how the phone is at the center of this digital experience. Now we want to talk about how we, as Google, are improving the experience on the smartphone. To do that, we go to the core of what Google set out to do. Our core mission is to organize the world’s information and make it universally accessible and useful. And we’ve been doing that for a while. Think about how far search has evolved from the ten blue links. If you’re on a modern mobile phone, you can ask a question like, what does Kermit’s nephew look like, and you get the answer instantly on your smartphone, for you Muppets fans out there. In fact, you can even ask, how do you say Kermit the Frog in Spanish?
Google: (Speaking Spanish)
You know, in this query, what looked like a simple query, we understood voice, we did natural language processing, we did image recognition, and, finally, translation, and we made it all work in an instant. The reason we are able to do all of this is thanks to the investments we have made in machine learning. Machine learning is what helps us answer the question, what does a tree frog look like, from millions of images around the world. The computers can go through a lot of data and understand patterns. It turns out the tree frog is actually the third picture there. The reason we are able to do that so much better in the last few years is thanks to an advance in technology called deep neural nets. Deep neural nets are a hierarchical, layered learning system, so we learn in layers. The first layer can understand lines and edges and shadows and shapes. A second layer may understand things like ears, legs, and hands. And the final layer understands the entire image. We have invested heavily in machine learning over the past many years, and we believe we have the best capability in the world. Our current deep neural nets are over 30 layers deep. This is what helps us when you speak to Google: our word error rate has dropped from 23% to 8% in just over a year. And that progress is due to our investment in machine learning. It is what helps us when users type queries like sunsets and lightning: we return exactly the right images to users instantly. This insight is what we will use to help organize users’ photos, and you’ll hear about that in a minute.
As a next step, we want to take all these advancements we have made and in the context of mobile, be a whole lot more assistive to users. We want to give users this information even before they know they need it. We want to give it to them in context. If you take a product like Google Inbox and you are planning a trip to London, we bring together all the information in one place, and it is waiting, ready to go for you. It has all the details to do with your London trip.
Google Now lets you know when to leave home, based on traffic, before you actually need to go. Or when you reach the airport, we have your boarding pass ready to go. We are beginning to think about how to advance all of this further to work better for users. In mobile, the need is even greater. You’re deluged with a lot of information on your phones. Take even a simple use case: if someone pings me and says, “Can we meet at the restaurant I emailed you about?” I need to open my email, search for it, and figure out a way to book it, all while I’m on the go. A small use case, but every day you have many, many moments like that. So we are working hard to be more assistive to users.
To talk about how we are doing that, I’m going to invite Aparna from the Google Now team.
Aparna Chennapragada – Director, Google Now
Your smartphone ought to be smarter. Why can’t it tell you where you parked your car? Why can’t it remind you to pick up the milk that your spouse texted you about? And, in fact, why can’t it figure out that you’re flying to New York in two months and you should call your college roommate, and you should book show tickets in advance and don’t forget to check out that cupcake place that you really loved the last time. These are the kind of crazy questions that got us started on Google Now.
As Sundar talked about, we are working hard to figure out how we can assist users in a mobile world. Since we first launched, we brought more and more useful information to you. The last train home in Tokyo. A new open house in your neighborhood. And, yes, even a reminder for where you parked your car.
Today, I’m going to talk to you about the progress we’ve made and what’s coming next. So, look, to assist you, we need to be able to do three things really well. One, understand your context. Two, bring you answers proactively. And, three, help you take action, get stuff done. So let’s talk about the first thing, context.
Now, context has multiple dimensions: where you are, what you are doing, and what you care about. And in a different context, you need different things. Say you’re at Disneyland. The information that you need: how long are the lines, what are the most popular rides, how do you deal with a six-year-old on a sugar rush? It’s very different from, say, vegging out on a lazy Sunday. But you know, context is also about getting what you are saying. When does “it” open? How long does it take to get “there”? When does “my flight” leave? And we’ve made some great progress here. We have built up a natural language understanding engine, like Sundar talked about. But we have also built up this powerful context engine. And we understand more than 100 million places, not just their physical layout and geometry, but also interesting things like when they are busy, when they are open, and what you are likely to need when you’re there. Once we understand context, we then want to proactively bring you answers. You are on your way to the airport, rushing to return the rental car, so we’ll tell you, here are the nearest gas stations; fill that gas tank on your route. Or you’re interested in a baseball team. We’ll get you the score, we’ll get you the upcoming schedule, we’ll get you news. So how do we do this? This is where Google’s Knowledge Graph comes in pretty handy.
So the Knowledge Graph is Google’s understanding of the world and all the things in it. In fact, we have over one billion entities, things like baseball teams, gas stations, TV shows, cocktail recipes, the works. So this can power a lot of useful answers for you. We talked about context. We talked about answers.
The third thing we want to be able to do, if we want to assist you, is help you take action. Get stuff done. And, you know, in a mobile world, you get stuff done with apps. Play music, order a cab, buy groceries. So we just started a pilot program with over a hundred partners where we proactively surface actions and information from apps in Google Now. So when you land at the airport, you can order an Uber or a Lyft. In the context of your commute you can play your Pandora station, or this is my favorite, you can reorder groceries instantly with a now card from Instacart. We’re very excited about all this progress, but you know what? Your smartphone still ought to be smarter.
You spend lots of time on the phone looking for information, jumping between tasks and we asked ourselves how can we get you quick answers to quick questions without leaving context? How can we help you get things done in as few steps as possible? So we’re working on a new capability to assist you in the moment, right when you need it, wherever you are on the phone. We’re calling it Now On Tap.
So Now On Tap, as Dave mentioned, takes advantage of new functionality in the Android “M” release, so it will be rolling out in “M,” and I want to give you a quick preview of what it looks like, what it can do for you when you turn the feature on. So I have Neel up here with me. Hot off the press demo. So you want to see it in action? Yes, all right. Let’s do it.
So here he’s listening to Skrillex, and you wonder like me, what is his real name?
Neel Rao: Okay, Google, what’s his real name?
Google: Skrillex’s full name is Sonny John Moore.
Aparna Chennapragada – Director, Google Now
Neat, right? A quick answer to a quick question without having to switch context. Notice also, he could just say “his” real name. It’s obvious to you and me who Neel is referring to, but thanks to the context of what you are looking at and the natural language understanding, it’s obvious to Now as well. But like I said, it’s not just about understanding context. It’s about bringing you answers proactively. So let’s look at another example.
So here, you see an email from my friend Ali about catching a movie. I don’t know much about the movie, and I want to find out more. So all I have to do is a simple tap and hold of the home button, and Google Now brings me information about Tomorrowland. Remember the Knowledge Graph we were talking about? Thanks to that, we know Tomorrowland is not just a string of characters. It’s not just a word. It’s a movie. And a movie has reviews, ratings, actors, actresses, et cetera. Now, the reviews, they’re kind of okay. But let’s be honest, it does have George Clooney, so I’m going to watch it anyway. And you can easily watch the trailer on YouTube, check out the cast list on IMDB, or see the Rotten Tomatoes score on Flixster.
I want to show you another example of how you can take action in the moment with Now On Tap. So this time, let’s go to Viber. By the way, Now On Tap works in a variety of apps; the app itself doesn’t have to make any modifications, which is pretty nice. So in this case, you see a message from my husband about dinner plans, and of course he forgot to pick up the dry cleaning, so it’s on me. Again, with a simple tap and hold on the home button, you get help. But I want to point out a couple of things going on here.
So first, just like with the movie, you get nice, instant information about the restaurant: reviews, ratings, even how to book a table. But check this out: Google Now created a smart reminder card for me to pick up that dry cleaning. Now, the user in me is pretty happy, but I have to say the computer scientist in me is practically giddy with excitement here, because there is some epic natural language understanding action going on here. Pretty neat!
So let’s look at what happened here. When you tap and hold on the home button, you’re telling Google Now, “Hey, here’s something I need help with. Use this context.” And Google Now uses this context and brings you back relevant answers. But as you might have noticed, it’s not just about answers. It’s also about apps. So what does it mean for developers? This is a new way that you can reach and re-engage users when you’re relevant to them in the moment, once your app is indexed by Google.
So while we’re here, let’s tap on OpenTable so you can see how you can jump in, and notice we took you right to the specific restaurant in the app. No fumbling with the phone. And I want to show you a quick, subtle but cool thing while we’re here. Let’s switch to the menu. That second dish there sounds good. It’s kind of hard for me to pronounce, but I want to see what it looks like.
Neel Rao: Okay, Google, show me pictures of Spanakotiropita.
Google: Pictures of Spanakotiropita. Here you go.
Neel also had trouble pronouncing it clearly, but thanks to the context of the words on the menu, Google Now is able to recognize it and get me what I want. It’s not just apps, though. Now can assist you in the moment even when you’re on the web, so for this last example, let’s switch to Chrome. Okay. Here is an article about Hugh Laurie heading to Veep, my favorite TV show. I know a lot about Veep, way more than I should, really. But not a lot about Hugh. So, yes, I can fiddle with the phone, open a new tab, and peck at the keyboard, or I can tap on Hugh, and I get information about Hugh Laurie. Pretty cool! I can check out movies and other TV shows he’s been in. There’s Tomorrowland again. I have to watch it. But the nice thing here is that you can get information instantly. In all these examples, be it the article you’re reading, the music you’re listening to, or the message you’re replying to, the key is understanding the context of the moment.
Once Now has that understanding, it’s able to get you quick answers to quick questions and help you get things done, wherever you are on the phone. And for developers, like we said, it’s a new way for you to reach and re-engage users. We’ll share a lot more details over the next few months. That was just a quick preview of Google Now on Android M.
When it comes to mobile, there’s another key area where users need a lot of assistance managing a lot of information. Yep, photos. To tell you about how we’re applying machine learning and intelligence to that critical area, I’m going to hand it off to Anil. Thank you.
Anil Sabharwal – Director, Google Photos
Not too long ago my eldest daughter Ava graduated from preschool. It was a pretty fun day, and of course we wanted to capture the moment. So my wife took some photos. 312, to be exact. I only took 237 photos and 56 videos. Sundar mentioned the information challenge we have on mobile, and I’m sure all of us can relate. I mean, how incredible is it that we all have a camera in our pockets everywhere we go ready to capture any moment? Think about the moments in your life and the photos and videos you’ve taken. The big moments. The small moments. The please burn the evidence moments. These moments tell your story.
But here’s the kicker. We thought that taking more photos and videos would make it easier to relive the moments that matter, but it’s actually made it harder. The sheer volume alone has made it near impossible. How often do we spend time just scrolling, scrolling, scrolling, to find that one photo that we want? What if we could use Google’s unique capabilities to help people take back control of their digital lives? And that’s why I’m thrilled to be here to introduce a brand-new product: Google Photos.
With Google Photos we built an entirely new experience from the ground up centered around three big ideas. First, a home for all your photos and videos, a private and safe place to keep a lifetime of memories, available from any device. Fast, intuitive, and beautiful. Second, help you organize and bring your moments to life. An app that takes the work out of photos and lets you focus on making memories, not managing them. And third, make it easy to share and save what matters. Sharing should be simple and reliable. And when you’re on the receiving end, it should be easy to hold on to the photos and videos you care about. These ideas form the foundation on which Google Photos was built. So let’s have a look, starting with a home for all your photos and videos.
I’d like to introduce David Lieb, our product mastermind, who is going to be helping me with the demo today.
Now, before we start, I have to say this is a pretty cool moment, and as you and I all know, the only way to do a moment like this true justice is to take a selfie.
All right. Now that we have that out of the way, let’s open up the new Google Photos app. You can see the photo we just took is at the top, and it’s already been safely backed up. Google Photos automatically backs up from your phone, tablet, computer, and even your camera memory cards. We can also sync all your photos and videos with Google Drive. Now remember, Dave has my phone here, and we’re looking at my account. These are all my personal photos and videos, so you’re about to learn a whole lot about me. So what does a lifetime of photos and videos look like? Well, in Google Photos, it’s everything in one place, no matter what device you are on. I can easily jump to any moment in time, all the way back to my earliest photos. I can view my memories across days, and with a simple pinch, across months, and again across years.
Let’s go back to 2005, back to when I took that road trip across Canada. Again, using a very simple pinch gesture, I can zoom all the way back in, back to a time when I still had hair. Every interaction feels fast, as if all these photos are local, but in fact, not a single photo in today’s demo is actually on the phone. Let’s flick out of this photo. A simple swipe to the right brings up my collections, a timeline of my most memorable moments. This not only includes the albums I’ve made, but also montage movies and stories that I have saved. A home for all your memories, big and small.
I’d like to now talk about idea number two: Google Photos helps you organize and bring your moments to life. We want to take the work out of photos. Using machine learning, Google Photos understands what’s important and helps you by automatically organizing your memories. So here, I’m just going to tap the blue search button, and you’ll see all my photos organized by the people, places, and things that matter the most in my life. I have not tagged a single one of them. This auto-organization is private, and it’s for my eyes only.
Let’s tap on my face. Notice that the selfie we just took is already in here, instantly grouped with all the other photos of me. Dave, how about we tweet this? Now, we’ve tapped on share, and as you can see, we make it easy to share with any of your favorite services. This is going to make an epic first tweet from our new Google Photos Twitter handle.
All right. Let’s have a look at all the photos I’ve taken of my niece. The recent ones are at the top, and I can go back to when she was four years old as a flower girl at my wedding. But what’s amazing is I can go all the way back to the week she was born. We can automatically group photos of the same person over time. So say I’m looking for something specific. I remember being caught in a really bad snowstorm about ten years ago back home. Normally it would take me forever to find these photos. Instead, I just tap the search box, type “snowstorm in Toronto,” and instantly I find the photos I’m looking for. Incidentally, I remember why I moved to California.
Now that I found this memory, let’s talk about how Google Photos helps bring these memories to life. I can easily tap the pencil, which will let me make adjustments tuned to the photo’s color, lighting, and subject. Or, by selecting and tapping the Plus button, Google Photos lets me create collages, animations, movies with sound tracks, and more.
But let’s be honest, we don’t always have time to do the work ourselves, and Google Photos is here to help, too. A swipe to the left brings up my photos assistant. Here, I will get suggestions for new creations made from my photos and videos that I can preview and either choose to save, edit, or throw away. The choice is mine.
Now, I think I have a video up here that I recently got after uploading all the GoPro footage some friends and I took on our day of mountain biking. Let’s watch.
[Video]
It would have taken me hours to do this myself. But Google Photos did all the hard work. If I want to make edits, it’s easy to change the theme, the soundtrack, and even reorder or remove clips, like this one here of me wiping out. I just might take that one out before I share. Which brings me to idea number three: we want to make it easy for you to share and save what matters. Think of it as sharing how you want, with who you want, no strings attached. For months, Dave’s been asking me to share the photos we took at the Giants game we went to last year. First, I’m just going to tap on that blue search button, and we’re going to find the photos from the baseball game. Now, you can see I took some good shots, but I never got around to sharing them. So let’s select them all.
Now, rather than having to do this tediously one by one, we’ve introduced a new gesture to make multi-photo selection really fast. I just press and hold on one of the photos and drag my finger to select them all. Next, I’m going to tap the share button. We firmly believe you should be able to share photos and videos any way you want. Earlier, you saw me share to Twitter. But we also want to do our part in making sharing predictable and reliable. So in this case, I’m simply going to tap “Get a link,” and in less than a second, I have a link to all 25 items. Now, the beauty of this is I don’t have to worry about whether the recipient of the link has a particular app or has a login. I can share it any way that I want.
So let’s say I sent this link to Dave. Let’s switch to Dave’s phone. And let’s see what Dave sees when he clicks on that link. And remember, this is sharing with no strings attached. This is the high-quality content, without needing to log in or download any app. Now, in this case, because Dave is logged in, he has an extra button. And that lets him copy all of these photos and videos to his Google Photos library instantly. This lets you hold on to the memories that matter, even when you weren’t the one holding the camera. So there you have it, Google Photos, a home for all your photos and videos, organized and brought to life, so that you can share and save what matters.
Now, when we say a home for all your photos and videos, we want everyone to be able to safely back up and store a lifetime of memories. And that’s why we’re also announcing that, with Google Photos, you can now back up and store unlimited high-quality photos and videos for free. We maintain the original resolution up to 16 megapixels for photos and 1080p high definition for videos, and we store compressed versions of the photos and videos at near-identical visual quality. Every one of the photos you see on the screens here today has been backed up to Google Photos for free.
Let’s take a closer look at one of these images. Notice the detail on the feathers, the water droplets, the penguin’s eye. With Google Photos, you will have peace of mind that your memories are safe, backed up in beautiful, print-quality resolution. Now, you might be asking, when can I get my hands on Google Photos and all this free storage? The answer is today.
Google Photos is rolling out starting later today on Android, iOS, and web. Now, I just showed you how Google Photos is making it easier on mobile to manage the most important moments in your life. And Aparna showed you how Google Now is helping people get the information they need on mobile. Let’s now take a look at how the Google Translate mobile app is connecting people across languages.
[Video Presentation]
Jen Fitzpatrick – VP of Engineering
Hi, I’m Jen and I’m here to talk to you today about what we at Google are doing to help the next billion people come online and have a great experience doing it. The video you just saw about Google Translate is a great example of how our technology can serve people all over the world. Making the world’s information accessible and useful to people everywhere has been at the heart of what Google does right from the start. That’s why we’re so excited about this powerful shift taking place right now. More and more people are getting their first smartphone. And for many of them, that mobile phone will be their very first computer.
These next billion people will have a profound impact on mobile computing, both as users and as creators. And so we’re thinking very carefully about how we evolve both our products and our platforms to address their particular needs. Put yourself in the shoes of someone coming online for the first time on a smartphone today. You want to tap into the wealth of information out there on the web and in apps. You want to learn about the world and connect to the people and the places in it. But life in São Paulo is very different from life in Mumbai. And each country brings its own unique challenges and opportunities. The bulk of new smartphone growth over the next several years is happening in just a handful of key countries. In fact, in the next two years, we expect to see 1.2 billion smartphones sold across just the six countries you see here. And just in case it’s not clear, this is a huge opportunity for Android developers.
As Sundar mentioned, nearly eight out of 10 new phones shipping worldwide are Android devices. And the vast majority of the next billion will be Android users. We’re excited about what we’re already seeing from the community of Android developers. And let me give you a taste of how we’re working at Google to bring people online and to give them a great experience once they get there.
First, we want to remove barriers to smartphone adoption. People buying devices for the first time in these markets too often encounter low-quality hardware or out-of-date software. Go to a store in Jakarta or Jaipur, you’ll see a huge display of phones on sale, but not all of them are able to run the latest and greatest apps. Our Android One effort is all about enabling high quality and up-to-date smartphones at a great value. We’ve worked very closely with our hardware and carrier partners to make it possible to create these better devices.
Last year we introduced Android One in India with phones from three OEM partners. Now, Android One devices are available in seven countries, including India, Nepal, Bangladesh, Sri Lanka, Indonesia, the Philippines, and as of just a few weeks ago, Turkey. All of this is done through collaborations with more than 10 OEMs and many other ecosystem partners, with more on the way.
We’re also building support for features that are particularly useful in these emerging markets, like dual SIM cards, replaceable batteries, and built-in FM radios. And, of course, Android One phones run an up-to-date version of Android and are among the very first to receive new versions of Android. Our hope is that Android One will continue to spur innovation across the ecosystem so that all phones, no matter who built them, get better and more affordable for the next billion users.
Now, speaking of devices, Chromebooks continue to have incredible traction around the world. Chromebooks come at a wide range of price points and have a growing range of features and capabilities. This is allowing first-time users, and even schools, to get high-quality devices for an affordable price point. It’s now possible to buy a quality Chromebook laptop that’s fast, secure, and has all-day battery life for under $150. There are over 10 million students using Chromebooks around the world today and tens of thousands of new Chromebooks being lit up every single school day.
In addition to our platforms, we’re also focused on making sure our core Google apps are fast, useful, and relevant, no matter where you are in the world. We know that having access to the full range of information on the internet can have a transformative impact on people’s lives. I am reminded of the story of Zak, a potato farmer in Kenya whose crops were dying and who didn’t know why. The books he had didn’t give him any help. So he went to a cyber cafe and used Search. And he found a website that suggested that sprinkling wood ash on his crops could help stop the ants that were killing them. He did this. It worked. And he wound up with a successful harvest. This is just one small example. But we see every single day how having access to good information can change people’s lives everywhere for the better.
Even with access to the internet, though, for many users, in these fast-growing countries, connectivity can be a real challenge. In some cases, data is just too expensive to make it practical to use in large quantities. And even if you do have access to a good internet connection, data transfer is still too often intermittent or slow. What does this mean in practice? It means it can sometimes take minutes to load a medium-sized web page or a map. And even longer to buffer a video. So with this in mind, we’re taking many of our core products and rethinking them in ways that work far better in a world where speed, size, and connectivity are central concerns.
Let’s start with search and Chrome, two of our foundational products that facilitate people’s first experience with the internet. We saw in Indonesia that loading a search results page was often taking as long as eight and a half seconds on a 2G connection. So last October, we launched a streamlined version of our search results page that was ten times smaller and 30% faster. Since then, we’ve expanded this light search results page to 13 other countries where users are often on slow connections. Now, once the search results page loads, a user should be able to click on a result and load the web page quickly. But on 2G, we saw that clicking on a result could often take an average of 25 seconds and use up to a megabyte of data, which can be very expensive. We’re now optimizing the result and web pages for users on slow connections, starting in Indonesia. These optimized pages load four times faster, use 80% fewer bytes, and, most importantly, make it easier for users to find the information they want right when they want it.
We’ve also found that these faster and lighter search results pages can reduce memory usage of the browser or search app by up to 80 megabytes. And since over a quarter of new Android devices have only 512 MB of RAM, that’s like freeing up almost a fifth of the total memory on your phone. That’s a big deal. So we’re now looking at enabling these changes for low-memory devices across the board.
We also know that everybody everywhere wants the fastest browsing experience possible. In India, we’re starting to pilot a network-quality estimator for Chrome, which evaluates the quality of the data connection you’re on and responds to lower bandwidth by adapting the fidelity of the web page. Behind me is a screenshot of the Times of India. Since we’re on a slower connection, you’ll see that we’ve replaced some of the images with placeholders, but we’ve kept the most important ones, like navigational icons and logos. This ensures that the page is both usable and fast. And because users everywhere can sometimes find themselves without an internet connection, we’re working hard to support offline capabilities in Chrome on Android. This means that you’ll be able to save any page you visit, be it a bus schedule, a news article, or, in this case, a delicious recipe for egg biryani, for later offline access.
Turning to apps. YouTube is extremely popular around the world, especially in these growing markets. Video consumption is exploding on mobile. And people are also flocking to YouTube to tell their own local stories. In Indonesia, for example, YouTube creator Natasha Farani has seen more than 20 million views of her videos on how to style hijab scarves. YouTube videos can require a lot of bandwidth, so we just recently launched something called YouTube Offline in India, Indonesia, the Philippines, and Vietnam. This lets you take a video offline for up to 48 hours and watch it whether or not you’ve got an internet connection. We’re continuing to look at ways to help video content become accessible to our growing base of YouTube mobile users around the world.
Maps is another area where we’re really investing in creating a better experience for the next billion users. We hear wonderful stories about how Maps helps people in their daily lives. Like the story of two sisters in Brazil whose business was in a favela and was impossible to find on local maps. So we worked to train local residents to literally put the favelas on the map. And now, many of these businesses are seeing increased sales, more people are visiting their stores each day, and many are able to hire new employees, too. Most importantly, being on the map became something akin to a proof of existence for these businesses. Many of the entrepreneurs told us it gave them confidence in their business. The whole community benefited. We’re constantly customizing the Maps experience to make it work better for users’ local needs. For example, we’re bringing our transit experience to new places all the time. So Google Maps can now help you navigate the transit systems of Mexico City and São Paulo, or travel by rail anywhere in India.
Finally, we’ve been working hard to make Google Maps work offline. With offline maps, you won’t need to suck down expensive data or have super reliable connectivity every time you want to navigate somewhere. Let’s take a look.
Let’s pretend we’re in Mexico City. I’ve saved this map previously on my phone. And just to show you that we are really talking about offline maps, let’s go ahead and put the phone in airplane mode. Okay. So now we’re in airplane mode. No internet connection. But you’ll notice that I can still do things like search for places. I’ve heard great things about the Museo Soumaya in Mexico City. So let’s search for that. You can see that autocomplete works, too. And once I pick the museum, even though I’m offline, I can still see the reviews and the opening hours. So I can learn about this place and decide whether I want to go there. But that’s not all.
Now, for the first time, you can also get turn-by-turn voice directions offline as well. This looks like a pretty good museum, so let’s go. [Foreign Language]
Hopefully, at least a few of you out there have good Spanish. There you have it. Now I can search and navigate in the real world, online or offline. We want users of Maps to be able to navigate and explore the world literally wherever they are, on a good connection or a spotty one. So we’re excited to be bringing Maps offline starting later this year.
Google is committed to making our platforms and our products work well for the next billion people who are going to come online. But it’s not just about the work that we’re doing. We’re most excited by the entrepreneurship, the leadership, and the creativity that we see from developers in these markets, some of whom are here today. We’re looking forward to working with all of you to build the future of mobile computing. Thank you.
[Video Presentation]
Jason Titus – Senior Director of Engineering
So those were some powerful stories that are just small examples of the way developers are having substantial impact around the world. We’re also excited that Carlos and his family could join us today.
Now I would like to talk to you about some of the things we’re doing to making it easier and faster for you to build these kinds of great apps. My name is Jason Titus, and I lead the developer product group. I’ve been at Google a little bit more than a year now, and before that I was, like many of you here, working with my team to try and make our app great. This gives me a unique perspective of both the challenges and the opportunities that Google offers to developers. So now I’m really excited to be working at Google with teams across the company to bring you the best developer offering and experience across Android, iOS, and the mobile web.
Google has always worked with developers, building powerful platforms that have enabled massive innovation. But mobile has really evolved, so we want to make our offerings more cohesive, and let you use the same APIs across platforms to build the apps that you want. Today, I’m going to talk to you about how we’re helping you throughout your development life cycle. I’ll go over some of the improvements to help you develop, engage, and earn with Google.
First, let’s talk about developing your apps. We want to give you the tools you need to build your apps as quickly and reliably as possible across platforms. First, let’s talk about developer tools. Last December, we released Android Studio 1.0. Today we’re sharing a preview of version 1.3 with significant enhancements, including faster Gradle build speeds and a new memory profiler. Now, we recognize that many apps on the Play Store actually use the Native Development Kit, or NDK. So the biggest feature we’re announcing in 1.3 is full editing and debugging support for C++. You’ll now get error correction, code completion, and debugging in the same IDE that you use with your Java code. Check out the new version today on the Canary channel.
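To give a feel for what that C++ support covers, here is a minimal sketch of the Java half of an NDK-based app; the package, class, and library names are hypothetical, and the C++ implementation of the native method is the part that Android Studio 1.3’s new tooling would let you edit and debug alongside this code.

```java
// Minimal JNI sketch: the Java half of an NDK app. The C++ side
// (implementing stringFromJNI) would now be edited and debugged
// directly in Android Studio 1.3 alongside this Java code.
package com.example.ndkdemo; // hypothetical package name

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends Activity {

    // Load the native library built by the NDK (hypothetical name).
    static {
        System.loadLibrary("native-demo");
    }

    // Declared in Java, implemented in C++.
    public native String stringFromJNI();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText(stringFromJNI()); // value comes from native code
        setContentView(tv);
    }
}
```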
As for the web, the Polymer library brings visually rich app-like experiences to browsers across both desktop and mobile. So today we are announcing Polymer 1.0. In addition, we’re introducing new Polymer elements that make it easy to drop in common features like toolbars and menus, services like maps and charts, or to build a complete mobile checkout flow in your web app.
Now, I mentioned iOS before, and we’ve always had iOS libraries, but we’re starting to bring them together into a cohesive SDK. To improve the developer experience for iOS, we’d like to announce that CocoaPods will now be the default distribution channel for our SDKs. CocoaPods is quickly becoming the iOS standard for dependency management and makes it easy to import libraries and frameworks into Xcode. Starting today, you can get Google Analytics, Google Maps, and many other libraries, all via CocoaPods.
On to testing. The Android ecosystem has continued to grow and provides users a diverse range of devices to choose from. You may only have access to two or three to test with, and certainly not to those that are available outside of your country. To help with this challenge, we are releasing Cloud Test Lab, a platform built on top of our acquisition of Appurify to automate the testing of mobile apps. All you need to do is upload your app and we’ll automatically run tests across the top 20 Android devices from around the world. You’ll receive a free report that includes screen videos and crash logs. Cloud Test Lab will be coming to the Google Play developer console soon. We’ve been talking about a lot of ways that we’re making it easier to develop on Android, iOS, and the mobile web.
Now let’s talk about back-end services. Firebase makes it easy to quickly build an app without spinning up servers or writing back-end code. It provides data storage, user authentication, and asset hosting. Firebase is now used by over 190,000 developers, including Citrix, Instacart, and CBS. And for apps that have more complex infrastructure requirements, Google Cloud Platform offers many useful services, from compute and storage to handling big data. We’re seeing great adoption from companies large and small, including Khan Academy, EA, and Snapchat.
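As a rough illustration of that data-storage piece, here is a minimal sketch using the Firebase Android client of that era; the database URL and the “messages” path are placeholders for illustration, not anything announced on stage.

```java
// Minimal sketch of Firebase's realtime data store, using the
// Firebase Android client of that era. The database URL and the
// "messages" path are placeholders.
import com.firebase.client.DataSnapshot;
import com.firebase.client.Firebase;
import com.firebase.client.FirebaseError;
import com.firebase.client.ValueEventListener;

public class ChatRepository {

    // In an Android app you would first call Firebase.setAndroidContext(context).
    private final Firebase ref =
            new Firebase("https://your-app.firebaseio.com/messages");

    // Write a value; Firebase syncs it to all connected clients.
    public void postMessage(String text) {
        ref.push().setValue(text);
    }

    // Listen for changes; called once with the current data and again
    // whenever anything under "messages" changes.
    public void listen() {
        ref.addValueEventListener(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                for (DataSnapshot child : snapshot.getChildren()) {
                    System.out.println(child.getValue(String.class));
                }
            }

            @Override
            public void onCancelled(FirebaseError error) {
                System.err.println("Listen failed: " + error.getMessage());
            }
        });
    }
}
```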
So once you’ve finished developing your app, the next step is to get users and keep them coming back. So now I’m going to talk to you about ways to engage users with free tools, with app install ads, and on Google Play. First, let’s talk about Google Search, where over a hundred billion searches happen every month. App indexing lets you surface your app content in Google search results, just like web content. Today we have over 50 billion app links, and growing. App indexing is available on Android, and we’re piloting it on iOS. It’s a great way to drive installs and user engagement, and it’s free.
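For a sense of what app indexing looks like in code, here is a rough sketch using the Android App Indexing API of that era; the activity, URIs, and title below are hypothetical placeholders rather than anything shown in the keynote.

```java
// Rough sketch of the App Indexing API of that era: report a deep-link
// "view" when the user opens a piece of content, so it can surface in
// Google Search. The URIs and title are hypothetical placeholders.
import android.app.Activity;
import android.net.Uri;
import com.google.android.gms.appindexing.Action;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.common.api.GoogleApiClient;

public class RecipeActivity extends Activity {

    private GoogleApiClient client;

    private Action viewAction() {
        return Action.newAction(
                Action.TYPE_VIEW,
                "Egg Biryani Recipe", // title shown in Search
                Uri.parse("https://example.com/recipes/egg-biryani"),
                Uri.parse("android-app://com.example.recipes/http/example.com/recipes/egg-biryani"));
    }

    @Override
    protected void onStart() {
        super.onStart();
        client = new GoogleApiClient.Builder(this).addApi(AppIndex.API).build();
        client.connect();
        // Record the start of the content view.
        AppIndex.AppIndexApi.start(client, viewAction());
    }

    @Override
    protected void onStop() {
        // Record the end of the content view before disconnecting.
        AppIndex.AppIndexApi.end(client, viewAction());
        client.disconnect();
        super.onStop();
    }
}
```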
Another free tool we have is cloud messaging. Cloud messaging is the most popular way to send messages from the cloud to users’ devices, sending over 70 billion messages every day through over 600,000 apps. Today I’m going to talk about two significant improvements we’re releasing to cloud messaging. First, we’re expanding beyond Android and Chrome and broadening the platform to iOS. You can now use the same messaging infrastructure across all these platforms.
Secondly, we’re announcing the ability to subscribe to topics in cloud messaging. For example, using the NPR One app, a user who is interested in the TED Radio Hour can subscribe to notifications. With topics in cloud messaging, the client app subscribes to a particular radio show, and NPR can send a single notification that fans out across all subscribers. For you, this means less database management and fewer lines of code. We’re also making it easier to engage with users on the mobile web. Through Chrome, you’ll now be able to send OS-level notifications from your websites. Additionally, users who engage frequently with your site can add it to their home screens. These features bring the power of native development to the mobile web experience.
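Coming back to topic messaging for a moment, here is a minimal client-side sketch of what subscribing to a topic looked like with the Google Cloud Messaging APIs of that era; the sender ID and topic name are placeholders.

```java
// Minimal sketch of subscribing a device to a GCM topic, so a single
// server-side send to /topics/ted-radio-hour fans out to every
// subscriber. The sender ID and topic name are placeholders.
import android.content.Context;
import android.os.AsyncTask;
import com.google.android.gms.gcm.GcmPubSub;
import com.google.android.gms.gcm.GoogleCloudMessaging;
import com.google.android.gms.iid.InstanceID;
import java.io.IOException;

public class TopicSubscriber {

    public static void subscribe(final Context context) {
        new AsyncTask<Void, Void, Void>() {
            @Override
            protected Void doInBackground(Void... params) {
                try {
                    // Get this device's registration token (sender ID is a placeholder).
                    String token = InstanceID.getInstance(context).getToken(
                            "YOUR_SENDER_ID", GoogleCloudMessaging.INSTANCE_ID_SCOPE);
                    // Subscribe the token; the server can then notify all
                    // subscribers with one message to /topics/ted-radio-hour.
                    GcmPubSub.getInstance(context)
                            .subscribe(token, "/topics/ted-radio-hour", null);
                } catch (IOException e) {
                    // Network failure: retry with backoff in a real app.
                }
                return null;
            }
        }.execute();
    }
}
```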
So we’ve talked about ways you can engage users for free. Now let’s talk about app install ads. A lot of developers, particularly smaller ones, tell us that they want to start marketing their app but they don’t have dedicated teams, so we’re making it simpler with the introduction of universal app campaigns. All you have to do is set your budget and the cost you’re willing to pay per user, and we’ll set up an automated campaign with the right ad inventory across Search, AdMob, YouTube, and the new search ads we’re piloting in Google Play. This feature will be available in the Google Play developer console and AdWords in a few months.
Through Google Analytics, you can already track the effectiveness of your app install ads on Android. Today we’re adding support for iOS with over 20 ad networks, including InMobi and Millennial Media. We will continue to invest in delivering the best in cross-network and lifetime-value attribution. We also believe that you should have a choice in the tools that you use, and we think the entire mobile industry is stronger when we have transparent, open, and reliable tools for measurement. We remain committed to our integrations with partners such as Tune and Kochava, and are proud to have their support in our open approach.
Now let’s turn to Android developers. We frequently hear from you that you love the Google Play developer console. It’s a great tool for publishing your app, and we want it to be an even better tool for you to acquire and engage users. Now, for the first time, you’ll be able to see how many people are looking at your listing and making purchases, in addition to how many installed your app. This gives you a snapshot of the entire conversion funnel. You’ll also be able to see where your most valuable users come from, across organic and paid traffic. And now that you have access to this data, you’re going to want to use it to make your listing even better. Starting today, you can run experiments on your Play Store listing. You can test out different versions of graphics and text and see which drives more downloads. We do all the number crunching for you.
In our pilot program, we were thrilled to see that developers such as Kongregate got a double-digit lift in their store listing conversion rates, and this created significant impact on their business. So if you have more than one app, you’ll also want to let users explore all of them. Starting today, we’re allowing you to create your own Google Play home page. Upload your graphics, explain what your company is all about, and pick a special app to feature. This gives you a single destination to promote all of your apps on the Play Store.
So we’ve talked about acquiring and engaging users, and for some of you, it’s not just about developing apps for fun. You’ve also got to pay the rent. Now, there are lots of ways to earn money, but today I want to specifically focus on AdMob, which helps you run ads inside of your app. We want monetization to be smarter for you, so we’ve integrated Google Analytics into AdMob. You can get insight into where your users are in the life cycle of your app. This information is already helping app developers make decisions about their in-app advertising. For example, if you have a gaming app, you can use Analytics to understand how long it takes for users to complete a level. You can show fewer ads to the users who finish the level quickly and show in-app promotions to those who need a little help. This will help you maximize your revenue in a way that protects your user experience.
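Here is an illustrative sketch of that idea, not Google’s implementation: record how long a level took via the Analytics SDK, and only request an interstitial for players who struggled. The ad unit ID, level names, and the 60-second threshold are hypothetical.

```java
// Illustrative sketch (not Google's implementation): report level
// completion time to Analytics, and only show an interstitial ad to
// players who took a long time. Ad unit ID and threshold are placeholders.
import android.app.Activity;
import com.google.android.gms.ads.AdRequest;
import com.google.android.gms.ads.InterstitialAd;
import com.google.android.gms.analytics.HitBuilders;
import com.google.android.gms.analytics.Tracker;

public class LevelEndHelper {

    private static final long STRUGGLE_THRESHOLD_MS = 60_000; // hypothetical cutoff

    public static void onLevelComplete(Activity activity, Tracker tracker,
                                       String levelName, long elapsedMs) {
        // Report the completion time so it shows up in Analytics.
        tracker.send(new HitBuilders.TimingBuilder()
                .setCategory("levels")
                .setVariable(levelName)
                .setValue(elapsedMs)
                .build());

        // Fast finishers get no ad; slower players see an interstitial.
        if (elapsedMs > STRUGGLE_THRESHOLD_MS) {
            InterstitialAd interstitial = new InterstitialAd(activity);
            interstitial.setAdUnitId("ca-app-pub-0000000000000000/0000000000"); // placeholder
            interstitial.loadAd(new AdRequest.Builder().build());
            // In a real app, wait for the ad-loaded callback before calling show().
        }
    }
}
```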
Mediation is a way to run other ad networks through the AdMob platform and create more competition in your app. Today we support over 40 ad networks as mediation partners, having added over 15 in the last year alone, and we’re proud to announce our newest partner, Tencent, one of the largest mobile ad networks in China. We think these features will give you the flexibility to make the right decisions for both your users and your app.
So today I’ve talked to you about the things we’re doing to make it easier for you to develop your app, engage your users and earn with Google. And while we continue to innovate, we’re working hard to nail the fundamentals. And with each new release you will see more efficient, easier to use APIs with better consistency on Android and iOS. We will also continue to simplify our offering and engineer our products to work better together. Everything I talked about today will be accessible through developers.google.com.
And now I’d like to invite Ellie up to talk about new ways users can discover your apps on Google Play.
Ellie Powers – Product Manager of Google Play
Thanks, Jason. So, I’m Ellie from the Google Play team, and Google Play continues to scale with the growth of the Android ecosystem. We’re helping users all over the world get the most out of their devices, and we’ve delivered 50 billion app installs in just the past 12 months. And we are now reaching more than a billion users every single day. So as people around the world have their very first experience with a smartphone each day, Play brings them the apps and content that make that experience magical. And today, Play is growing twice as quickly in markets like India, Southeast Asia, and the Middle East as in the rest of the world. And users on Play just love games. Many of you are already using Play game services to build multiplayer games and make your games more social. We’ve seen hundreds of millions of users connect to Play game services through your games, including more than 180 million new users in the past six months alone.
So if you’re an app developer, Play is how you reach new users, so the better and more personally relevant that we make the store, the more users come back to Play to install more apps like yours. And we know that you work hard to grow your user base. So at Play we also spend a great deal of time thinking about how to match users to your content in the best way possible to bring users to your apps. So with more than a billion users on Play, the Android ecosystem is extremely diverse, and it makes sense that the Play Store that you see is different from the Play Store that your friend sees. And we have found that using personalization doubles the likelihood that a user will install an app. And Search also plays a big role in helping people find what they want. In some cases the user searches for something very specific like recipes. So a list of the top recipe apps is perfect. But when someone makes a broader search, it’s not necessarily clear what the user really wants; right? And that same search results page just doesn’t cut it. We can be a lot smarter about helping the user find exactly the right app.
So now, we organize results for a search like shopping into intuitive groups like fashion and coupons. So this way, people can see and understand the full range of shopping apps that are available. And it really works. Organizing results in this new way not only delivers more installs but from a much wider range of developers than before. And with search results that are easier and more fun to explore, your apps are being seen by more users. We’re helping people find what they love and love what they find.
So now let’s talk about one particular group of people for whom finding the right content is just so important. Families. One-third of Android users in the U.S. are parents with kids aged 12 and under, and when you look around in this room, you see passionate Android developers creating apps and games for families and children, delivering little moments that help broaden the mind and inspire creativity. We know that parents only want the best for their kids, but they often don’t know how to find the great content that you’ve created for them.
So today, we’re launching a new family discovery experience that makes it much easier for parents to find high-quality, family-friendly apps, games, movies, TV shows, and books on Google Play. We’re introducing the Family Star. This gives parents an easy way to find the right content throughout the store, and you’ll find the star on all of our family-friendly content. So, for example, when you navigate to the family home, you can browse content based on age. A special badge will tell you which age group an app is designed for, and it signals to parents that they can trust the quality of your content. Top charts and searches from within the family home page are filtered: we only display the apps and games that have met the Designed for Families program criteria. This way, parents can feel comfortable with their content choices. And growing up, every kid has their favorite character from books or TV, right? So now you can use the popular character browser or the character badge to find them. Any Star Wars fans here today? I figured. That’s fantastic. You’ll have a lot of content to explore. And let’s find my favorite: Dora the Explorer. I am secretly a huge fan. And you’ll be taken to a dedicated page with lots of Dora content.
And finally, we’ve introduced more features to empower parents to make better decisions. We provide objective third-party content ratings for all apps. We have a new set of parental controls, we enable stronger password protection for in-app purchases and we label apps that are ad supported. We firmly believe that when parents have a better content discovery experience, developers win too. So whether you’re creating the next big favorite, helping kids explore the world or bringing beloved characters like Dora to life, we invite you to join us. Let’s make Android fantastic for families together.
Now I’m going to hand it back to Sundar. Thank you.
Sundar Pichai – SVP, Android, Chrome and Apps, Google
Thank you, Ellie. It’s great to see Google Play for families. As a father of two, I have had my share of Dora. And I hope all of you get to enjoy that experience as well. You’ve heard how developers can build amazing experiences on top of our platforms. For us we really want to enable the next generation of developers as well. A lot of us take for granted the skills we have in this room. You earlier heard about the next billion users coming online. It is important we empower others to become developers so that they can build the next set of experiences.
So we’ve been working with Udacity. Udacity is a leading provider of massively scalable online education. And so I am very excited to announce today that we are launching the Android Nanodegree. It is a six-month course for just $200 a month, and it covers the entire life cycle of Android development, including details like Google Play services, material design, et cetera. Google has invested over $4 million in developing this curriculum, and we hope many folks take advantage of this.
At this point, we are going to shift gears, take a look ahead, and talk about the future of computing. We have talked about how mobile is at the center of your digital experience, but there will be cases when you want computing to be much more immersive and in context for you. Not all of your computing experiences are going to involve looking at a small black rectangle in front of you. Think about the icebergs of Greenland: you want an experience as if you’re there, experiencing the real world in all its richness, depth, color, and glory. So we’ve been thinking a lot about how to bring the real world to users in a more immersive way. We’ve always cared about it. Google Maps has been doing Street View imagery, trying to capture the real world.
So we started our efforts here with Google Cardboard. The Cardboard team was a 20% project. We launched it at last year’s Google I/O, and they’ve been thinking hard about VR and how to get a much more immersive computing experience. To talk about that, I’m going to invite Clay Bavor onto the stage.
Clay Bavor – VP, Product Management, Google
As Sundar said, a year ago on this very stage, we introduced Google Cardboard. And our goal with Cardboard was to make virtual reality available to everyone. And so we started with a piece of cardboard and some Velcro, added some lenses and a rubber band. And amazingly enough, that was all you needed to turn your smartphone into a fully functional VR viewer. There has been incredible excitement about Cardboard ever since. What began as a single open design has turned into an entire ecosystem of manufacturers making Cardboard in all shapes and sizes. There are hundreds of apps on the Google Play Store that are compatible with Google Cardboard, including apps from folks like Jaunt and Vrse, who are actually here with us today.
And people keep finding new and creative uses of Cardboard and VR, campus tours, art shows. One guy even proposed to his girlfriend with the help of Cardboard. And I’m not actually totally sure how that worked. But the important thing is that she said yes. Today, I am proud to say, there are more than one million Cardboard viewers out there in the world. And it’s what we dreamed about when we folded our first piece of Cardboard, immersive experiences just like these for everyone. And we really couldn’t have done it without your help, everyone who folded your own Cardboard, who built an app, who filmed this video, thank you.
Today I’m really excited to give you a glimpse of what’s next for Cardboard and for our larger efforts in VR. There are three things I’d like to share. The first of those is an improved Cardboard viewer. The original viewer was great. But phones, it turns out, got a lot bigger in the last year. And so the new design fits phones with screens as large as six inches. The magnet button, if you remember, was a clever way to do input, but it didn’t work on every phone. So we replaced it with one that does, one that’s actually made out of cardboard.
And now instead of it taking 12 steps to assemble, it takes just three. So viewers are on sale today from partners. And, of course, if you’re here at Google I/O, just like last year, you’ll get one immediately after the keynote.
Okay. So it fits any phone, and the button works with every phone. But the software, the Cardboard SDK, needs to work with every phone, too. And so as of today, the Cardboard SDK for Unity will support both Android and iOS. So if you’re creating a VR experience and you want to bring it to everyone, we think Cardboard can help. That’s the Cardboard update: lots of manufacturing partners, hundreds of apps, over one million viewers. And it’s still just a piece of cardboard.
Second thing I want to share today is about how we’re bringing VR and its unique ability to take you other places and bringing that to someplace that’s pretty special. And that’s the classroom. Think about your favorite field trip growing up. There’s something amazing about visiting a place, seeing it up close, experiencing it with your own eyes. But, of course, the school bus, it can’t go everywhere. It can’t go to the moon, it can’t go to another country and back in a day, it can’t go to the bottom of the Pacific Ocean. But VR can help take you those places, which is why today we’re excited to announce Expeditions.
So Expeditions lets teachers take their classes on field trips to anywhere. Here’s how it works. So a box arrives with everything that you need to travel. Cardboard and phones for every student, a teacher tablet. And all of these devices are synchronized so that when the teacher chooses a place, the entire classroom jumps there together. And the response from students and teachers has just been incredible. Let’s have a look.
[Video Presentation]
So thanks to an incredible group of educators, hundreds of classes all around the world have already gone on Expeditions. And today, teachers who want to create their own, who want to bring Expeditions to their school, can sign up online.
Now, we’re also partnering with some amazing organizations, like the Planetary Society, the American Museum of Natural History, and the Palace of Versailles, to create new expeditions and bring those to schools around the world in time for back to school this fall. So that’s Expeditions, field trips to anywhere for every classroom.
The third and final thing I want to talk about is capturing and sharing these real-world experiences, like the Great Wall or a coral reef, in an entirely new way, one that looks and feels like you’re actually there. Because the world is filled with all of these awesome places and events, like Great Barrier Reefs and Golden Gate Bridges, and birthday parties, and mountaintops. But there’s a problem. If you can’t actually be there, if you want to go back to a place or a time, then your experience is pretty limited, because cameras kind of only capture the world like this. And it’s like watching a flat version of the world through a tiny little window. And if you want to capture something that’s truly immersive, there are really only a handful of very custom camera rigs in the world that will do the job. And even they have their limitations. We want to change that.
We want to put professional, previously impossible tools in the hands of any creator who’s motivated so that they can capture the world around them and then share it in a way that lets all of us jump to the top of that mountain, jump to anyplace or event on the planet, and experience those sights and sounds like we’re actually there. So today, we’d like to preview something that we call Jump.
Jump. Jump enables any creator to capture the world in VR video, video that you can step inside of, and make it available to everyone. It has three parts: a camera rig with very specialized geometry, an assembler which turns raw footage into VR video, and a player.
Start with the camera. So the rigs that we’ve built, they include 16 camera modules mounted in a circular array. And you can actually use off the shelf cameras for this if you want. And you can make the array out of basically any material. We’ve made one out of 3D printed plastic, one out of machined metal, and for good measure, of course, we also made one out of cardboard, and it worked. What’s critical is the actual geometry. And we spent a lot of time optimizing everything, the size of the rig, the number and placement of the cameras, their field of view, relative overlap, every last detail.
And — we seem to be losing the slides here — every last detail. And now, what we want to do is share what we’ve learned with everyone. So just like we did with Cardboard, we’re going to be opening up the camera geometry, with plans available to everyone this summer. So anyone who’s motivated will be able to build a Jump-ready camera. Now, of course, if you’re a pro and you’ve done filming with multiple cameras, you know that it’s kind of complicated. You need to synchronize recording, exposure control, and so on. And so we thought it would be good if someone who really knows how to build a great camera could help out. So we called our friends at GoPro. And today I’m excited to announce that GoPro plans to build and sell a Jump-ready 360-degree camera array.
Now GoPro of course has enabled people to capture some of the world’s most awesome experiences, including spherical content. They’re bringing their camera expertise to the Jump-ready rig, which will include shared camera settings, frame-level synchronization and other features that will allow all 16 cameras to operate as one. Here’s what it looks like.
So GoPro is actually here with us today, and they brought one of their rigs and it’s in our I/O sandbox if you want to check it out. So that’s the camera geometry.
Next up is what we call the assembler. And this is where the Google magic really begins. The assembler takes 16 different video feeds and uses a combination of computational photography, computer vision, and a whole lot of computers to recreate the scene as viewed from thousands of in-between viewpoints everywhere along the circumference of the camera array. And we then use these in-between viewpoints to synthesize the final imagery, stereoscopic VR video. Let me show you an example of how the assembler creates one single frame of VR video.
So first, we take the raw camera data and we do a rough alignment. Next, we perform a global color calibration and exposure compensation and things start to look a bit better. But if we zoom in, you’ll see there are still seams between some of the images. Like here. To fix those, our algorithms use information about the underlying structure of the scene to perform a three-dimensional alignment. And the 3D alignment works by compensating for the depth of different objects in the scene, like this. And it’s this understanding of depth that also enables us to create all of those in-between viewpoints which you can see here. Like this. It’s pretty cool. It’s a fundamentally different and more advanced approach than anything else we’ve seen. And unlike other solutions, you don’t see borders where the cameras are spliced together and you have beautiful, accurate depth-corrected stereo in all directions.
Now, we’ve actually built a bunch of these cameras and sent them to places all over the world, from the Google campus in Mountain View to Iceland to Japan. And we’ve captured some really beautiful places. I have to say, you’ve got to see these properly in VR, which you can do at our booth. But let’s see some footage here on stage.
[Video Presentation]
Again, wait until you see this footage in VR. Near things look near. Far things look far. And you can look all around you. It just feels like you’re there.
Now, assembling footage like this takes thousands of computers. And we want to make this processing power broadly available. So this summer, we’ll begin making the Jump assembler available to select creators worldwide. But this leaves one question: where are people going to watch this stuff? How are we going to make it so that anyone can experience it? And we’ve been working on something for that, too, and we call it YouTube.
So starting this summer, YouTube will support Jump. So if you want to experience VR video, all you need is the YouTube app, your smartphone, and some cardboard. Now, in the meantime, starting this week, you can try out basic, non-stereoscopic 360 content in YouTube and Cardboard. That’s Jump, an open camera design with a fully-integrated version from GoPro on the way, an assembler that turns raw footage into VR video with the help of thousands of computers and a player that everyone already has, called YouTube. All of this will be available this summer. And I am so excited to see what you all create.
And there you have it, a glimpse at our efforts in VR. Jump is about capturing the world’s places in VR video and giving everyone the chance to experience them. Expeditions, which lets teachers take their classes on field trips to anywhere. And then, of course, Cardboard, where we got started, the beginning of our journey. Cardboard is about VR for absolutely everyone. We hope you’ll come explore with us. Thank you.
Sundar Pichai – SVP, Android, Chrome and Apps, Google
It’s really exciting to see what we can do to bring the real world alive to users in an immersive way. Projects like this, bold approaches like this, are at the heart of how we approach problems for users. I often get asked: how do Search, Android, Photos, and things like this relate to each other? For us, it is about Google putting technology and computer science to work on important problems that users face, and doing it at scale for everyone in the world.
Take driverless cars. The reason we work on driverless cars is that driving is something people do every day. And in the U.S. just last year, there were over 33,000 deaths. That’s almost 100 people who die every day on roads. We really want to make a difference, and we think technology can do that. We started by driving a few Priuses around in parking lots. Our Lexus hybrid vehicles have driven more than 1 million miles autonomously, without a single accident having been caused by the self-driving car. We just announced last week that our next-generation prototypes are actually going to be driving around in Mountain View. And if you take a look at what the car sees when it is driving around, that is all the machine learning I talked about earlier. The purple boxes are other cars; the size reflects the size of the vehicle. The yellow you see appearing there is pedestrians. So the car is using computer vision to navigate accurately. It’s incredible computer science at work, and it makes a difference.
Project Loon is another bold project, where we have a plan to put balloons at the edge of space to provide connectivity to hard-to-reach rural areas, so that we can bring the next billion users online. Loon started as an experimental project, and we have made huge strides. Our balloons can stay up for over 100 days; the previous world record was 55 days, set by NASA. We today can deliver LTE speeds, 10 megabits per second, directly to handsets. We can cover areas four times the size of what we used to cover before, an area the size of Rhode Island. And we are very excited that now we can connect the balloons to one another so that we can reach farther from a single base station. We can navigate these balloons to within 500 meters of accuracy from over 20,000 kilometers away. We are actually testing Project Loon live. In New Zealand, working with Vodafone New Zealand, we provided live coverage for well over a day. And now we are expanding our partnerships with Telefonica and Telstra, and we are in discussions with telecom carriers who have over a billion subscribers in total. Projects like these, be it Search, be it Android, be it organizing your photos, be it taking an immersive trip to Greenland, are at the heart of what we try to do: putting technology to work to solve problems for everyone in the world. This is why I/O is so exciting for us. We get to share what we have been up to in the last year, and all of you developers get to go out and build amazing things on top of what we do. So I can’t wait to see what you build next year.
Thank you so much for joining us. Good luck. And see you next year.