
Google I/O 2014 – What’s New in Android (Full Transcript)

Speakers: Chet Haase, Dan Sandler

As part of the Google I/O 2014 talk sessions, Speakers Chet Haase and Dan Sandler discuss the latest developments in Android technologies and APIs and cover everything that’s new and improved in the Android platform…


Chet Haase – Senior Software Engineer at Google

Good afternoon, and welcome to the first session, right?

What’s New In Android? The session that I like to think of as the Android keynote for the people that couldn’t actually wake up that early. So congratulations for actually waking up this early. We’ll see how it goes.

Yes, well done. Give yourselves a hand. Absolutely. This is a talk that traditionally has been done by me and Romain Guy, who could not make it this year because we didn’t ask him to. Though we did get an appropriate stand-in for Romain. We found someone that can fake a decent French accent.

Dan Sandler – Software Engineer at Google (Android)

[Speaking French] Eiffel Tower.

Chet Haase: So with that, let’s introduce ourselves, because obviously you have no idea who we are. I am Chet Haase. I am on the UI Toolkit team in Android.

Dan Sandler: I’m Dan Sandler. I’m on the Android System UI team.

Chet Haase: That accent didn’t last very long.

Dan Sandler: It didn’t. I couldn’t.

Chet Haase: All right, so one of the questions that comes up — it just came up at lunchtime, actually, down in the cafeteria — is okay, so there’s an L release. What does L stand for? And I’m here to tell you — can we have like, a drum roll, or something? L if I know.

But for today, we are calling this the L Developer Preview release. We heard about this in the keynote, and we can see by the graphics on the screen that aren’t quite professionally done that it is not a final release. It is instead a preview release where things work pretty well, but it’s not done yet.

We’re hard at work finishing the L release. In the meantime, we’re exposing it to you so you can actually use it, get your apps running and happy on it, and, most importantly, send us feedback about what’s not working exactly perfectly so that we can nail that down by the time we ship it.

So today, we wanted to give a session talking about all the bits that are new in this preview release that you can get your hands on and play with, and there’s a lot of material in here. We’ll see how fast–

Dan Sandler: We have about six hours of material to cover in 45 minutes, so you’re going to have to hang on.

Chet Haase: So first of all, let’s start with graphics and UI, because I like to start with graphics and UI, and I usually like to end with that as well.

So we heard about the material design stuff in the keynote, and we wanted to touch on a couple of those elements in here. I also want to point out, I’ll give you references at the end of this section, about where to go for more information during the conference. In fact, one of the whole points of this session is to give you just a little bit more detailed info on all of the feature areas that we find interesting to talk about, and also the references to what other sessions and what other videos you should check out, or sandboxes that have further information, or where you can simply find Diane on the show floor if you want to ask her directly.

So in the material area, we have a new theme, we have some new widgets for you, and we also have some new APIs, which you can use at your option. The theme exposes new colors. There’s an idea in material design that all the assets are by default grayscale, and then they can be tinted. So it’s very easy to brand your application, or your application state, with colors, as opposed to baking the colors directly into the assets. So that’s much easier now.

There are new icons out there. Some of them are animated, part of the rich interactive experience that we have. With material design, we have touch feedback ripples. We’ll see a little bit more about those; they give the user a sense of interacting with the UI and knowing exactly what’s going on in the UI at all times. And also, activity transitions with shared hero elements. We’ll see a little bit more about that.

In the widget space, we have a couple of widgets that are very important. One of them is minor. It’s CardView. There’s not a lot there. It’s basically a container with rounded corners, and it’s raised up off the view hierarchy plane a little bit to give a shadowed look to it. This is not something that’s too hard to do in your own code, but having CardView there allows you to have this look and feel in a consistent way that other applications are using it as well.

RecyclerView is a little bit larger. If we can actually just do an informal poll of who has actually used ListView? Okay. If I can just count. Hang on. Okay, so that was basically everyone in the audience. Now if we can get a count of the people who have enjoyed that experience? I count two, which is actually one more than I expected. So you can think of RecyclerView as being ListView2. This is more extensible, more flexible. We have layout managers that you can plug in to get different layouts. It’s amazing. You can actually linearly lay out both vertically and horizontally. Incredible.

Dan Sandler: Absolutely.

Chet Haase: Because on the Android team, we think not only about Y, but also about X. So we have — we have a linear — why the groan? We have a LinearLayoutManager in there right now. We have some other layout managers that we’re working on that will come out with it, or you can write your own custom layout manager. There are also animations baked into it, some very simple add/remove animations right now. I don’t know if anybody has actually tried to implement animations in ListView. I know I personally have done several videos trying to explain how to do this nearly impossible task.


What we’d like is for that to simply be automatic, so we’ve started down the road for that. And both of these, most importantly, unlike a lot of the new APIs, which are obviously just part of the L release, are actually in the v7 support library. So you can use those —

Dan Sandler: How much did you pay them? We’re getting a lot of applause lines here.

Chet Haase: I actually don’t know what they’re clapping at. It has nothing to do with what I’m saying. Something else is going on–

Dan Sandler: World Cup.

Chet Haase: So you can use those in your material applications, in your L applications, but you can also use them in code for earlier releases as well. So have at it.
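For anyone who wants to try it, a minimal RecyclerView setup might look like the following sketch. The list id and MyAdapter are hypothetical names; the classes live in the v7 support library.

```java
import android.support.v7.widget.DefaultItemAnimator;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;

// Inside an Activity's onCreate(), after setContentView():
RecyclerView list = (RecyclerView) findViewById(R.id.list);  // hypothetical id
// LinearLayoutManager is the pluggable layout strategy: vertical here,
// or pass LinearLayoutManager.HORIZONTAL for the other axis.
list.setLayoutManager(new LinearLayoutManager(
        this, LinearLayoutManager.VERTICAL, false));
list.setAdapter(new MyAdapter());  // your RecyclerView.Adapter subclass
// DefaultItemAnimator provides the simple add/remove animations for free.
list.setItemAnimator(new DefaultItemAnimator());
```

Swapping the layout manager is the extensibility point: the same adapter works unchanged whichever layout strategy you plug in.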

Also, in the graphics area, we have real time soft shadows. We heard a little bit about that in the keynote. We’ll hear more tomorrow — in some sessions tomorrow. It’s the ability to give elevation to views to pop them up off the view hierarchy plane. Not only giving them elevation and Z value, and then allowing them to cast a shadow, a soft shadow based on that elevation, but also to draw outside their bounds.

One of the tricky parts about doing things like shadows is, or if you want to scale that view, well, then you need to tell the containment hierarchy of that view not to clip it. Well, giving it elevation pops it into– what you can picture is like an aquarium, a 3D volume that sits on top of the view hierarchy. And all of a sudden, you’ve got much more flexibility about how that thing is drawn, about how it’s clipped, and about the ordering with which it and its shadow is drawn in the hierarchy.
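As a sketch, giving a view elevation is essentially one call; the view id and the dp values here are arbitrary.

```java
// Lift the view into the 3D volume above the view hierarchy plane;
// the framework derives a soft shadow from the Z value.
View fab = findViewById(R.id.fab);  // hypothetical id
float dp = getResources().getDisplayMetrics().density;
fab.setElevation(8 * dp);       // resting elevation
// Transient lifts (e.g. while pressed) belong in translationZ,
// which adds to elevation rather than replacing it.
fab.setTranslationZ(6 * dp);
```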

We have animations. Yay, more animation stuff. The biggest one in this area is activity transitions — in particular, the ability to share elements between activities. So we’ve seen some work. I think it was last year at I/O there was an animation talk, and there was some “DevBytes” around this where we showed techniques for passing information between activities such that you could pass information about elements, and you could sort of fake this animation to look like it transitions seamlessly from one activity to another.

So that technique has been baked into the platform, so there’s a standard API way for you to say this is my shared element, or a set of shared elements, and you can pass those between activities, and they can share them. They can animate them between the activities. You can animate other items in and out between the activities, and you can customize the entire experience, making it all part of the material idea of making all the transitions seamless for the user as they go from state to state to state in their application, or in your application.
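A sketch of launching an activity with a shared hero element; DetailActivity, heroView, and the "hero" name are hypothetical, and "hero" must match the android:transitionName set on the corresponding view in both activities.

```java
import android.app.ActivityOptions;
import android.content.Intent;

// In the launching activity:
Intent intent = new Intent(this, DetailActivity.class);
ActivityOptions options = ActivityOptions.makeSceneTransitionAnimation(
        this, heroView, "hero");  // the shared element and its transition name
startActivity(intent, options.toBundle());
```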

Also, there’s new animation curve capabilities, both motion and timing curves, so you can have a much more custom path-based curve in the timing area. You can also move things in x and y along a curve, which is a little bit tricky. Possible, but tricky before.
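Both kinds of curves can be sketched in a few lines; the coordinates and control points below are arbitrary.

```java
import android.animation.ObjectAnimator;
import android.graphics.Path;
import android.view.animation.PathInterpolator;

// Motion curve: animate the view's x and y along a Path...
Path arc = new Path();
arc.quadTo(200, 0, 200, 200);  // curve from (0,0) toward (200,200)
ObjectAnimator mover = ObjectAnimator.ofFloat(view, View.X, View.Y, arc);
// ...and a timing curve: PathInterpolator takes cubic Bezier control points.
mover.setInterpolator(new PathInterpolator(0.5f, 0f, 0.5f, 1f));
mover.setDuration(300);
mover.start();
```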

And finally, there’s animated reveal capabilities. So you can reveal the next state of an activity or a view by having a circular reveal that exposes it over time. And I think there’s a video of some of this stuff. So this is sort of an epilepsy-causing animation here that I looped, just showing some of the shared element transition stuff where we’re popping the view in and out.
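That circular reveal can be kicked off with ViewAnimationUtils; a sketch, where panel is some previously invisible view:

```java
import android.animation.Animator;
import android.view.ViewAnimationUtils;

// Expand a circle from the center of the panel out to its far corner.
int cx = panel.getWidth() / 2;
int cy = panel.getHeight() / 2;
float endRadius = (float) Math.hypot(cx, cy);
Animator reveal =
        ViewAnimationUtils.createCircularReveal(panel, cx, cy, 0, endRadius);
panel.setVisibility(View.VISIBLE);
reveal.start();
```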

If you look closely, you can see a shadow that’s popped as we’re elevating it. And then as it goes back down to the view hierarchy, we launch the next activity and pass that view over as the shared element between these two separate activities. We have some new icon capabilities. There’s a couple of different ways of animating states in icons. One of them you see in the check boxes and radio buttons, the ability to basically animate key frames, or these images that represent an animation from one state to another.

And there’s another one called StateListAnimator, where when you go from one state to another, you can specify a custom animation that will animate properties over time. And then finally, we have touch feedback ripples, which gives the user indication of what’s going on in the UI when they press that button. It’s not simply going from unpressed to pressed, but it’s actually giving them information about the gradual state change that’s occurring, as well as possibly where that state change occurred.

So if we look at the video here, let’s see if — that is really hard to see on this screen. There’s some subtle ripples on the button down below, and you can see the ripples are actually emanating from the touch point that I had when I touched this beautiful button in my UI.

And then the check box up at the top, that’s one of the animated PNG animations that we have for the AnimatedStateListDrawable.

And RenderThread. So this is kind of an implementation detail, but it’s a really interesting one, so I’m going to talk about it anyway. And it’s also important, and probably increasingly important as we go forward, for performance.

One of the issues that we have with UI and graphics animations, and performance in UIs in general and Android, is that everything needs to stay on the UI toolkit thread, which means if you’re doing something silly like querying a web service on your UI thread, A, don’t, and B, you’re going to freeze your UI. And C, see A before. Don’t do that. But you can get yourself into these positions, in some cases necessarily, because that is an operation we need to perform in the UI toolkit thread, and therefore everything else happening halts.

A really great example of that is when you launch an activity: if the new activity is in the same process, we need to inflate it on the UI toolkit thread.

Well, in the meantime, if you’re running an animation that also needs to run in the UI toolkit thread, then that animation is going to stop while the activity launches. So we came along with the render thread technology to be able to break apart the two processes of rendering. There’s the creating the display list of what we actually want to render, and then there’s actually executing that display list and telling the GPU how to draw this stuff. And we broke these two apart.

So we create it on the UI toolkit thread, where it necessarily has to be, and then we pass that information over the wall to the render thread to let it actually execute and talk to the GPU on a separate thread. In particular, what we want to do is take these atomic animations and send them over so they can perform completely autonomously on the render thread so that now you’re not beholden to the state of the UI toolkit thread if you are inflating an activity or doing an expensive operation, because the animations can happen asynchronously at the same time.

So we’ve started down that path right now. There’s going to be more work going forward on that. A great visual example of that right now is the touch feedback ripples, which happen on the render thread, and they happen completely autonomous of the UI toolkit thread, which is why when you click on something that launches a new activity, the ripple continues to actually animate while the new activity window is coming up.

There’s some important I/O talks where we go into a lot more gory detail in a lot of this stuff, so I would suggest that you check those out. Some of them are, of course, the material design talks themselves. They’re scattered throughout the conference, and I frankly didn’t look up the names and titles, so they’re not on this slide.

There are two sessions in particular that go into the more techie details. One is called Material Science – this is tomorrow morning at 11:00 — and the other is Material Witness. Material Science is an overview of sort of the entire space, kind of a deeper dive of everything I’ve just talked about. And Material Witness is a use case where Romain and I wrote particular apps using these APIs and then talk about how they were actually implemented and how the technology works. The sessions in your schedule right now probably have different names because we were withholding the material name until after the keynote, but the real names will be out there very soon. So check those out, and there’s also an I/O Byte on activity transitions in particular that you can check out as well.

In Support Lib, there’s the RecyclerView and the CardView stuff that I talked about. There’s also other capabilities, including palette capabilities for doing color sampling stuff. This was mentioned in the keynote. Matias was talking about that this morning. There’s RoundedBitmapDrawable. This comes into play in things like CardView. It’s very useful. ViewPropertyAnimator. This was done as more of an implementation detail of getting RecyclerView animations to work. And NotificationCompat is useful for Android Wear stuff.

And we’re onto WebView, where we have updated to Chromium, build M36, which enables various useful web standards, so you now have WebGL, and the other things listed on the slide. Check out the I/O Byte “What’s New In WebView” for more detailed information.

On the graphics side, there is an update to OpenGL ES 3.1 with new compute shaders and new shading language capabilities. We have bindings in the SDK as well as the NDK, and obviously it’s backward compatible with OpenGL ES 2.0 and 3.0, as these releases usually are. And you can use a <uses-feature> element in your manifest to require this version exactly.

The other important thing to mention was also mentioned by Dave Burke in the keynote: the Android Extension Pack. We basically collected a bunch of extensions that are really useful and powerful together and sort of bring the platform up to the current state of, say, console gaming hardware. All of these come as a bundle, and we’re working with partners to enable all of these extensions together. And there will probably be a mechanism in the future for you to ask for this particular capability, which basically gives you the whole sandbox of capabilities altogether. Lots of useful stuff in there, including tessellation, enhanced geometry shaders, and texture compression.

Next, the camera and audio space.

There’s a couple of talks I would suggest you go to for the actual details on this, but some image processing capabilities, also some audio data type buffering information that I couldn’t possibly address, because I don’t know. So I would suggest that you go to the talks instead.

And in the meantime, listen to Dan.

Dan Sandler: You can take a breath now.

Chet Haase: Yeah.

Dan Sandler: Right. So related to the audio is a whole new set of APIs to effectively replace RemoteControlClient. If you’ve ever built a media player and you’ve dealt with transport controls, you know about RemoteControlClient.

Here to the rescue is MediaSession and its friend MediaController. These are two new classes and a bunch of other support code in the platform to allow you to make multiple playback sources, and multiple transport controllers, and wire them all together. The nice thing about MediaController is that it works from a non-UI process, so you can be running this entirely in the background if you need to do control of an ongoing playback stream from there, extract metadata, things like that. And we use that in the system UI as well.

We’ll talk about that a little bit later in the talk. MediaSession hooks up to your playback stream and essentially handles those transport control requests in much the same way that you’re already accustomed to. And you’ll talk to the MediaSessionManager to work with those. Great new tools, all in the android.media.session package, so you want to check those out.
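A minimal sketch of wiring the two halves together; the "MyPlayer" tag is arbitrary, and the playback calls are placeholders.

```java
import android.media.session.MediaController;
import android.media.session.MediaSession;

// In your playback service: create the session and answer transport requests.
MediaSession session = new MediaSession(this, "MyPlayer");
session.setFlags(MediaSession.FLAG_HANDLES_MEDIA_BUTTONS
        | MediaSession.FLAG_HANDLES_TRANSPORT_CONTROLS);
session.setCallback(new MediaSession.Callback() {
    @Override public void onPlay()  { /* start your player */ }
    @Override public void onPause() { /* pause your player */ }
});
session.setActive(true);

// Anywhere else, given the session token (even from a non-UI process):
MediaController controller = new MediaController(this, session.getSessionToken());
controller.getTransportControls().play();  // routed to the Callback above
```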

I will segue from here into all the other good stuff that’s in the framework, if it doesn’t fall under the green category.

Chet Haase: It’s the non-visual framework stuff.

Dan Sandler: That’s right, except that I’m sure there’s going to be some visual stuff here, like right here at the beginning. So: recent documents and tasks. You saw this in the keynote.

Our recent apps interface in the system UI is not just for apps anymore. We’re now encouraging your app to break apart different components of the experience into different tasks or documents. We sort of call this a more document-centric interface. So you see in the screen shot here we’ve got a couple of different web sessions, essentially web documents that are showing up as different cards in the recents experience. And so this gives the user a much greater ability to shift back and forth between different tasks on one device. Go ahead.

Chet Haase: Yeah, I was going to say, this came up in the keynote when they were talking about Chrome. There was a web session where they were talking about different tabs in the recents, and this is the capability that enables that.

Dan Sandler: That’s right. So you can start a new document at any time by throwing FLAG_ACTIVITY_NEW_DOCUMENT into that intent. You can also mark your activity that way in the manifest to say this is always going to start a new document. There are lots of other APIs that we didn’t have room for on the slide, so you should definitely check those out.

Also in the system UI, you can now do more with the status bar.

In KitKat, we introduced a translucent status bar, which was great for wallpaper experiences and things that wanted to go full bleed. The problem that we struggled with was that gradient that we put in there to protect the status bar and navigation icons didn’t work great in every situation. It’s not a lot of space. We didn’t want to overlap the app window. We didn’t know what was going to be in the app window region.

So in the L Developer preview, you’ll be able to see that you can do a lot of things with the status bar that you couldn’t do before, particularly change the color to something that matches your application’s color branding, and so forth. You can use a solid color there, or you can use a completely transparent status bar, and then it’s up to you to make sure that all the status bar icons, all the navigation bar icons still look great.

But as you can see here in the screen shot, that gives us the opportunity in Google Now Launcher to create a nice, long gradient that both protects the icon so the user can see them and gives you a really nice transition into the content. Check it out.
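A sketch of the new status bar coloring, inside an Activity; the color value here is arbitrary.

```java
import android.graphics.Color;
import android.view.Window;
import android.view.WindowManager;

Window window = getWindow();
// Ask the framework to draw the system bar backgrounds for this window.
window.addFlags(WindowManager.LayoutParams.FLAG_DRAWS_SYSTEM_BAR_BACKGROUNDS);
// A solid color that matches your brand...
window.setStatusBarColor(0xFF3F51B5);
// ...or fully transparent, at which point protecting the status bar
// icons against your content is up to you:
// window.setStatusBarColor(Color.TRANSPARENT);
```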

You heard about power a bunch in the keynote, and we’ll talk a little bit about it here. There’s so much more to look into under the heading of Project Volta. The first thing you’ll see is that there’s a batterystats service that you can query from adb to get all kinds of gory details about where the power is going on the system and what kind of power events there have been, broken down by UID as well as the global story.

There’s a tool in the SDK called Battery Historian, which I believe Dave Burke mentioned, that gives you a traceview-like HTML graph as output. You feed it a bug report, and it will tell you essentially what happened to all of the power in that device over that session. It’s incredibly, incredibly useful. As a quick example of how to talk to these things: you tell batterystats to turn on, enable the full history, and clear out all the stats; then you take a bug report, stuff it into the Historian, and get out this great piece of graphics that you can use to understand where those coulombs went.
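The sequence just described can be sketched from a shell; the output filenames are arbitrary, and the script name follows the Battery Historian release.

```shell
# Enable the full wake history, then clear the accumulated stats
adb shell dumpsys batterystats --enable full-wake-history
adb shell dumpsys batterystats --reset

# ...run the device through the scenario you care about, then:
adb bugreport > bugreport.txt

# Feed the bug report to the Battery Historian script
python historian.py bugreport.txt > battery.html
```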

Another thing that helps with power, and this was also touched on a little bit in the keynote, is JobScheduler. We’re going to go a little bit deeper here. It’s pretty common that you’ve got an app that wants to do some work later — sometime in the background, sometime when it’s plugged in, sometime when you have access to an unmetered high speed network. So you can do all this today, right? You probably do in your app.

You wake up periodically, check the conditions, and then go back to sleep if the situation isn’t right. That’s not super efficient because you are waking up the device just to check to see if it’s a good time to wake up the device. JobScheduler is here to help. It wraps it all up into something that the system can do and essentially call back your job when certain conditions are met. And you see a little bit of code here.
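A sketch along the lines of the code on the slide; MyJobService is a hypothetical class extending android.app.job.JobService, and the job id is arbitrary.

```java
import android.app.job.JobInfo;
import android.app.job.JobScheduler;
import android.content.ComponentName;
import android.content.Context;

ComponentName service = new ComponentName(this, MyJobService.class);
JobInfo job = new JobInfo.Builder(42, service)  // 42: arbitrary job id
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)
        .setRequiresCharging(true)
        .build();
JobScheduler scheduler =
        (JobScheduler) getSystemService(Context.JOB_SCHEDULER_SERVICE);
scheduler.schedule(job);  // the system calls the service back when conditions hold
```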

The interesting part here is that you specify the capabilities you need, and then you just give it a component name, and you’ll get that component name called when the time is right. Very handy. Also in the framework, some improvements to the Storage Access Framework, which we introduced in KitKat in API 19. Previously if you wanted to pop up a file browser and ask for a directory, you couldn’t do it. Now you can.

Once an app receives a whole directory from the picker in this way, you can actually explore the entire subtree of documents and do whatever you need to do with that whole directory. Also in our framework are some new things around networking. A couple of different topics.

The first one is multi-networking. We talked a little bit about JobScheduler being handy for doing some work later under a particular kind of condition. What if you need to do work now? Your service is running now, your app is running now, but you need to open a socket on a link that has some particular feature: some carrier feature you need, it needs to have SMS capability, it needs to be unmetered, and so forth. ConnectivityManager now supports the ability to handle these network requests. You say requestNetwork, and you give it a callback. Inside that callback, you will find out when a network meeting your criteria becomes available, and then you can use that Network object to look up hosts and open sockets on that link specifically.

The other nice thing about the call back, and this is very cool, you get warnings, when possible, when the network is about to go down. This allows you to actually do a graceful handoff in your app from one link to another. Oh, you’re about to lose Wi-Fi because you’re leaving its range.
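The requestNetwork flow, including the about-to-go-down warning, can be sketched like this; the unmetered capability is just one example criterion.

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.Network;
import android.net.NetworkCapabilities;
import android.net.NetworkRequest;

ConnectivityManager cm =
        (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkRequest request = new NetworkRequest.Builder()
        .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
        .build();
cm.requestNetwork(request, new ConnectivityManager.NetworkCallback() {
    @Override public void onAvailable(Network network) {
        // A matching link is up: resolve hosts and open sockets on it
        // specifically, e.g. via network.getSocketFactory().
    }
    @Override public void onLosing(Network network, int maxMsToLive) {
        // Advance warning: rebuild the stream on another link now.
    }
});
```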

We’re just going to move this over, rebuild the stream on cellular, with no interruption to the user.

Also under the banner of networking is Bluetooth. In 4.3 we introduced Bluetooth Low Energy. In L, we are now adding peripheral device support to BLE, so you can provide services and scan for services the way that Bluetooth LE peripheral devices do that sort of thing. Check out the android.bluetooth.le package for more on that.

Finally, under networking, we have NFC changes. NFC has been a little hard to use for users, a little hard to use for developers. We’re trying to remove all those pain points. We’re now showing Android Beam in the Share menu to make it easier to go ahead and start an NFC transmission from the user standpoint.

From an app standpoint, I don’t know if you’ve ever had a situation where you’re trying to pop up a dialog saying now hold your phones closer together. Now you can actually specifically start a Beam operation anytime you want with NfcAdapter.invokeBeam.

Another thing that has been a little harder than it needs to be is creating a very simple piece of text to push over NFC. Now there’s a method on NdefRecord to let you do that. And there’s a bunch of new CardEmulation stuff that I can’t begin to explain because, as Chet explained, we don’t know what it is. So you should check that out.
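The two NFC conveniences just mentioned, the text-record helper and on-demand Beam, can be sketched together inside an Activity; the language code and message text are arbitrary.

```java
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.NfcAdapter;

NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
// New in L: build a plain-text NDEF record in a single call...
NdefRecord text = NdefRecord.createTextRecord("en", "Hello from L");
adapter.setNdefPushMessage(new NdefMessage(text), this);
// ...and bring up the Beam UI right now instead of waiting for a tap.
adapter.invokeBeam(this);
```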

And now as a sub head of the frameworks, we’re going to talk about notifications. There’s a lot to talk about here.

Chet Haase: It’s a visual side of the non-visual side.

Dan Sandler: The visual side of the non — that’s right. It’s layers. There’s a lot to talk about here. It’s so key to what we’re doing with L for the user. There’s a lot to talk about with notifications. Also, it’s an area of my expertise, and I have the clicker, so we’re going to talk about it for a while.

I’m going to look at four different categories, how they look, how they work. We’re going to talk about how that interacts with the lock screen and some privacy features we’ve added to make that work for everyone, additional metadata that every app needs to start putting into its notifications to make sure you’re ready for L, and then finally we’ll touch on Wear just a little bit. They’ve got great talks later in the program, but I wanted to just mention some of those interactions.

So first, the new style notifications in L. We should take a quick look back. These are the different versions we’ve had just in the last five years. We’ve had a little bit of evolution. I don’t know if you can see the differences there.

Chet Haase: We’re getting blacker and blacker — how—

Dan Sandler: It’s a dark time for the rebellion right there at the end. Here we have notifications in L. We’ve moved to the material style. Let’s go through and take a look at the pieces here. So first I want to show you that we are in fact using the material theme. We are in line with the rest of the material design system that we’re introducing for Android with L. We’ve got these card-shaped backgrounds that cast shadows.

The foreground is now dark text, dark icons, and the background is this light color. I should point out at this point, if you’ve been following the icon guidelines, there are no new icon guidelines again, by the way.

Trying to keep that going. But if you’ve been following the icon guidelines, you should now have small icons for notifications and for actions that are effectively masks. And in L, we’re actually going to treat them as masks: we’re just going to use them to draw in the correct color. As Chet was alluding to earlier, that’s something we’re trying to do more of in L.

So if you have any opacity baked into your icons right now, now is the time to get rid of it. Those should be full opacity, white on transparent icons everywhere.

The next thing I want to draw your attention to is this accent color. This really looks great and can really make a notification pop out and showcase, again, that color story that you may be trying to tell with your app. It’s just one method on the builder, setColor, and it fills that nice circle right behind the small icon inside the notification layout. I should point out that if you have a large icon, in the past we’ve kicked that small icon, that important symbolic representation of what the notification’s about, over kind of to the side, where nobody notices it.

Now we’re going to keep that color circle, we’re going to keep that visual story, and we’re going to shrink it down to a little badge that goes right in front of the large icon, so it’s always right where the user expects to see it. And then finally, it just bears mentioning that everything else you know and love about Android notifications is still alive and well and showcased great in the L Developer Preview: expanded views, action buttons, and, as always in Android, if what you need extends beyond what we’ve been able to come up with in our templates, custom RemoteViews are there for you and available.

Now, I will say that one of the things that people have said to us is, well, okay, that’s all fine and well, but I have to make a media player, and for media players, people expect transport controls, and then I have to go deal with custom remote views and all that pain. We’ve been suffering along with that basically since Honeycomb. No longer.

We have a new template for you, the first one since Jelly Bean: MediaStyle. Material design finally comes to media playback. If you use this template, you will opt into, essentially, the design created just for you by the Android UX team, something that will look great both on the lock screen and in the regular notification shade.

One other thing that you’ll notice about this is that the accent color fills the entire background of the card.

Chet Haase: It’s orange.

Dan Sandler: It is orange. I wanted to make sure they could see it from the cheap seats. What am I saying? This is Google I/O. There are no cheap seats. The accent color fills the entire card, and that helps it pop out when you’re looking at the lock screen. That’s where your media is coming from. For the first time, we’re going to let you actually use up to six action icons. In fact, you could always attach up to six, we just only show you the first three. If you’re using MediaStyle, you’ll get all six, and we draw them as little transport controls here, which is super handy. And in fact, we even let you use one or two of those in the compact form of the notification, again, to make sure that wherever you see that notification, it’s a great place to play and pause media.

There’s a custom ProgressBar that fits with the theme, but most importantly, you don’t need to use RemoteViews anymore just to do simple media playback. I mentioned a little bit earlier that there’s a MediaSession API.

There’s also a bit of the MediaStyle that allows you to attach your MediaSession token right into the notification, which is going to tell the system UI, hey, this thing is playing back media right now, which is going to be important as we start to do things like integrating more metadata from that track, more metadata from that ongoing playback, into the system UI.

So for example, if you take a look at the L Developer preview, you’ll see that album art is regrettably not showing on the lock screen right now. I was working on that on the plane. I couldn’t quite get it done in time. I’m sorry. But when we have it in L, it will use this connection to MediaSession to get that metadata right out of your ongoing play back in real time.

Very quickly I want to show you a wall of code, because everybody loves that. The important thing about this is just to note that most of these APIs here are the existing notification APIs that you know and love. They’re the five actions that are in the demonstration there. I’m setting the color, and then right there at the bottom we’re doing Notification.MediaStyle, which lets you attach the media token and then pick which of the actions to show in the compact form.
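In that spirit, a hedged sketch of the same shape of code; the PendingIntents, drawable resources, and color value are hypothetical, and session is the MediaSession from earlier.

```java
import android.app.Notification;

Notification n = new Notification.Builder(this)
        .setSmallIcon(R.drawable.ic_music)
        .setContentTitle("Track title")
        .setContentText("Artist")
        .setColor(0xFFFF5722)  // the accent color that fills the card
        .addAction(R.drawable.ic_prev, "Previous", prevIntent)  // action 0
        .addAction(R.drawable.ic_pause, "Pause", pauseIntent)   // action 1
        .addAction(R.drawable.ic_next, "Next", nextIntent)      // action 2
        .setStyle(new Notification.MediaStyle()
                .setMediaSession(session.getSessionToken())
                .setShowActionsInCompactView(1))  // pause in the compact form
        .build();
```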

Something else you saw in the keynote, it kind of went by quickly, and I really should have put a video in here. So it went by so quickly, it disappeared from this slide. Heads-up notifications are something that we’ve added to L to make it easier for important things to get in front of the user without actually taking you out of context.

So in the past, if a phone call comes in, you’re in the middle of playing a game, that activity is paused, that activity goes away. If it was a multiplayer game, that’s a great way to cause somebody’s game to freeze so that you can tag them or what have you. No longer. Heads-up notifications are the new way for important things to get the user’s attention without stealing focus away from the app. When that pops up, you can look at it, you can decide to ignore it, you can decide to swipe it away, you can decide to just push it back into the notification shade without swiping it away, or you can act on it by clicking one of the action buttons. We’re using this for things that the user needs to deal with, the user needs to see.

So high priority notifications will show up there. Notifications that involve people, we’ll talk about that in the metadata section. The same goes for notifications that buzz or make noise, or that would use the full-screen intent, which is the way, of course, that phone calls and alarms and things like that would take over your whole screen just from a notification.
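As a rough sketch, a notification that qualifies for heads-up treatment under those rules might be built like this; the icon resource, the strings, and answerIntent are all hypothetical.

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;

public class HeadsUpExample {
    // High priority, makes noise, and carries a full-screen intent: on L
    // this surfaces as a heads-up notification instead of stealing focus
    // from the foreground app the way a full-screen takeover used to.
    static Notification incomingCall(Context context, PendingIntent answerIntent) {
        return new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_call)        // hypothetical resource
                .setContentTitle("Incoming call")
                .setContentText("Jane Doe")
                .setPriority(Notification.PRIORITY_HIGH) // high priority...
                .setDefaults(Notification.DEFAULT_SOUND) // ...and it makes noise
                .setFullScreenIntent(answerIntent, true) // pre-L, this took over the screen
                .build();
    }
}
```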

Let’s take a little time and talk about the lock screen. So this is a big part of L. You saw this in the keynote. We heard you like notifications, so we put your notifications in your lock screen. It’s there any time you want to look at it. Why would we bother doing this? What is the point of this? Pre-L, the workflow of your phone going off, your phone buzzing in your pocket is you hear it buzz, you hear it ding, you reach in, you take it out of your pocket, you turn it on, unlock the screen, pull down the notification shade. By the time you’ve gotten through all those gauntlets, you’ve forgotten what you came there for, and there’s some– five other things– your email was sitting there before you pulled down your notifications, and you’re already distracted.

In L, you hear the phone buzz, you take it out and turn it on, and that’s it. It’s right there. It’s completely glanceable for everyone using an L device.

But what about privacy? So for many users, the design that we showed in the keynote, the design I just described, is a perfect trade-off. It’s okay for you to allow your notification content to show up on your lock screen because that’s super useful to you. Unauthorized users are still not able to get into your phone without authenticating, but that doesn’t work for everyone or every IT department, particularly in a bring-your-own-device scenario.

So we’ve introduced something in L called notification visibility. This is a new privacy feature for notifications specifically about how they interact with the lock screen. It lets apps specify what is safe to show where. It lets users specify whether they care about this level of privacy, and device policy can be involved as well.

So to explain this, I have to dig into the spheres of visibility. I need an echo, like a reverb for that. Spheres of visibility. This is the public sphere. This is all the things anyone can do with your phone. You can do it, Chet can do it if he picks up your phone, the person who picks up the phone at the bar when you’ve left it on the bar stool. These are all the things that you can do from the lock screen without authenticating. The circle inside, these are the private things. These are the things that only you can do with your phone, because you know how to get into it. You can authenticate. You can get past the lock screen and get into your notifications, your email, your apps, your games, your data, your pictures, and so forth.

So in the notification visibility world, we call this central– this ring of the things that only you are supposed to be able to see — visibility private. A notification that is marked visibility private is very much like an Android notification up until now. That is to say we don’t leak anything about it on a lock screen where the user has said that they care about this sort of thing.

All you show is the icon, and in fact, in L we also show the application name to help fill the space. If you set a notification to be visibility public, you’re saying this notification has nothing sensitive. It’s completely unobjectionable. It’s the weather, right? It’s your device is low on battery. This is not information that is personally sensitive. It’s safe to show on any lock screen, no matter whether the user is concerned with that level of security. There’s a little ring here that I didn’t mention before, which is the things that are private that you can kind of see on the lock screen.

I told you that in L, we continue Android’s tradition of showing notification icons, even if you haven’t authenticated. So one of the things that you can do now in L, I’ll show you an example in a minute, is say, if you’ve got a private notification, where the user doesn’t want to see all that sensitive information on the lock screen, you can provide a substitute, a public version that lets you provide a redacted form of that same notification for a sensitive lock screen.

We’ll see an example on the next slide. I do want to mention also that now that we’ve created all these circles, all these pretty builds in the slide, there’s one little spot that’s left, which is things that only you know are even there. We didn’t have an opportunity to do this in Android before, so we’ve created the third and final visibility level, which is visibility secret.


If you post a notification that is visibility secret, it doesn’t appear on the lock screen at all. So if it’s particularly sensitive and you would like to get notifications for it but you don’t really want anybody else to know that it’s installed, visibility secret is for you. Okay, here’s that promised example. So let’s say the user has said, this is important to me. They’ve set up a pattern or PIN on their phone, and they’ve said, when the device is locked, I want to hide the sensitive notification content from the lock screen.

If you have a notification that is visibility public, this is what you see on the lock screen. I may or may not have overseen that on a lock screen somewhere. Anybody can see this notification whether they’ve authenticated or not, whether the user cares about security or not. This is safe for everyone.

Chet Haase: I don’t think so.

Dan Sandler: Well, so this is why you would say, maybe this is something that’s visibility private. A chat app is the sort of thing that maybe is supposed to be personal.

Chet Haase: Should have been.

Dan Sandler: Should have been. Really. You should think about that next time. So if you have a visibility private notification on a lock screen where the user has said hide sensitive notification content, this is the experience you’ll see by default under the L Developer preview — icon, app name, nothing else.

So if you as an app want to provide a better experience, you can provide that redacted version. You can create a public version of the notification, you say setPublicVersion, you construct a whole new notification object that is a substitute for that one to be shown only on the lock screen. And so in this case, we’ve corrected, we’ve changed the app name a little bit. We’ve added an exclamation mark because that makes people feel cool. And we’ve actually given the user some interesting information. You have a new message. It’s not simply something is happening from this app, which could be I have a sync problem, or you need to pay $5 or whatever, it’s actually giving you a little bit of information, but it’s not sensitive information. So it’s compatible with the user’s wishes.
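In code, that redacted substitute might look something like the following sketch; the resource, app name, and message strings are made up for illustration.

```java
import android.app.Notification;
import android.content.Context;

public class PrivateNotificationExample {
    static Notification build(Context context) {
        // Redacted version: safe for any lock screen. It says something
        // happened ("You have a new message") without leaking the content.
        Notification publicVersion = new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_chat)   // hypothetical resource
                .setContentTitle("SuperDuperChat!")
                .setContentText("You have a new message")
                .build();

        // Full version: shown only once the user has authenticated, or on
        // lock screens where the user allows sensitive content.
        return new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_chat)
                .setContentTitle("Alice")
                .setContentText("Meet me at the usual place?")
                .setVisibility(Notification.VISIBILITY_PRIVATE)
                .setPublicVersion(publicVersion)
                .build();
    }
}
```

Swap VISIBILITY_PRIVATE for VISIBILITY_PUBLIC or VISIBILITY_SECRET to get the other two behaviors described above.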

And then finally there’s that visibility secret, and those things wouldn’t show up at all. Now of course, if the user goes and says, well, when my device is locked, I actually don’t mind seeing all my notifications there, then you see the same thing. It essentially is every notification is public. We talked a little bit about metadata, about adding people and things like that. We actually talked about metadata in KitKat. We’ve started to introduce some new features to be able to stick more information onto notifications for the benefit of the system UI and any notification listeners that the user may have enabled.

Well, we’ve got more for you in the L Developer Preview. One of the reasons is that we want to do sorting a lot better. There’s limited space to show notifications on the lock screen. In the past, sorting things chronologically, maybe with a little bit of priority, it’s good — it’s not great. We’re doing better in L. So one of the things that you can do in L to make sure that notifications get sorted in the right way so that users are seeing the most important stuff first is to attach, first of all, a notification category.

We have a number of categories that are in the notification API. It’s essentially a global partition of notification space. If what you have doesn’t fit, you don’t have to set a category. But if your thing is an incoming voice call, or an incoming video call, or you’ve got an alarm clock app, a highly timely thing, then you can tag your notifications with that category and make sure that the system UI knows what kind of thing it is.

And as we improve L, we will be able to do more things to sort of mute things that are not relevant to the user’s current context. We’ll use that category to do it. There’s a new extra key for the notification extras called EXTRA_PEOPLE. This is for you to be able to say this notification relates to a person that the user might care about. What do you put in here? You put in a URI from the contacts provider, or you put in a telephone number if that’s all you have and you’re not integrating with Android contacts, or you put in an email address.

Whatever you have, you put in EXTRA_PEOPLE, and then system UI can take a look at that and say, hey, that involves somebody. I might want to boost that up. And the other stuff is still important. The timestamp, whether there’s a fullScreenIntent, priority, whether it makes noise, all these things are going to give the system UI new tools to be able to know what things are most important to show to the user first. Most importantly, though, for developers is to give that user that control. Let the user turn off notifications, let the user change the properties of notifications. You don’t want to run the risk of the user banning notifications using the system Settings app, a very popular feature that we introduced in Jellybean. And when you do give users choices about notifications, when you give the users that activity that lets them configure everything, put that activity into your manifest with the new intent category listed on the slide, NOTIFICATION_PREFERENCES.
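Putting the category and EXTRA_PEOPLE pieces together, a sketch might look like this; the icon resource, names, and phone number are hypothetical.

```java
import android.app.Notification;
import android.content.Context;

public class RankingMetadataExample {
    static Notification incomingCall(Context context) {
        return new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_call)       // hypothetical resource
                .setContentTitle("Alice")
                .setContentText("Incoming call")
                // global category: tells system UI what kind of thing this is
                .setCategory(Notification.CATEGORY_CALL)
                // populates EXTRA_PEOPLE; a contacts-provider URI or a
                // mailto: address works here too if that's what you have
                .addPerson("tel:+15551234567")
                .build();
    }
}
```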

And we will actually link to that notification preferences page right from system Settings. So if you go into Settings, Notifications, you’ll be able to jump straight into the apps that have notification preferences that the user can control. Briefly I want to mention that Wear is key to the notification story in L and vice versa. It’s very much like the L lock screen. It’s super glanceable, except with Wear it’s already out of your pocket. Your phone app’s notifications appear on wearables automatically. They get bridged there. You’ve seen this in the keynote.

You saw this as part of the Wear unveiling a couple months ago. Notification.WearableExtender is the place where all the APIs exist for you to customize the appearance of that notification specifically on the wearable device. Split it up into multiple pages, group notifications together, and things like that. And as you get into more advanced Wear development and you’re developing apps for Wear itself, to run on the watch, you’ll see that it uses notifications there, too.
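A minimal WearableExtender sketch, with hypothetical resources and text, might add an extra page that appears only on the watch:

```java
import android.app.Notification;
import android.content.Context;

public class WearPagesExample {
    static Notification build(Context context) {
        // Second page: shown only on the wearable, after the main card.
        Notification secondPage = new Notification.Builder(context)
                .setStyle(new Notification.BigTextStyle()
                        .bigText("The full message body goes here, "
                                + "readable at a glance on the watch."))
                .build();

        return new Notification.Builder(context)
                .setSmallIcon(R.drawable.ic_message)    // hypothetical resource
                .setContentTitle("New message")
                .setContentText("Tap to read")
                // Wear-specific customization lives on the extender
                .extend(new Notification.WearableExtender()
                        .addPage(secondPage))
                .build();
    }
}
```

On the phone this behaves like an ordinary notification; the extra page is only surfaced when the notification is bridged to a wearable.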

In fact, the notification manager on the watch is the thing that manages those cards that you see in the wearable UI. So you’ll use the same old Notifications API to interact with Wear from on the wearable itself.

There are I/O sessions that you should definitely check out. Tomorrow at 10:00, the Android developers on the Wear team are going to take you through all these APIs in detail, and there’s an I/O Byte on YouTube about building these UIs as well. Okay, other important stuff.

Take it away, Chet.

Chet Haase: Okay, I took my breath. I’m back.

So other random stuff that we couldn’t actually find an appropriate bucket for. So here it is. ART, we heard about this in the keynote. You’ve heard about it before. It came out in KitKat as an optional run time, and now it is the runtime. I had thought previously, oh, it’s the one that’s enabled by default, and then I was corrected this week. It is the one.

In fact, Dalvik? What Dalvik? Everything is ART, and it’s a good thing. Faster runtime, the ability to actually pre-compile this stuff so that it’s running faster. A lot more intelligence going on in that runtime than we had before. And one of my favorites is the increased capabilities for garbage collection. So less frequent pauses and shorter durations of those pauses mean a better ability to actually hit your frame rate, particularly for animations.

If you had a GC pause of 10 milliseconds, which was sometimes common, unfortunately, in Dalvik, depending on what was going on in the system, that could just push you right over the boundary of a frame, and you’d skip a frame, and the user would see a hiccup in the animation.

Now if you have pause times of around 2 milliseconds, then it’s much more probable that you’re going to stay within that 16 millisecond boundary, and it’s not going to affect your frame rate or those animations. So faster, better, newer, cooler. Check it out, in particular go to the ART talk. That’s Thursday, tomorrow morning, at 10:00. And also, go to the Sandbox, and they’re giving an ongoing talk, GC and Jank in ART. Talk to them.

I think there’s also documentation that’s coming out on the web if it’s not there already. So lots of stuff to check out there. Oh, one of my favorite things too is the moving collector. The ability to actually move stuff around in the heap, which caused some of the delays that we saw before, that things couldn’t move. So then the heap would get fragmented, and then it got harder and harder to find space for things that you needed to newly allocate.

Well, now we can actually collect the heap and move stuff around when that app is backgrounded. So very powerful capability, linked with the ability to then take really large objects like bitmaps and put those in a set-aside space so that the really large objects aren’t taking up room in the common space where all the little tiny objects need to go.

So that’s part of the reason why we have much smaller pause times for allocations as well as collections, because it’s much faster for us to find the space that we need when we actually need it.

Android TV?

Dan Sandler: There’s a great talk about this, let’s see when it is. That’s this afternoon. Yes, this afternoon. All about Android TV. There’s an I/O Byte on YouTube. The message from the Android TV team is this– you saw it in the keynote– there’s going to be one app. You’re going to produce one app, and it’s going to run great on every device that calls itself Android — phones, tablets, TVs.

When your app is ready for that 10 foot experience they talked about in the keynote, there’s an additional intent category you can toss into your manifest to move your app up so that it shows up on that main panel, that main rail inside the TV launcher UI. You can still launch your app if it’s not there, but when it’s ready for that TV experience, that’s how you move it up.

More info at

Chet Haase: Enterprise, the main thing I’d say here is go to the Sandbox and attend the talk on frameworks for enterprise and device management, and Ben will give you more information about what’s going on for enterprise. They talked about it a little bit, Sundar was talking about it in the keynote, about managed profiles, the ability to actually have your IT department, let’s say, in a BYOD world actually manage what’s on the device. And then this division between the device owner, the person actually using the device, and the profile owner, which might be the organization that that user is a part of. And the ability to do that in enterprises is increasingly important.

There’s new APIs in DevicePolicyManager that enable this, and there’s also an app that they’re working on for pre-L releases that will enable some of these capabilities as well.

Android tools. I spoke to one of the people on the Tools team and got the brief take on exactly what’s new and exciting in the tool space. And then I said, okay, great, I will talk about it in the session. He said, actually, we’re going to talk about it in Thursday’s session, so please don’t. So go to their session tomorrow at 9:00 and learn what’s new in tools.

Play Services. There’s an excellent session coming on this afternoon. So we have the Wear Data API, much more information about what’s going on there. Games capabilities: Quests, as well as saved games in the cloud. So you can actually save your game state between devices much more easily.

New Drive capabilities, new Wallet capabilities, increased analytics for your apps. Much more that I didn’t want to get into this slide, and they can tell you much more about that at the session if you go there. And some other stuff that didn’t even fit into our other grab bag category.

Security. SELinux is now in enforcing mode. I would encourage you to go to the “Secure Development on Android” I/O Byte on YouTube and check that out. Get more information about that, as well as some other things in the encryption area that you can learn about. And also in the printing area, but I would say just in general, we have a new PdfRenderer capability that allows you to take PDFs and then render them as bitmaps. Very useful for printing, and we use it for the print preview, which is now part of the release. But you can use it in general if that’s a capability that you need in your application.
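A minimal sketch of the PdfRenderer capability mentioned above, assuming a PDF file supplied by the caller, might look like this:

```java
import android.graphics.Bitmap;
import android.graphics.pdf.PdfRenderer;
import android.os.ParcelFileDescriptor;

import java.io.File;
import java.io.IOException;

public class PdfThumbnail {
    // Renders the first page of a PDF into an ARGB bitmap at its
    // natural size; the File argument is supplied by the caller.
    static Bitmap renderFirstPage(File pdfFile) throws IOException {
        ParcelFileDescriptor fd = ParcelFileDescriptor.open(
                pdfFile, ParcelFileDescriptor.MODE_READ_ONLY);
        PdfRenderer renderer = new PdfRenderer(fd);
        PdfRenderer.Page page = renderer.openPage(0);

        Bitmap bitmap = Bitmap.createBitmap(
                page.getWidth(), page.getHeight(), Bitmap.Config.ARGB_8888);
        // RENDER_MODE_FOR_DISPLAY optimizes for on-screen preview;
        // there is also a RENDER_MODE_FOR_PRINT variant.
        page.render(bitmap, null, null, PdfRenderer.Page.RENDER_MODE_FOR_DISPLAY);

        page.close();
        renderer.close();
        return bitmap;
    }
}
```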

And most importantly, the preview SDK. What is it? When is it coming out? Tomorrow. So here are some URLs; /preview, actually, is where you can get more information about the preview release, as well as download the bits for building against and download system images for Nexus 5 and 7, minimally. And that comes out tomorrow, so please get started developing today, or — yeah, there’s the links. Tomorrow. Yeah.

Develop for the preview release of L today, tomorrow. And in the meantime, go to all the I/O sessions, enjoy those, and the I/O Bytes online are being posted as of today. And please, if you have issues, please submit them. The sooner the better, because the sooner you submit them, the better chance we’re going to have of actually knowing about them in time to fix them for the full-on L release.

Dan Sandler: And we ran out of time.

Chet Haase: We did run out of time.

Dan Sandler: We planned it perfectly. We have 17 seconds for Q&A. So actually, we’re going to take Q&A downstairs in the sandbox on the platform level of the second floor, right?

Chet Haase: Yes. We’ll be there right after this talk, and thanks for coming.

