Google I/O 2014 – What’s New in Android (Full Transcript)

Speakers: Chet Haase, Dan Sandler

As part of the Google I/O 2014 talk sessions, speakers Chet Haase and Dan Sandler discuss the latest developments in Android technologies and APIs and cover everything that’s new and improved in the Android platform…


Chet Haase – Senior Software Engineer at Google

Good afternoon, and welcome to the first session, right?

What’s New In Android? The session that I like to think of as the Android keynote for the people that couldn’t actually wake up that early. So congratulations for actually waking up this early. We’ll see how it goes.

Yes, well done. Give yourselves a hand. Absolutely. This is a talk that traditionally has been done by me and Romain Guy, who could not make it this year because we didn’t ask him to. Though we did get an appropriate stand-in for Romain. We found someone that can fake a decent French accent.

Dan Sandler – Software Engineer at Google (Android)

[Speaking French] Eiffel Tower.

Chet Haase: So with that, let’s introduce ourselves, because obviously you have no idea who we are. I am Chet Haase. I am on the UI Toolkit team in Android.

Dan Sandler: I’m Dan Sandler. I’m on the Android System UI team.

Chet Haase: That accent didn’t last very long.

Dan Sandler: It didn’t. I couldn’t.

Chet Haase: All right, so one of the questions that comes up — it just came up at lunchtime, actually, down in the cafeteria — is okay, so there’s an L release. What does L stand for? And I’m here to tell you — can we have like, a drum roll, or something? L if I know.

But for today, we are calling this the L Developer Preview release. We heard about this in the keynote, and we can see by the graphics on the screen that aren’t quite professionally done that it is not a final release. It is instead a preview release where things work pretty well, but it’s not done yet.

We’re hard at work finishing the L release. And in the meantime, we’re exposing it to you to actually use, get your apps running and happy on it, and most importantly, to send us feedback about what’s not working exactly perfectly so that we can nail that down by the time we ship it.

So in the meantime today, we wanted to give a session talking about all the bits that are new in this preview release that you can get your hands on and play with, and there’s a lot of material in here. We’ll see how fast–

Dan Sandler: We have about six hours of material to cover in 45 minutes, so you’re going to have to hang on.

Chet Haase: So first of all, let’s start with graphics and UI, because I like to start with graphics and UI, and I usually like to end with that as well.

So we heard about the material design stuff in the keynote, and we wanted to touch on a couple of those elements in here. I also want to point out, I’ll give you references at the end of this section, about where to go for more information during the conference. In fact, one of the whole points of this session is to give you just a little bit more detailed info on all of the feature areas that we find interesting to talk about, and also the references to what other sessions and what other videos you should check out, or sandboxes that have further information, or where you can simply find Diane on the show floor if you want to ask her directly.

So in the material area, we have a new theme, we have some new widgets for you, and we also have some new APIs, which you can use at your option. The theme exposes new colors. There’s an idea in material design that all the assets are by default grayscale, and then they can be tinted. So it’s very easy to brand your application, or your application state, with colors, as opposed to baking the colors directly into the assets. So that’s much easier now.
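As a sketch of what that theming looks like in practice: in the L preview you inherit from the Material theme and override its color attributes, and the framework tints the grayscale assets for you. The style name and hex values here are placeholders; the attribute names are from the L preview.

```xml
<!-- res/values/styles.xml -->
<resources>
    <!-- Inherit from the new Material theme and re-tint it with brand colors. -->
    <style name="MyTheme" parent="android:Theme.Material.Light">
        <!-- Primary branding color, used for the app bar. -->
        <item name="android:colorPrimary">#3F51B5</item>
        <!-- Darker variant, used for the status bar. -->
        <item name="android:colorPrimaryDark">#303F9F</item>
        <!-- Tint applied to framework controls (checkboxes, ripples, etc.). -->
        <item name="android:colorAccent">#FF4081</item>
    </style>
</resources>
```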

There are new icons out there. Some of them are animated, part of the rich interactive experience that we have. With material design, we have touch feedback ripples — we’ll see a little bit more about those — which give the user a sense of interacting with the UI and knowing exactly what’s going on in the UI at all times. And also, activity transitions with shared hero elements. We’ll see a little bit more about that.

In the widget space, we have a couple of widgets that are very important. One of them is minor. It’s CardView. There’s not a lot there. It’s basically a container with rounded corners, and it’s raised up off the view hierarchy plane a little bit to give a shadowed look to it. This is not something that’s too hard to do in your own code, but having CardView there allows you to have this look and feel in a consistent way that other applications are using it as well.

RecyclerView is a little bit larger. If we can actually just do an informal poll of who has actually used ListView? Okay. If I can just count. Hang on. Okay, so that was basically everyone in the audience. Now if we can get a count of the people who have enjoyed that experience? I count two, which is actually one more than I expected. So you can think of RecyclerView as being ListView2. This is more extensible, more flexible. We have layout managers that you can plug in to get different layouts. It’s amazing. You can actually linearly lay out both vertically and horizontally. Incredible.

Dan Sandler: Absolutely.

Chet Haase: Because on the Android team, we think not only about Y, but also about X. So we have — we have a linear — why the groan? We have a linear layout manager in there right now. We have some other layout managers that we’re working on that will come out with it, or you can write your own custom layout manager. There are also animations baked into it. Some very simple add/remove animations right now. I don’t know if anybody has actually tried to implement animations in ListView. I know I personally have done several videos trying to explain how to do this nearly impossible task.
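A minimal RecyclerView setup might look like the sketch below; MyAdapter is a hypothetical RecyclerView.Adapter you would supply yourself.

```java
// Find the RecyclerView from the layout (R.id.list is a placeholder id).
RecyclerView recyclerView = (RecyclerView) findViewById(R.id.list);

// Plug in a layout manager -- VERTICAL here, but HORIZONTAL works too.
LinearLayoutManager layoutManager = new LinearLayoutManager(this);
layoutManager.setOrientation(LinearLayoutManager.VERTICAL);
recyclerView.setLayoutManager(layoutManager);

// MyAdapter is your own RecyclerView.Adapter implementation.
recyclerView.setAdapter(new MyAdapter());

// The default item animator provides the simple add/remove animations.
recyclerView.setItemAnimator(new DefaultItemAnimator());
```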


What we’d like is for that to simply be automatic, so we’ve started down the road for that. And both of these, most importantly, unlike a lot of the new APIs, which are obviously just part of the L release, these widgets are actually in the support library in V7. So you can use those —

Dan Sandler: How much did you pay them? We’re getting a lot of applause lines here.

Chet Haase: I actually don’t know what they’re clapping at. It has nothing to do with what I’m saying. Something else is going on–

Dan Sandler: World Cup.

Chet Haase: So you can use those in your material applications, in your L applications, but you can also use them in code for earlier releases as well. So have at it.

Also, in the graphics area, we have real time soft shadows. We heard a little bit about that in the keynote. We’ll hear more tomorrow — in some sessions tomorrow. It’s the ability to give elevation to views to pop them up off the view hierarchy plane. Not only giving them elevation and Z value, and then allowing them to cast a shadow, a soft shadow based on that elevation, but also to draw outside their bounds.

One of the tricky parts about doing things like shadows — or if you want to scale that view — is that you need to tell the containment hierarchy of that view not to clip it. Well, giving it elevation pops it into what you can picture as an aquarium, a 3D volume that sits on top of the view hierarchy. And all of a sudden, you’ve got much more flexibility about how that thing is drawn, about how it’s clipped, and about the ordering with which it and its shadow are drawn in the hierarchy.
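In code, elevation is just a property on View in the L preview; a minimal sketch, assuming `view` is some View you already hold:

```java
// Give the view a resting elevation so it casts a real-time soft shadow.
view.setElevation(8f);

// Or animate a temporary lift with translationZ, e.g. while pressed;
// translationZ is added on top of the base elevation.
view.animate().translationZ(16f).setDuration(150).start();
```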

We have animations. Yay, more animation stuff. The biggest one in this area is activity transitions — in particular, the ability to share elements between activities. So we’ve seen some work. I think it was last year at I/O there was an animation talk, and there was some “DevBytes” around this where we showed techniques for passing information between activities such that you could pass information about elements, and you could sort of fake this animation to look like it transitions seamlessly from one activity to another.

So that technique has been baked into the platform, so there’s a standard API way for you to say this is my shared element, or a set of shared elements, and you can pass those between activities, and they can share them. They can animate them between the activities. You can animate other items in and out between the activities, and you can customize the entire experience, making it all part of the material idea of making all the transitions seamless for the user as they go from state to state to state in their application, or in your application.
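A sketch of that standard API from the launching side — DetailActivity and heroView are placeholders, and the string "hero" must match the transitionName set on the corresponding view in the target activity:

```java
// Launch an activity with a shared "hero" element.
Intent intent = new Intent(this, DetailActivity.class);
ActivityOptions options = ActivityOptions.makeSceneTransitionAnimation(
        this,        // the current activity
        heroView,    // the view shared between the two activities
        "hero");     // transitionName, matched in the target layout
startActivity(intent, options.toBundle());
```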

Also, there’s new animation curve capabilities, both motion and timing curves, so you can have a much more custom path-based curve in the timing area. You can also move things in x and y along a curve, which is a little bit tricky. Possible, but tricky before.
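Both kinds of curve can be sketched briefly; the control points and coordinates here are arbitrary illustration values:

```java
// Timing curve: a custom cubic Bezier interpolator for any animator.
ObjectAnimator slide = ObjectAnimator.ofFloat(view, View.TRANSLATION_X, 0f, 300f);
slide.setInterpolator(new PathInterpolator(0.4f, 0f, 0.2f, 1f));
slide.start();

// Motion curve: animate x and y together along an arbitrary Path.
Path path = new Path();
path.quadTo(200f, 0f, 200f, 200f);
ObjectAnimator.ofFloat(view, View.X, View.Y, path).start();
```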

And finally, there’s animated reveal capabilities. So you can reveal the next state of an activity or a view by having a circular reveal that exposes it over time. And I think there’s a video of some of this stuff. So this is sort of an epilepsy-causing animation here that I looped, just showing some of the shared element transition stuff where we’re popping the view in and out.
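The circular reveal is a one-call API; a minimal sketch that grows the clip circle from the center of a previously invisible view:

```java
// Center the reveal on the view and grow to cover its far corner.
int cx = view.getWidth() / 2;
int cy = view.getHeight() / 2;
float finalRadius = (float) Math.hypot(cx, cy);

Animator reveal = ViewAnimationUtils.createCircularReveal(
        view, cx, cy, 0f, finalRadius);
view.setVisibility(View.VISIBLE);
reveal.start();
```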

If you look closely, you can see a shadow that’s popped as we’re elevating it. And then as it goes back down to the view hierarchy, we launch the next activity and pass that view over as the shared element between these two separate activities. We have some new icon capabilities. There’s a couple of different ways of animating states in icons. One of them you see in the check boxes and radio buttons, the ability to basically animate key frames, or these images that represent an animation from one state to another.


And there’s another one called StateListAnimator, where when you go from one state to another, you can specify a custom animation that will animate properties over time. And then finally, we have touch feedback ripples, which gives the user indication of what’s going on in the UI when they press that button. It’s not simply going from unpressed to pressed, but it’s actually giving them information about the gradual state change that’s occurring, as well as possibly where that state change occurred.
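The ripple itself is just a drawable you can set as a view background; a sketch, where @color/button_normal is a placeholder for your own content drawable:

```xml
<!-- res/drawable/button_bg.xml: the ripple plays in the theme's control
     highlight color on top of the ordinary background content. -->
<ripple xmlns:android="http://schemas.android.com/apk/res/android"
        android:color="?android:attr/colorControlHighlight">
    <item android:drawable="@color/button_normal"/>
</ripple>
```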

So if we look at the video here, let’s see if — that is really hard to see on this screen. There’s some subtle ripples on the button down below, and you can see the ripples are actually emanating from the touch point that I had when I touched this beautiful button in my UI.

And then the check box up at the top, that’s one of the animated PNG animations that we have for the AnimatedStateListDrawable. And render thread. So this is kind of an implementation detail, but it’s a really interesting one, so I’m going to talk about it anyway. And it’s also important, and probably increasingly important as we go forward, for performance.

One of the issues that we have with UI and graphics animations, and performance in UIs in general and Android, is that everything needs to stay on the UI toolkit thread, which means if you’re doing something silly like querying a web service on your UI thread, A, don’t, and B, you’re going to freeze your UI. And C, see A before. Don’t do that. But you can get yourself into these positions, in some cases necessarily, because that is an operation we need to perform in the UI toolkit thread, and therefore everything else happening halts.

A really great example of that is when you launch an activity: if it’s in the same process, we need to inflate that new activity on the UI toolkit thread.

Well, in the meantime, if you’re running an animation that also needs to run in the UI toolkit thread, then that animation is going to stop while the activity launches. So we came along with the render thread technology to be able to break apart the two processes of rendering. There’s the creating the display list of what we actually want to render, and then there’s actually executing that display list and telling the GPU how to draw this stuff. And we broke these two apart.

So we create it on the UI toolkit thread, where it necessarily has to be, and then we pass that information over the wall to the render thread to let it actually execute and talk to the GPU on a separate thread. In particular, what we want to do is take these atomic animations and send them over so they can perform completely autonomously on the render thread so that now you’re not beholden to the state of the UI toolkit thread if you are inflating an activity or doing an expensive operation, because the animations can happen asynchronously at the same time.

So we’ve started down that path right now. There’s going to be more work going forward on that. A great visual example of that right now is the touch feedback ripples, which happen on the render thread, and they happen completely autonomous of the UI toolkit thread, which is why when you click on something that launches a new activity, the ripple continues to actually animate while the new activity window is coming up.

There’s some important I/O talks where we go into a lot more gory detail in a lot of this stuff, so I would suggest that you check those out. Some of them are, of course, the material design talks themselves. They’re scattered throughout the conference, and I frankly didn’t look up the names and titles, so they’re not on this slide.

There are two sessions in particular that go into the more techie details. One is called Material Science – this is tomorrow morning at 11:00 — and the other is Material Witness. Material Science is an overview of sort of the entire space, kind of a deeper dive of everything I’ve just talked about. And Material Witness is a use case where Romain and I wrote particular apps using these APIs and then talk about how they were actually implemented and how the technology works. The sessions in your schedule right now probably have different names because we were withholding the material name until after the keynote, but the real names will be out there very soon. So check those out, and there’s also an I/O Byte on activity transitions in particular that you can check out as well.

In Support Lib, there’s the RecyclerView and the CardView stuff that I talked about. There’s also other capabilities, including palette capabilities for doing color sampling stuff. This was mentioned in the keynote. Matias was talking about that this morning. There’s RoundedBitmapDrawable. This comes into play in things like CardView. It’s very useful. ViewPropertyAnimator. This was done as more of an implementation detail of getting RecyclerView animations to work. And NotificationCompat is useful for Android Wear stuff.

And we’re onto WebView, where we have updated to Chromium, build M36, which enables various useful web standards, so you now have WebGL, and the other things listed on the slide. Check out the I/O Byte “What’s New In WebView” for more detailed information.

On the graphics side, there is an update to OpenGL ES 3.1 with new compute shaders and new shader language capabilities. We have bindings in the SDK as well as the NDK, and obviously it’s backward compatible with OpenGL ES 2 and 3, as they usually are. And you can use uses-feature in your manifest to specify this version exactly.
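That manifest declaration looks like this; the version value packs the major number in the upper 16 bits and the minor in the lower 16 bits:

```xml
<!-- AndroidManifest.xml: require OpenGL ES 3.1 (0x0003 major, 0x0001 minor). -->
<uses-feature android:glEsVersion="0x00030001" android:required="true"/>
```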

The other important thing to mention was also mentioned by Dave Burke in the keynote. It’s the Android Extension Pack. We basically collected a bunch of extensions that are really useful and powerful together and sort of bring the platform up to the current state of, say, console gaming hardware. And all of these come as a bundle, and we’re working with partners to enable all of these extensions together. And there will probably be a mechanism in the future for you to ask for this particular capability, which basically gives you the whole sandbox of capabilities altogether. Lots of useful stuff in there, including tessellation, enhanced geometry shaders, and texture compression. Next up: the camera and audio space.

There’s a couple of talks I would suggest you go to for the actual details on this, but some image processing capabilities, also some audio data type buffering information that I couldn’t possibly address, because I don’t know. So I would suggest that you go to the talks instead.

And in the meantime, listen to Dan.

Dan Sandler: You can take a breath now.

Chet Haase: Yeah.

Dan Sandler: Right. So related to the audio is a whole new set of APIs to effectively replace RemoteControlClient. If you’ve ever built a media player and you’ve dealt with transport controls, you know about RemoteControlClient.

Here to the rescue is MediaSession and its friend MediaController. These are two new classes and a bunch of other support code in the platform to allow you to make multiple playback sources, and multiple transport controllers, and wire them all together. The nice thing about MediaController is that it works from a non-UI process, so you can be running this entirely in the background if you need to do control of an ongoing playback stream from there, extract metadata, things like that. And we use that in the system UI as well.

We’ll talk about that a little bit later in the talk. MediaSession hooks up to your playback stream and essentially handles those transport control requests in much the same way that you’re already accustomed to. And you’ll talk to the MediaSessionManager to work with those. Great new tools, so you’ll want to check those out.
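A sketch of how the two classes pair up, assuming this runs inside some playback service of your own (the session tag and callback bodies are placeholders):

```java
// Create a session for your playback stream and accept transport requests.
MediaSession session = new MediaSession(this, "MyPlayerSession");
session.setCallback(new MediaSession.Callback() {
    @Override
    public void onPlay() { /* start your playback stream */ }

    @Override
    public void onPause() { /* pause it */ }
});
session.setActive(true);

// Elsewhere -- even from a non-UI process holding the session's token --
// a MediaController can send transport requests and read metadata.
MediaController controller =
        new MediaController(this, session.getSessionToken());
controller.getTransportControls().play();
```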

I will segue from here into all the other good stuff that’s in the framework, if it doesn’t fall under the green category.

Chet Haase: It’s the non-visual framework stuff.

Dan Sandler: That’s right, except that I’m sure there’s going to be some visual stuff here, like right here at the beginning. So recent documents and tasks. You saw this in the keynote.

Our recent apps interface in the system UI is not just for apps anymore. We’re now encouraging your app to break apart different components of the experience into different tasks or documents. We sort of call this a more document-centric interface. So you see in the screen shot here we’ve got a couple of different web sessions, essentially web documents that are showing up as different cards in the recents experience. And so this gives the user a much greater ability to shift back and forth between different tasks on one device. Go ahead.

Chet Haase: Yeah, I was going to say, this came up in the keynote when they were talking about Chrome. There was a web session where they were talking about different tabs in the recents, and this is the capability that enables that.

Dan Sandler: That’s right. So you can start a new document at any time by throwing FLAG_ACTIVITY_NEW_DOCUMENT into that intent. You can also mark your activity that way in the manifest to say this is always going to start a new document. Lots of other APIs that we didn’t have room for on the slide. You should definitely check it out. Also in the system UI, you can now do more with the status bar.
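The document flags just described might be used like this; NoteActivity is a placeholder for one of your own activities:

```java
// Launch a separate document/task in the recents list. Adding the
// multiple-task flag forces a fresh task even if one already exists
// for this activity.
Intent intent = new Intent(this, NoteActivity.class);
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_DOCUMENT
        | Intent.FLAG_ACTIVITY_MULTIPLE_TASK);
startActivity(intent);
```

The manifest equivalent is the android:documentLaunchMode attribute on the activity.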

