Facebook CEO Mark Zuckerberg’s Keynote at F8 2017 Conference (Full Transcript)

We recently launched a new camera that made sharing photos on Facebook even more fun. Today we’re widely releasing the Camera Effects platform. This gives people, artists and developers powerful tools to create frames and effects. The Camera Effects platform actually has two tools. The first is Frame Studio. Frame Studio gives anyone with creative skills the ability to make fun photo frames to share on Facebook, and it’s available today globally. To see this through, we put Frame Studio in the hands of artists all over the world, and they built frames for their local communities, so when you visit a new place you can actually see the frames from the artists there.

The frames can make even everyday photos more engaging. I posted a photo of Bethany on her birthday recently. By adding a frame to it, this annual post becomes just a little bit more special. And now I can post a video with a personalized frame to make it even more engaging. And by adding an augmented effect, I can make her live birthday video even more meaningful.

This new content type brings AR to everyday life. It connects art and technology to create new immersive experiences, and these are just a few of the effects we’ve created for the camera so far. With the Camera Effects platform, artists and developers can create so much more.

And we’re excited to announce a second tool, AR Studio, that makes this all possible. Imagine you had to build all of these things on your own, and imagine the kind of large engineering and design team you would have to bring together to make that possible. AR Studio simplifies all of this. It allows you to create animated masks and interactive effects that respond to motion and data.

So let’s take a look at a couple of my favorites. This isn’t widely known, but I’m a gamer, and yes, my gamertag actually is [debanator]. Here’s an example from EA’s PC and console game Mass Effect: Andromeda, something I spend way too many hours on. It uses our camera to build an immersive experience to talk about my gameplay. Let’s take a look behind the scenes at AR Studio. With real-time face tracking you can layer 3D masks to fit any face, and you can have that mask actually respond to facial motion. So when a [mouse sensor had moved], you can have it automatically respond without writing a line of code. And when we flip the camera, you can see stats for my latest mission; we used the scripting API to create a leaderboard and dynamically place it in 3D space.
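A minimal sketch of how an effect script like this might respond to face tracking. The types and function names below are assumptions for illustration; the talk does not show the actual AR Studio scripting API.

    // Hypothetical face-tracking data and mask handle; AR Studio's real
    // scripting API is not shown in the talk, so these names are illustrative.
    interface FaceState {
      mouthOpenness: number; // 0..1, how far the mouth is open
      eyebrowRaise: number;  // 0..1, how far the eyebrows are raised
    }

    interface Mask3D {
      setScale(s: number): void;
      playAnimation(name: string): void;
    }

    // The platform would call this every frame with fresh tracking data,
    // so the mask responds to facial motion without hand-written frame logic.
    function onFaceUpdate(face: FaceState, mask: Mask3D): void {
      mask.setScale(1 + 0.2 * face.mouthOpenness); // the mask grows as the mouth opens
      if (face.eyebrowRaise > 0.8) {
        mask.playAnimation("surprised");           // trigger an animation on raised eyebrows
      }
    }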


On top of that, you can pan the phone and, using sensor data, actually experience the game visuals in augmented reality. EA used AR Studio to make this engaging effect possible. For you soccer fans, or as the rest of the world says, football fans, I would love to show you these effects from Manchester United. They bring real-time data from an actual match that’s happening and add it to your video. So when Manchester United scores, you see the goal effect come up, you hear the cheering, there’s confetti. So imagine having this effect available for your favorite sports team in their next game.
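A sketch of how live match data could drive an effect like this, assuming a hypothetical event feed and scene interface; none of these names come from the talk.

    // Hypothetical live-match event and effect-scene types, for illustration only.
    interface MatchEvent {
      type: "goal" | "kickoff" | "fulltime";
      team: string;
      minute: number;
    }

    interface EffectScene {
      showBanner(text: string): void;
      playSound(name: string): void;
      emitConfetti(): void;
    }

    // When the feed reports a goal, overlay the banner, the cheering and the confetti.
    function onMatchEvent(event: MatchEvent, scene: EffectScene): void {
      if (event.type === "goal") {
        scene.showBanner(`GOAL! ${event.team}, ${event.minute}'`);
        scene.playSound("crowd_cheer");
        scene.emitConfetti();
      }
    }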

This even works on live video. Giphy is a way to express yourself through images. They say an image is worth a thousand words, and an animated one must be worth 10,000. So you can take this effect, and it does something really interesting: it actually responds to interactions from viewers. So when someone types a hashtag or votes in a poll, you can actually make the video itself respond. Giphy is making this available soon, so make sure you try it out.
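One way to picture an effect reacting to its audience, again with hypothetical names standing in for whatever interface Giphy and the platform actually use:

    // Hypothetical viewer interactions on a live video; illustrative only.
    type ViewerInteraction =
      | { kind: "hashtag"; tag: string }
      | { kind: "pollVote"; option: string };

    interface LiveEffect {
      spawnSticker(name: string): void;
      updatePollBar(option: string): void;
    }

    // Each interaction from the audience changes what appears in the video itself.
    function onViewerInteraction(event: ViewerInteraction, effect: LiveEffect): void {
      switch (event.kind) {
        case "hashtag":
          effect.spawnSticker(event.tag);     // e.g. a typed hashtag floats on screen
          break;
        case "pollVote":
          effect.updatePollBar(event.option); // the on-screen poll updates live
          break;
      }
    }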

This is what we’re releasing today with AR Studio, but it is the very first step in a long-term journey, and we’re just getting started. So let’s take a look at the future together. Mark painted a picture of how AR works on the camera, and Schroep talked about taking the amazing power of AI technology and adding it to that. Now I’m going to share with you a vision of how we’ll realize this with the developer community, so that you can code against the real world.

Imagine you’re at a local coffee shop. When people see the world, they can intuitively understand exactly what’s happening in the scene. When a camera sees the world, it sees things in 2D; it takes input and turns it into pixels. That means making AR experiences is really hard to conceptualize and bring to life. But AI changes that. It turns 2D imagery into three-dimensional structured data that developers can build against. AI actually creates rich context so you can understand the surroundings. So in this case, we’re using a deep neural network to infer that you’re indoors and in a dining establishment, so we tell you you’re inside of a restaurant. Then, using AI, we can identify the surfaces in the scene and turn them into structured data. We do this using plane detection, where we look for flat surfaces, and we do all of this in real time. So you can see we’re identifying the floor, the wall, maybe the front of the counter.
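To give a feel for what that structured data could look like, here is a sketch; the shape of the data is an assumption, not Facebook’s actual schema.

    // Assumed shape for scene-understanding output; not an actual Facebook schema.
    interface Plane {
      label: "floor" | "wall" | "surface"; // what the flat region appears to be
      corners: [number, number, number][]; // corners of the region in camera space
    }

    interface SceneUnderstanding {
      setting: string;    // e.g. "restaurant", inferred by a deep neural network
      confidence: number; // 0..1
      planes: Plane[];    // flat surfaces found by plane detection, per frame
    }

    function describeScene(scene: SceneUnderstanding): string {
      const surfaces = scene.planes.map(p => p.label).join(", ");
      return `Setting: ${scene.setting} ` +
             `(confidence ${scene.confidence.toFixed(2)}); surfaces: ${surfaces}`;
    }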


We can take this a step further by actually recognizing and identifying objects and people in the scene itself. As Schroep mentioned, we train our models on billions of pictures to help us match items to known objects. So we can tell you where the coffee grinder, the plants and the people are, and we also share confidence levels so you can take them into account.
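Because every detection carries a confidence level, an effect would typically filter on it before acting. A minimal sketch with an assumed record shape:

    // Hypothetical detection record; the talk only says detections carry confidence.
    interface Detection {
      label: string;      // e.g. "coffee grinder", "plant", "person"
      confidence: number; // 0..1
    }

    // Keep only the detections the experience trusts enough to build on.
    function filterDetections(all: Detection[], minConfidence = 0.7): Detection[] {
      return all.filter(d => d.confidence >= minConfidence);
    }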

Now imagine all the possibilities if you could take the structured data from any scene and combine it with your code and our tools. This will enable you to create augmented reality experiences that people can interact with in the real world, in real time. Here are some examples of what you could build. Imagine you could allow people to leave directions or notes for their friends. Or you could give people more information about where they are. Or let them drop digital objects that other people can find and interact with right in that spot. All of this is possible because the camera turns the scene into structured data; you add your code and you can build this experience. We will enable this in our SDK and our tools in the months to come.
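For instance, leaving a note pinned to a real-world spot might look roughly like this, assuming a hypothetical anchor type derived from the scene’s structured data; no such API is named in the talk.

    // Hypothetical world anchor and note store; illustrative only.
    interface WorldAnchor {
      position: [number, number, number]; // a point taken from the scene's structured data
    }

    interface PlacedNote {
      anchor: WorldAnchor;
      author: string;
      text: string;
    }

    const notesAtThisPlace: PlacedNote[] = [];

    // Anyone who points their camera at this spot later could find and
    // interact with the note.
    function leaveNote(anchor: WorldAnchor, author: string, text: string): void {
      notesAtThisPlace.push({ anchor, author, text });
    }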
