Oculus CEO Brendan Iribe delivered the keynote address at Oculus Connect 2014, the company's first ever developer conference, held in Hollywood on September 19, 2014. Nate Mitchell, Oculus VP of Product, and Michael Abrash, Oculus Chief Scientist, also spoke in this opening session. Below is the full transcript of the opening session.
Speakers:
Brendan Iribe – Oculus CEO
Nate Mitchell – VP Product
Michael Abrash – Oculus Chief Scientist
Operator: Ladies and gentlemen, please welcome CEO of Oculus, Brendan Iribe.
Brendan Iribe – Oculus CEO
Wow! Good morning and welcome to the first Oculus Connect Developer Conference. It’s awesome to see all of you out there. This is a really special moment and this is a special day for everyone.
Introduction
Today, just two short years after launching the Kickstarter, I truly believe that you will also join us in the belief that VR is finally ready. We have some very exciting things to tell you and show you.
We're gathered here at a time when we've thought about VR for decades. We dreamed of VR. It's been in books and movies. If you're a sci-fi enthusiast, this is the Holy Grail. This is the thing that you've always imagined would finally happen. It's not 2030, it's not 2050, it's 2014.
And today it is happening. Virtual reality is here. Just let that sink in. We thought about flying cars, maybe hoverboards. We thought about virtual reality and it’s now here.
There are more developers in the world creating virtual reality with dev kits than ever before. You guys – the community – made this happen. There are over 100,000 Rift developer kits shipped to more than 130 countries around the world. We launched two years ago – that's incredible.
I never expected it to go this quickly. None of us did.
Today you and the VR community have an opportunity to come together and take VR to the next level. That’s what this conference is all about. That’s what today is all about.
Now I hope you have a lot of fun while you’re doing this. We always pride ourselves and say at Oculus, while we’re on this crazy journey, this amazing journey, one of the most important, if not the most important parts of it, is to have freaking fun. We’re going out there; we’re changing the world. We’re realizing this dream. Let’s enjoy it. Let’s embrace it and let’s do it together as a community.
We're not 10, we're not 100. We're literally over 100,000. I think it's 130,000 now on our developer forum. That's incredible. There has never been a hardware platform with that many developer kits.
Together we have a chance to create something really special — something that will change lives for the entire future of the human race. It's going to be that impactful. It really is. I didn't believe it when we first started. I thought it was a neat science project by a 19-year-old. It was potentially one of the coolest experiences I'd ever seen. But I didn't realize just how important this was going to be for the future.
Think about it for a second. We've been looking at different realities, different fantasies on 2D surfaces for over 100 years, whether it's books, film, photos, video games — we all love video games — but it's always a 2D surface that you're trying to immerse yourself in. You're trying to believe that you're in that fantasy, in that world.
Luckily you look around and you’re reminded it’s okay, I’m not really there – until now. With virtual reality, when you put on the headset — and it’s good enough — and you’re going to see something very special today.
When it’s good enough, suddenly the back of your brain believes you are there. You now are reminding yourself, no, no, no, I’m not really there. It’s a pretty amazing experience. You start fighting this.
We went from trying to convince ourselves we're there, to trying to convince ourselves we're not there. So we're not all going to be running around in Call of Duty — you don't really want to be in Call of Duty. You don't want bullets going by your head. I have a feeling we'd all be crouched down in a bunker, really scared. I know I would be. But that's what virtual reality can bring. It is truly amazing.
Oculus Mission – Virtual Reality
Our mission is to transform gaming, entertainment and the way we interact. We truly believe this will happen. And yes, it did start with games and gaming and it is going to continue with games. Because for virtual reality to truly work, you have to create two virtual cameras that are in the exact place of your eyes. You have to trick the brain that you're there, with all the movement handled at the core in software and a 3D game engine. And it will continue to be. And it's us – the VR and gaming community – that are going to make that happen.
PC & Mobile VR
To do this and to really get out there and connect a billion people, two billion people in VR, we see two categories: PC and mobile VR. These each have trade-offs, advantages and challenges.
Today on PC, you get this awesome comfort and high fidelity. And you get the sense of presence, which is the magic of VR. Using features like positional tracking, high frame rates, low persistence, incredibly powerful GPUs, you can create unbelievable worlds. You can create believable worlds.
Again this is just the beginning. It’s only been two years. Think about the beginning of PCs and two years after Steve and Bill brought PCs to the world. Think about where we are now with VR. The next few decades are going to be incredible.
With mobile, the magic is accessibility, affordability and ultimately portability. Not having that cable tethered to your head, yanking at your head, is an awesome feeling. It is an awesome experience. Being able to pass around a set of VR glasses or goggles to anybody in the audience, anybody in front of you, is awesome. Being able to just share it, being able to take it with you, throw it in a backpack. It is very unique.
I was recently at a Hackathon where I was able to take the mobile VR headset and just hand it around and let dozens of people try it in a room. People were blown away. They never expected it to be so good, and I never expected you to be able to do that so easily. It really is awesome.
What Carmack and the team have pulled off on today’s mobile hardware is nothing short of incredible. But again now it’s up to you guys to carry that forward and create the content and make it a reality.
So long term we do see these two categories continuing to converge, overlap but ultimately complement each other. The cell phone is not replacing your laptop. It’s complementing it and you’ll be using it for a lot of different services just like mobile VR will continue to complement PC VR. We strongly believe it’s important to lead in both to do this right. And ultimately to connect billions of people in VR.
Oculus Prototypes – The Beginnings
When we launched Oculus, our prototypes were far from consumer-ready. The early duct tape versions, hot glue — it was early days. We then added plastic, made it a little more comfortable. Finally took it to China, built the DK1, shipped the DK1 — slightly terrifying moment. Then we created the HD prototype, Crystal Cove and finally DK2.
On the mobile side, which started a little bit later, we got to skip duct tape, hot glue, and go right to 3D printing. We started with the holder, we glued our DK1 sensor on and we started iterating as quick as we could.
Started using a few other DK1 parts and realized that it was probably better to just go straight to a partnership with Samsung to really make this happen.
It also became clear it needed some magic. We welcomed Carmack into the picture. I never expected it to be so good. It really is pretty incredible. And now we have early prototypes of Gear VR, which you've heard about.
So today we have DK2 on the PC, positional tracking, low persistence. It really delivers a great experience. It’s not quite there yet but it is very very good. And it’s a great place to start if you’re creating content on the PC.
We also have the Samsung Gear VR Innovator Edition, which we recently announced onstage with Samsung in front of millions of people. Just think about that. Two years after we launched the Kickstarter for an idea that pretty much never worked, one of the largest hardware companies in the world has announced it as an official product that they're going to be selling. A year or two from now it will be in every phone store out there.
Little scary, but it's going to be you guys creating the content to make this happen. We're all in this together. We really are. This is a community. This is not 10 or 100; it is 100,000. And all of this started with the original Rift DK1, being kick-started.
I'm really excited about the announcement that [Nural] made last night. We decided to give back to the community and fully open source the entire DK1 — all the hardware design, the firmware, the tracker, the PCB. I can't wait to see all the Chinese knockoffs. God, help us all.
But we do believe this is important. It’s important for the hackers and the makers out there to be able to pick this up, take that tracker, glue it on two different things, take some source code, change it, modify it. Ship it, make something awesome. All the source is on GitHub right now. Have fun, China!
Oculus History – How it all got started
So as we look back at the history of how this all got started, it's just awesome. It really is. It's still all kind of surreal.
I got a phone call around June 2012 saying that I needed to meet this guy named Palmer Luckey. And I thought, "Okay, Palmer Luckey", and they said, "Yeah. He is on to something awesome. It's virtual reality and it's finally going to work." And I thought, "Ah, VR has never worked. I'm too busy. I'm sorry".
Then I looked it up on the internet and saw some interesting news coming out of E3 and thought, okay, might as well go meet this Palmer Luckey. I wouldn't want to be the guy that misses the thing that changes the world. That would be bad. Pretty depressing.
So we packed up, grabbed Michael Antonov and Nate Mitchell. We were all working together at the time. We drove up to LA, booked a nice restaurant, a steakhouse, and invited Palmer Luckey to meet us for dinner.
In walks Palmer Luckey in an Atari t-shirt, shorts and his awesome flip flops, and my first thought is, "Wow, he is a lot younger than I expected". He looked 16, 17 at the time. I think he was 19. And I had a slight concern over whether they'd actually even seat us. They did, luckily. We got talking and we had this incredible evening, basically just listening to Palmer talk about the future of VR, his headset collection, his infectious vision for where this was all going to go.
We were still pretty skeptical but it was just awesome to hear him talk and we wanted to see more. We wanted to see the demo.
So a few weeks later, we met him on July 4 at a hotel in Long Beach. He lived in Long Beach at the time, in his parents' garage. So we meet him at this hotel and he walks in with this little tub — like the trays under your school desk — dangling wires and circuit boards, and we're looking at it going, "Okay, what – this is going to be interesting".
He puts it all together, turns it on and turns out the lights. At the time you had to hold the thing in your hand; it was like a viewfinder. But when you looked through those lenses, you saw this bold new world. You saw this incredible virtual world that was believable. It was a hint of it. It felt like it. It wasn't all there but it was there enough that it got us excited. We were hooked. We wanted to help.
This is something I think happens with everybody who tries VR. You see it, you believe and then you want to help. You want to get involved.
So within days, Oculus was born. The Kickstarter was launched and the journey had begun. Literally a few weeks after we met Palmer and saw this demo. This happened overnight. None of us expected this to go as fast as it went. Again it’s because of you guys, because of the community, because everybody rallied together.
So most of you know the story from here on now. You’re part of the story from here on now. We iterated on the hardware and the prototypes and you guys bought it. Thank you.
Really without your support, it just wouldn’t have happened this fast. So we worked across all these different disciplines. We had to recruit people for hardware, electronics. These are things we had never done before. We’re software programmers for the most part and Palmer is a hardware hacker. We had to get all these incredible disciplines together to make that Developer Kit really work.
What was awesome was everybody who was believing. Everybody wanted to sign up and get involved. We were able to recruit one of the most talented engineering teams of all time. I mean, Carmack and Abrash back together again. Pretty awesome. These are the guys that brought us 3D, and now this is the team – together with you guys, the community – that's going to go out and bring virtual reality and change the world.
When we finally shipped DK1, I can tell you it wasn't always smooth sailing. We really didn't know how the community would react to this. To be honest, we were pretty terrified. It had its issues. It was kind of like a lunchbox strapped onto your face, with 1990s graphics and a whole lot of motion blur.
It was early days; that’s for sure. But when you used it, you saw the potential. You got excited. Everybody did. YouTube really got excited. Grandma got excited. The community rallied and VR had a second hope, a second chance. It just hadn’t worked before. The timing wasn’t right. Hardware wasn’t ready. But now it was and the community was there.
Valve – “major contributor”
But the story wouldn’t be complete without mentioning another major contributor. It really is important. This is one of the most important parts of my journey along the way.
While we were working on iterating the Rift, getting the Developer Kit out, scrambling as fast as we could to ship you something, Michael Abrash, Atman Binstock and the Valve VR team were hard at work on R&D. They were working on trying to solve the elephant in the room: motion sickness.
In September 2013, Abrash called and said, "Brendan, we have something I think you're going to want to see". When Abrash calls and says that, you go. I jumped on a plane right away and headed off to Valve. I listened to Abrash, Atman and their crew talk about the different challenges and how they were approaching solving them. And it was incredibly fascinating. It was a little over my head but I was trying to grab it all at once.
We spent a few hours there. It was very exciting.
Finally, Abrash and Atman let me enter a room covered with black-and-white fiducial markers — VR wallpaper. And on the way in, Abrash said, "You know, no one has gotten sick yet. I'm curious to see what you think". I said, "Not even you?" And he said no.
And Michael and I are the most sensitive guys out there. Literally a few head turns and we're out. So I was intrigued but very skeptical, because it had still been hard for me to experience VR for long periods of time — or even short periods of time, to be honest.
So they strapped on this kind of clunky 3D printed headset, circuit boards exposed, dangling wires. It was really reminiscent of Palmer’s prototype. A little bit better. No hot glue or duct tape.
Then came this game-changing moment – a moment that I will absolutely never forget. When I knew VR was really going to work and it was going to work for more than just enthusiasts and nerds like us, like me. It was going to work for the entire world.
As I looked around, I felt great and I felt like I was there. All of a sudden, the switch in the back of my head flipped. Instead of thinking, wow, this is a really neat VR demo, bam I was in it and I believed I was there. And I started trying to convince myself, wait, wait, no this isn’t real. You’re still in that room with those markers on it.
After about 10-20 minutes, I even had to look up at one point just to make sure the world was still out there. So yup I’m still in that crazy room. Okay, I put it back down and suddenly I was somewhere else. That’s the magic of presence. I hadn’t felt it before until that moment, and it felt great.
I left the room more motivated than ever. I left the room feeling good, pretty unique. The bar had been set. This was it. This was what we had to deliver for the consumer VR to really work. You had to feel like you were there comfortably. You had to get presence.
We continued to work on the hardware, building all kinds of prototypes, ramping up as fast as we could, racing towards this goal. As soon as the prototypes were working, we'd either show them or, in many cases, ship them out to people — you guys, the community — and let you experiment with them and give us feedback.
We did our best to balance the trade-offs of moving fast without breaking too much. We still broke things sometimes. I think one of the best articles was "DK2 is a hot mess and that's okay". So we apologize for just how hot it is, but we are trying to ship things as fast as we can on this race towards presence. It's really important to get it out to you guys – the community.
So to make this work, we needed the best and brightest. We needed to ramp up; we needed to build the team that could really deliver on this. We couldn't sell VR wallpaper. We couldn't sell this dual-paneled, crazy 3D-printed headset. We had to sell something that was lightweight, that felt great, that just needed a single camera or two. Something simple, consumerizable.
So along the way we tried to balance what was the right path for us, how do we achieve this vision and really do VR right. So we decided after a lot of debate internally and really this was a decision that we didn’t take lightly. It was a decision that we felt was the best for the future of VR. I can only imagine what Andy Rubin thought was best for the future of Android.
Facebook Partnership
We decided to partner with Facebook. We know this deal created some intense debate in the community. We had some of our biggest followers become our less biggest followers. I think Notch is coming back. Congrats to Microsoft, bye, good job. How ironic!
So a lot of you were concerned about what this meant for the future of VR and whether we’re going to slow down or change course, what effect this is going to have?
Ultimately it’s on us to show you that the Facebook partnership is the right thing for VR and the right thing for Oculus and the community. I can say it’s only been six months but since we signed with Facebook, we have supercharged recruiting. We actually used part of our acquisition to go out there and try to entice and incentivize some of the best and brightest to come to Oculus.
Michael Abrash joined; that was awesome. Along with over a hundred of the top engineers out there, we literally more than doubled our staff with incredible dream team recruits in just six months. So, so far so good. Again it’s on us. We’ll see how this goes. We’re trying to do what’s right.
What’s next for Oculus?
So what’s next for Oculus? That is a big question everybody has. We’re really sprinting towards the consumer version. Everything we do is in this pursuit of presence. We need to deliver that experience that we saw in the Valve room. We need to deliver that to everybody that buys an Oculus Rift.
You should be completely immersed in this experience. You should believe you’re there and you should feel great even if you’re super sensitive like me.
But presence is a bit like a house of cards. Without any one component, the whole thing kind of falls apart and comes tumbling down.
So how do we get presence right? Let's start by looking at the full chain. This is important. This is how you deliver virtual vision. You have motion of your head — you're moving around. It gets captured by a tracking system, which feeds into the CPU, where your game engine is processing, running all your algorithms to determine that new position. You're then generating the commands to be sent to the GPU to render the new left and right stereo images, which then go to the display. The display scans out and emits photons that flow through the optics. We finally understand this full path.
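As a rough sketch of that chain in code — the names and structure here are hypothetical stand-ins, not the Oculus SDK — one frame of the loop looks something like this:

```python
# Illustrative sketch of the motion-to-photon chain described above.
# All function names are hypothetical stand-ins, not the Oculus SDK.

def read_tracker():
    """Head motion, captured by the tracking system (IMU fused with camera)."""
    return {"position": (0.0, 1.7, 0.0), "orientation": (1.0, 0.0, 0.0, 0.0)}

def compute_eye_poses(head, ipd=0.064):
    """CPU step: place the two virtual cameras exactly where the eyes are.
    (Simplified: a real engine rotates the IPD offset by head orientation.)"""
    x, y, z = head["position"]
    left = {"position": (x - ipd / 2, y, z), "orientation": head["orientation"]}
    right = {"position": (x + ipd / 2, y, z), "orientation": head["orientation"]}
    return left, right

def render_eye(eye):
    """GPU step: render one eye's view (stand-in for real draw commands)."""
    return "image from %s" % (eye["position"],)

def present(left_image, right_image):
    """Display step: scan-out emits photons that pass through the optics."""
    print(left_image, "|", right_image)

# One iteration; a real engine repeats this every frame, and the time from
# read_tracker() to photons reaching the eye is the motion-to-photon latency.
head = read_tracker()
left_eye, right_eye = compute_eye_poses(head)
present(render_eye(left_eye), render_eye(right_eye))
```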
Had we known what we needed to pull off to really deliver presence in the beginning, we may have never started. Palmer probably would have. I don’t know if we would have. It was much harder than we thought.
5 essential ingredients
So there’s five essential ingredients to getting this right. And we try to be as open as we can about these. You’ve heard some of these talked about in the past.
Tracking
Tracking is one of the most important. You've got to have six-degrees-of-freedom tracking. You have to have full head tracking. Everywhere you're moving, those virtual cameras have to be in the same place as your eyes. They have to be there in 360 degrees with sub-millimeter accuracy. They can't jitter, they can't shake around.
We have a test internally that we call the tap test. And you’re sitting there tapping the headset to see if the image stays perfectly still. It has to.
You also need a comfortable tracking volume so you can lean around, lean down — of course, all sitting down, right? We're always sitting.
Latency
The other part that you've heard talked about a lot is latency. Latency is very, very important. The latency at every step of the way along that chain has to be as low as possible. Ultimately, to make this work, you need to get to sub-20 milliseconds of latency from the motion of you moving your head to the last photon hitting your eye. That's incredibly hard.
But if you can get to less than 20 milliseconds, you can actually use the optical tracking fused with the IMU [inertial measurement unit] to predict the last 20 ms. Prediction alone doesn't really work — you hear a lot of people say, "Oh, we have prediction, so we have 0 milliseconds of latency." It doesn't really come together unless you can get down to 20 milliseconds of real latency. Then you can predict the last 20 and you don't get jitter or noise.
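A minimal sketch of the prediction idea, with illustrative numbers and a yaw-only model (a real tracker fuses full 6-DOF state and integrates quaternions):

```python
def predict_yaw(yaw_deg, yaw_rate_deg_per_s, latency_s):
    """Extrapolate the tracked yaw forward by the remaining pipeline latency,
    using the angular velocity reported by the IMU."""
    return yaw_deg + yaw_rate_deg_per_s * latency_s

measured_yaw = 10.0   # degrees, from sensor fusion
yaw_rate = 200.0      # degrees/second, a brisk head turn

# With ~20 ms of real latency, prediction only has to reach 20 ms ahead:
print(predict_yaw(measured_yaw, yaw_rate, 0.020))  # 14.0 degrees

# With ~60 ms of real latency it has to reach three times as far, so any
# error in the measured rate (noise, the head speeding up or stopping)
# is amplified three times over -- which shows up as jitter and overshoot.
print(predict_yaw(measured_yaw, yaw_rate, 0.060))  # 22.0 degrees
```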
So minimizing this loop is incredibly hard. But it’s actually not good enough.
Believe it or not, if you leave an image sitting on your eye for more than just a few milliseconds, it starts to smear across your retina. What you have to do is get the persistence of that image down to just two to three milliseconds.
So you have two options. The first is running at 300 to 400 frames a second, with a 300 to 400 hertz display updating. You guys excited about making a 300 to 400 FPS game? I think Carmack is. I know he loves high FPS.
Low persistence
But to make this something that everybody can pull off, the guys at Valve figured out a trick: low persistence. Instead of leaving the display on for 10, 12, 16 milliseconds at 60 hertz, turn it off. Turn it off and only leave it on for 2 to 3 milliseconds.
Now the image sits on your eye for a very short amount of time and then you get black in the headset. And believe it or not, if you can do this fast enough, at around 90 hertz, you can get rid of the flicker and it all comes together. So low persistence is an essential ingredient. It actually ends up requiring some pretty custom hardware along the way.
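Some back-of-the-envelope arithmetic shows why persistence matters so much (the numbers here are illustrative, not a spec):

```python
def smear_pixels(head_rate_deg_per_s, persistence_ms, pixels_per_degree):
    """How far the image slides across the retina while the panel is lit."""
    degrees_moved = head_rate_deg_per_s * (persistence_ms / 1000.0)
    return degrees_moved * pixels_per_degree

head_rate = 120.0  # deg/s, a moderate head turn
ppd = 10.0         # roughly 1,000 pixels spread across a ~100-degree field of view

# Full persistence at 60 Hz: the panel stays lit for ~16.7 ms every frame.
print(smear_pixels(head_rate, 16.7, ppd))  # ~20 pixels of smear

# Low persistence: lit for only ~2-3 ms, black for the rest of the frame.
print(smear_pixels(head_rate, 2.5, ppd))   # ~3 pixels of smear
```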
Resolution
You also need resolution. DK1 resolution — maybe not so great. You need at least 1K by 1K per eye. That's a lot of resolution. You need that in stereo. The left and right images have to be — the left and right eyes have to be rendered separately. They have to be completely correct. You can't use tricks, or you will perceive them and it won't work.
And you can't see the pixels. If you see the pixels — and I am super sensitive, you know, about this — it breaks presence. Suddenly you're reminded: oh, I've got this thing on my face, I've got screens sitting in front of my eyes. If you don't see the pixel structure, you can stay focused on the object out in the world.
Optics
Finally, you need the optics. That last bit — as the display scans out and the photons flow into your eye — you need optics to deliver a wide field of view. If you have a small field of view, it feels like you have a screen in front of you. And your brain just says, oh, I'm just looking through this window-screen thing, and the whole effect is lost. If you have over a 90-degree field of view, suddenly you're there.
You also need a comfortable eye-box. This is something we've wrestled with. Believe it or not, optics have been one of the hardest challenges we're still up against right now. We're still battling optics.
The eyebox allows you to put on the headset and get comfortable pretty quickly. It also allows you to look around the scene and track an object. Naturally, in the world, as you're moving around and walking around, you're always staring at objects, and then you're moving from one object to the next object.
If, as you're doing that — when you move and you look through the lens — it starts to blur and get distorted, the experience falls apart. Having that wide eyebox is really important.
Calibration of these optics is incredibly hard. We have somebody at the office, Brent Lewis, we nicknamed Goldeneye. He sits with the headset and he manually calibrates these things for hours. I can imagine what it feels like, and it works well, but not quite well enough. You have to do it perfectly. So we've had to invest a huge amount in a system that actually goes out there, captures the screen through the lens and gives the exact right distortion model to be corrected.
If you don’t do it, when you’re looking around, you’ll see this kind of warpy effect and swimming effect and it will not feel good. So all of these things have to come together to deliver presence.
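As an illustration, the correction that calibration system produces boils down to a radial warp applied to the rendered image so that, after the lens distorts it, straight lines look straight again. The coefficients below are made up for this sketch; measuring the real ones per headset is exactly what the through-the-lens capture rig is for.

```python
def barrel_warp(u, v, k=(1.0, 0.22, 0.24, 0.0)):
    """Remap a point (u, v), expressed relative to the lens center, by a
    radial scale factor k0 + k1*r^2 + k2*r^4 + k3*r^6."""
    r2 = u * u + v * v
    scale = k[0] + k[1] * r2 + k[2] * r2 * r2 + k[3] * r2 * r2 * r2
    return u * scale, v * scale

# Points near the lens center barely move; points near the edge are pushed
# outward much more strongly, pre-compensating the lens's own distortion.
# If the coefficients are wrong, this is where the "warpy, swimming" feel
# comes from as you look around.
print(barrel_warp(0.1, 0.0))  # ~(0.100, 0.0)
print(barrel_warp(0.7, 0.0))  # ~(0.816, 0.0)
```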
These are the five core components. When you put these together and you get it right, and you get the content right, and it all pulls together, suddenly you’re there, the switch flips and you’re in a new virtual world.
Crescent Bay
Today I’m excited to show the progress that we’ve made heading to the consumer Rift with a new feature prototype: Crescent Bay. It’s a beautiful beach, very close to where we’re located.
This prototype shows off the features, the quality and the presence that we need to deliver for consumer VR. This is along the path, much like Crystal Cove was. This is not the consumer product. It's much closer, but it's not the final thing. It is a massive leap from DK2. The leap from DK2 to Crescent Bay is as big as the leap we made from DK1 to DK2. It is awesome.
It has updated display technology, including a higher refresh rate, higher resolution and improved optics. It also has 360-degree tracking — you can see the LEDs on the back.
That was not easy. It’s still not perfect. None of this is perfect yet but it is much much better.
It also has improved weight and ergonomics over DK1 and DK2. It is much lighter; thank God.
And finally, you'll notice integrated audio. It's still optional integrated audio — you can move the earpieces out of the way. Many of you are going to want to use your own plugs, but audio is a big multiplier for presence. And we're committed to delivering great audio, not just in the hardware but also in the software. And this is all part of that commitment.
But the most important thing about Crescent Bay is that this allows for sustained presence. This allows you to achieve the impossible and believe you’re truly in a world comfortably. It’s not perfect but it is a huge step forward. And it is getting much much closer to consumer Rift.
This is still very early hardware and software but it’s in a state that we’re ready to show you today.
Literally right now, Crescent Bay demos are being set up. I hope they all work. It’s a little dicey. It is very very early but they are being set up to show you. Please be very careful with them. They are fragile. They were handmade at Oculus HQ just the last few days. I broke one last night – two nights ago. They’re really delicate. So take them on and off really slowly. But when they work the magic comes alive.
Along with the new hardware, our in-house content team made a set of magical experiences to go along with this new hardware. They are awesome. The experiences are designed to demonstrate the sense of presence – the power of presence. We hope you — this gives you a glimpse of the future of virtual reality.
The next decade is going to be incredible. To see the progress that we made in just two short years, to get to a point where we’re able to deliver presence is amazing. What is going to happen over the next few decades; I have no idea. It’s wild.
All of you here are going to be the first to try Crescent Bay at Oculus Connect. Can’t wait to see what you think.
As I said, Crescent Bay includes audio. Let's come back to that audio. Audio is very, very important. It's essential to delivering this experience. We're very passionate about it. In fact, Palmer is one of the biggest audiophiles in the office. He has this incredible headphone collection. Not quite as significant as his VR headset collection, but it's big. Headphones everywhere. He's got a lot of particulars about what it needs to do.
It is going to be awesome. Along with the integrated headphones, we have a huge focus on getting the audio software right, getting 360-degree, head-tracked VR audio right. Ultimately, we should be able to simulate exactly the sound you expect to hear. The cues have to be right for you to believe it's real. That's hardware and software coming together.
As part of this initiative, we've licensed RealSpace 3D's audio technology, a high-fidelity VR audio system developed over 10 years at one of the best universities in the country – the University of Maryland. Yeah. Go Terps.
Literally today, we're working on audio as aggressively as we're working on the vision side. We have a whole team ramped up. Brian Hook is going to be discussing the unique audio challenges and what we're doing to solve them. 3D audio spatialization, sound design — there are all these different aspects that have to come together. A lot of it is going to fall on you guys too. Audio is important. Much more important than it's ever been.
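To make "the cues have to be right" concrete, here is a toy sketch of the two most basic head-tracked cues — interaural time and level differences — recomputed against the tracked head yaw so a source stays fixed in the world as you turn. This is not the RealSpace 3D algorithm; real HRTF-based spatialization also models the filtering of the ears, head and torso, which this ignores entirely.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a rough average

def interaural_cues(source_azimuth_deg, head_yaw_deg):
    """Return (time difference in seconds, far-ear level ratio) for a source
    at a fixed world azimuth, heard by a head at the given yaw."""
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    # Woodworth-style approximation of the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(relative) + relative)
    # Crude level difference: the far ear is shadowed by the head.
    ild = 1.0 - 0.3 * abs(math.sin(relative))
    return itd, ild

# Source fixed at 90 degrees to the right in the world:
print(interaural_cues(90.0, 0.0))   # ~0.66 ms delay, far ear noticeably quieter
print(interaural_cues(90.0, 90.0))  # after turning to face it: ~0 delay, equal levels
```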
So we’re going to get it right. We’re going to get the vision right. We’re going to get the audio right. This is our commitment to you guys. This is our commitment to VR, and this is our commitment to consumer Rift V1.
Now I’d like to welcome Nate Mitchell, head of product to talk about what we’re doing on the platform: user experience and content. Please welcome Nate Mitchell.
Nate Mitchell – VP Product
Good morning. Thank you, Brendan. How is everyone doing? Still awake out there, feeling good? A lot of good stuff to show you guys after this.
So Brendan talked a lot about building the hardware and software that enables great VR — that's really what we're focused on at Oculus. But that's really only part of the challenge.
People need to be able to discover great content and developers need the tools and systems to reach users around the world. Over the last two years, our focus has actually been more on connecting developers with developers, like everyone here in this room.
Oculus Developer Center
And that initiative really started with the Oculus Developer Center and Oculus Share. So as Brendan mentioned, we now have 130,000 developers signed up on the Oculus Developer Center, which is pretty remarkable. And to date we've seen over 550,000 downloads of the Oculus SDK, which is obviously incredibly exciting for us.
Share
The other big piece of that equation has been Share. Share has really grown immensely since we launched it last August. And today there are over 325 VR games, applications and experiences available on Share.
We’re actually seeing new submissions every day, somewhere in the range of 5 to 10, sometimes even 15.
What you guys have done in the last two years — I mean we’ve seen more content built for VR in the last two years than we’ve seen in the last two decades. It’s remarkable. You guys continue to innovate and trailblaze all across the medium and it really does inspire us at Oculus – some of my favorite games out there.
And an even more ridiculous number is that we've seen nearly 700,000 downloads on Share to date. Pretty ridiculous, especially for a developer site designed really for developers and enthusiasts to share content. Of course, it was at 700,000 when I prepped my keynote. I think it's crossed that by now. But we really are excited by this, and this is really just the beginning.
Looking forward, we want to extend Share's core feature set to build out new tools and services that developers can leverage to engage with a much broader audience. These include services like identity, authentication, entitlement, distribution and, perhaps most importantly, monetization.
The foundations of the Oculus platform are actually going to be powering the Gear VR device this fall and we’re actually going to be bringing these services to PC in the future. And where other platforms are really solely focused on content distribution, we’re constantly thinking about the experiences that you’re going to have in VR inside the Oculus platform.
Oculus Home
As part of that initiative — and some of you may have tried this experience here at the show — when you turn on the Gear VR device and put it on, you actually drop into a VR experience that we call Oculus Home.
Home is designed specifically to allow users to interact with the Oculus platform in a variety of different ways, all from within VR. We really, really wanted to make this a seamless experience where you can transition between content, find new content, and in the future do even more.
Our team is really focused on making Home as comfortable and intuitive as possible so that anyone can dive in and get started exploring the Oculus ecosystem right from the start.
Right now the primary function of Home is absolutely content discovery and acquisition, all within VR. But you’re going to see Home grow and evolve over the coming months.
With Gear VR, for example, we're expecting to launch dozens of experiences, all available from within Oculus Home on day one, many of which were built by the developers in this room today.
I want to emphasize that all the systems we’ve put together, they’re not strictly only for VR. As part of Gear VR’s launch, we’re also going to be shipping a 2D mobile application, a native mobile app where users can access the Oculus platform straight from their phone.
We also have plans to let users discover and browse content on the web in a portal very similar to Share but obviously more evolved. And we’ll have more news on that in the coming months.
Along with Home in the mobile client we’re also shipping two VR applications as part of the Oculus platform. These are VR cinema and – well, Oculus Cinema and Oculus 360 both of which are launching as part of Gear VR.
Let me back that up one slide, or forwards.
So we will be releasing actually the full source code for Cinema and 360 as part of the Mobile SDK to help you guys jump start your Gear VR development on the platform. Yeah, we’re really excited about this and this is really just the beginning.
So that’s a small peek into the Oculus platform and some of the things we’ve been working on behind the scenes. And that’s going to come to life first on Gear VR later this year. And then – well first with Gear VR at launch and then coming to PC later.
In my opinion, one of the absolute best things about the partnership with Samsung is that it lets us bring the Oculus ecosystem and platform to a much, much broader audience, much more quickly. And with that in mind, there are a few other partnerships we like to highlight today that have really been impactful for Oculus.
Epic Partnership – Unreal Engine 4
Epic's Unreal Engine 4 continues to be one of the absolute best engines for developing VR content out there. And that's really because — you definitely deserve it, yeah — that's because Epic has dedicated a huge amount of resources to making it incredibly easy to use, to building awe-inspiring VR experiences, mostly for the Rift.
Most of the first-party Crescent Bay demos that Brendan talked about are actually built with Unreal Engine 4. It's an incredibly stellar toolset. So almost every major Oculus prototype announcement has been accompanied by a new Unreal Engine demo. It started all the way back at CES 2013 when we showed Epic Citadel, that snowy flying craziness. It moved on to Elemental VR, and we did Strategy VR, and then Couch Knights. And for each one of these demos that we worked on with the Epic team, they absolutely stole the show.
Keeping that streak alive, Epic has put together an entirely new experience for Oculus Connect that they call Showdown. We’re really, really excited to be able to share this with you guys today running on Crescent Bay, alongside our own suite of demos.
Showdown is hands down the best demo that we put together with Epic and at least for me it’s one of the experiences that I’m going to remember forever in VR. I think you guys are going to like it.
Unity Partnership
Another really key partner for us has been Unity. Unity has one of the absolute biggest Oculus developer communities that’s active today. When we ran the VR Jam in 2013, we actually saw that about 90% of the 225 games and experiences that were submitted for the final build, were all built using Unity.
We’ve always had a major focus on trying to make Unity as accessible as possible for Oculus developers. And today we’re thrilled to announce that we’ve partnered with Unity to make Oculus an official platform and built target for Unity 5.
We know this is something you guys have wanted for a really long time. We're really excited to make it a reality. This also means that Unity will now fully support Oculus and the Rift with a dedicated add-on that includes optimizations for stereo imaging, 3D audio support and other engine-level and editor-level integrations. But best of all, it means that Oculus will now be supported in both the free and Pro versions of Unity for everyone.
We've seen a huge amount of incredible, groundbreaking content built with Unity and Oculus over the last two years — some of my favorite experiences, like Lucky's Tale (I know the Playful team is in the audience somewhere), Dumpy: Going Elephants… SUPERHOT. But ultimately, we really hope that this partnership and this step forward enables more developers to leverage Unity's already super accessible toolset to build the next generation of VR experiences that help define the platform.
So with that, I’ll bring Brendan back out here. It really is incredible to see all you guys here today and thank you again for being part of this.
Brendan Iribe – Oculus CEO
All right. So we’re almost done here.
We’re setting up Crescent Bay as we speak. We’re getting it ready for you guys to see. It has these really magical experiences – one surprise that we haven’t talked about yet that you will have to see to believe, and then it ends with Showdown. And Showdown is one of the most incredible experiences out there. It’s as close to Call of Duty as I want to get.
Things will be whizzing by you. Good luck.
This is an incredible time. It’s a time where we’re all coming together and we are all a part of this. I want to give a big thank you to the Epic team for all of their work on the demos; the Unity team for coming together on this partnership, and Nvidia for providing all the GPU hardware and making a bit of a surprise announcement that we just heard about — on their commitment to VR in their hardware, the advances they’re going to make.
It really takes everybody coming together. As we say at Oculus, this is a team effort. And you are all part of that team. So you can grab the Oculus Connect mobile app and you can start registering for your demo: your Crescent Bay demo.
Now as you're registering, and this is getting set up, let me remind you that we have a few more speakers coming on. So part of the audience can get up and leave and go see the demos, while the rest of us stay and listen to none other than Michael Abrash, Chief Scientist at Oculus and one of the most inspirational geniuses I've ever met.
Please welcome Michael Abrash to stage.
Michael Abrash – Oculus Chief Scientist
Wow! I could never have imagined anything like this three years ago when I started down the path to VR. And it’s fantastic. I’m super excited to be talking to you today. So let’s dive in and talk about VR: The Future and You.
VR: THE FUTURE AND YOU
I think we've all had the experience of walking out of a great science fiction movie or finishing a great science fiction book and thinking, why can't real life be like that?
So here's an interesting question: what is it that science fiction has that real life doesn't have? To me, it's a sense that the future hinges in important ways on individual actions. Isn't that what we all want — to matter, to make our mark on the world and to change the future for the better?
Consider the three novels that I think of as the touchstones of VR and AR: Snow Crash, Ready Player One and Rainbows End.
Snow Crash is about the battle over Sumerian neuro-linguistic viruses, which basically means control of the entire world. Ready Player One is about the battle for control of the OASIS, which basically means control of the entire world. And Rainbows End is about the battle over You-Gotta-Believe-Me mind control, which basically means control of the entire world.
And do I even need to mention the Matrix? Of course, none of us are going to wake up tomorrow and find ourselves stopping bullets in mid-air and kung-fuing with Agent Smith. But we, all of us in this room, are among the most fortunate people on earth because we have the opportunity right now to change the future in a big way — an opportunity that not one person in 1000 has in their lifetime.
Each of us has been hard working, smart and lucky enough to have a front-row seat as VR becomes real. And to be in a position to help make it real. And you are more critical to making that future happen than you might imagine.
Genesis of Network 3D Gaming
Consider John Carmack and the genesis of networked 3D gaming. In 1994, John and I had dinner at this Thai restaurant in Bellevue. I knew John was going to offer me a job at Id [Id Software] and I knew I was going to turn it down because I was doing important work on Windows NT at Microsoft.
And John did indeed offer me that job. But before he did, he talked for a couple of hours about his vision of the future, in which anyone could build persistent worlds on the Internet, anyone could jump into and between those worlds, and a kind of cyberspace would emerge from the actions of a whole community.
By the time he was done, I knew I had to join John to make that happen. And happened it did in a big way, which brings us to the key question: would it have happened the same way without John? What if John had just churned out more Doom style games rather than seeing the potential of the internet for client/server gaming and moving to full 3D? What if he hadn’t had a strong philosophy of openness, if he hadn’t shared these techniques freely, if he hadn’t encouraged modding? Would the world of network 3D gaming have been the same?
To me, it's obvious that things would have worked out very differently. If you think otherwise, I'd say you're falling for what Atman Binstock calls the myth of technological inevitability — the idea that just because a technology is possible, it will just naturally happen. In point of fact, the future depends on the actions of individuals and can follow wildly diverging paths depending on those actions.
If John hadn’t done Quake, not only gaming but quite possibly GPUs themselves would have been very different. If Steve Jobs hadn’t come back to Apple, who knows what cell phones would be like? And if Palmer Luckey hadn’t prototyped the Rift, VR wouldn’t be on the verge of taking off and we wouldn’t be gathered here today. But Palmer did do that and VR is about to happen.
You, likewise, have a remarkable opportunity to alter the course of the future. VR is potentially the biggest transformation in our relationship with technology since at least the personal computer, and possibly much more. It's the completion of a path that started with the invention of language and drawing and continued with writing, printing, telephony, radio, movies, television and computer games. VR will finally allow us to interact with information in the way we're built to interact with reality.
But there is no inevitability to how that happens, how long it takes or how it alters our lives. VR is coming, but the path it takes depends on the actions of a few thousand key people, potentially including you. That may be hard to believe, but I'm completely serious. Working on VR is probably the closest any of us will ever come to living in a science fiction novel.
Having said that, this seems like a good time to insert the official most famous VR photo ever — partly because this happens to be my cousin Sally Rosenthal, but also to remind ourselves of how often VR has been touted to take off in the past and how consistently that has failed.
Why is it going to be different this time?
Let's start with what Chris Anderson called the peace dividend of the smartphone war. Good-enough VR requires a sizable number of aspects to be above the bar simultaneously. And if even one isn't good enough, the experience just won't come together.
These aspects include optics, calibration, ergonomics, rendering and optical tracking, all of which are tractable for small teams with relatively modest budgets. But there is also a need for small, lightweight, high-resolution screens, along with tiny, accurate, high-frequency gyroscopes and accelerometers, and those are definitely not tractable without very considerable resources.
No one has ever been willing to spend billions of dollars to develop those technologies just to find out if there was a VR market. But many companies have been willing to spend that kind of money in order to develop the smartphone market.
Similarly development of GPUs powerful enough to handle the warping needed to correct for fisheye lenses and to handle the heavy rendering demand of VR happened courtesy of the game industry, and tiny inexpensive cameras were likewise developed for other purposes.
The second reason is that consumer VR good enough to be broadly successful is clearly within reach. The DK2 is a big step down that path but there is a lot more going on right now in both VR and AR which share a lot of the same technology.
Google is working on Glass. Sony is doing Morpheus. Valve has done some great prototyping work. Technical Illusions is doing castAR. There are many companies doing cell phone AR. And there have been hints of other major projects that are running dark.
These are not isolated or wishful efforts. There is a full on race to establish the next big platform be it AR or VR, and a tremendous amount of horsepower is being brought to bear on optics, tracking and small displays. VR related technology is driving forward and it won’t be long before we have an existence proof for consumer VR.
An existence proof changes everything — something I learned long ago. Back when I worked at Microsoft, I wrote what I think was my third serious software rasterizer. The key code was a texture-mapping loop of just a dozen or so instructions that consumed something like half the entire time when running a game. Obviously there was tremendous leverage in removing even one instruction from that loop.
So I went over the code again and again until I was sure it was completely optimized. Then, just in case I'd missed something, I ran it by my friend David Stafford, a superb optimizer. David said he didn't see anything off the top of his head, but he'd think about it and let me know.
When I got home that evening, there was a message from David saying he’d gotten not one but two instructions out of my loop. I called him back but I couldn’t get hold of him. So I started thinking about what he might have done to get those two instructions out, thought about it during dinner, I thought about it while I brushed my teeth. I thought about it when I should’ve been sleeping.
And eventually I did manage to eliminate one instruction from that loop, but I just couldn't get the second one. That bothered me all night and it bothered me the next day, right up until the moment when I got hold of David, at which point he said, oh, by the way, there was a bug in my code.
David hadn’t figured out how to get even one cycle out of the loop but I had. So think about that for a second.
I was sure I had an optimal solution, but just believing that someone had a better solution — even though they actually didn't — enabled me to break through my preconceptions and do something I thought was impossible. That's how it's going to be for VR. Once great consumer VR is shown to be possible and shipping in quantity, many companies, both hardware and software, are suddenly going to become believers.
Increased competition and investment in VR will result in more innovation and better VR experiences, which will lead to broader uptake, which will lead to more investment, resulting in a virtuous cycle, just as happened with smartphones.
All it will take to kick this off is consumer VR hardware good enough to spur widespread adoption, and we're almost there.
And then, of course, there is Facebook's $2 billion acquisition of Oculus, which guarantees that VR is going to get the resources and the runway that it needs to prove itself. So VR will have plenty of time and resources, can leverage work done for cell phones and other industries, is right on the verge of being technically good enough and will kick off a virtuous cycle once it starts to be successful.
That answers the question of why VR can be successful this time around. But it doesn’t answer the question of what it is about VR that’s so compelling that it will be successful?
There are many possible places to start that discussion. But I think the best place to begin is with my first meeting with Mark Zuckerberg.
To jump right to the punch line, no pun intended, here is how I looked the next day. I should make it clear that — while this is in fact how I looked after meeting Mark, Mark had nothing to do with causing this.
What’s interesting though is what did cause it. Here is what I reconstructed after the fact.
I’ve been waiting for the meeting in a conference room. About 10 minutes before the meeting, I got up, left the room, walked around for a bit, then headed back.
The conference room's interior wall consisted of floor-to-ceiling relites — glass panels like this — with a door at the end. Deep in thought, I walked toward the door with the relites passing by on my right.
Now when I say deep in thought, I mean really deep in thought. I wasn't consciously paying any attention to where I was going. That doesn't mean I was wandering randomly, though. Without consciously thinking about it, I'd handed navigation over to a simple processor operating below the conscious level. The instructions for that processor were as follows:
Keep walking until you come to the last rectangle on the right that has light coming through it. That will be the door, so turn right. Straightforward enough, and pretty much foolproof — unless someone had shut the door since I had left, which they had.
I walked headfirst into that last relite — bounce — producing a sound a lot like, well, a lot like a hundred-eighty-pound person walking into a glass panel, which mercifully was capable of withstanding a three-mile-per-hour impact. I think they heard me out on the 101.
I quickly learned that Facebook security people are really really good at first-aid. They got the bleeding stopped quickly. I successfully told them what day it was and where I was. The meeting went ahead without a hitch, although Mark does remind me of it every time I see him.
So that’s a story I can tell for the rest of my life any time I need to embarrass myself. But it’s also a nice lead into the core reason I think VR will actually succeed this time around, which is that VR drives the human perceptual system in the way it’s built to be driven. As a result, VR can produce experiences that feel deeply real and that will result in a fundamental change in the way we interact with technology.
The key to this is understanding that while we think our conscious minds experience reality, in truth they don't interact with reality at all. What they do interact with is a hierarchy of processors that operate at an unconscious level — like the one that steered me into the relite — which deliver their best guess at what's out there in the real world based on the input they've received.
Importantly, this data shows up as things we just know. We believe them at a deep level that's often impossible to override consciously. For instance, try to look at this slide and not see an impossible object, but rather just shapes and lines and edges with no overall structure or meaning. I bet you can't do it, because your visual system is doing that work well below the conscious level. Object identification is already complete by the time the information reaches your conscious mind, and it arrives as something you just know with such certainty that you don't even have a way to question it.
Here is another example to provide the experimental evidence of how powerfully unconscious processing creates our sense of reality. Take a moment to figure out which of the on-screen images of these two spheres is larger. To be clear I’m talking about 2D size, not 3D.
To me, and to most people, it's obvious that the farther circle is larger. However, this is a scene with a number of distance cues: forced perspective, distance blur, shadows, lighting. These imply that the farther sphere, considered as an object in a 3D scene, is farther away than it would be without those cues. The farther away it is, the larger its 3D size must be relative to its size on the screen. The brain factors those cues into your perception of size long before the images reach your consciousness.
In fact, as you can see here, the 2D circles are exactly the same size. The surprising finding here is that fMRI studies show that the farther sphere has a larger projection on the visual cortex. That is to say, even in the brain's lowest-level model of the image on the retina, the visual system's representation of the farther sphere is already larger than that of the closer one, long before it reaches consciousness.
The visual system integrates all of the data available to it and alters the relative sizes and that is the reality that we experience.
We might as well be brains hooked up to wires in glass jars for all that we truly experience reality. Our reality is nothing more or less than the sum of conclusions reached by a variety of unconscious processors driven by a body's worth of sensors. Our coherent view of the world emerges from the integration of the outputs of those processors in the lower levels of our brain.
That might sound obvious but because it’s perceptual the only way to truly understand what it implies is to experience it. For me, that experience came standing on a virtual edge in the Valve demo room. I don’t like heights and to my surprise my knees locked up and I had the same sense of unease on that ledge as I do looking over the edge of a real cliff.
Consciously, I knew I was nowhere near a drop of any kind, but enough of my unconscious processors were convinced that I had exactly the same reaction as if I had been. I could make myself step off the ledge, but it required a lot of conscious effort. And what I couldn't do was convince my body of what I knew for a fact, which was that I was in no danger.
To me, the reality was that I was standing on a ledge regardless of what my conscious mind thought. This is what the term presence means. That your perceptual system believes a virtual experience at a fundamental level.
I’d love to give you a more precise definition but because presence is a perceptual phenomenon, all I can do is quote Supreme Court Justice Potter Stewart and say I know it when I see it.
I’m sure many of you have experienced presence to at least some degree. So you will understand how powerful it is and why it’s so hard to define. And if you haven’t experienced it, the Crescent Bay demo today could be an eye opener.
McGurk effect
Unfortunately, I can’t give to you the ledge experience right this moment but I would like to give you an example of how your perceptual system constructs reality for you. So let’s take a look at something called the McGurk effect.
Here we have a video of someone saying the phoneme bar
[bar, bar, bar, bar, bar, bar, bar…]
Unsurprisingly, what we hear is bar. Now let’s make it more interesting. Suppose the sound is of someone saying bar, but the visuals are of someone saying far. What sound do you think we’ll hear?
[far, far, far, far, far… ]
Astonishingly, we hear far, a sound which doesn’t actually exist. In case you don’t believe that the audio is actually bar, let’s look at another video.
As this video plays, move your eyes back and forth between the two halves of the screen. As you do so, you will hear the sound change even though the only sound in this room is bar.
[bar, bar, bar, bar, far, far, far, far, far…]
It’s impossible to make the case that we experience reality directly in the face of the McGurk effect, because the sound we hear simply isn’t present; Fourier analysis of the audio would reveal no trace of the sound far. What’s happening here is that the senses, including hearing and vision, are unreliable, so the perceptual system constantly has to make best-guess judgments from noisy signals, and far is the best guess in this case.
The sound we heard in this video was inarguably nothing more or less than a signal from the speech-recognition center, no more a reflection of acoustic reality than if it had been produced by applying an electrode directly to your brain.
The McGurk effect is just one of a long list of visual, auditory, tactile and kinesthetic illusions that expose the underlying mechanisms of our perceptual system. And it is those mechanisms that construct the reality we experience.
If they’re driven properly as they were when I stood on that ledge, they can be made to report any reality we’re capable of experiencing. And it is precisely that ability to drive the perceptual system directly that makes VR uniquely powerful and compelling.
Of course, it’s hard to hear that and not think of The Matrix. Maybe that’s somewhere in the far future, but I don’t expect anything like that in my lifetime. We don’t have anything close to the understanding, let alone the technology, that would be required to make that work.
What can now work is understanding the key cues and driving enough parts of the perceptual system from the outside through the eyes, ears, skin and so on to create experiences that induce presence, the sense that you actually are someplace rather than just looking at a picture of someplace. It’s just barely possible right now but that’s enough.
Using VR to drive the perceptual system directly is far more revolutionary than it seems. It’s not just serving up bigger and better images, like a 360-degree IMAX theater; it’s a difference of kind. Traditional media present images and sounds that are descriptions of experiences. Done properly, VR presents experiences directly, in the way that our bodies have evolved to accept information.
If I had seen a movie of someone standing on a ledge, that would have been one thing. I would’ve understood what was going on; I might have had an emotional reaction and so on. But standing on a ledge in virtual reality was another matter entirely, because as far as my unconscious was concerned, it was actually happening to me.
Once again this is a perceptual matter. So it’s hard to explain except by putting you in a VR headset but think of the McGurk effect where you experience the sound that didn’t exist and then extrapolate that to full immersion.
Or think of it as the difference between showing you this picture and letting a tiger loose in this room. I’m pretty confident I know which of those two events would more fully engage your perceptual system.
The bottom line is that VR isn’t just another platform, because it’s information driving the perceptual system the way it’s built to be driven. It’s an entirely different and more powerful way to interface with information and computers than we’re used to, and in a very real sense it’s the final platform, the one that wraps our senses and will ultimately be able to deliver any experience that we’re capable of having.
At this point, I hope I’ve convinced you that VR is a potentially hugely important technology that’s about to become widely used. I realize, though, that some of you may be thinking that I’ve drunk too much of my own Kool-Aid.
I’m confident that if you’re still skeptical, that’s because you haven’t yet experienced real presence. My prediction is that you’ll retain that skepticism right up to the day you use a VR system that does induce strong presence for you and have your own ledge experience. And then you’ll understand.
And the Crescent Bay demo could make today that day.
Okay. You may or may not be convinced, but it’s my belief that now that VR is technically doable, economically feasible, and capable of driving the human perceptual system convincingly enough to transport you to a virtual world, in all likelihood VR will take off in a big way over the upcoming years.
So what does that mean for you? In a word: opportunity. This is like when John and I started on Quake, but on steroids; pretty much everything still needs to be figured out. The killer app for VR hasn’t been created yet. You could be the one to come up with it and start the next great genre.
Likewise VR art, animation and game design are ripe for invention. Even the mechanics of VR have yet to be invented.
I remember when John came up with mouse look and no one knew which way it should move your viewpoint. Strafing, HUDs, movement speed: everything that had to be figured out for FPSs will have to be figured out again for VR, and much, much more.
Consider graphics. Right now graphics is a mature area featuring steady but incremental improvement; the graph of effort and sophistication in versus perceived result out over time looks something like this. VR is going to do this to that curve. The reason that graphics is going to have to up its game in a hurry is that because VR engages so much more of the perceptual system than a monitor does, it’s held to a much higher standard. Parallax, accurate positioning, wide field of view and stereo vision together provide vastly more information to the perceptual system, which responds very strongly to VR done right and complains just as strongly about VR that isn’t quite right.
And the trick is that right now no one knows what it is that will make VR graphics great, and many of the tried-and-true rules of screen graphics no longer apply in VR. For example, aliasing is much worse in VR, because aliasing mismatches between the two eyes introduce stereo disparity, which is far more apparent and disturbing than mere jaggies on a screen. This is particularly noticeable at specular highlights.
Similarly, faceting is much easier to see, so geometric detail becomes more important, all the more so because tricks used to fake geometric detail often look laughably wrong. Texture maps sometimes work but sometimes look like painted plywood, and bump-mapped surfaces simply don’t look right.
What all this means is that we’re all going to have to invent new rendering, art and animation approaches that work well in VR and rethink tool pipelines and eventually GPU architectures to match.
Graphics is also going to have to up its game because VR’s rendering demands are considerable. In order to avoid blur, great VR requires strobed, low-persistence pixel illumination, as Brendan mentioned. Then, in order to completely eliminate flicker from strobing, it requires a 90-hertz frame rate, in stereo: effectively about six times the rendering rate of current games.
What’s more, as I will discuss shortly, those demands ramp up enormously over time.
I can summarize VR graphics in a few words: a lot more of everything. We’re going to need higher quality graphics, we’re going to need more graphics, and we’re going to need faster graphics.
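As a rough check of that arithmetic, here is a minimal sketch; the 30 fps, single-view baseline is purely an illustrative assumption, not a figure from the talk.

```python
# Back-of-the-envelope check of the "about six times" figure.
# Assumption (not from the talk): the baseline is a 30 fps, single-view game.
baseline_views_per_second = 30 * 1   # 30 fps, one view per frame
vr_views_per_second = 90 * 2         # 90 Hz refresh, one view per eye (stereo)

print(vr_views_per_second / baseline_views_per_second)  # -> 6.0
```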
Hardware and software throughout the pipeline will have to change massively, causing a reevaluation of the techniques that have been worked out so carefully over the last 30 years. For a while, graphics will be the wild west again as a slew of experiments get run to figure out the new graphic sweet spots for the VR world.
That shouldn’t be surprising; it’s what happens every time there’s a major hardware shift. Many years ago, John noted that when the hardware platform changes, it opens up new opportunities for software, things that were simply not possible before. For example, CD-ROM drives enabled Myst, and the Wii controller enabled Wii Sports.
With VR, we’re about to make one of those hardware changes in a big way and we’re all going to have a crazy exciting time over the next few years, figuring out what’s now possible thanks to the Rift.
And it’s not going to stop there. Consider how technology enabled the evolution of first-person shooters over time. Fast enough CPUs enabled Wolfenstein 3D. Faster CPUs and peer-to-peer networking enabled Doom. Still faster CPUs and the internet enabled Quake. And ever more powerful GPUs enabled Quake II and its successors. VR is going to be like that.
Given the right combination of hardware and software, every aspect of VR will improve hugely over time.
Of course, the VR platform isn’t going to advance on its own. Someone has to make that happen. That someone is Oculus Research. We’re putting together a broad mix of researchers, engineers and programmers to form the first complete well-funded VR research team in close to 20 years. We have the full backing of Oculus and Facebook for doing the deep long-term work needed to make VR as good as it can be.
Our mission is to keep advancing the VR platform so that you can develop great new VR experiences on top of it. And we’re fully aware that VR is more than just Oculus. We will be publishing our findings and working with university researchers, so the whole VR community can grow and move forward.
So let’s take a quick look at some of the ways in which we think we and others can make VR much better over time. Bear in mind, though, that this is research. So there are no guarantees.
Improving visual quality is the obvious place to start. The Rift DK2’s resolution is about a megapixel per eye, spread over a field of view of a hundred degrees, which means that the pixel density is very low indeed; less, in fact, than the original Quake running on a monitor at 320 by 240 resolution. To get the same pixel density per degree as a modern desktop monitor, you’d need more than 50 times as many pixels, something on the order of 8K by 8K per eye.
Achieving retinal resolution would require somewhere in the neighborhood of 16K by 16K, and that’s with a 100-degree field of view. The full dynamic field of view of the human eye is on the order of 200 by 150 degrees, taking us to something like 32K by 24K resolution per eye. That means that eye tracking will almost certainly be part of the future of VR. We wouldn’t actually draw anything close to 32K by 24K pixels; we’d take advantage of the steep drop in resolution away from the fovea, the tiny high-resolution area of the retina, to render only as much resolution as the eye can detect.
But of course we’d need to know where the fovea is pointed every frame, which means we will need to integrate eye tracking with new display, optics, GPU and rendering technology in order to make a truly great VR visual experience possible.
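As a minimal sketch of that pixel-density arithmetic, here are the talk’s own figures worked through in a few lines; the exact multiplier depends on the panel and how you define field of view, so treat the numbers as illustrative.

```python
# Pixels per degree for a DK2-class panel versus the talk's 8K-per-eye target.
dk2_pixels_per_eye = 960 * 1080   # roughly one megapixel per eye
fov_degrees = 100                 # approximate field of view

dk2_ppd = 960 / fov_degrees       # ~9.6 pixels per degree
target_ppd = 8192 / fov_degrees   # ~82 pixels per degree at 8K x 8K
pixel_multiplier = (8192 * 8192) / dk2_pixels_per_eye

print(round(dk2_ppd, 1), round(target_ppd), round(pixel_multiplier))
# -> 9.6, 82, 65  (i.e. "more than 50 times as many pixels")
```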
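To make the foveated-rendering idea concrete, here is a toy sketch; the 5-degree full-resolution radius, the falloff rate and the function name are illustrative assumptions, not anything Oculus has specified.

```python
def shading_rate(eccentricity_deg, full_rate_radius_deg=5.0, halving_deg=10.0):
    """Toy foveated-rendering falloff: full resolution near the tracked gaze
    direction, progressively coarser shading toward the periphery.
    All constants here are made up for illustration."""
    if eccentricity_deg <= full_rate_radius_deg:
        return 1.0  # shade every pixel where the fovea is looking
    # Halve the effective resolution every `halving_deg` degrees beyond that.
    return 0.5 ** ((eccentricity_deg - full_rate_radius_deg) / halving_deg)

# Angle between each sample direction and the tracked gaze, in degrees.
for ecc in (0, 5, 15, 30, 50):
    print(ecc, round(shading_rate(ecc), 3))
```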
We will also be looking at depth of field. In order to explain this, let me define a couple of terms.
The word vergence refers to how the eyes cross to provide stereo vision: the closer the object of interest, the more crossed the eyes need to be.
Accommodation, on the other hand, refers to how the lens of the eye deforms to allow you to focus at different distances. Most existing head-mounted displays support stereo images that allow for correct vergence, but all head-mounted displays that I’m aware of support only one focal depth for all objects in a scene, usually placing everything at near-infinite focal depth.
Immersion in a scene in which all objects are accommodated at infinity is not perceptually ideal because the accommodation and vergence reflexes are linked. The lack of proper depth of field can cause discomfort, can prevent stereo fusion and may make VR feel subtly less real.
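As a small illustration of why those two reflexes interact, here is a sketch that computes the vergence angle for objects at different distances; the 64 mm interpupillary distance is a commonly cited average used only for illustration.

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.064):
    """Angle between the two eyes' lines of sight when fixating a point
    at the given distance (64 mm IPD is an illustrative average)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

for d in (0.3, 0.5, 1.0, 2.0, 10.0):
    print(f"{d:4.1f} m -> {vergence_angle_deg(d):5.2f} degrees of convergence")

# Nearby objects demand several degrees of convergence, while a display with a
# single far focal depth asks the lens to accommodate as if everything were
# distant; that mismatch is the conflict described above.
```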
There are several possible ways in which this might be addressed but one thing they all have in common is that they require new hardware and significant changes to the rendering model.
Once the visuals are good enough, many other areas take on new importance in VR, including tracking, audio propagation and spatialization, input, user experience, haptics and perceptual psychology. I wish I had time today to talk about the tremendous potential for all those areas in VR.
But one particularly exciting area is scene capture and reconstruction. VR is going to need a great deal of realistic content, and the most promising basis for that is capturing and reconstructing the real world. Over time this will evolve to incorporate dynamic elements such as weather, lighting, scraps of paper blowing in the wind, cars and, of course, people.
That brings us to what I think will be the core of VR. Once we have virtual places to be in, we will want to see ourselves and others in that space. Sensors that can capture the interpersonal cues humans give off (head pose, eye movement, facial expression, hand gesture, body posture and movement) and map them onto avatars in real time will enable social interaction in these virtual spaces.
Bell Labs used to talk about enabling any two people on the face of the earth to communicate — a powerful vision that has pretty much come true.
Now imagine anyone on the face of the earth being able to be anywhere with anyone doing anything whenever they want. Humans are highly social creatures and I believe that sharing virtual spaces with other people will ultimately be the most powerful and widely used aspect of VR.
VR is about driving the perceptual system, and the more of it the better, so eventually it will involve all the senses: audio, tactile, balance, kinesthetic, maybe even smell and taste. The entire body will become the sensor, not just the eyes, and the display will consist not just of pixels but of everything that drives perception.
VR also encompasses the other half of the perceptual loop, interaction as well as sensing, so it will include the ability to manipulate the virtual world as well. Over time, VR will wrap more and more of our perceptual system and will truly become a personalized alternate reality.
It’ll be many years before that transformation is complete, but VR is already changing what is possible. Just as new hardware long ago enabled Myst, Doom and Quake, soon there will be VR games, apps and experiences that will revolutionize the way we interact with information and with each other.
We’re building the platform but what those games, apps and experiences will be is for you to decide.
Imagine you had a pair of enchanted goggles that could teleport you to any place you could imagine. Where would you go? What would you do? It’s a powerful thought, and that’s exactly the magic that VR makes possible.
Right now we are at the very beginning of the VR revolution and ahead of us lies a long incredibly exciting time of groundbreaking research and development by smart, talented people — people like you.
At Oculus, we’re all in on building the VR platform but that’s only part of the picture. It takes a whole community working on top of that platform to make VR happen. This is what it looks like when opportunity knocks. I hope you’ll take this rare chance to shape the future and come along on the adventure of a lifetime as we make science fiction real.
Thank you.
So let me just say that the energy in this room around VR is unbelievable. I never thought I would have a day like this. It’s fantastic. So get back to the real world, and be back here around 11:30 for John Carmack’s talk.