John Carmack’s Keynote at Oculus Connect 2014 (Transcript)

October 12, 2014 11:33 am

Full Transcript of Oculus CTO John Carmack’s keynote at Oculus Connect 2014 where he discusses the Gear VR and shares development stories…

Operator: Ladies and gentlemen, please welcome Oculus CTO John Carmack.

John Carmack – Oculus CTO

All right. So I don’t actually have a presentation but I can stand up here and talk about interesting things till they run me off the stage.

So mostly that’s going to be about Gear VR, because that’s what I’ve spent most of my effort this last year on. So for the too-long-didn’t-listen crowd, we will start with where we are today, and then we will go into the history and the path that took us here, which offers some insight into the current state of things.

So I believe pretty strongly in being very frank and open about flaws and limitations. So this is kind of where I go off message a little bit from the standard PR plan and talk very frankly about things.

So the current killer limitations on Gear VR are the fact that it’s a 60 Hz low persistence display, which has flicker problems for a lot of people, and that it has no positional tracking, which is one of the critical aspects of DK2 and future products.

So there are plans and mitigation strategies for what we can do around that now and how we want to improve that in the future. The 60 Hz low persistence turns out to be not as tragic as a lot of people were expecting. There were a lot of people saying that’s completely unusable, and certainly Oculus has been talking about minimum frequencies for low persistence displays. And a lot of people were surprised that it wasn’t as bad as they thought it was going to be. But there are still broad ranges of sensitivity among people, where for some people it really is bad and some people can’t even notice it. And beyond flicker sensitivity, there are problems with the display screens themselves — both DK2 and Gear VR, basically all of our screens, are OLED based.

They have a quirk in them where they don’t smear so much, but there is a two-frame rise problem. And many of you have probably seen this, where it’s a type of ghosting, but it’s not smearing like a full persistence display. If you look at, say, a dark tree limb in Tuscany and you move your head very rapidly, you will see one ghost of it, offset by an amount proportional to your head movement speed. And that’s a problem that gets worse as the color palette gets dimmer. The very dark colors have more of a problem with smear.

This is something that we dearly want Samsung to fix in future displays, but it hasn’t been their top priority to address, even though it matters in VR. We have mitigation strategies for that, where you can de-ghost with what is kind of a software overdrive. If you know what color you’re coming from and you know what color you’re going to, and it’s higher, you can actually drive it higher than what you want it to be, to compensate for that problem.
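The software overdrive he describes can be sketched in a few lines of Python. This is only an illustration: the linear model and the gain constant are my assumptions, not Oculus’s actual de-ghosting code, which would use per-display measured tables.

```python
def overdrive(prev, target, gain=0.3):
    """Software overdrive sketch: when a pixel rises from a darker
    previous value toward a brighter target, command a value beyond
    the target so the slow OLED rise lands on the intended color in
    one frame. Linear model with a made-up gain; real implementations
    use measured lookup tables."""
    if target > prev:
        # push past the target, clamped to the displayable range
        return min(255, target + int(gain * (target - prev)))
    return target  # falling transitions are fast enough on OLED

# e.g. going from near-black (10) to mid-gray (100) overshoots:
print(overdrive(10, 100))  # 127 with the assumed gain of 0.3
```

Falling transitions pass through unchanged because the rise problem he describes is asymmetric: dark-to-bright is where the two-frame ghost shows up.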

And we’ve done that on both PC and mobile, but it’s really another extra expense that’s hard to justify on mobile relative to all of our other problems and things that we want to spend our resources on. But still, in general, darker games are an improvement. And unlike PC and console games, with the classic Doom 3 problem — where if you make a really dark game you have to worry about ambient lighting in the living room and so on — in VR, where you can block out almost all the outside light, a dark game can be a much, much more pragmatic, workable solution than it is for a typical AAA game.

So one of the takeaways for Gear VR is that super bright, pastel-colored worlds, while they play well in VR at a higher refresh rate, have a drawback that you have to deal with and work around a little bit on the 60 Hz side of things.

The big one that is harder to mitigate, though, is the lack of positional tracking. I mean, we do all know — we’ve been talking about presence and how important it is to get that sub-millimeter tracking accuracy, and we just don’t have it, or any analog of it, on mobile right now. And there are things that we’ve taken early steps on — well, can’t you use something like the AR applications that use the outward facing camera? And I have done integrations with Qualcomm’s Vuforia and tried different things with that.

But some of the things that people miss — if you pay attention carefully to those slides, one of the things on there is sub-millimeter, no jitter — sub-millimeter accuracy with no jitter. And the current things that people can do with outward facing cameras for absolute positioning are really not close to that.

So there’s not a great mitigation strategy for this, other than all the things that people would do on DK1: just try not to have the problem, by not having things in the real near field where your body isn’t interacting with them the way you want it to. There’s no really killer strategy to avoid that unless you’ve just got everything in the distant field, with no stereoscopy and no near-field effects. So that’s something that we just kind of have to grit our teeth and live with for now.

Now, I have a scheme for both of these. I have things that I think are workable solutions, involving changes to the architecture, that may address these in the future. Because one of the things about kind of hitching our train to Samsung here is that Samsung’s technology ticks twice a year. They have big product rollouts two times a year, and we expect Gear VR to kind of follow this path. So it’s not like it’s going to be years between updates here. There are going to be hardware changes and updates, and we can look at rolling out major new features as it goes on.

So the paths that I think can address these problems are: for the low persistence display, right now we achieve 75 Hz on DK2. Now DK2, if you’ve ever taken one apart, is basically a Note 3 screen. And this is a Note 4 screen with Gear VR. So they are very similar.

And the question might be asked: Well, why can’t we just run the Note 4 screen at 75 Hz like DK2?

So there’s a couple of aspects to that. There are two things that we do to get the refresh rate on DK2. One of them is pulling out all of the blanking lines at the end of the frame. It’s a weird archaeology-of-technology thing that even LCD and OLED panels still have vertical blanking lines, as if they were a CRT waiting for the raster to go back up to the top. As display technologies have evolved, they have kept these historical artifacts. So there’s a little bit of margin where you can take all of those out — and we did have one of the Galaxy — one of our earlier prototypes — that we could run at 70 Hz.
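The mechanism is simple arithmetic: with the line clock held fixed, fewer total lines per frame means more frames per second. The numbers below are hypothetical panel timings I chose for illustration, not the Note 4’s actual values.

```python
# Hypothetical panel timing. Refresh rate = line_rate / total_lines_per_frame.
active_lines = 2560       # assumed vertical resolution of the panel
blanking_lines = 432      # assumed vertical blanking interval (made up)
line_rate = (active_lines + blanking_lines) * 60  # line clock that yields 60 Hz

# Strip most of the blanking; the same line clock now scans frames faster.
trimmed_total = active_lines + 16  # keep a small safety margin
new_refresh = line_rate / trimmed_total
print(round(new_refresh, 1))  # roughly 70 Hz with these assumed numbers
```

The second trick he mentions — overclocking the panel itself — raises `line_rate` instead, which is what takes DK2 from 70 to 75 Hz.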

So the question was: well, is it okay to run at 70 Hz? Is that worth the improvement there? And the problem is, a lot of the things that we were looking at are going to be media related, where you want to be able to play back something that was captured with cameras, which is usually going to be a 60 Hz input.

In fact, one of the big achievements, I think, over the last year was getting a lot of the work in panoramic photography focused on 60 Hz instead of just 30 Hz capture. But it would’ve probably been a net negative for a lot of these things if we made a 70 Hz display — it would add beat frequencies to anything played back at 60 Hz, and it would make all the normal games about 15% harder to hit their target frame rate, and that was a significant concern for us.
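The roughly-15% figure falls straight out of the frame budgets:

```python
budget_60 = 1000 / 60  # ~16.67 ms to render each frame at 60 Hz
budget_70 = 1000 / 70  # ~14.29 ms to render each frame at 70 Hz
shrink = 1 - budget_70 / budget_60
print(f"{shrink:.0%}")  # about 14% less render time per frame
```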

And then the last five frames per second that we get out of DK2 is actually by kind of overclocking it beyond what Samsung wants to do to the millions and millions of Note 4s that are going to ship out there. So I don’t think that even in the coming years we’re going to see these standard displays where you’re just going to be able to turn the clock up to 90 or 120 Hz.

And in fact, while we talk about 90 Hz as the level where most people don’t see any flicker, a lot of people can still tell on DK2 that it’s flickering, especially if you do the bad things — you put white at the outside edge of the screen — and that’s where a lot of people will still be able to see it in their peripheral vision.

90 Hz is where probably 95–99% of people really don’t see it. We still run across a few people who can perceive it in the Valve room or on one of our 90 Hz displays, but that mostly solves it.

But I would argue that there’s still a strong reason to go beyond that, all the way to 120 Hz, and the reason is that 120 Hz gives you the even divisions: you can run 60 Hz content with doubled frames, you can run 24 Hz content evenly, or you can run full 120 Hz new frames.

But there is very little likelihood that we’re going to see these small displays actually running at 120 or even 90 Hz at the size they’re designed for. There’s a lot of inertia in the cell phone industry, the small display industry, and they don’t currently see a real need for doing that. Although I would argue — and I am making this case to Samsung — that they can probably get more perceptible benefit to users right now by doing that instead of revving the resolution even more. Because if you put a 1440 phone next to a 1080 phone, you have to look kind of carefully, or interact with it for a long time, to tell the difference in quality. And it’s at least arguable that with a low persistence 90 Hz display, the scrolling test — which everybody does when they get a new cell phone, you know, how fast and smooth does it scroll, is the Apple or the Android system better — would show more of a perceptible win. So that’s one tack.

I mean, we are at least saying there are benefits to doing this at the higher refresh rates. But the scheme that I am proposing, that I think is doable — that I think there’s a real chance we can get Samsung, or some other display manufacturer if they want to step up on this, to do — is the return of interlaced scanning. And programmers are probably groaning to themselves right now about this.

But interlaced scanning was invented for a reason, back in the 40s and 50s when TV was developed, because we knew that we needed at least 60 frames per second to avoid flicker, but they really could only get acceptable resolution at about half that. So the idea is, instead of scanning all the lines down one time after another — which at 30 frames per second would flicker horribly — you go every other line, and then the other set of lines on the way back. And it’s kind of like the low persistence case; CRTs are even lower persistence than OLEDs.

They have nanoseconds of actual illumination, depending on the phosphor there. But what that does is it gives you the same kind of fooling effect as low persistence: it lets you imagine the motion that’s there, and by alternating the lines, your brain will imagine that it’s a solid screen even though you’re only updating every other half of it.

Now, this is something that I am hand-waving at Samsung about, saying: I don’t think this is going to be that hard. You probably still have little crufty bits in your LCD and OLED controllers that were actually made for doing interlaced displays on NTSC. I can’t believe this is going to be that hard. So if we can get that, then we can run a 60 Hz display with 120 fields per second. So this is not like NTSC, where you may remember the flicker of that — that’s a 60 Hz field rate, and it’s not great — but if you double that up and you’re running 120 Hz fields, I think that can work.
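A sketch of which panel lines each field would illuminate, parameterized by stride so it covers both the two-way case proposed here and deeper interlacing. This is a pure illustration of the line-selection pattern, not display controller code:

```python
def field_lines(total_lines, stride, field):
    """Lines illuminated by one field of a stride-way interlaced scan.
    `field` cycles through 0..stride-1; over a full cycle, every line
    of the panel is covered exactly once."""
    return list(range(field % stride, total_lines, stride))

# Two-way interlace of an 8-line panel: even/odd fields alternate.
print(field_lines(8, 2, 0))  # [0, 2, 4, 6]
print(field_lines(8, 2, 1))  # [1, 3, 5, 7]
```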

And if I can get that — and I think that’s a winnable battle, I think that can be done. There are limits to what we can try to influence Samsung on on the display side, because obviously they care about those hundreds of millions of cell phone displays, and however successful we expect VR to be in the next couple of years, it’s not at that level of units.

So we might be able to get them to do a few minor things for it. But what I’ve learned from dealing with Samsung, certainly through all the Gear VR work on the software side, is that there is an icebreaker moment: they push back and push back, until something breaks through and they do something and it works out really well based on what I’ve suggested. And after that, the doors are open and we can get all sorts of things.

Beyond what I think will be that icebreaker moment with two-way interlaced displays, there are two steps that I think can provide really significant benefits.

The next one would be going to a deeper interlaced display. Instead of going every other line, start going every eighth line, so that you have 8 times as many fields, or every 16th line for 16 times as many fields. That would give us the kilohertz display that we would really like to see, where it would scan down and you would have kind of a venetian blind effect of one image there, and then 1 ms later, at a kilohertz rate, it would have the next one, and the next one — probably going in some kind of interleaved pattern rather than straight across.
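The field-rate arithmetic behind the kilohertz figure, taking a 60 Hz full top-to-bottom scan as the baseline:

```python
scan_rate = 60                 # full top-to-bottom scans per second
for stride in (2, 8, 16):      # interlace depth: every 2nd, 8th, 16th line
    fields_per_sec = scan_rate * stride
    print(f"{stride}-way: {fields_per_sec} fields/s, "
          f"{round(1000 / fields_per_sec, 2)} ms per field")
```

At 16-way interlacing the field rate reaches 960 fields per second — about 1 ms per field, which is the "kilohertz display" he is describing.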

So then we could have every IMU update updating the entire visual field of view. We can do something like that now with sliced rendering — sliced time warp — where we divide the screen up into, currently, eight bands, and we can render each band just in front of the raster. And that’s how we can get down to this sub-4-millisecond motion-to-photons latency. But that has the issue that — it is motion-to-photons, but you’re only seeing it in a band of your view, and it moves around. And we still have questions about little things, like when your eye does a fast saccade from one side to the other: if it starts or ends in an area where there’s actually nothing in the display, is that causing some subliminal issues for us? And I think that there would be a real benefit to having the entire screen always illuminated, with some set of bands that’s just kind of moving around as it goes.
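The sliced time warp he describes — warping each band just ahead of the raster — can be sketched as a timing schedule. The eight-band count is from the talk; the constant-rate raster model is my simplification:

```python
def band_schedule(frame_ms=1000 / 60, bands=8):
    """For each horizontal band, the time at which the raster reaches it,
    assuming the raster sweeps the frame at a constant rate. Each band's
    time warp samples the IMU and finishes just before that moment."""
    band_ms = frame_ms / bands
    return [(i, round(i * band_ms, 2)) for i in range(bands)]

for band, raster_ms in band_schedule():
    # warping a band right before its raster time keeps motion-to-photons
    # latency near one band time (~2 ms) plus the warp cost itself
    print(f"band {band}: raster arrives at {raster_ms} ms")
```

With eight bands at 60 Hz, each band spans about 2.08 ms, which is how the sub-4 ms motion-to-photons figure becomes reachable.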
