Postby TorTorden » Thu Feb 04, 2016 6:18 pm
Actually I have been thinking a bit about this one.
And I believe you can get away with the same amount of rendering work as with regular VR, using simple metadata tags.
You simply render the scene as you do now and add a metadata bit to the pixel data that the device then reads. This bit tags which screen should output that pixel, and the display builds the light field image from it.
Much the same way UHD handles HDR metadata, so you still only render two images, as with VR.
And using the already available z-buffer data, the output can be adjusted after rendering, in the display itself.
You only really need two planes: one for super close, and one for not close.
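Something like this, as a rough sketch: walk the z-buffer and flag each pixel as "near" or "far". The threshold value, function name, and data layout here are all illustrative assumptions, not any real display's API.

```python
# Hypothetical sketch: tag each rendered pixel with a 1-bit "focal plane" flag
# derived from the z-buffer. Threshold and layout are assumptions for illustration.

NEAR_PLANE_MAX_DEPTH = 0.3  # assumed normalized-depth cutoff for "super close"

def tag_focal_planes(z_buffer):
    """Return a per-pixel bit: 0 = near screen, 1 = far screen.

    z_buffer: list of rows of normalized depths in [0.0, 1.0].
    """
    return [
        [0 if depth < NEAR_PLANE_MAX_DEPTH else 1 for depth in row]
        for row in z_buffer
    ]

# Example: a tiny 2x3 z-buffer
zbuf = [
    [0.1, 0.5, 0.9],
    [0.2, 0.25, 0.7],
]
print(tag_focal_planes(zbuf))  # [[0, 1, 1], [0, 0, 1]]
```

The display would then route each pixel to whichever screen its bit selects, the same way HDR metadata steers tone mapping without extra rendered frames.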
Dammit I want this too...
Hey I'm Thor -
People call me Bob.
Rule 1: Pillage. Then burn.
Rule 2: No such thing as overkill, as long as there are reloads.