BTW, here is an alleged "leaked" set of benchmarks ...
http://www.mobipicker.com/nvidia-gtx-10 ... ks-leaked/

Looking just at 4K (almost 50% faster for the GTX 1080 than the 980 Ti, although the former was OC'd and the latter was not), it makes sense given the bump to 8GiB GDDR5X in the GTX 1080 from 6GiB GDDR5 in the GTX 980 Ti, and even the 8GiB GDDR5 in the GTX 1070 (higher resolution requires more memory). I'm surprised no one has shown the Titan X 12GiB GDDR5 at 4K for comparison, although one can interpolate.
In any non-VR case, this seems about right ...
'According to Nvidia’s official marketing material the GeForce GTX 1080 at stock will be roughly 20% faster than the GTX 980 Ti and the GTX 1070 will be slightly faster than reference GTX 980 Ti cards and on par with factory overclocked variants.'
Now, again, that's not VR performance.
Framebuffer requirements run 4K >> VR > 2K, and it will be interesting to see where the performance and memory "threshold" lies with VR's added buffer, though the jump from 2K to 4K is far larger than the jump from 2K to VR. The GP104 clock is way, way up in both the GTX 1070 and 1080, even though it has fewer units than the GTX 980 Ti.
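The framebuffer ordering above can be sketched with raw color-buffer math. This is a rough illustration only: the VR figure assumes a Rift CV1-class 2160x1200 panel rendered at roughly 1.4x per axis to absorb lens distortion, which is an assumption on my part, not a vendor spec.

```python
# Rough, illustrative math: raw color-buffer sizes at 32 bits/pixel.
# VR assumes a 2160x1200 panel rendered at ~1.4x per axis (assumption).

def framebuffer_bytes(width, height, bytes_per_pixel=4):
    """Raw size of a single color buffer."""
    return width * height * bytes_per_pixel

fb_2k = framebuffer_bytes(2560, 1440)                        # ~14.1 MiB
fb_vr = framebuffer_bytes(int(2160 * 1.4), int(1200 * 1.4))  # ~19.4 MiB
fb_4k = framebuffer_bytes(3840, 2160)                        # ~31.6 MiB

for name, size in [("2K", fb_2k), ("VR", fb_vr), ("4K", fb_4k)]:
    print(f"{name}: {size / 2**20:.1f} MiB per color buffer")
```

Real usage multiplies this several times over (double/triple buffering, depth buffer, MSAA, per-eye targets in VR), but the 4K >> VR > 2K ordering holds either way.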
So at this point, even putting VR aside, it's really worth waiting for the summer availability of the GTX 1070 8GiB GDDR5 rather than buying a GTX 980 Ti 6GiB GDDR5, given the power draw and the reality that even OC versions will probably not be much more than US$400 in the worst case. Here in the US, if one cannot wait, try to get a 980 Ti for US$400-something. Paying over US$500 just isn't worth it at all, and even much over $450 is difficult to condone, unless 1070 availability really slips until fall.
And if you want faster, of course, the GTX 1080. Nothing else will come close.
Oh, it is interesting that even beyond the feature size shrink to 16nm FinFET, the reduced unit count means the equivalent 1000 series is now cheaper to make than the 900 series (it would be even at the same feature size; the higher clock is largely due to the shrink). That is, of course, assuming the yields are good. TSMC's 28nm has been around quite a while now, and they are exceedingly good at making the layers. But if they've gotten the yields up on 16nm FinFET, they'll want to get them out as fast as possible. Although it all depends on how much premium nVidia has to pay TSMC (the bane of nVidia being fabless), the dies should be cheaper to make, even with lower yields, as there will be far more dies per wafer.
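The "smaller die beats lower yield" point can be put in back-of-the-envelope numbers using the standard dies-per-wafer approximation. The die areas below are approximate published figures (GM200 ~601 mm², GP104 ~314 mm²); the yield percentages are made-up illustrative values, not anything from TSMC.

```python
import math

# Common dies-per-wafer approximation:
#   DPW ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# for wafer diameter d (mm) and die area A (mm^2).
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gm200 = dies_per_wafer(300, 601)  # 28nm GTX 980 Ti die: ~90 candidates
gp104 = dies_per_wafer(300, 314)  # 16nm GTX 1070/1080 die: ~187 candidates

# Even if the newer process yields worse (say 60% vs a mature 80%),
# the smaller die still nets more good dies per 300mm wafer:
good_gm200 = gm200 * 0.80  # ~72 good dies
good_gp104 = gp104 * 0.60  # ~112 good dies
print(good_gp104 > good_gm200)  # True
```

So even before any process cost premium, the per-die economics favor GP104, which is the argument above in numbers.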