QUOTE(targon @ Feb 29 2016, 09:50 PM)
Things that are fairly certain:
- Product cycles are getting longer these days, so you can expect a longer timeframe between two generations of launches.
- Technology companies are not stupid: they will milk the maximum cash flow out of each generation of products before launching a new one. Just like Mr. Huang of Nvidia (he has the business brain for it).
On the technical side,
1) Chip designs nowadays are getting more and more complicated (transistor counts in the billions), and there are limits in the fabrication process as well as heat issues.
2) Debugging these chips takes a longer time. If they fast-track it and release a buggy chip, they stand to lose more from RMAs and a bad rep.
3) It makes sense to salvage defective chips and reuse them as lower-end parts: less wastage, and it recovers R&D and fabrication costs (see the first sketch after this list).
4) Architecture performance improvements and/or power savings are normally achieved by fine-tuning the architecture, better power gating, or new ways to perform the task. Over the years, technology companies have already fine-tuned many aspects of their architectures, leaving less and less to fine-tune. Intel's per-core CPU improvement is generally only ~10% from one architecture to the next (short of adding more cores, it's really hard to increase per-core performance within the same thermal envelope); the second sketch after this list works out what that compounds to.
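For point 3, here's a rough sketch of how die harvesting works. The unit counts and SKU names are made up for illustration, not actual Nvidia numbers: a die with a few defective shader clusters gets those clusters fused off and is sold as a cheaper SKU instead of being thrown away.

[code]
# Hypothetical illustration of die harvesting ("binning").
# Cluster counts and SKU names are made up for this example.

FULL_UNITS = 16  # shader clusters on a fully working die

def bin_die(good_units):
    """Assign a die to a SKU based on how many clusters are defect-free."""
    if good_units == FULL_UNITS:
        return "flagship (all 16 clusters enabled)"
    elif good_units >= 14:
        return "cut-down SKU (14 clusters enabled, rest fused off)"
    elif good_units >= 12:
        return "budget SKU (12 clusters enabled)"
    else:
        return "scrap"

# One wafer's worth of dies with varying numbers of good clusters:
dies = [16, 16, 15, 14, 13, 12, 10, 16, 14, 11]
for good in dies:
    print(good, "->", bin_die(good))

# Without harvesting, only the 3 perfect dies are sellable;
# with harvesting, 8 out of 10 dies earn back their fab cost.
[/code]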
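And for point 4, a quick bit of arithmetic on that ~10% figure, just compounding the number quoted above: even if every generation delivers its ~10%, per-core performance only roughly doubles after seven to eight generations.

[code]
# Compounding the ~10% per-generation per-core gain mentioned in point 4.
per_gen_gain = 0.10  # ~10% per architecture, as quoted above

perf = 1.0
for gen in range(1, 9):
    perf *= 1 + per_gen_gain
    print(f"after generation {gen}: {perf:.2f}x baseline per-core performance")

# Ends around 1.95x at generation 7 and 2.14x at generation 8 --
# i.e. it takes 7-8 generations of "only 10%" to double per-core speed.
[/code]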
From the above points, you can see the design process is getting longer and longer, which also translates into R&D costs getting higher and higher. This is why some companies choose to rebrand their products. IMO, Nvidia is already doing a lot of enhancement on both the hardware architecture side and the software side (CUDA/GameWorks) in order to squeeze out the best performance possible. (Anyone may argue open source is better, but let's face it, almost no investor or hardware vendor is willing to invest much time/effort in open-source things. Open source = broad compatibility = harder and slower to optimize.)
TL;DR: a long product cycle for GPUs is good, since it forces developers to use smarter techniques and optimize to achieve better results instead of just brute-forcing it. This is why console graphics are generally on par with or better than PC graphics, even at (much) lower hardware capability.