QUOTE(Demonic Wrath @ Jul 15 2016, 08:07 AM)
I don't know what rumour or hype you heard, but this is the first time I've heard someone say the Fury X was expected to zip past the GTX 1070. That's expecting too much from a card built on 28 nm tech, a generation behind the GTX 1070's 16 nm architecture. The Fury X should be compared against its own generation, like the GTX 980 Ti or GTX Titan X. Trust me, tech reviewers will be talking about what performance gains the cards got with DX12 rather than where they sit in the Time Spy benchmark table.
If you say the Fury X's performance (a card that retailed for USD 589 around Xmas 2015) is lacking in these DX12 Time Spy results, then what word can you use to describe the performance of the GTX 980 Ti (USD 599 around the same time) or the beast GTX Titan X (USD 999)?
Again, the point of my earlier post: what are you willing to spend for the performance you want? Those benchmark tables are nice if you can get all the cards for free. Put the current price (used or new) of those cards in that table, and it means a lot more to a person building a new PC on an RM 3K budget than to someone who has no qualms plonking down RM 3K+ just for a brand new shiny GTX 1080 GPU alone. Head over to the LYN marketplace: there's a used R9 290X asking RM 500; compare its position in those Time Spy benches to a GTX 970 with used asking prices between RM 900 and RM 1,200. So does that mean all GTX 970 owners should sell their GPUs, get a used R9 290X and still have extra money left over for a 256GB SSD? Definitely not. I say go GTX 1070.
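To make that value angle concrete, here's a tiny sketch of the kind of price-per-performance arithmetic I mean. The Time Spy scores in it are made-up placeholder numbers purely for illustration (only the RM asking prices come from the post above); plug in real benchmark results and whatever used prices you actually see.

```cpp
// Rough value-for-money sketch: graphics score divided by asking price in RM.
// NOTE: the scores below are hypothetical placeholders, not real Time Spy results.
#include <cstdio>

struct Card {
    const char* name;
    double score;    // benchmark score (placeholder)
    double priceRM;  // asking price in Malaysian Ringgit
};

int main() {
    Card cards[] = {
        {"R9 290X (used)", 3500.0,  500.0},   // RM 500 asking price from the post
        {"GTX 970 (used)", 3600.0, 1000.0},   // mid-point of RM 900-1,200 from the post
    };
    for (const Card& c : cards)
        std::printf("%-16s %.2f points per RM\n", c.name, c.score / c.priceRM);
    return 0;
}
```

Whatever the exact scores are, the ratio is what matters when you're squeezing a whole build into RM 3K.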
Give credit where credit is due. Whether by luck or by design, AMD's decision to keep faith with GCN years ago has paid off. They might still be able to optimize and extract a bit more from their older-gen cards to add to their already impressive gains with the new APIs, but there's only so much more they can get out of that 28 nm tech. Their focus will be more on Polaris now and the upcoming Vega.
As for Nvidia, I think they're pretty much at the end of what they can get out of their Maxwell GPUs. Those are still good cards for DX11 games, which will keep being released alongside DX12 versions for at least the foreseeable future. Nvidia should pour all their effort into Pascal now. It has proved me wrong, because it does gain from DX12 async compute. There's potential for Pascal to make the same kind of performance gains as AMD if they really try.
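For anyone wondering what "async compute" actually means at the API level: it's the application submitting work to a separate compute-type command queue so the GPU can overlap it with graphics work. This is just a minimal, hypothetical D3D12 sketch of setting up those two queues on my own assumptions, not anything taken from the benchmark or the drivers being discussed:

```cpp
// Minimal D3D12 sketch: create a graphics (DIRECT) queue plus a separate
// COMPUTE queue. Overlapping work across the two is what "async compute" refers to;
// how much actually runs concurrently depends on the hardware and driver.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // The usual graphics queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue; work submitted here can overlap with graphics.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}
```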
So maybe it's time to move on from this async compute debate. I'm really looking forward to DX12 explicit multi-adapter (EMA) with SFR implementations, because that will be the best thing gamers can expect from the DX12 shift.